I am set to keep 1 day's worth of work on hand and have not had problems keeping a supply of SETI work.
I currently have two work units totaling 17 hours waiting to be sent in, and three data units, about 30 hours of work, left to process. I am running a Mac and have a 50/50 split of my time with World Community Grid.
I am not seeing a real problem from Southern California, but my needs are small because I can only earn about 400 credits a day.

One more thing, and this is just as important: we don't want EVERYONE doing this or we will become the problem. Maybe 5-10 people in different geographic regions running the test at different times: the first person runs on the hour, the second at 10 minutes after, and so on.

I agree; I can't even imagine how many requests are trying to get in as it is.

To skip DNS resolution, you can use the -d switch on tracert for quicker output:
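For example (the hostname here is only an illustration; substitute whichever server you are actually tracing):

    tracert -d setiathome.berkeley.edu

The -d switch tells tracert not to resolve each hop's address to a host name, so the trace completes noticeably faster.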

Just for fun: last night at 22:44:44 UTC I was able to upload and report all of mine. I also got a bunch of new work, most of which was done again overnight. Since then I have one ready to report and around 24 waiting to upload. (BOINC did this on its own, no button pushing on my part.)
____________

I am glad to see that folks onsite believe they found the communications culprit and replaced a failing switch (not the first time that has happened either).

Assuming that to be the *only* source of the problem (which may well be an overoptimistic assumption), I suppose we can expect the traffic jam following the 5-day outage to persist for quite a while (perhaps until Tuesday's maintenance outage, along with its own post-outage traffic jam).
____________

Where did you read that they are replacing a failing switch? If it was in the thread in "News" ("The connectivity problem noted yesterday turned out to be one of our switches and not the router. We swapped in a replacement switch today and connectivity was restored."), that was posted 27 Nov 2009.