Hello again. Happy President's Day - we had the Monday off, plus I took the whole previous week off to go hang out in Kauai. First real vacation in a while, and last for the foreseeable future.

So what did I miss? Looks like the upload/scheduling servers have been clogged for a while due to a swarm of short-runners (workunits that complete quickly due to excessive noise). This should simmer down in due time. Plus we're having the usual outage today, so there will be a painful recovery from that as well. And things were running a little late today as a permissions problem held up the start of the outage. Patience.

While we did finally get the science database back in working order, we found the server still didn't have enough resources to meet our demands. So a new plan is being put into action over the coming weeks: instead of having both SETI@home and Astropulse reside on one server (thumper) and both replicated to another (bambi), we're going to have SETI@home live on thumper and Astropulse live on bambi, both without replication. This will keep painfully long Astropulse analysis queries from clobbering the SETI@home project (which has been happening a lot lately). We may implement some form of our own replication, but we do back up the database regularly (and store those backups off site), so the replica doesn't buy us that much - especially considering we could double our database power by converting it into another primary server.

- Matt

____________
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

So what did I miss? Looks like the upload/scheduling servers have been clogged for a while due to a swarm of short-runners (workunits that complete quickly due to excessive noise). This should simmer down in due time.

I think it may be more than that - upload & download traffic was very light before the outage, and even if it's extremely heavy before an outage it's still possible to upload results during the outage.
From the looks of things I've been unable to upload for about 24 hours, and still no joy.
BOINC 6.6.41
____________
Grant
Darwin NT.

So what did I miss? Looks like the upload/scheduling servers have been clogged for a while due to a swarm of short-runners (workunits that complete quickly due to excessive noise). This should simmer down in due time.

I think it may be more than that - upload & download traffic was very light before the outage, and even if it's extremely heavy before an outage it's still possible to upload results during the outage.
From the looks of things I've been unable to upload for about 24 hours, and still no joy.
BOINC 6.6.41

Cricket graphs are still showing low traffic volumes. Although, after the last couple of outages it took a couple of hours for things to pick up fully once they came back online, so I'll see how it is in 4-6 hours.
If they're still down, then there's more of a problem than just the uploads.
____________
Grant
Darwin NT.

Cricket graphs are still showing low traffic volumes. Although, after the last couple of outages it took a couple of hours for things to pick up fully once they came back online, so I'll see how it is in 4-6 hours.
If they're still down, then there's more of a problem than just the uploads.

Every client I have running has stuck WUs waiting for upload.
Normally I don't worry about it, but I can't seem to get a single thing uploaded now.
The servers mostly look OK (a few are down). Thumper and Bambi are running OK, though, and yet nothing is moving.

I too am having the same communication problems. I have so many completed work units building up and am unable to upload them back to the server (project backoff, server may be down). Hope the problem is sorted soon.