It's not that the splitters have failed, it's that there's nothing for them to work on - and I'm wondering about that... Have we used up all of SETI's "banked" data? Is Arecibo shut down again? Did someone forget to "hang a tape/some tapes"? :-)

And if you had looked 2.5 hours sooner, you would have seen the same thing as me: lots of RED.

You might choose to look at it as "they're out of work to split"; to me, it reads as FAILED.

There was still a lot of red (in the splitters area...) when I looked... but red for a splitter means that the splitter has either run out of work or has otherwise failed. To see which, check the right-hand column, labeled "Splitter Status": if only one "tape" is shown there (as was the case when I looked Friday morning), then work has run out!

BTW, the red "Not running" means (according to the definition below the "Server Status" column) "Program failed or ran out of work" - with splitters, the second condition is most commonly the case...

Dittoing the others who are having problems uploading, reporting, and getting new work....

Results ready to send: 54,476 (much higher than recent days)
Results received in last hour: 6,037 (about 10% of the normal number)
Results returned and awaiting validation: 5,808,203 (much higher than normal)
Workunits waiting for assimilation: 550,799 (much higher than usual)

[As of 13 Mar 2010 15:50:19 UTC]

I think this is the same router problem we saw at the end of February...
Has anyone tried to pathping the router?

On the server status page we have the same situation as during the last outage...
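
If anyone wants to try: on Windows, pathping shows per-hop packet loss along the route, which would point at a sick router if there is one. I'm using the main project hostname below just as an example - substitute whichever server you actually can't reach (traceroute on Linux/Mac gives the same hop list, minus the loss stats):

[code]
pathping setiathome.berkeley.edu
[/code]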

Uploading results, downloading new work units and even updating one's project has become such a mission that one is forced to suggest shutting the project down until it can be made to work reliably at least some of the time. While I do appreciate the efforts of the project leaders, let us not forget that the contributors to the project are the ones who are paying to make it run through increased electricity bills - something we are no doubt happy to do, but I think users would be happier contributing to a project that actually works more often than not and bears some fruit.

[quote]Uploading results, downloading new work units and even updating one's project has become such a mission that one is forced to suggest shutting the project down until it can be made to work reliably at least some of the time. While I do appreciate the efforts of the project leaders, let us not forget that the contributors to the project are the ones who are paying to make it run through increased electricity bills - something we are no doubt happy to do, but I think users would be happier contributing to a project that actually works more often than not and bears some fruit.[/quote]

More damage might be done to the project if it were to close down, since things more than likely won't change. To make the project more reliable, you'd need more hardware and more manpower thrown at the problem, both of which cost money. Since the project has neither, it won't be any better after it's shut down.

Further, if users are unhappy about paying their electric bill for a distributed computing project that is having problems, there are two more realistic ways to rectify the situation: 1) join another project to keep those CPUs busy and thus not "waste" any electricity, or failing that, 2) accept that the project is doing the best it can and that there will be periods of understandable and unavoidable downtime given the resources it has. If you don't have any work and passed on the first option, the wasted electric bill is on you.

I find it very interesting reading the messages this a.m. (14 Mar 10).
They seem to indicate problems (perhaps personal, and maybe "panicky", if there is such a term) with uploading completed work. And yet, I see no condescending/patronizing responses from staff members or members, as it were.

Abundant thanks to the person who gave instructions on how to reach the Q&A section. As previously stated, it was most helpful.

From my reading of the posts this a.m., I can safely ascertain that I am not the only one dealing with "upload" problems.

[quote]...with splitters, the second condition is most commonly the case...[/quote]

This project has failed more often than not lately. It wouldn't be such a problem if the BOINC client were coded properly - yes, you heard me, the BOINC client is all f#%@ up. It doesn't request the right amount of work for the CPU or GPU, and then sometimes it gorges!

Looks like the project went completely offline last night at 24:00, hmm...
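
For the record, here's roughly what "the right amount" should mean. This is a minimal sketch in Python with made-up numbers, not BOINC's actual code - just the buffer arithmetic a sane client would do per resource (CPU or GPU):

[code]
# Minimal model of per-resource work fetch. Illustrative only --
# not BOINC's real implementation.

def seconds_to_request(buffer_days, queued_seconds, n_instances):
    """Ask for just enough work to fill the buffer, never more."""
    target = buffer_days * 86400 * n_instances  # seconds of work wanted
    shortfall = target - queued_seconds         # what the cache is missing
    return max(0.0, shortfall)                  # never over-ask ("gorge")

# Example: 2-day buffer, 4 CPUs, 50 tasks cached at ~1 hour apiece:
print(seconds_to_request(2.0, 50 * 3600.0, 4))  # 511200.0 seconds
[/code]

If the client's per-task runtime estimates are off, queued_seconds is wrong and the request is wrong with it - which would explain both under-fetching and the occasional gorge.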

[quote]It doesn't request the right amount of work for the CPU or GPU, and then sometimes it gorges![/quote]

What version of BOINC are you using?

I'd certainly have to agree with the ^^ statement. And the problem seems to be in the 6.10.xx series, after 6.10.18. At least for me.

With 7 GPUs running in one cruncher (not using CPUs to crunch)... I usually kept about 3,000 work units in the queue. One morning I awoke to find almost 6,000 WUs in the queue. And I had the "Additional Work Buffer" set to only 5.95 days.

That was with BOINC 6.10.36. It suddenly gorged overnight. I dropped back to 6.10.29 just an hour before the latest SETI problems started, so I have no idea whether 6.10.29 will handle the work queue correctly.
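
For what it's worth, your normal 3,000-WU queue is about what the buffer math predicts - assuming an average GPU task runs somewhere near 20 minutes, which is my guess, not a measured number:

[code]
# Rough sanity check, in Python. The 20-minute (1200 s) average GPU
# runtime is an assumption; actual runtimes vary per workunit.
n_gpus, buffer_days, est_runtime_s = 7, 5.95, 1200.0
expected_wus = n_gpus * buffer_days * 86400 / est_runtime_s
print(round(expected_wus))  # ~3000 -- the normal queue size
[/code]

So waking up to ~6,000 WUs means the client effectively doubled that: either its runtime estimates halved, or it requested a full buffer on top of an already-full cache.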