Let's see... we're *still* waiting for the RAID resyncs to finish, and likewise the pulse table rebuild. Another day or two? Meanwhile, I cleared off enough space on the workunit machine so we can keep producing and sending out work. We still can't assimilate very much until the pulse table rebuild is over, but at least people can do science and get credit. I'm worried about MySQL bloat with the large result table (over 2 million results waiting for assimilation), but we've been here many times before and lived.

Lost in the chaos of outage recovery yesterday was a bunch of "make science status page" processes piling up on top of each other, causing extra stress on the science database, and eventually making the splitters jam up. Oops. I killed all those this morning and that particular dam broke. Now that we're catching up on satisfying workunit demand I think we'll be maxed out traffic-wise for a while, which isn't the worst of problems (that means work *is* flowing as fast as we can send it).
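Processes piling up on top of each other like that is a classic symptom of a periodic job being relaunched before the previous run has finished. A minimal sketch of one common fix, an advisory file lock so a new run bails out if the last one is still going (the lock path and job are hypothetical illustrations, not the actual SETI@home code):

```python
# Sketch: guard a periodic job with an advisory file lock so that a new
# run exits immediately if the previous run is still active.
# The lock path and job body are hypothetical, not the real setup.
import fcntl

LOCK_PATH = "/tmp/make_science_status.lock"

def run_exclusively(job):
    """Run job() only if no other instance holds the lock; return success."""
    with open(LOCK_PATH, "w") as lock:
        try:
            # Non-blocking exclusive lock: raises if another instance holds it.
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False          # previous run still going; do nothing
        job()                     # e.g. regenerate the status page
        return True               # lock is released when the file closes

run_exclusively(lambda: print("building status page..."))
```

With a guard like this, a slow status-page build just causes later runs to skip quietly instead of stacking up and stressing the science database.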

Lots of code walkthroughs with Jeff today regarding the NTPCker. It's getting to be a mature piece of code. Scoring mechanisms are almost all in place (though they still may need major tuning once we sift through enough real data). We're still concerned about our ability to actually keep it running "near time," i.e. will the database be able to handle the load? We shall see. A lot of database improvements to help this have unfortunately been blocked on the last couple of weeks' worth of problems with thumper.

Happy April Fool's Day! Don't believe anything anybody says! Actually, that's good advice regardless of the day of the year.

-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

Thanks Matt. Is the 'extra CUDA load' troublesome, or does it work well if you look at the amount of work done?
Anyway, thanks for your update. My keyboard is sluggish sometimes; I think it's the CUDA use. Only when I want to burn an image to CD/DVD will I stop BOINC completely.
There seem to be few CUDA MB WUs, but also 'normal' (6.03) MB WUs.


So the status page was causing the slow workunit creation last night? Will the page remain frozen until things settle down?

...yeah, there was some additional mounting/network clogging gobbledygook that was blocking the regular server status page for a while. Separate problem, and I'm fixing that now. The science status page will continue to be on hold for the near term...

- Matt

Matt turned the splitters on full tilt on Wednesday night, and they are currently pouring out WUs as fast as the 100 Mbit link from the site will take, at last sight around 40+ per second, maybe more by now. It will take a while, probably a couple of days or more, to refill everyone out there.
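A quick back-of-envelope check on that rate, assuming a multibeam workunit download of roughly 366 KB (that size is an assumption for illustration, not a figure from the post):

```python
# Rough ceiling on workunits per second through a 100 Mbit/s link.
# The ~366 KB workunit size is an assumed value for illustration.
LINK_MBIT = 100
WU_KB = 366

bytes_per_sec = LINK_MBIT * 1_000_000 / 8      # 12.5 MB/s of raw bandwidth
wu_per_sec = bytes_per_sec / (WU_KB * 1024)    # workunits that fit per second

print(f"~{wu_per_sec:.0f} workunits/second at link saturation")
```

Under those assumptions the link tops out in the low-to-mid 30s of workunits per second, the same order of magnitude as the 40+ per second quoted above, which is consistent with the pipe being the bottleneck.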

An earlier post put it nicely "Form an orderly queue at the Router :) "