It looked like we got beyond the current deluge without too much intervention. Good. Then our bandwidth spiked again. Bad. But then it recovered once more. Good. Oh well, whatever. We're still just in "wait and see if it gets better on its own" mode around here - if we hit our bandwidth limits (and we understand why) there's not much else we can do.

Spent a chunk of the day tracking down current donation processing issues. What a pain. I really need to document the whole crazy donation system so other people around here can fix these problems when they arise. Maybe I'll do that later today. Other than that, just some data pipeline/sysadmin type stuff.

A note about the server status page: Every 10 minutes a BOINC script runs which does several things, including: 1. start/restart servers that aren't running but should be, and 2. run a bunch of "task" scripts, like the one that generates the server status page. Since this status page script runs once every ten minutes, it is only a snapshot in time - not a continuum. It also can take several minutes to run its course, as it is scanning many heavily loaded servers. So the data toward the top of the page represents a minute or two earlier than the data toward the bottom. And server processes, like ap_validator, hiccup from time to time and get restarted every 10 minutes, then maybe process a few hundred workunits, but fail again a second before the status page checks. So even though it was running for the past couple of minutes, it shows up as "Not Running." In short, don't trust anything on that page at first glance.
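The loop described above can be sketched roughly as follows. This is only an illustration of the two-step pattern (restart missing daemons, then run one-shot tasks); the daemon names, task paths, and details here are hypothetical, not the actual BOINC script:

```python
import subprocess
import time

# Hypothetical daemon and task names; the real script's lists differ.
DAEMONS = ["ap_validator", "ap_assimilator"]
TASKS = [["/usr/local/bin/make_status_page"]]

def daemon_running(name: str) -> bool:
    # pgrep -x exits 0 iff a process with exactly this name exists.
    return subprocess.run(["pgrep", "-x", name],
                          capture_output=True).returncode == 0

def tick() -> None:
    # 1. Start/restart servers that aren't running but should be.
    for name in DAEMONS:
        if not daemon_running(name):
            subprocess.Popen([name])
    # 2. Run the one-shot "task" scripts, e.g. the status-page generator.
    for cmd in TASKS:
        subprocess.run(cmd)

if __name__ == "__main__":
    while True:
        tick()           # a snapshot every 10 minutes, not a continuum:
        time.sleep(600)  # a daemon can die and restart between ticks unseen
```

This also shows why the page lies at first glance: a daemon that crashed a second before `tick()` ran reads as "Not Running" even if it did useful work for the previous nine minutes.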

- Matt

____________
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

Hello Matt...
I'm still stuck on this issue about bandwidth... my bad.

Here's a crazy idea.
What about compressing the WUs after they're created and sending them in that state. When the clients get the data, they could decompress before crunching. Afterwards, compress the results for the trip home.
I know.. more work recoding the apps but... even a 1/10 ratio effectively multiplies the bandwidth by 10. Another issue is, it would put even more burden on the closet computers. With all the different versions of the core clients, some kind of bootstrap (if that's the right term) would need to be created to sit between BOINC and the core client (if that's even possible).
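The compress-before-send, decompress-before-crunch round trip is easy to sketch with standard gzip. One caveat the 1/10 hope glosses over: real SETI@home workunits are largely radio noise, and noise-like data compresses far worse than the repetitive placeholder used here, so the payload below is purely illustrative:

```python
import gzip

def compress_wu(data: bytes) -> bytes:
    # Server side: shrink the workunit payload before it goes out.
    return gzip.compress(data, compresslevel=9)

def decompress_wu(blob: bytes) -> bytes:
    # Client side: restore the payload before crunching begins.
    return gzip.decompress(blob)

# Synthetic stand-in for a workunit; noisy telescope samples would
# yield a much smaller ratio than this highly repetitive text does.
payload = b"sample telescope data " * 4096
compressed = compress_wu(payload)
assert decompress_wu(compressed) == payload
print(f"ratio: {len(payload) / len(compressed):.1f}:1")
```

The same pair of calls would apply in reverse for the trip home: the client compresses its result file, and the server decompresses it before validation.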

Thanks for making my computers busy again. HOORAH!

Are you certain that this is not already being done? There is a facility built into BOINC to do precisely this.
____________
BOINC WIKI