I particularly admire their timing. With production limited to about 60,000 MB and 570 AP tasks per hour for more than two days, many hosts reached the in-progress limit. So turning up the creation rate did not lead to hordes of hosts trying to download 300 Mbps through a 100 Mbps pipe.
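The arithmetic behind that works out in the project's favor. Here's a rough sketch; the per-task download sizes are my own guesses, not official figures:

```python
# Back-of-envelope: can the capped creation rate saturate a 100 Mbps pipe?
# Task sizes below are assumptions for illustration, not project numbers.
MB_TASK_MB = 0.37   # assumed multibeam (MB) workunit size, megabytes
AP_TASK_MB = 8.0    # assumed Astropulse (AP) workunit size, megabytes

def download_demand_mbps(mb_per_hour, ap_per_hour):
    """Aggregate download bandwidth needed if every task created in an
    hour is fetched within that same hour (megabits per second)."""
    megabytes_per_hour = mb_per_hour * MB_TASK_MB + ap_per_hour * AP_TASK_MB
    return megabytes_per_hour * 8 / 3600  # MB/hour -> megabits/second

print(round(download_demand_mbps(60000, 570), 1))  # prints 59.5
```

Even if every new task were downloaded immediately, the steady-state demand under these assumptions stays comfortably below the 100 Mbps pipe.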

If things go well over the weekend, I suppose the in-progress limits will be boosted Monday. The system's reaction will be interesting to watch.

One more minor update: We continue to beat up on oscar - long story short, we're finding our biggest hurdle in utilizing the server to its maximum potential is probably the stripe size on the RAID subsystem (which is set to the factory default, as it's hard to predict these bottlenecks until everything is turned on). I think I can adjust it live - we'll try this sort of test/update early next week. In the meantime, more testing with what we got...
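For the curious, here's a toy model of why the stripe (chunk) size matters; the disk counts and request sizes are illustrative guesses, not oscar's actual configuration:

```python
import math

def disks_touched(io_bytes, chunk_bytes, data_disks):
    """Number of member disks one aligned request spans.
    A simplification: ignores parity rotation and misalignment."""
    return min(data_disks, math.ceil(io_bytes / chunk_bytes))

# A typical 128 KB random read against a 6-data-disk array,
# comparing a small factory-default chunk with a larger one
# (all sizes here are hypothetical):
print(disks_touched(128 * 1024, 64 * 1024, 6))    # prints 2
print(disks_touched(128 * 1024, 256 * 1024, 6))   # prints 1
```

With a chunk at least as large as a typical request, each random I/O lands on a single disk, leaving the other spindles free to serve other requests in parallel; too small a chunk drags every disk into every request.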

- Matt

If you can change it live, that's quite a trick.

What?!!!

Changing stripes live?!!!!

Errr... Do a backup, make the change, and then write a sysop paper for the Oscar tweakings?

;-)

Good luck for the tweaks,

Happy crunchin',
Martin
See new freedom: Mageia5
See & try out for yourself: Linux Voice
The Future is what We all make IT (GPLv3)

Scheduler request completed: got 0 new tasks
Message from server: Project has no tasks available

The Server Status page is now closing in on 24 hours since it last updated. I bet no one goes in on the weekend to fix that, especially since everything else does seem to be working. I keep sending in results and getting new work, and the ones I've returned and had validated have even been purged already (I thought they were supposed to wait a day before being purged).

It's good to be doing work again, and even better to be doing it without crashing.

David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.

FYI, a disk failure on worf (the raw data storage server) which locked up the RAID requiring a cold reboot (and nobody is around/able to get up to the lab to do so). Only public downside is we ran out of work to send. This'll get fixed Monday morning.

- Matt
-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude


No sweat Matt. Let the weekend be what it is supposed to be. We can wait until Monday, heck I can wait much longer than that if needed :-)
This is a test of the Emergency Moron System. Had there been a real moron in the room, there would've been a small mushroom cloud in the place where the idiot had been standing.

Thanks Matt, I'm right on the wire for running out of work. Bet I run out before you get it fixed! :-) Only joking. Glad you know what the trouble is and that it won't be hard to fix. So it was Worf causing all the trouble? Never did trust that Klingon!!

Thank you for giving us an update on this Sunday afternoon / evening. And thank you for bothering to check or giving a flip on December 12 with all sorts of Christmas events and rush and things-to-do competing with the project.

Personally, I'm glad to know that the project isn't so all-consuming in your (and others') life that someone felt like they had to go to the campus and flip a switch in a right now, tonight, oh-my-gosh emergency, "how will we live?" panic.

The two-year mission is eleven and a half years old. You've been on it for what, thirteen or fourteen years?

Speaking only for myself, I can hunt for alien transmissions orders of magnitude faster than I could way back when. So, on the one hand, the time lost to crunch means a lot more data isn't being crunched per hour, but we can do a month's worth of 1999 crunching in what, a day?

I can only imagine that this starts feeling like a tremendous burden at some point. Don't get to that point. I don't want to hear you scream, "I mean, down here are literally hundreds and thousands of blinking, beeping, and flashing lights, blinking and beeping and flashing - they're *flashing* and they're *beeping*. I can't stand it anymore! They're *blinking* and *beeping* and *flashing*! Why doesn't somebody pull the plug!"


Thanks for the update, Matt.
Monday morning is plenty soon enough for me, I've got enough work to last until Tuesday.
Donald
Infernal Optimist / Submariner, retired

Sorry to be such a pain, but I still cannot get work units. I uninstalled and reinstalled SETI several times and still I get zero work units. What might I be doing wrong? Thanks in advance for any and all help!