Today we had our weekly outage for the MySQL database backup, maintenance, etc. This week we are recreating the replica database from scratch using the dump from the master, to ensure that last week's crash didn't leave any secret lingering corruption. That's all happening now as I type this, and the project is revving back up to speed.

Had a conference call with our Overland Storage connections to clean up a couple of cosmetic issues with their new beta server. That's been working well and is already half full of raw data. Once the splitters start acting on those files, the other raw data storage server will breathe a major sigh of relief. I was also set to (finally) bump up the workunit storage space yesterday using their new expansion unit - but I waited until their procedure confirmation today, lest I do anything silly and blow away millions of workunit files by accident. The good news is that I increased this storage by almost a terabyte today, with more to come. We have officially broken that dam.

I also noticed this morning that the high load on bruno (the upload server) may be partially due to an old, old cronjob that checks the "last upload" time and alerts us accordingly. This process was mounting the upload directories over NFS and doing long directory listings, etc., which might have been slowing down that filesystem in general from time to time. I cleaned all that up - we'll see if it has any positive effect.
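For what it's worth, a much cheaper version of that check could run locally on the upload server itself. Here's a minimal sketch (the function name and cron usage are hypothetical, not the actual script); it assumes a POSIX filesystem, where a directory's mtime is updated whenever an entry inside it is created, renamed, or deleted:

```python
import os
import time

def seconds_since_last_upload(upload_dir: str) -> float:
    """Rough age of the newest upload, using a single stat() call
    instead of a long directory listing (painful over NFS)."""
    # On POSIX filesystems, a directory's own mtime changes whenever
    # a file is added to, renamed in, or removed from it.
    return time.time() - os.stat(upload_dir).st_mtime

# Hypothetical cron usage:
# if seconds_since_last_upload("/uploads") > 600:
#     send_alert("no uploads in the last 10 minutes")
```

The trade-off: a rename or delete also bumps the mtime, so this is a coarse heartbeat rather than a precise "last successful upload" timestamp - but it never has to walk a directory of millions of files.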

Jeff's been hard at work on the NTPCker. It's actually chewing on the beta database now in test mode. We did find that an "order by" clause in the code was causing the Informix database engine to lock out all other queries. This may have been the problem we've been experiencing at random over the past months. Maybe Informix needs more scratch space to do these sorts, and it locks the database in some kind of internal management panic if it can't find enough. Something to add to the list of "things to address in the new year."
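One hedged workaround for that kind of problem (a sketch only - not what the NTPCker code actually does, and the table and column names below are invented for illustration) is to drop the "order by" from the query and sort in the application, so the database engine never has to allocate sort scratch space or hold locks while sorting:

```python
import sqlite3  # in-memory stand-in for the real Informix connection

# Hypothetical table of candidate signals; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signal (id INTEGER, power REAL)")
conn.executemany("INSERT INTO signal VALUES (?, ?)",
                 [(1, 9.5), (2, 30.1), (3, 12.7)])

# Instead of: SELECT id, power FROM signal ORDER BY power DESC
# (which makes the engine sort, possibly in limited scratch space),
# pull the rows unsorted and sort them client-side.
rows = conn.execute("SELECT id, power FROM signal").fetchall()
rows.sort(key=lambda r: r[1], reverse=True)

print([r[0] for r in rows])  # ids ordered by descending power: [2, 3, 1]
```

This shifts the memory cost to the client process, which is only sensible when the result set fits comfortably in the application's RAM.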

- Matt-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

If any of the problems such as uploads, running out of WU's, or bandwidth limits have anything to do with CUDA being released, why not limit the number of CUDA WU's you can request per day to a lower number, until things sort themselves out?

Only allow 20, 40, or 60 WU's a day per computer.

I don't know if this is possible: you only allow a certain number of WU's per CPU on a computer. Why not only allow a certain number of WU's per NVIDIA card? Something LOW for now, then turn it up slowly over time.

Matt - I read and appreciate the technical updates. I can only imagine the hundreds of problems, large and small, you and your team handle every day. But I'm confused: I read the project is coming back up, see there are 55,818 workunits to be downloaded, but when I request more workunits, the status returned is 'no work available'. What's up? THX.

I read the project is coming back up, see there are 55,818 workunits to be downloaded, but when I request more workunits, the status returned is 'no work available'. What's up? THX.

What's probably happening here is that, yes, there is work "available." However, when your client requests work from our scheduling server, the scheduler process looks at the "feeder," which holds at any given time the names of up to 100 available workunits to send out. So the feeder process has to constantly refill its tiny cache, and to do so it queries the database every two seconds to see if there's more work available. For a long while after the project comes back up, the database is quite overloaded, so it may not respond very fast. In fact, it sometimes takes many minutes for it to cough up results to the feeder, during which clients get "no work available."

In other words, the feeder is like a single cashier in a large department store. Sometimes the cashier needs to make change, which holds up the entire line, even though there's plenty of money kept elsewhere behind the counter.
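The cashier analogy can be sketched as a toy simulation (the class and method names are invented for illustration - the real feeder is a server daemon, not this Python): a tiny fixed-size cache is refilled from a database that sometimes doesn't respond, and clients are refused once the cache runs dry.

```python
from collections import deque

FEEDER_SLOTS = 100  # the feeder holds at most 100 workunit names

class ToyFeeder:
    def __init__(self, db_rows):
        self.db = deque(db_rows)   # stand-in for the science database
        self.cache = deque()       # the feeder's tiny in-memory cache
        self.db_responsive = True  # flips off when the DB is overloaded

    def refill(self):
        # Runs every couple of seconds; gets nothing if the DB is too slow.
        if not self.db_responsive:
            return
        while len(self.cache) < FEEDER_SLOTS and self.db:
            self.cache.append(self.db.popleft())

    def request_work(self):
        # What the scheduler hands a client: a workunit name, or a refusal.
        if self.cache:
            return self.cache.popleft()
        return "no work available"

feeder = ToyFeeder([f"wu_{i}" for i in range(250)])
feeder.refill()
print(feeder.request_work())  # a workunit name: "wu_0"

# Simulate an overloaded database: refills return nothing, so once
# the cached names are handed out, clients get refused even though
# 150 workunits still sit "available" in the database.
feeder.db_responsive = False
feeder.cache.clear()
feeder.refill()
print(feeder.request_work())  # "no work available"
```

The point of the sketch: "no work available" is a statement about the 100-slot cache at that instant, not about the database as a whole.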

- Matt-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude


. . . now that's clear and to the Point - Thanks for the Update and have a Good Holiday Sir!

It seems I am not experiencing any faster computation speed from the latest BOINC client 6.4.5 using the CUDA add-on. My project is SETI@home.

OS: Windows XP Media Center (Pro) 32 bit w/ Sp3

Video card: NVIDIA GeForce 8800 GT

Intel Core2 Quad CPU Q6600 @ 2.4 GHz
2.0 GB RAM

Driver version: 180.48

I downloaded the new BOINC client.

Downloaded the latest NVIDIA driver (ver. 180.48).

Restarted; the client stated that it did have CUDA-compatible components.
The option for the GPU is enabled in my settings.
But I am not seeing any benefit.
I am not getting any error messages whatsoever.

First, it depends on what you mean by "projects". If by "projects" you mean tasks or workunits, then you need a CPU (logical, virtual or real) for each task you want to run. For instance, if you have a dual core CPU, or a dual CPU machine, or a CPU with Intel Hyperthreading, you can run at most two tasks. If you have a quad core CPU, or a quad CPU system, or a dual core with Intel Hyperthreading, you can run at most four tasks at once. And so on and so forth.

If by "projects" you mean different BOINC projects, you must first attach your computer to additional projects to download their work, but note the same CPU limitations apply for CPUs in your machine as stated above (i.e. you cannot run more tasks than you have CPUs in your machine). Also, you cannot explicitly state which task or project runs on which CPU as this is handled by BOINC.