I think I was a little tired yesterday, and some of the threads are making more sense today.

After what happened with the cut-and-paste cc_config.xml last night, I must say I'm a little wary of doing it again, but "nothing ventured, nothing gained".

Here's some free advice. If a project sets the percentage of CPU required by its GPU application to the right value, BOINC *should* manage the number of WUs of each type for you. By this I mean that you shouldn't have to set NCPUs in the config file to anything higher than the actual number of CPUs/cores. I have seen 5 WUs running on my quad with a single GPU card. However, for some projects (GPUGRID, for example), keeping the GPU app fed takes about 60% of a core on Windows, so trying to run a CPU WU alongside a GPU WU isn't worth it. This only occurs on Windows - the Linux apps take only about 10-20% of a core to keep the GPUGRID app fed.
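For reference, the NCPUs override mentioned above lives in the client's cc_config.xml. A minimal sketch (the value 4 is just an illustrative quad-core count, and per the advice here you'd normally leave this element out entirely):

```xml
<cc_config>
    <options>
        <!-- Overrides the detected CPU count. Setting it above the
             real number of cores is what this post advises against. -->
        <ncpus>4</ncpus>
    </options>
</cc_config>
```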

So, my advice is to stay away from the config XML file, and run more than one project so that you can easily keep all GPU and CPU cores running.

However, when your client requests work from our scheduling server, the scheduler process looks at the "feeder" which holds at any given time the names of 100 available workunits to send out.

Does the "100" figure pertain only to SETI, or is it the default BOINC-wide?

Quoting from the sched_shmem.h file in the BOINC source code:

// Default number of work items in shared mem.
// You can configure this in config.xml (<shmem_work_items>)
// If you increase this above 100,
// you may exceed the max shared-memory segment size
// on some operating systems.
//
#define MAX_WU_RESULTS 100
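Per that comment, a project wanting a larger feeder cache would set it server-side in its project config.xml. A sketch, assuming the usual BOINC project config layout (200 is an arbitrary illustrative value, not a recommendation):

```xml
<boinc>
    <config>
        <!-- Raise the feeder's work-item count above the default of 100.
             As the source comment warns, going higher may exceed the
             OS's maximum shared-memory segment size. -->
        <shmem_work_items>200</shmem_work_items>
    </config>
</boinc>
```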