I'm running both clients right now and I notice that my SMP client is slowing down. Should I just run the GPU client instead of the SMP, or can I run both with some option that keeps them from interfering with one another?

You have a dual-core processor, meaning the two clients will be fighting over the cores. The trick is to make sure that the drop-off in both the GPU and SMP clients is worth it in terms of total points production. You will have to measure whether the smaller combined output from both clients is still bigger than that of the GPU client alone.
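The comparison above can be framed as a one-line check. A minimal sketch — the function name and the sample numbers are hypothetical; plug in your own measured PPD figures (e.g. from FahMon):

```python
def worth_running_both(gpu_only_ppd, gpu_combo_ppd, smp_combo_ppd):
    """True if the combined SMP+GPU output beats the GPU client alone.

    gpu_only_ppd:  PPD with only the GPU client running
    gpu_combo_ppd: the GPU client's (reduced) PPD with both running
    smp_combo_ppd: the SMP client's PPD with both running
    """
    return gpu_combo_ppd + smp_combo_ppd > gpu_only_ppd

# Hypothetical numbers: GPU alone gives 5000 PPD; running both drops
# the GPU to 4600 but the SMP adds 1200, so the combo still wins.
print(worth_running_both(5000, 4600, 1200))  # True
```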


All my systems are Windows XP, dual core, and I have only gotten an SMP/GPU combo to work on the systems with ATI GPUs. On the nVidia systems, SMP/GPU refuses to cooperate, stopping with errors. The point production from SMP/GPU on my dual-core system with a 3870 card was less than just running the GPU, so I quit running the SMP. I tried affinity/load preferences to no avail. However, on my system with a 2600XT card, the SMP/GPU combo produces about 400 more PPD than the GPU running alone. I will chuck the 2600XT in the trash soon and put in an 8800GS to gain an additional 3500 PPD.

I run a 9600GSO (nVidia GPU) and a VMWare'd Ubuntu Linux SMP client on my Opteron 165 (i.e. an X2 with bigger cache), overclocked from 1800 to 2556 MHz, on XP Pro. The way I got them to produce best was to run a little utility every once in a while to make sure the VMWare executable is at Below Normal priority, and to fix the GPU client's messed-up affinity setting, which restricts the GPU core to just one core of the CPU. I also raise the GPU core's priority to Normal.

Rather than do any real work, as usual a search engine allowed me to download a utility which can change any program's priority and its processor affinity. It's called process.exe, and I got it from here. Since the GPU client kills and respawns the GPU core every time a new WU is downloaded, and that happens every two hours and some, the settings have to be fixed fairly often. Since I'm not always around or awake when it's doing that, I set up three scheduled tasks (they take practically no CPU time to run):

- one to fix the GPU core's priority every 3 minutes,
- one to fix its affinity every 5 minutes, and
- one to lower the SMP client's VM priority every 30 minutes.

That way the GPU client, which can supposedly deliver over 5000 PPD, usually waits no more than a minute or two before it is getting the optimal amount of CPU time it needs to run at full speed. The SMP task only runs every half hour because the VM generally isn't going to re-launch unless I restart the computer (which is very rare) or suspend the virtual machine, which only happens while I'm working on it and need the CPU. The reason I have to do that is that the GPU client can really screw up screen refreshes, making them take too long to be useful for work.
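The three tasks above boil down to a simple modular schedule. As a sketch, here is the bookkeeping side of it — which maintenance actions are due at a given minute mark (the actual priority/affinity changes are done by process.exe and are not shown; the action strings are just labels I made up):

```python
def tasks_due(minute):
    """Return the maintenance actions due at a given minute mark,
    using the intervals of the three scheduled tasks described above."""
    due = []
    if minute % 3 == 0:
        due.append("restore GPU core priority (Normal)")
    if minute % 5 == 0:
        due.append("restore GPU core affinity (all cores)")
    if minute % 30 == 0:
        due.append("lower VMware VM priority (Below Normal)")
    return due

# Every 30 minutes all three tasks coincide; most minutes nothing fires,
# and a freshly respawned GPU core waits at most 3 minutes for its
# priority fix.
print(tasks_due(30))
print(tasks_due(7))  # []
```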

Running them together, I get over 5700, and usually (FahMon claims) over 5800 PPD with the slowest SMP clients, and above 6500 with the faster ones. I run it that way after I quit using the machine for work, but doing anything that redraws the screen or uses much CPU time really eats up the speed.

Right now I have the SMP client's core usage set to idle as opposed to low; idle shows as the default.

I see something similar on the GPU client, but I'm not totally sure. It's something about the core priority ("enable if other F@H applications are interfering", or something like that), and I left that at the recommended setting.

I have pretty much the same problem. I have an E8400 and an HD 4870; however, even with Vista, the GPU client maxes out a core. Correct me if I am wrong, but that just does not seem right, especially since I see comments all the time saying that Vista uses barely any CPU to feed the GPU client.


Am I missing something?

Mine doesn't quite max out even one core; it uses about 15 percent of each core at most.

The SMP priority that I'm changing is for the program that runs the guest Linux operating system while I'm running Windows XP. The executable is vmware-vmx.exe, and normally it gets the same priority as any other program you'd run, Normal. But I found I can get more CPU time for my GPU client's core if I lower vmware-vmx's priority to Below Normal. I leave the SMP code itself to do whatever it likes.

HurgyMcGurgyGurg: I can't help you with Vista, other than to suggest reading the early posts in the first thread that highlighted the huge output the nVidia clients were producing, beginning in June of this year if I remember the date correctly. It was originated by PRIME1, as I recall, and had a subject mentioning the new GPU client. That thread is what got me to buy my first new GPU since 2005, and, for the first time in a very long time, one that was in the top half in performance. Then look for an ATI thread to see what is said about those cards (if there is a separate thread for them). I think there was comment about them in the "New GPU client" thread after a number of posts.

It may be that ATI cards require more CPU time. I'm not sure, but there is probably an answer on this forum somewhere already.