So, I've been trying to figure out how to "get the most" out of my videocard. I ran the Lunatics optimized app, though I'm not sure it really made a difference. I'm also running a GTX 295 for GPU processing, on drivers 197.13, which work well for me.

OK, here's what I figured out today. If you monitor the GPU frequencies on 200-series videocards, you'll see they downclock for power reduction when not rendering 3D applications. I also frequent several other forums and found that some people hit this downclocking problem when gaming, and there is a way to force higher clocks without running a 3D application.

RivaTuner.... Here it is. My core clock was maxing out at 400MHz, which is the standard 2D clock rate for my card. After running RivaTuner and making a few modifications, I'm now running at 612MHz for number crunching, which should give me roughly a 50% increase on work units from this card. I tried EVGA Precision and MSI Afterburner, which are both great applications for 3D work, but they don't affect the downclock that happens when no 3D application is running.
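For a rough sanity check on that 50% figure, here's the back-of-the-envelope math, assuming work-unit throughput scales linearly with core clock (an approximation; shader and memory clocks matter too):

```python
# Rough throughput estimate, assuming work-unit speed scales
# linearly with core clock (an approximation; shader and memory
# clocks also contribute).
clock_2d = 400   # MHz, the stock 2D clock the card was stuck at
clock_oc = 612   # MHz, the clock forced via RivaTuner

speedup = clock_oc / clock_2d
print(f"Expected speedup: {speedup:.2f}x "
      f"({(speedup - 1) * 100:.0f}% more work units)")
# -> Expected speedup: 1.53x (53% more work units)
```

So "50%" is actually slightly conservative if the linear-scaling assumption holds.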

I recommend checking out GPU-Z if you don't know what your core clock is running at, to see whether it's running at its max speed. Also, here is the RivaTuner GPU downclocking prevention thread to guide you through the difficult RivaTuner interface (it's not user friendly, in my opinion).

If you have problems keeping your temps down, I do not recommend making the videocard work any harder. I'm lucky enough to have good cooling, but even with watercooling my GPU temps rose 4° when I upclocked the GPU.

Try scrolling down to core clock on the graph. For me, it showed the core number to be higher than what the core clock was actually running at. Here is a screenshot. The deal is that in EVGA Precision, when you set the core clock you're actually setting the 3D clock; if the card isn't stressed enough, it will stay downclocked. My GTX 295's stock timings are 300MHz at idle, 400MHz for 2D rendering, and 576MHz for 3D.
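The way the driver picks a clock can be sketched as a simple state lookup. The clock values below are my GTX 295's stock timings from above; the state names are just illustrative labels, not NVIDIA's actual performance-state names:

```python
# Illustrative sketch of driver clock selection. Clock values are
# GTX 295 stock timings; the state names are made up for clarity,
# not NVIDIA's real P-state labels.
STOCK_CLOCKS_MHZ = {
    "idle": 300,
    "2d": 400,
    "3d": 576,
}

def active_clock(detected_load: str) -> int:
    """Return the core clock the driver selects for a detected load state."""
    return STOCK_CLOCKS_MHZ[detected_load]

# A GPGPU crunching app often isn't detected as 3D load, so it gets
# stuck at the 2D clock -- that's the downclocking problem:
print(active_clock("2d"))  # -> 400, what crunching gets by default
print(active_clock("3d"))  # -> 576, what a game would get
```

This is why setting the "core clock" in EVGA Precision doesn't help: you're raising the 3D value while the card sits in the 2D state.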

If SETI@Home is anything like other GPGPU applications, it will benefit mainly from increasing the shader clock, without the core or memory clocks mattering that much. In fact, you can make good power/heat savings by increasing the shader clock and *decreasing* the core/memory clocks.
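As a back-of-the-envelope illustration of that tradeoff, here's a crude model that treats dynamic power as roughly proportional to each clock domain's frequency, equally weighted. That's a big simplification (voltage changes matter far more in practice), and the "tuned" numbers are made up, not measured; the stock clocks are the GTX 295's reference specs:

```python
# Crude power model: dynamic power taken as proportional to frequency,
# with the three clock domains weighted equally. A simplification --
# voltage scaling dominates in reality. "Tuned" values are illustrative.
def relative_power(core, shader, mem, core0=576, shader0=1242, mem0=999):
    """Power relative to stock (1.0 = stock GTX 295 clocks)."""
    return (core / core0 + shader / shader0 + mem / mem0) / 3

stock = relative_power(576, 1242, 999)   # reference clocks
tuned = relative_power(500, 1400, 900)   # shader up, core/mem down
print(f"stock: {stock:.2f}, tuned: {tuned:.2f}")
```

Under this toy model, raising the shader clock while dropping core/memory lands you slightly below stock power, while the shader-bound crunching gets faster.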