Unfortunately, there isn't any way to limit GPU usage the way you can limit CPU usage. The reason is that current GPUs don't have any kind of preemptive scheduling system, unlike the CPU, whose scheduling is managed by the operating system. That could theoretically change in the future with software advances, but for now you can either fold on the GPU (either while you are using it or only when the system is idle) or not fold on it at all.

The CPU usage bar is for uniprocessor systems, which effectively do not exist anymore, because even single-processor work is now handled by an SMP core. If you want to limit the CPU, you can manually choose how many CPU threads to use (in the Slots tab), which sets the number of CPU cores the client folds on. With a quad core that gives you a granularity of 25%, and with eight threads (including HT) a granularity of 12.5%, as illustrated below.
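As a rough sketch of that granularity arithmetic (the thread counts below are just example values, not anything read from the client):

```python
# Sketch: CPU-slot granularity is simply 100% divided by the number of
# hardware threads, since the Slots tab only lets you assign whole threads.
def cpu_slot_granularity(total_threads: int) -> float:
    """Smallest step (in percent of total CPU) you can adjust a CPU slot by."""
    return 100.0 / total_threads

# Example values only: a quad core and an eight-thread (4C/8T with HT) part.
for threads in (4, 8):
    print(f"{threads} threads -> {cpu_slot_granularity(threads):.1f}% per thread")
# 4 threads -> 25.0% per thread
# 8 threads -> 12.5% per thread
```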

I've been away from Folding@home (I am on team 196148) for a while, but I'm sure I remember being able to limit GPU usage in the past. For me this is a heat-management issue as well as simply wanting to be able to use the PC while folding. Is it possible to go back to an older version that allows GPU usage management?

There was an old v7 client that tried GPU limiting, but it didn't work as expected (or very well at all), so it was removed. What it did was run the GPU at 100% for an interval of time and then at 0% for an interval of time. The problem was that it could not switch between those intervals fast enough to be useful, so all it really did was power/heat-cycle the GPU, and lots of hot/cold power cycles are actually harmful.

As has already been said, the GPU has no concept of priority, time slices, or any of the other features that have been used to limit CPU usage. Even with all those features, modulating CPU usage doesn't work very well except by limiting the number of CPU tasks so they are assigned to fewer CPUs than you have.

Now consider your GPU. Assign it a block of work that it can complete in X µs. It works as hard as possible to complete that block of work. By refusing to give it more work until X+Y µs have elapsed, you might be able to limit the average throughput, but as P5-133XL suggests, you'll just be averaging busy time with idle time. Now consider somebody who has a faster or slower GPU: any values chosen for X or Y will give different results, because the two GPUs can process the GFLOPS in that work packet at different rates. Then, too, the GPU is also accepting work from other sources (such as a screen refresh, video decode, or whatever), which tends to fill in the idle times, if there are any, with productive work, so the average can drift.
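To make that concrete, here is a small sketch of the duty-cycle arithmetic. The work-packet size and GPU speeds are made-up numbers, purely to show that the same idle interval Y produces very different average utilization depending on how fast the GPU finishes the packet:

```python
# Sketch of why fixed busy/idle intervals behave differently on different GPUs.
# A work packet takes X microseconds to finish; the client then withholds work
# for Y microseconds. Average utilization is X / (X + Y), but X depends on how
# fast the GPU is, so the same Y gives different averages.

WORK_GFLOP = 500.0   # size of one work packet (made-up number)
IDLE_US = 2000.0     # Y: enforced idle time after each packet (made-up number)

def average_utilization(gpu_gflops_per_us: float) -> float:
    busy_us = WORK_GFLOP / gpu_gflops_per_us   # X: time to finish the packet
    return busy_us / (busy_us + IDLE_US)

for name, speed in [("slow GPU", 0.1), ("fast GPU", 0.5)]:
    print(f"{name}: {average_utilization(speed):.0%} average utilization")
# slow GPU: 71% average utilization
# fast GPU: 33% average utilization
```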

In effect, you can turn the GPU off thermally, or you can turn it off until the system is stable, but actually modulating the processing is essentially impossible. And repeated on/off cycles at a rate of 1/(X+Y) aren't good for your GPU.