That theory would suggest that once one core is free, one can run as many GPU instances as one would like, at least until the freed core fills up.

Not quite, for the same reason: if the GPU app requires CPU servicing while the CPU is busy with processing from another GPU app instance, you still get a slowdown. So more free cores would be required in multi-GPU or multiple-tasks-per-GPU configs.
(Need to note that this slowdown, at least for ATi cards, has a statistical character. That is, from time to time a whole GPU AP task can be completed on a fully loaded CPU without any slowdown; but at other times the slowdown is so big that the task can even be aborted by BOINC.)
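The "one free core per concurrently fed GPU instance" point can be sketched with a toy model. All rates and the contention penalty below are hypothetical illustrations, not measurements, and as noted above the real slowdown is statistical rather than fixed:

```python
# Toy model: a GPU app instance runs at full speed only while a CPU core
# is free to service it; instances beyond the free-core count contend with
# CPU work and slow down. All figures are hypothetical.

def expected_gpu_rate(gpu_instances, free_cores,
                      full_rate=1.0, contended_rate=0.5):
    """Average per-instance rate when instances share the free cores."""
    if gpu_instances <= free_cores:
        return full_rate
    serviced = free_cores                    # fed at full speed
    starved = gpu_instances - free_cores     # waiting on a busy CPU
    return (serviced * full_rate + starved * contended_rate) / gpu_instances

# One free core is enough for one GPU instance...
print(expected_gpu_rate(1, 1))   # 1.0
# ...but two instances sharing one free core slow down on average.
print(expected_gpu_rate(2, 1))   # 0.75
```

This is only a sketch of why extra free cores matter in multi-GPU or multiple-tasks-per-GPU configs, not a model of the actual driver behavior.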

The relative performance differences are mitigated by the requirement of freeing a core. This card is getting 700 credits/hour on Astropulse, vs. 550/hour for a GTS450. However, it requires freeing up a core, and one CPU core on Astropulse generates 105/hour. So the net is 700 - 105 = 595, and the gain over the GTS450 is really only 45 credits/hour - an 8% boost, where the BOINC benchmarks would suggest a factor of 4 (2400 GFlops vs. 600).
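The arithmetic above, using the post's own figures, can be checked directly:

```python
# Credits/hour figures quoted in the post.
card = 700       # this card on Astropulse, with one core freed
gts450 = 550     # GTS450
cpu_core = 105   # one CPU core running Astropulse

net = card - cpu_core    # what the card delivers after forfeiting a core
gain = net - gts450      # advantage over the GTS450
boost = gain / gts450    # relative boost

print(net, gain, round(boost * 100, 1))  # 595 45 8.2
```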

Could you describe how these numbers were received? (What configs were compared, exactly?) Especially the credits/hour for the CPU. It's a known fact that multicore CPU performance (and even more so multicore with HT) doesn't scale linearly with the number of loaded cores. That is, the aggregate credit per hour for 4 tasks running at once on a 4-core CPU (for example) is less, and sometimes much less, than 4 times the credit per hour of a single task running alone on that CPU. In other words, by freeing 1 CPU core one not only speeds up (sometimes greatly speeds up) the GPU, one also speeds up the remaining CPU tasks as well.
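The non-linear scaling argument can be illustrated with hypothetical per-task rates (illustrative numbers only, not measurements from any real config):

```python
# Hypothetical credits/hour per task on a 4-core CPU. Tasks run slower
# when all cores are loaded (shared caches, memory bandwidth; with HT,
# shared execution units make the effect stronger).
rate_alone = 120     # one task running, other cores idle
rate_4_loaded = 100  # per task, all 4 cores busy
rate_3_loaded = 110  # per task, only 3 cores busy (hypothetical)

aggregate_4 = 4 * rate_4_loaded    # 400 total
naive_linear = 4 * rate_alone      # 480: what linear scaling would predict

# Freeing one core costs less than one "fully loaded" task's rate,
# because the remaining 3 tasks speed up toward rate_alone:
aggregate_3 = 3 * rate_3_loaded                 # 330 total
cost_of_freeing = aggregate_4 - aggregate_3     # 70, not 100

print(aggregate_4, naive_linear, cost_of_freeing)  # 400 480 70
```

So the 105 credits/hour attributed to the freed core may overstate its true cost, which strengthens the case for freeing it.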
Further, what config allows running the OpenCL NV AP (you compare credits for AP here, not CUDA MB) without freeing a CPU core? AFAIK good performance is only possible with the 26x.xx drivers, where CPU core freeing indeed isn't required for NV (though, as Claggy has pointed out a few times, overall GPU performance with the 26x.xx drivers is lower than with recent drivers plus a freed core). Where were the other data published?