If I rendered to a texture during one frame and then wanted to read from that texture, would I take a big performance hit? If so, what about if I had two render targets and I rendered to one of them during a frame while reading from the other, in a flip-flop fashion; would I still incur an "across the bus" penalty?

I'm not sure what you mean by "big performance hit", but writing to a texture and then reading from it in a later pass is a very common operation that a GPU can certainly handle. In general the performance will be dictated by the bandwidth available to the GPU, so higher-end GPUs will take less of a hit. Switching render targets every frame won't save you any bandwidth; you'll just end up using more memory.
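To make the "flip flop" scheme from the question concrete, here is a minimal sketch using plain Python lists as stand-in render targets. The names (`make_target`, `frame`) and the tiny buffer size are illustrative only, not any real graphics API; the point is just the even/odd index swap.

```python
# Ping-pong ("flip-flop") render targets: each frame writes one target and
# reads the one written during the previous frame. Lists stand in for
# textures; this only demonstrates the indexing scheme.

WIDTH = 4  # tiny "render target" for illustration

def make_target():
    return [0] * WIDTH

targets = [make_target(), make_target()]

def frame(n):
    write = targets[n % 2]        # target rendered to this frame
    read = targets[(n + 1) % 2]   # target written last frame, read this frame
    for i in range(WIDTH):
        write[i] = n              # "render": fill with the frame number
    return read

# Frame 0 writes targets[0]; frame 1 reads that data while writing targets[1].
frame(0)
result = frame(1)  # contents written during frame 0
```

As the answer notes, this avoids a same-frame write/read dependency but costs a second target's worth of memory without saving any bandwidth.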

I have been writing a software rasterizer to do some occlusion culling, and it seems to work ok(ish), but then I thought it might be quicker to render my occlusion polygons (using different block colours) to a same-sized render target (256 pixels wide), then lock the texture to get access to the data, loop through each pixel adding its colour to a palette, and then index that palette for visibility.
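The CPU side of that readback scheme can be sketched like this; the buffer contents and occluder IDs below are invented for illustration, and a flat list of colour values stands in for the locked texture data.

```python
# Each occluder is drawn in a unique flat colour; after locking the render
# target, one pass over the pixel data collects every colour that survived
# the depth test into a "palette". Visibility is then a palette lookup.

def visible_occluders(pixels):
    """Scan the rendered pixel data once; return the set of colours present."""
    palette = set()
    for colour in pixels:
        palette.add(colour)
    return palette

# Pretend buffer (0 = background): occluders 1 and 3 left visible pixels,
# occluder 2 was fully occluded so its colour never appears.
buffer = [0] * 200 + [1] * 30 + [3] * 26

palette = visible_occluders(buffer)
is_visible = {obj_id: obj_id in palette for obj_id in (1, 2, 3)}
```

The scan itself is cheap; as the answer below explains, the real cost is in when you lock the texture relative to the GPU's progress.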

The penalty incurred by GetRenderTargetData will depend on the amount of latency between the CPU and the GPU. Usually, commands are buffered in such a way that the GPU runs one or more frames behind the CPU, i.e. you submit a draw call on the CPU, and the GPU might not actually act on that command for 30ms or more. If you ask for some results immediately after asking the GPU to generate those results, the CPU is forced to stall, sitting around waiting for the GPU to catch up and do the work.

If you do this every now and then, you'll likely get a huge stall, as there could be 30ms+ of latency in the queue. If you do this every single frame, the stalls will be smaller, as the CPU won't have had a chance to 'get ahead' of the GPU by too much. However, the downside here is that when there is not much latency between the CPU and GPU, the GPU can run out of work to do, and it ends up stalling instead.

E.g. say that every frame the CPU submits 4 different bits of work for the GPU to do, and then does all the other stuff required for a frame. A normal situation looks like the left side of the diagram. A situation where the CPU/GPU are forced to be in sync looks like the middle columns; without that extra latency, there's much more chance that the GPU is going to end up waiting on the CPU to feed it more work. A situation where the CPU wants to read back the results of the first draw before continuing looks like the right side: the CPU has to wait on the GPU to finish this work, and then the GPU also suffers a stall while it waits for the next task from the CPU to arrive.
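The stall behaviour described above can be modelled with a toy timeline. This is a simplification I'm assuming for illustration (fixed per-frame CPU and GPU costs, a single in-order command queue; real drivers are far more complex): it compares the total time for a run of frames when the CPU is allowed to run ahead versus when it reads results back every frame.

```python
# Toy model of CPU/GPU pipelining. The GPU can only start a frame's work
# once the CPU has submitted it; a per-frame readback forces the CPU to
# wait until the GPU has finished before starting the next frame.

def run_frames(n_frames, cpu_ms, gpu_ms, readback_every_frame):
    cpu_time = 0.0   # when the CPU finishes submitting each frame's work
    gpu_time = 0.0   # when the GPU finishes executing each frame's work
    for _ in range(n_frames):
        cpu_time += cpu_ms                           # CPU builds + submits commands
        gpu_time = max(gpu_time, cpu_time) + gpu_ms  # GPU starts once work arrives
        if readback_every_frame:
            cpu_time = gpu_time                      # CPU stalls waiting on results
    return max(cpu_time, gpu_time)

# GPU-bound case: with buffering, frames cost roughly gpu_ms each once the
# pipeline fills; with a readback every frame, each costs cpu_ms + gpu_ms.
buffered = run_frames(10, cpu_ms=5, gpu_ms=10, readback_every_frame=False)
synced = run_frames(10, cpu_ms=5, gpu_ms=10, readback_every_frame=True)
```

In this model the synced run is strictly slower because neither processor ever overlaps its work with the other, which is the situation the right-hand side of the diagram describes.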