Having a few problems rendering my scene to another texture and then drawing that texture to the screen. My aim is to eventually perform post-processing on the texture to provide effects such as night vision.

This is drawing a garbled image to the screen (although in the right location). Am I missing any steps or overlooking anything? Can this work using a sprite, or do I need to somehow create a textured quad and align it with the camera view? If so... how? Or will that still present the same garbled texture to the screen?

At the moment I have a red background with a slightly offset sprite of random colours.

Well, you said you want to do some post-processing on the target after it receives the scene render, so yes, you should go for a screen-aligned textured quad. I do not know much about sprites, but in my opinion it is wrong to use them here. The usual approach is to bind your pixel shader, bind the quad, bind the render target as a texture, and draw the indexed primitive to the back buffer, and be done. Remember that the pixel shader interpolates all per-pixel input, which may not hold (I guess) for sprites, as they are a rather deprecated abstraction of one particular operation (I guess).
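A minimal sketch of the screen-aligned quad, assuming Direct3D 9 and pre-transformed (XYZRHW) vertices. The vertex generation below is plain C++; the D3D calls are shown only as comments. Note the 0.5-pixel offset: D3D9's texel-to-pixel mapping shifts sampling by half a pixel, and forgetting it is a classic cause of a blurred or shifted (even garbled-looking) full-screen texture.

```cpp
#include <cassert>

// Vertex layout matching FVF D3DFVF_XYZRHW | D3DFVF_TEX1:
// a pre-transformed screen-space position plus one set of UVs.
struct QuadVertex {
    float x, y, z, rhw;  // screen-space position (no vertex transform applied)
    float u, v;          // texture coordinates covering the whole target
};

// Fill a full-screen quad laid out as a two-triangle strip.
// The -0.5f compensates for Direct3D 9's half-pixel offset so that
// each texel of the render target maps exactly onto one screen pixel.
void MakeFullscreenQuad(QuadVertex out[4], float width, float height) {
    const float o = 0.5f;  // D3D9 half-pixel offset
    out[0] = { -o,        -o,         0.0f, 1.0f, 0.0f, 0.0f };  // top-left
    out[1] = { width - o, -o,         0.0f, 1.0f, 1.0f, 0.0f };  // top-right
    out[2] = { -o,        height - o, 0.0f, 1.0f, 0.0f, 1.0f };  // bottom-left
    out[3] = { width - o, height - o, 0.0f, 1.0f, 1.0f, 1.0f };  // bottom-right
}

// Drawing it would then look roughly like this (device/texture names are
// placeholders, not from the question):
//   device->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
//   device->SetTexture(0, pRenderTexture);
//   device->SetPixelShader(postProcessShader);
//   device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(QuadVertex));
```

Because the positions are already in screen space, no view or projection matrix is involved, so there is nothing to keep "aligned with the camera" by hand.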

Using a sprite to view a back buffer will work; that is how I checked my shadow buffer when writing that code. I would suspect that the buffer itself might be garbled. Try writing it directly to the back buffer and see what you get.

Well, there is a parameter pRenderTexture, so make sure you can actually issue a shader and multiple textures between the begin/end calls. The parameter makes me think that you cannot? And since deferred processing vitally needs more textures (G-buffers of other data and such), you should take the aligned-quad approach, in case you find out that Sprite::Draw is just deprecated fixed-function stuff, as I think it is. Inspect it on your own, but I would rather invest the energy in the aligned quad, which, if done properly, will never restrict you in anything.
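For reference, the per-frame sequence the answers describe would look roughly like this outline (the names are assumptions, not from the question; the calls mirror the Direct3D 9 API):

```text
once, at startup:
  create a texture with D3DUSAGE_RENDERTARGET
  grab its top-level surface via GetSurfaceLevel(0)

every frame:
  1. SetRenderTarget(0, renderSurface)   // redirect drawing into the texture
  2. Clear and draw the scene as usual
  3. SetRenderTarget(0, backBuffer)      // restore the real target
  4. SetTexture(0, pRenderTexture)       // bind the result as shader input
  5. SetPixelShader(postProcessShader)   // e.g. the night-vision effect
  6. draw the screen-aligned quad
  7. SetPixelShader(NULL), then Present
```

If the output is still garbled at step 6, the problem is most likely in steps 1 and 2 (the scene never reached the texture cleanly), which is exactly what writing the target straight to the back buffer would reveal.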