We rendered a particle system of "heat" particles into a texture target. During the final compositing, we simply used the (red, green) values of each "heat render target" pixel as a per-pixel (u, v) texture-coordinate displacement during the texel fetch of the "rendered scene" texture target.
[/quote]
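For reference, the quoted fetch can be sketched CPU-side. This is a plain-Python illustration, not the engine's actual code; `composite`, the `strength` scale, and the (r, g) → signed-offset remap are all my assumptions:

```python
def composite(scene, heat, width, height, strength=0.05):
    """Displace each scene fetch by the heat buffer's (red, green) values.

    scene, heat: 2D lists of (r, g, b) tuples indexed [y][x], channels in [0, 1].
    """
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            r, g, _ = heat[y][x]
            # Remap (red, green) from [0, 1] to a signed offset, scaled by strength.
            du = (r - 0.5) * 2.0 * strength
            dv = (g - 0.5) * 2.0 * strength
            # Convert the normalized offset to texel coordinates and clamp to the edges.
            sx = min(width - 1, max(0, int(x + du * width)))
            sy = min(height - 1, max(0, int(y + dv * height)))
            row.append(scene[sy][sx])
        out.append(row)
    return out
```

A heat pixel of (0.5, 0.5) means "no displacement", so an empty heat buffer leaves the scene untouched.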

What I am wondering is why you think they render into a separate "heat render target". I have a forward pass that renders refractive materials like glass, and it seems to me I could render the "heat" particle system during this pass.


The problem is that you can't (except maybe on the latest hardware) read from and write to the same render target. The distortion effect needs to read back texels from the source image, so you can't render it directly into that same render target.
Edited September 20, 2012 by Ashaman73


If you render out the distortion amount first, it lets you use a cheaper shader that doesn't sample a render target. Then you just have one pass where you sample a render target. This might be a big deal if there was a lot of overdraw in their particles. Plus it would have allowed them to accumulate the distortion at a lower resolution if they'd wanted to.
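The lower-resolution idea can be sketched as a half-res distortion buffer that gets bilinearly upsampled at composite time. This is my own illustration (1D for brevity, names hypothetical), not anything from the original engine:

```python
def sample_half_res(buf, u):
    """Bilinearly sample a 1D low-res distortion buffer at normalized u in [0, 1]."""
    pos = u * (len(buf) - 1)
    i = int(pos)
    frac = pos - i
    j = min(i + 1, len(buf) - 1)
    # Linear blend between the two nearest low-res texels.
    return buf[i] * (1.0 - frac) + buf[j] * frac
```

Since distortion offsets are smooth and low-frequency, the bilinear upsample is usually indistinguishable from accumulating at full resolution, at a quarter of the fill cost in 2D.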


Think of the heat effect as just one type of scene distortion which uses the current backbuffer as a source texture. There's also frosted/rippled glass, raindrops, ice, and so on. These materials should be drawn last, so our approach is to resolve the current backbuffer into a same-sized texture, then sample it with a screen projection. Shaders which use this texture don't do any lighting, since it's already pre-lit. It works nicely for a lot of materials and doesn't cause any serious performance issues.
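The "sample it with a screen projection" step boils down to turning a fragment's clip-space position into a UV into the resolved backbuffer texture. A minimal sketch of the usual remap, assuming a convention where texture V runs downward (the Y flip is API-dependent; the function name is mine):

```python
def screen_uv(clip_x, clip_y, clip_w):
    # Perspective divide: clip space -> normalized device coordinates in [-1, 1].
    ndc_x = clip_x / clip_w
    ndc_y = clip_y / clip_w
    # Remap NDC [-1, 1] to UV [0, 1]; flip V for top-left-origin textures.
    u = ndc_x * 0.5 + 0.5
    v = 0.5 - ndc_y * 0.5
    return (u, v)
```

The distortion materials then add their per-pixel offset to this UV before fetching the pre-lit scene, exactly as in the heat effect.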