
I also think it should be implemented with a pixel shader, but a SIGGRAPH 2002 sketch by Jason L. Mitchell, "Real-Time Image-Space Outlining for Non-Photorealistic Rendering", says: "For the cases of silhouette and crease outlining, we render the full scene's world-space normals and eye-space depths into an RGBA (nx_world, ny_world, nz_world, depth_eye) texture, using a vertex shader to populate the color channels."

I don't quite understand why they used a vertex shader. Could someone explain it to me? Thanks.


Quote: Original post by zedzeek
i fail to see why u *need* a vertex or fragment shader at all

You don't need a fragment shader, but you do need a vertex shader, since the goal is to encode the values (normx, normy, normz, eyedepth) into the RGBA channels of the buffer. Those values are output into the primary color per vertex and then interpolated across the triangles, filling the buffer with the correct values. You need either a vertex shader for that, or you have to compute those values yourself in software and pass them with glColor4f. Obviously it's best to use a vertex shader.
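To make this concrete, here is a minimal sketch of such a vertex shader. The original sketch predates GLSL (it used the low-level vertex shader model of the time), so this is a GLSL rendition under assumed uniform names (uModel, uModelView, uDepthScale are mine, not from the paper):

```glsl
uniform mat4 uModel;        // object -> world (assumed name)
uniform mat4 uModelView;    // object -> eye (assumed name)
uniform float uDepthScale;  // e.g. 1.0 / farPlane, to bring eye depth into [0, 1]

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // World-space normal (assumes uModel has no non-uniform scale)
    vec3 nWorld = normalize(mat3(uModel) * gl_Normal);

    // Eye-space depth: negate z so it is positive in front of the camera
    float zEye = -(uModelView * gl_Vertex).z;

    // Pack normals from [-1, 1] into [0, 1] and depth into [0, 1],
    // written to the primary color so the rasterizer interpolates them
    gl_FrontColor = vec4(0.5 * nWorld + 0.5, zEye * uDepthScale);
}
```

The fragment stage can then be fixed-function: the interpolated primary color is written straight into the RGBA buffer.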

Note that eyedepth is not the value stored in the depth buffer; it is just the z coordinate of the vertex position in eye space (properly scaled down, I assume, so it can be packed into the [0,1] range).
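The packing itself is just two linear remaps. A small sketch, assuming a linear depth scale between hypothetical near/far distances (the paper doesn't specify the exact scaling):

```python
def pack_normal_depth(nx, ny, nz, z_eye, z_near, z_far):
    """Pack a world-space normal and an eye-space depth into RGBA in [0, 1]."""
    # Normal components go from [-1, 1] to [0, 1]
    r = 0.5 * nx + 0.5
    g = 0.5 * ny + 0.5
    b = 0.5 * nz + 0.5
    # Eye-space depth (positive distance in front of the camera)
    # linearly scaled into [0, 1] between the near and far planes
    a = (z_eye - z_near) / (z_far - z_near)
    return r, g, b, a
```

A normal pointing straight at the camera at the near plane packs to (0.5, 0.5, 1.0, 0.0), which is why flat, nearby surfaces produce no edges in the later outlining pass.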

Anyway, you can use render-to-texture extensions to render straight into a texture, but they're a pain in the ass. glCopyTexSubImage2D will work fine unless your program is very performance demanding.
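The copy path is just two calls after the normal/depth pass; a sketch, assuming the texture was already allocated at framebuffer size with glTexImage2D (tex, width, height are hypothetical names):

```c
/* Copy the just-rendered framebuffer contents into the bound texture.
 * Arguments: target, mip level, x/y offset into the texture,
 * x/y origin in the framebuffer, width, height. */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```

The edge-detection pass then samples this texture and compares neighboring normals and depths.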