3 Rendering Polygons
- Often need to render triangle meshes
- Ray tracing works well for implicitly defined surfaces
- Many existing models and modeling apps are based on polygon meshes – can we render them by ray tracing the polygons?
  - Easy to do: ray-polygon intersection is a simple calculation (use barycentric coords; see the sketch below)
  - Very inefficient: it's common for an object to have thousands of triangles, and therefore for a scene to have hundreds of thousands or even millions of triangles – each needs to be considered in intersection tests
- The traditional hardware pipeline is more efficient for many triangles:
  - Process the scene polygon by polygon, using a "z-buffer" to determine visibility
  - Local illumination model
  - Use a crude interpolation shading approximation to compute the color of most pixels – fine for small triangles
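For concreteness, here is a minimal GLSL-style sketch of that ray-triangle test using barycentric coordinates (the classic Möller–Trumbore formulation); the function name and signature are illustrative, not from the lecture.

```glsl
// Hedged sketch (not from the slides): ray-triangle intersection via
// barycentric coordinates. Returns t < 0.0 if there is no hit; otherwise
// t is the distance along the ray to the intersection point.
float intersectTriangle(vec3 rayOrigin, vec3 rayDir, vec3 v0, vec3 v1, vec3 v2)
{
    vec3 e1 = v1 - v0;
    vec3 e2 = v2 - v0;
    vec3 p  = cross(rayDir, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-8) return -1.0;        // ray parallel to triangle plane
    float invDet = 1.0 / det;
    vec3 s = rayOrigin - v0;
    float u = dot(s, p) * invDet;            // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return -1.0;
    vec3 q = cross(s, e1);
    float v = dot(rayDir, q) * invDet;       // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return -1.0;
    return dot(e2, q) * invDet;              // t along the ray (negative means behind the origin)
}
```

The test itself is cheap; the inefficiency comes from running it against every triangle in the scene for every ray, which is exactly what motivates the z-buffer pipeline above.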

6 Programmable Shader Based Pipeline
- Allows the programmer to fine tune and optimize various stages of the pipeline
- This is what we have used in labs and projects this semester
- Shaders are fast! (They use multiple ultra-fast ALUs, or arithmetic logic units.) Using shaders, all the techniques mentioned in this lecture can be done in real time
- For example, physical phenomena such as shadows and light refraction can be emulated using shaders
- The image on the right is rendered with a custom shader. Note that only the right image has realistic shadows. A normal map is also used to add detail to the model.

10 Shading Models Review (4/6)
- Gouraud shading can miss specular highlights because it interpolates vertex colors instead of calculating intensity directly at each point, or even interpolating vertex normals (Phong shading)
- N_a and N_b would cause no appreciable specular component, whereas N_c would, with the view ray aligned with the reflection ray. Interpolating between I_a and I_b misses the highlight that evaluating I at c using N_c would catch
- Phong shading (see the per-pixel sketch below):
  - The interpolated normal comes close to the actual normal of the true curved surface at a given point
  - Reduces the temporal "jumping" effect of the highlight, e.g., when rotating a sphere during animation (example on next slide)
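A minimal sketch of Phong shading in a fragment shader, assuming illustrative uniform and varying names (not the course's actual code): the normal is interpolated per fragment and the lighting model is evaluated per pixel, so the highlight at c is not missed.

```glsl
#version 330 core
// Hedged sketch: Phong shading (per-pixel normal interpolation) combined with
// the Phong lighting model. All names here are illustrative placeholders.
in vec3 worldPos;
in vec3 worldNormal;        // interpolated vertex normal: the key difference from Gouraud
uniform vec3 lightPos, eyePos;
uniform vec3 ka, kd, ks;    // ambient, diffuse, specular coefficients
uniform float shininess;
out vec4 fragColor;

void main() {
    vec3 N = normalize(worldNormal);                 // re-normalize after interpolation
    vec3 L = normalize(lightPos - worldPos);
    vec3 V = normalize(eyePos - worldPos);
    vec3 R = reflect(-L, N);                         // mirror reflection of the light direction
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(R, V), 0.0), shininess);
    fragColor = vec4(ka + kd * diff + ks * spec, 1.0);   // evaluated per pixel, so highlights aren't missed
}
```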

14 Mipmapping
- Choosing a texture map resolution:
  - Want high resolution textures for nearby objects
  - But high resolution textures are inefficient for distant objects
- Simple idea: mipmapping
  - MIP: multum in parvo, "much in little"
  - Maintain multiple texture maps at different resolutions
  - Use lower resolution textures for objects further away
- Example of "level of detail" (LOD) management common in CG
- Figure: mipmap for a brick texture
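As a small GLSL sketch of how this looks at sampling time (names are illustrative): texture() lets the hardware pick the mip level from screen-space derivatives of the texture coordinates, while textureLod() selects a level explicitly, which can be useful for visualizing or overriding LOD.

```glsl
#version 330 core
// Hedged sketch: automatic vs. explicit mip level selection when sampling.
uniform sampler2D brickTex;
in vec2 uv;
out vec4 fragColor;

void main() {
    vec4 autoLod   = texture(brickTex, uv);           // hardware-chosen mip level (based on UV derivatives)
    float level    = 3.0;                             // e.g., force the 1/8-resolution level
    vec4 forcedLod = textureLod(brickTex, uv, level); // explicit LOD, e.g., for debugging
    fragColor = autoLod;                              // swap in forcedLod to inspect a specific level
}
```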

16 Shadows (2/5) – More Advanced
- For each light L
    For each point P in scene
        If P is in shadow cast by L   // how to compute?
            Only use indirect lighting (e.g., ambient term for Phong lighting)
        Else
            Evaluate full lighting model (e.g., ambient, diffuse, specular for Phong)
- Next: different methods for computing whether P is in shadow cast by L
- Image: stencil shadow volumes implemented by former cs123 TA and recent Ph.D. Kevin Egan and former PhD student and book co-author Prof. Morgan McGuire, on an nVidia chip
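The same pseudocode, written as a GLSL sketch: isInShadow() is a stub standing in for whichever technique the next slides use (shadow volumes or shadow maps), and all names are illustrative rather than from the course code.

```glsl
// Hedged sketch of the per-light shadow decision in shader form.
const int NUM_LIGHTS = 2;
uniform vec3 lightPos[NUM_LIGHTS];
uniform vec3 eyePos;
uniform vec3 ka, kd, ks;
uniform float shininess;

bool isInShadow(vec3 P, int i) { return false; }   // stub: replace with a real shadow test

vec3 shade(vec3 P, vec3 N) {
    vec3 color = vec3(0.0);
    for (int i = 0; i < NUM_LIGHTS; ++i) {
        color += ka;                                // indirect (ambient) term, always added
        if (!isInShadow(P, i)) {                    // full lighting model only when lit
            vec3 L = normalize(lightPos[i] - P);
            vec3 R = reflect(-L, N);
            vec3 V = normalize(eyePos - P);
            color += kd * max(dot(N, L), 0.0)
                   + ks * pow(max(dot(R, V), 0.0), shininess);
        }
    }
    return color;
}
```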

17 Shadows (3/5) – Shadow Volumes
- For each light + object pair, compute a mesh enclosing the area where the object occludes the light
  - Find the silhouette from the light's perspective
    - Every edge shared by two triangles, such that one triangle faces the light source and the other faces away
    - On the torus: where the angle between the normal vector and the vector to the light becomes >90°
  - Project the silhouette along light rays
  - Generate triangles bridging the silhouette and its projection to obtain the shadow volume
- A point P is in shadow from light L if any shadow volume V computed for L contains P
  - Can determine this quickly using multiple passes and a "stencil buffer"
- More here on Stencil Buffers, Stencil Shadow Volumes
- Figure: example shadow volume (yellow mesh), showing the original silhouette and the projected silhouette
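A tiny sketch of the silhouette-edge test described above, with illustrative names (in practice this runs over the mesh's shared edges on the CPU or in a geometry shader):

```glsl
// Hedged sketch: an edge shared by two triangles is a silhouette edge, from the
// light's point of view, exactly when one face points toward the light and the
// other points away.
bool isSilhouetteEdge(vec3 faceNormal0, vec3 faceNormal1, vec3 pointOnEdge, vec3 lightPos) {
    vec3 toLight = lightPos - pointOnEdge;
    bool facing0 = dot(faceNormal0, toLight) > 0.0;   // first triangle faces the light?
    bool facing1 = dot(faceNormal1, toLight) > 0.0;   // second triangle faces the light?
    return facing0 != facing1;                        // silhouette: one faces, one doesn't
}
```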

18 Shadows (4/5) – Another Multi-Pass Technique: Shadow Maps
- Render the scene using each light as the center of projection, saving only its z-buffer
  - The resulting 2D images are "shadow maps", one per light
- Next, render the scene from the camera's POV
- To determine if a point P on an object is in shadow:
  - Compute the distance d_P from P to the light source
  - Convert P from world coordinates to shadow map coordinates using the viewing and projection matrices used to create the shadow map
  - Look up the minimum distance d_min in the shadow map
  - P is in shadow if d_P > d_min, i.e., it lies behind a closer object
- Figure: shadow map (on right) obtained by rendering from the light's point of view (darker is closer); diagram shows the light, camera, P, d_P, and d_min
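A minimal GLSL sketch of the lookup above for one light, assuming lightViewProj is the view and projection matrix used when the shadow map was rendered; the names and the bias constant are illustrative, not from the course code.

```glsl
// Hedged sketch: shadow-map test for a single light.
uniform sampler2D shadowMap;
uniform mat4 lightViewProj;

bool inShadow(vec3 worldP) {
    vec4 lightClip = lightViewProj * vec4(worldP, 1.0);   // P in the light's clip space
    vec3 ndc = lightClip.xyz / lightClip.w;                // perspective divide
    vec3 uvDepth = ndc * 0.5 + 0.5;                        // map [-1,1] to [0,1] for the texture lookup
    float dMin = texture(shadowMap, uvDepth.xy).r;         // closest depth the light saw along this direction
    float dP = uvDepth.z;                                  // this point's depth from the light
    return dP > dMin + 0.005;                              // small bias avoids self-shadowing "acne"
}
```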

20 Environment Mapping (for specular reflections) (1/2)
- Approximate reflections by creating a skybox, a.k.a. environment map, a.k.a. reflection map
  - Often represented as six faces of a cube surrounding the scene
  - Can also be a large sphere surrounding the scene, etc.
- To create the environment map, render the entire scene from the center point, one face at a time
  - Can do this offline for static geometry, but must generate at runtime for moving objects
  - Rendering the environment map at runtime is expensive (compared to using a pre-computed texture)
- Can also use photographic panoramas
- Figure: skybox surrounding an object

21 Environment Mapping (2/2)
- To sample the environment map reflection at point P:
  - Compute vector E from P to the eye
  - Reflect E about the normal to obtain R
  - Use the direction of R to compute the intersection point with the environment map
    - Treat P as being the center of the map; equivalently, treat the environment map as being infinitely large
- Figure: eye, E, N, and R at point P
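A minimal GLSL sketch of this lookup with a cube map, assuming illustrative names. Note that GLSL's reflect() expects the incident vector pointing toward the surface, so the eye-to-point vector -E is passed rather than E.

```glsl
#version 330 core
// Hedged sketch: sampling a cube-map environment map for a mirror reflection.
uniform samplerCube envMap;
uniform vec3 eyePos;
in vec3 worldPos;
in vec3 worldNormal;
out vec4 fragColor;

void main() {
    vec3 N = normalize(worldNormal);
    vec3 E = normalize(eyePos - worldPos);       // P-to-eye, as on the slide
    vec3 R = reflect(-E, N);                     // reflected direction
    fragColor = texture(envMap, R);              // direction alone indexes the (infinitely large) map
}
```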

22 Overview: Surface Detail
- Observation: what if we replaced the 3D sphere on the right with a 2D circle?
  - The circle would have fewer triangles (thus renders faster)
  - If we kept the sphere's normals, the circle would still look like a sphere!
- Works because the human visual system infers shape from patterns of light and dark regions ("shape from shading"). Brightness at any point is determined by the normal vector, not by the actual geometry of the model
- Image credit: Dave Kilian, '13

29 Bump Mapping, Another Way to Perturb Normals
- Idea: instead of encoding normals themselves in the map, encode relative heights (or "bumps")
  - Black: minimum height delta
  - White: maximum height delta
  - Much easier to create than normal maps
- How to compute a normal from a height map?
  - Collect several height samples from the texture
  - Convert the height samples to 3D coordinates to calculate the average normal vector at the given point
  - Transform the computed normal from tangent space to object space (and from there into world space)
  - You computed normals like this for a terrain mesh in Lab 4!
- Figures: nearby values in a (1D) bump map; original tangent-space normal = (0, 0, 1); bump map visualized as tangent-space height deltas; transformed tangent-space normal; normal vectors for triangles neighboring a point (each dot corresponds to a pixel in the bump map)
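One common way to turn nearby height samples into a tangent-space normal is a central-difference sketch like the following; texelSize and bumpScale are illustrative parameters, and this is one of several reasonable variants rather than the course's exact method.

```glsl
// Hedged sketch: derive a tangent-space normal from a height (bump) map.
uniform sampler2D heightMap;
uniform vec2 texelSize;      // 1.0 / texture resolution
uniform float bumpScale;     // how strongly height deltas perturb the normal

vec3 bumpNormalTangentSpace(vec2 uv) {
    float hL = texture(heightMap, uv - vec2(texelSize.x, 0.0)).r;   // left neighbor
    float hR = texture(heightMap, uv + vec2(texelSize.x, 0.0)).r;   // right neighbor
    float hD = texture(heightMap, uv - vec2(0.0, texelSize.y)).r;   // below
    float hU = texture(heightMap, uv + vec2(0.0, texelSize.y)).r;   // above
    // Slopes in u and v tilt the unperturbed tangent-space normal (0, 0, 1)
    return normalize(vec3(bumpScale * (hL - hR), bumpScale * (hD - hU), 1.0));
}
// A TBN (tangent, bitangent, normal) matrix then carries this into object/world space.
```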

30 Other Techniques: Displacement Mapping
- Actually move the vertices along their normals by looking up height deltas in a height map
  - Displacing the vertices deforms the mesh, producing different vertex normals because the face normals change
- Unlike bump/normal mapping, this produces correct silhouettes and self-shadowing
- By default, does not provide detail between vertices like normal/bump mapping does
  - To increase the detail level we can subdivide the original mesh
  - Can become very costly since it creates additional vertices
- Figure: displacement map on a plane at different levels of subdivision
- https://support.solidangle.com/display/AFMUG/Displacement
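A minimal vertex-shader sketch of this idea, with illustrative names: each vertex is pushed along its normal by a height sampled from the displacement map, so detail only appears where vertices exist (hence the subdivision discussed above).

```glsl
#version 330 core
// Hedged sketch: displacement mapping in a vertex shader.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 uv;
uniform sampler2D displacementMap;
uniform float displacementScale;
uniform mat4 modelViewProj;

void main() {
    float h = textureLod(displacementMap, uv, 0.0).r;        // height delta at this vertex
    vec3 displaced = position + normal * h * displacementScale;
    gl_Position = modelViewProj * vec4(displaced, 1.0);      // the mesh itself is deformed,
                                                             // so silhouettes and shadows are correct
}
```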

31 Other Techniques: Parallax Mapping (1/2)
- Extension to normal/bump mapping
- Distorts texture coordinates right before sampling, as a function of the normal and eye vector (next slide)
- Example below: looking at a stone sidewalk at an angle
  - Texture coordinates are stretched along the near edges of the stones, which are "facing" the viewer
  - Similarly, texture coordinates are compressed along the far edges of the stones, where you shouldn't be able to see the "backside" of the stones

32 Other Techniques: Parallax Mapping (2/2)
- Would like to modify the original texture coordinates (u,v) to better approximate where the intersection point would be on the bumpy surface
- Option 1:
  - Sample the height map at point (u,v) to get height h
  - Approximate the region around (u,v) with a surface of constant height h
  - Intersect the eye ray with the approximate surface to get new texture coordinates (u', v')
- Option 2:
  - Sample the height map in the region around (u,v) to get the normal vector N' (use the same normal-averaging technique that we used in bump mapping)
  - Approximate the region around (u,v) with the tangent plane
  - Intersect the eye ray with the approximate surface to get (u', v')
- Both produce artifacts when viewed from a steep angle
- Other options discussed here: 2006/papers/TUBudapest-Premecz-Matyas.pdf
- Note: for illustration purposes, the figure uses a smooth curve to show the varying heights in the neighborhood of point (u,v)
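A minimal sketch of Option 1 in GLSL: treat the neighborhood of (u,v) as a flat surface at height h and slide the texture coordinates along the eye ray accordingly. viewDirTS (the eye vector in tangent space) and heightScale are illustrative names.

```glsl
// Hedged sketch: basic parallax offset of texture coordinates (Option 1).
uniform sampler2D heightMap;
uniform float heightScale;

vec2 parallaxUV(vec2 uv, vec3 viewDirTS) {
    float h = texture(heightMap, uv).r;                    // height at the original (u,v)
    vec2 offset = (viewDirTS.xy / viewDirTS.z) * h * heightScale;
    return uv - offset;                                    // shifted coordinates (u', v')
}
```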

33 Other Techniques: Steep Parallax Mapping
- Traditional parallax mapping only works for low-frequency bumps; it does not handle very steep, high-frequency bumps
- Using Option 1 from the previous slide:
  - The black eye ray correctly approximates the bump surface
  - The gray eye ray misses the first intersection point on the high-frequency bump
- Steep parallax mapping: instead of approximating the bump surface, iteratively step along the eye ray
  - At each step, check the height map value to see if we have intersected the surface
  - More costly than naïve parallax mapping
- Invented by Brown PhD '06 Morgan McGuire and Max McGuire
- Adds support for self-shadowing
- Screenshots: the short description under each indicates what to look at to notice the difference between them
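A minimal GLSL sketch of the ray-marching step above: march along the eye ray in tangent space in fixed increments until the ray dips below the sampled height field. The step count and names are illustrative, not the authors' implementation.

```glsl
// Hedged sketch: steep parallax mapping via fixed-step ray marching.
uniform sampler2D heightMap;
uniform float heightScale;

vec2 steepParallaxUV(vec2 uv, vec3 viewDirTS) {
    const int NUM_STEPS = 32;
    float layerDepth = 1.0 / float(NUM_STEPS);
    vec2 deltaUV = (viewDirTS.xy / viewDirTS.z) * heightScale / float(NUM_STEPS);

    float rayDepth = 0.0;                                     // how far "down" the eye ray has gone
    for (int i = 0; i < NUM_STEPS; ++i) {
        float surfaceDepth = 1.0 - texture(heightMap, uv).r;  // depth of the bumpy surface here
        if (rayDepth >= surfaceDepth) break;                  // ray has reached the surface: stop
        uv -= deltaUV;                                        // otherwise keep stepping along the ray
        rayDepth += layerDepth;
    }
    return uv;                                                // first (u,v) at or below the surface
}
```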