Texture Mapping

Why texture map?

How to do it

How to do it right

Advanced Texture mapping

Slide 3 : 3 / 32 : The Quest for Visual Realism

The appearance of many real-world objects
depends on their texture. The texture of an object is really the relatively fine
geometry at the surface of an object. To appreciate the role surface textures
play in the appearance of real world objects, consider carpet. Even when all
of the fibers of a carpet are the same color the carpet does not appear as a
constant color due to the interaction of the light with geometry of the fibers.
Even though Java 3D is capable of modeling the geometry of the individual carpet
fibers, the memory requirements and rendering performance for a room size piece
of carpet modeled to such detail would make such a model useless. On the other
hand, having a flat polygon of a single color does not make a convincing replacement
for the carpet in the rendered scene.

Slide 4 : 4 / 32 : The Quest for Visual Realism (2)

An example of texture mapping: the left hand image is flat
shaded whilst the right has been textured.

When creating image detail, it is cheaper to employ mapping
techniques than it is to use myriads of tiny polygons. The image on the right
portrays a brick wall, a lawn and the sky. In actuality the wall was modeled
as a rectangular solid, and the lawn and the sky were created from rectangles.
The entire image contains eight polygons. Imagine the number of polygons
it would require to model the blades of grass in the lawn! Texture mapping creates
the appearance of grass without the cost of rendering thousands of polygons.

Slide 5 : 5 / 32 : Photo-textures

Increased scene detail without increasing geometric complexity

Ed Catmull, who later founded Pixar, was sitting in his
car in the parking lot of the engineering building at the University of Utah
in 1974 thinking about how to make a more realistic computer animation of a
castle. The problem was that modeling the rough stone walls would require prohibitively
many polygons. Even if he were only interested in the color variation, not the
geometric variation (bumpiness), it would still require a new polygon for each
point that had a different color.

What approach would you take? He concluded that if pictures
of the stone wall could be stored for reference and then be applied or mapped
onto the otherwise smoothly shaded polygon, then the result would be increased
scene detail without increasing geometric complexity (and its associated
increased difficulty in modeling the scene).

In effect, texture mapping simply takes an image (a texture
map) and wallpapers it onto the polygons in the scene.

Since those days, texture mapping has become the most
important method of increasing the level of detail of a scene without increasing
its geometric complexity (the number of geometric primitives in the scene).

Slide 6 : 6 / 32 : Texture coordinates

Specify a texture coordinate at each vertex / mapping between (s,
t) and (u, v)

Canonical coordinates where u and v are between 0 and 1

Often (e.g., in Java 3D), the size of the texture image has to be a
mathematical power of two (1, 2, 4, 8, 16, ...) in each dimension.

Slide 7 : 7 / 32 : Transformation

A rectangular pattern array (texture) maps onto the surface, which in turn
projects to a pixel area:

Texture Space : (s, t) Array Coordinates
    <<====>>  Texture-surface transformation
Object Space : (u, v) Surface Parameters
    <<====>>  Viewing and projection transformation
Image Space : (x, y) Pixel Coordinates

The texture-surface transformation is linear in the array coordinates:

u = fu(s, t) = au s + bu t + cu
v = fv(s, t) = av s + bv t + cv

The basic method of texture mapping is
to specify the coordinates in the texture map (s, t) that map to a particular
point on the geometric surface. For polygonal models, the (s, t) coordinates
are specified at the polygon vertices and are interpolated across the surface,
as are R, G, B, A, Z and other things to come. Normally, the texture coordinates
are in the range [0..1] in each dimension. However, for textures that are to repeat
more than once going across the polygon, texture coordinates outside the [0..1]
range need to be acceptable.
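The per-vertex interpolation described above can be sketched as follows. This is a minimal illustration, not the slides' own code: the function name and the barycentric-weight formulation are assumptions, and the rasterizer is assumed to supply the weights for each pixel.

```python
def interpolate_st(bary, st0, st1, st2):
    """Interpolate per-vertex (s, t) texture coordinates across a
    triangle using barycentric weights (b0, b1, b2), b0 + b1 + b2 == 1.
    R, G, B, A, Z and other per-vertex values are interpolated the
    same way."""
    b0, b1, b2 = bary
    s = b0 * st0[0] + b1 * st1[0] + b2 * st2[0]
    t = b0 * st0[1] + b1 * st1[1] + b2 * st2[1]
    return (s, t)
```

Weights of (1, 0, 0) reproduce the first vertex's coordinates exactly; interior weights blend all three. Note this is the plain screen-space form, without the perspective correction discussed later in these notes.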

Slide 8 : 8 / 32 : Transformation

Transformations : Direct or Inverse

2D Texture Space
    ==[ Parametrization ]==>
3D Object Space
    ==[ Model transformation ]==>
3D World Space
    ==[ Viewing transform ]==>
3D Camera Space
    ==[ Projection ]==>
2D Image Space

Left to right : texture scanning, or direct mapping.
Right to left : inverse scanning, pixel-order scanning or image-order scanning.

Inverse scanning is used most of the time...

Forward mapping is stepping through the texture map and
computing the screen location of each texel. This is done in real-time video
effects systems. Most other systems use inverse mapping, which is done by stepping
through the screen pixels and computing the color of each by determining which
texels map to this pixel.

A disadvantage of mapping from texture space to pixel
space is that a selected texture patch usually does not match up with the pixel
boundaries, thus requiring calculation of the fractional area of pixel coverage.
Therefore, mapping from pixel space to texture space is the most commonly used
texture mapping method. This avoids pixel-subdivision calculations and allows
antialiasing (filtering) procedures to be easily applied.
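A minimal sketch of inverse (image-order) scanning: step through the screen pixels and ask, for each, which texel maps there. The `to_texture` inverse mapping, the nearest-texel lookup and the wrapping behavior are all illustrative assumptions.

```python
def inverse_map(screen_w, screen_h, texture, to_texture):
    """Inverse (image-order) scanning: for every screen pixel, use the
    supplied inverse mapping (x, y) -> (s, t) to find the texel whose
    color the pixel should take.  `texture` is a 2-D list of colors."""
    th = len(texture)
    tw = len(texture[0])
    out = []
    for y in range(screen_h):
        row = []
        for x in range(screen_w):
            s, t = to_texture(x, y)
            # nearest-texel lookup, wrapping coordinates into [0, 1)
            i = int((s % 1.0) * tw) % tw
            j = int((t % 1.0) * th) % th
            row.append(texture[j][i])
        out.append(row)
    return out
```

A real renderer would replace the nearest-texel lookup with the filtering discussed next.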

Slide 9 : 9 / 32 : Texture Filtering

Texture Filtering : Antialiasing

No 1-to-1 mapping between texels (pixels in the texture map) and pixels on
the screen.

Forward mapping or inverse mapping.

Since there will almost never be a 1 to 1 mapping between
pixels in the texture map (texels) and pixels on the screen, we must filter
the texture map when mapping it to the screen. The way in which we accomplish
the filtering depends on whether we are forward mapping or inverse mapping.

Relative size of the texels on the screen

A pixel either covers multiple texels...

A minification filter is used to map the multiple texels to one pixel.

... or a pixel is smaller than one texel.

A magnification filter is used to map the texel or texels to the pixel.

Texture Filtering : Aliasing from Undersampling

No matter the type of mapping we’re undertaking,
we run the danger of aliasing. Aliasing can ruin the appearance of a texture-mapped
object. Aliasing is caused by undersampling. Two adjacent pixels in an object
may not map to adjacent pixels in the texture map. As a result, some of the
information from the texture map is lost when it is applied to the object.

Antialiasing : Prefiltering

Antialiasing minimizes the effects of aliasing.
One method, called prefiltering, treats a pixel on the object as an area (Catmull,
1978). It maps the pixel's area into the texture map. The average color
is computed from the texels inside the area swept out in the texture map.

Slide 12 : 12 / 32 : Antialiasing : Supersampling

A second method, called supersampling, also
computes an average color (Crow, 1981). In this example each of four corners
of an object pixel are mapped into the texture. The four pixels from the texture
map are averaged to produce the final color for the object.
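As a sketch of this four-corner scheme, with illustrative helper names: `to_texture` maps a screen point into the texture and `sample` returns the color tuple there.

```python
def supersample(pixel_x, pixel_y, to_texture, sample):
    """Supersampling (Crow, 1981): map each corner of the object pixel
    into the texture, sample the texture there, and average the four
    resulting colors channel by channel."""
    corners = [(pixel_x, pixel_y), (pixel_x + 1, pixel_y),
               (pixel_x, pixel_y + 1), (pixel_x + 1, pixel_y + 1)]
    colors = [sample(*to_texture(x, y)) for (x, y) in corners]
    n = len(colors)
    return tuple(sum(channel) / n for channel in zip(*colors))
```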

Slide 13 : 13 / 32 : Texture Filtering : Antialiasing

Prefiltering

Supersampling

Slide 14 : 14 / 32 : MIP-mapping

In extreme situations, the best option is to use MIP-mapping

Multum in Parvo, or many in a small place

pre-computes part of the filter operation by storing down-sampled replicas
of each texture

Since many texels will probably map to each pixel, it
would be extremely expensive to perform the entire filter computation at each
pixel, each frame. MIP-mapping (Multum in Parvo, or many in a small place) pre-computes
part of the filter operation by storing down-sampled replicas of each texture,
going from the original m x n texture map all the way to a 1 x 1 texture map.
At run time, if a given screen pixel maps to many texels, we simply choose the
appropriate MIP-map level in which a single texel represents all the desired
texels, and perform our texture sampling on that level.

Since the pixel will project to an arbitrary place on
the MIP-map image, rather than exactly onto a texel, we can use bilinear or
other filtering to interpolate nearby texels. This is the bilinear MIP-mapped
texture filtering talked about in video game hardware. Better than this, we can
bilinearly filter on the two most appropriate MIP-map levels and then blend the
results. This is trilinear MIP-mapped texture filtering. It requires the use of
eight texels and seven linear interpolations, but provides an important
improvement over bilinear filtering.
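A sketch of the level choice and the trilinear blend. The assumptions: a pixel covering `texels_per_pixel` original texels (an area ratio) should use the level whose texels are that much larger, and a `sample_level` helper performs the bilinear lookup on a single level. All names are illustrative.

```python
import math

def mip_level(texels_per_pixel):
    """Fractional MIP level.  Level 0 is the full-resolution map and
    each level up halves each dimension, so one texel at level L
    covers a 2^L x 2^L block of original texels (area 4^L); hence the
    factor of 0.5 on the log of the area ratio."""
    return max(0.0, 0.5 * math.log2(max(texels_per_pixel, 1.0)))

def trilinear(sample_level, s, t, texels_per_pixel):
    """Blend bilinear samples from the two nearest MIP levels
    according to the fractional part of the computed level."""
    lvl = mip_level(texels_per_pixel)
    lo, hi = int(math.floor(lvl)), int(math.ceil(lvl))
    f = lvl - lo
    return (1.0 - f) * sample_level(lo, s, t) + f * sample_level(hi, s, t)
```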

Slide 15 : 15 / 32 : MIP-mapping

MIP-mapping : choose the best interpolation

Antialiasing is expensive due to the additional computation
required to compute an average color. Mipmapping saves some expense by precalculating
some average colors (Williams, 1983). The mipmap algorithm first creates several
versions of the texture. Beginning with the original texture the mipmap algorithm
computes a new texture that's one fourth the size. In the new, smaller
texture map, each pixel contains the average of four pixels from the original
texture. The process of creating smaller images continues until we get a texture
map containing one pixel. That one pixel contains the average color of the original
texture map in its entirety.
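The pyramid construction can be sketched as follows, assuming a square, power-of-two, grayscale texture (the function name is illustrative):

```python
def build_mipmaps(texture):
    """Build the MIP pyramid described above: each level is one quarter
    the size of the previous one, each pixel holding the average of a
    2x2 block, down to a single pixel that averages the whole map.
    `texture` is a square 2^n x 2^n grid of grayscale values."""
    levels = [texture]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        size = len(prev) // 2
        levels.append([[(prev[2*j][2*i] + prev[2*j][2*i+1] +
                         prev[2*j+1][2*i] + prev[2*j+1][2*i+1]) / 4.0
                        for i in range(size)] for j in range(size)])
    return levels
```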

In the texture mapping phase the area of each pixel on
the object is mapped into the original texture map. The mipmap computes a measure
of how many texture pixels are in the area defined by the mapped pixel. In this
example approximately nine texture pixels will influence the final color so
the ratio of texture pixels to object pixel is 9:1.

To compute the final color, we find the two texture maps
whose ratios of texture pixels are closest to the ratio for the current object
pixel. We look up the pixel colors in these two maps and average them.

Main idea : use of interpolation to get the needed texel projection, but...

Slide 16 : 16 / 32 : Perspective Correction

Use interpolation to avoid computing all the projections (cf. Gouraud shading)

Mappings that vary linearly in space do not vary linearly under perspective.

The distance between parallel lines is an example. Another example
is any mapping applied to the surfaces being rendered in projection (such as
RGB color interpolation, or interpolation of the u,v coordinates of a texture
map).

In order to correctly interpolate texture
coordinates under perspective we must consider the set of all values that will
be projected to be a single homogeneous vector:

(x, y, z, w, u, v, 1, R, G, B, A)

After transforming the vertex we have:

(xw, yw, zw, w, u, v, 1, R, G, B, A)

When projecting a homogeneous vector we divide by the homogeneous coordinate:

(xw/w, yw/w, zw/w, w/w, u/w, v/w, 1/w, R/w, G/w, B/w, A/w)

We then interpolate between the vertices and perform the projection of the texture
coordinates at the pixels. That is, at the vertices we compute u/w, v/w, and
1/w. We linearly interpolate these three across the pixels. Then at each pixel
we divide the interpolated texture coordinates by the interpolated 1/w to yield
the final (u, v).
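A minimal sketch of this recipe for a single edge; `t` is the screen-space interpolation parameter and `w1`, `w2` the vertices' homogeneous coordinates (the function name and the two-vertex setup are illustrative):

```python
def perspective_correct_uv(uv1, w1, uv2, w2, t):
    """Perspective-correct texture interpolation: interpolate u/w, v/w
    and 1/w linearly in screen space, then divide at the pixel to
    recover the true (u, v)."""
    lerp = lambda a, b: (1.0 - t) * a + t * b
    u_over_w = lerp(uv1[0] / w1, uv2[0] / w2)
    v_over_w = lerp(uv1[1] / w1, uv2[1] / w2)
    one_over_w = lerp(1.0 / w1, 1.0 / w2)
    return (u_over_w / one_over_w, v_over_w / one_over_w)
```

With w1 = 1 and w2 = 3, the screen-space midpoint lands at u = 0.25 rather than 0.5: the half of the edge nearer the camera occupies more of the screen, exactly the effect naive screen-space interpolation misses.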

Perspective Correction

Looking at One Edge

First, let's consider one edge from a given triangle. This edge and
its projection onto our viewport lie in a single common plane. For the
moment, let's look only at that plane, which is illustrated below:

Slide 19 : 19 / 32 : Visualizing the Problem

Notice that uniform steps on the image plane do not correspond to uniform
steps along the edge.

In order to simplify, let's assume that the viewport is located 1 unit away
from the center of projection.

Slide 20 : 20 / 32 : Linear Interpolation in Screen Space

Compare interpolation in screen space ...

... to interpolation in 3-space

Slide 21 : 21 / 32 : How to Make Them Mesh

Still need to scan convert in screen space... so we need a mapping from t
values to s values.

We know that all points on the 3-space edge project onto our screen-space
line. Thus we can set up the following equality (t parameterizes the
screen-space line, s the 3-space edge, with the viewport one unit from the
center of projection):

(1 - t) (x1 / z1) + t (x2 / z2) = (x1 + s (x2 - x1)) / (z1 + s (z2 - z1))

and solve for s in terms of t, giving:

s = t z1 / ((1 - t) z2 + t z1)

Unfortunately, at this point in the pipeline (after projection) we no longer
have z1 and z2 lingering around. However, we do have w1 = 1/z1 and w2 = 1/z2
(homogeneous coordinates), so the same expression can be written as:

s = t w2 / ((1 - t) w1 + t w2)

Slide 22 : 22 / 32 : Interpolating Parameters

We can now use this expression for s to interpolate arbitrary parameters,
such as texture indices (u, v), over our 3-space triangle. This is accomplished
by substituting our solution for s given t into the parameter interpolation.

Therefore, if we premultiply all parameters that we wish to interpolate in
3-space by their corresponding w value and add a new plane equation to interpolate
the w values themselves, we can interpolate the numerators and denominator in
screen-space. We then need to perform a divide at each step to map the
screen-space interpolants to their corresponding 3-space values.

For obvious reasons this method of interpolation is called perspective-correct
interpolation. The fact is, the name could be shortened to simply correct interpolation.
You should be aware that not all 3-D graphics APIs implement perspective-correct
interpolation.

Slide 23 : 23 / 32 : Remarks

When we did Gouraud shading didn't we interpolate illumination values that
we found at each vertex using screen-space interpolation?

Didn't we just demonstrate that screen-space interpolation is wrong ?

Does that mean that Gouraud shading is wrong?

Yes, Yes, Yes

Gouraud shading is wrong.

However, you usually will not notice because the transition in colors is very
smooth
(And we don't know what the right color should be anyway, all we care about
is a pretty picture).

There are some cases where the errors in Gouraud shading become obvious.

When switching between different levels-of-detail representations

At "T" joints.

Slide 24 : 24 / 32 : Texture coordinate generation

Perhaps the most painful part of texture mapping is assigning texture coordinates
to shape vertices.

We'll see more details about that procedure in the Java3D
part

In two-dimensional texture mapping, we have to decide
how to paste the image onto an object. In other words, for each pixel in an
object, we encounter the question, "Where do I have to look in the texture
map to find the color?" To answer this question, we consider two things:
map shape and map entity.

We’ll discuss map shapes first. For a map shape
that’s planar, we take an ( x,y,z ) value from the object and throw away
(project) one of the components, which leaves us with a two-dimensional (planar)
coordinate. We use the planar coordinate to look up the color in the texture
map.

Texture coordinate generation : planar map shape

This slide shows several textured-mapped
objects that have a planar map shape. None of the objects have been rotated.
In this case, the component that was thrown away was the z-coordinate. You can
determine which component was projected by looking for color changes in coordinate
directions. When moving parallel to the x-axis, an object’s color changes.
When moving up and down along the y-axis, the object’s color also changes.
However, movement along the z-axis does not produce a change in color. This
is how you can tell that the z-component was eliminated.
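A sketch of the planar map shape just described: throw away one component and use the survivors as texture coordinates. The function name, the assumed object extent used to scale into [0, 1], and the wrapping are all illustrative.

```python
def planar_st(x, y, z, extent=1.0):
    """Planar map shape: project out the z component and use the
    remaining (x, y), scaled by an assumed object extent and wrapped
    into [0, 1), as the (s, t) texture coordinates.  Movement along z
    produces no color change, as the slides describe."""
    return ((x / extent) % 1.0, (y / extent) % 1.0)
```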

Texture coordinate generation : cylinder map shape

A second shape used in texture mapping is
a cylinder. An (x, y, z) value is converted to cylindrical coordinates of
(r, theta, height). For texture mapping, we are only interested in theta and
the height. To find the color in the two-dimensional texture map, theta is converted
into an x-coordinate and height is converted into a y-coordinate. This wraps
the two-dimensional texture map around the object.
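A sketch of the cylindrical map shape, assuming the cylinder's axis is the z-axis and z runs over [0, height] (the name and parameters are illustrative):

```python
import math

def cylindrical_st(x, y, z, height=1.0):
    """Cylinder map shape: convert (x, y, z) to cylindrical coordinates
    and keep only theta and the height; theta becomes the s (x)
    texture coordinate and height the t (y) coordinate, wrapping the
    texture around the object."""
    theta = math.atan2(y, x)                 # angle around the z-axis
    s = (theta / (2.0 * math.pi)) % 1.0      # one full turn -> [0, 1)
    t = z / height
    return (s, t)
```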

The texture-mapped objects in this image
have a cylindrical map shape, and the cylinder’s axis is parallel to the
z-axis. At the smallest z-position on each object, note that the squares of
the texture pattern become squeezed into "pie slices". This phenomenon
occurs at the greatest z-position as well. When the cylinder’s axis is
parallel to the z-axis, you’ll see "pie slices" radiating out
along the x- and y-axes.

Texture coordinate generation : sphere map shape

When using a sphere as the map shape, the ( x,y,z ) value
of a point is converted into spherical coordinates. For purposes of texture
mapping, we keep just the latitude and the longitude information. To find the
color in the texture map, the latitude is converted into an x-coordinate and
the longitude is converted into a y-coordinate.
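A sketch of the spherical map shape with the poles on the y-axis. Here longitude feeds the s (x) coordinate and latitude the t (y) coordinate; which angle feeds which axis is purely a convention, and the slide's wording uses the opposite assignment.

```python
import math

def spherical_st(x, y, z):
    """Sphere map shape: convert (x, y, z) to spherical coordinates and
    keep only the longitude and latitude, mapped into [0, 1] texture
    coordinates.  The poles are assumed to lie on the y-axis."""
    r = math.sqrt(x * x + y * y + z * z)
    longitude = math.atan2(z, x)             # angle around the y-axis
    latitude = math.asin(y / r)              # -pi/2 at the South Pole
    return ((longitude / (2.0 * math.pi)) % 1.0,
            latitude / math.pi + 0.5)        # t = 0 at S pole, 1 at N
```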

The objects have a map shape of a sphere, and the poles
of the sphere are parallel to the y-axis. At the object’s "North
Pole" and "South Pole", the squares of the texture map become
squeezed into pie-wedge shapes. Compare this to a map shape of a cylinder. Both
map shapes have the pie-wedge shapes at the poles, but there is a subtle difference
at the object’s "equator". The spherical mapping stretches the
squares in the texture map near the equator, and squeezes the squares as the
longitude reaches a pole.

What Else About Mapping ?

Texture Tiling

Often it is useful to repeat or tile a texture over the surface of a polygon.

Texture Transparency

Texture matrices

Transformation of the texture (rotation, translation...)
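Tiling and a (diagonal) texture matrix both come down to transforming (s, t) before the lookup. A minimal sketch, with the repeat counts as illustrative parameters:

```python
def tile(s, t, repeats_s=1.0, repeats_t=1.0):
    """Texture tiling: scale the coordinates by the repeat counts and
    wrap them back into [0, 1), so the texture repeats across the
    polygon.  A full texture matrix would generalize the scaling to an
    arbitrary 2-D transform (rotation, translation...) of (s, t)."""
    return ((s * repeats_s) % 1.0, (t * repeats_t) % 1.0)
```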

Slide 29 : 29 / 32 : What Else About Mapping (2) ?

Texture 3D and Procedural Textures

Texture mapping can be divided into two-dimensional and
three-dimensional techniques. Two-dimensional techniques place a two-dimensional
(flat) image onto an object using methods similar to pasting wallpaper onto
an object. Three-dimensional techniques are analogous to carving the object
from a block of marble.

In three-dimensional texture mapping, each point determines
its color without the use of an intermediate map shape (Peachey, 1985; Perlin,
1985). We use the (x, y, z) coordinate to compute the color directly. It's
equivalent to carving an object out of a solid substance.

Procedural Textures

Wood, metal, marble... are not only pictures; they are materials which can be
modeled

Most 3D texture functions do not explicitly store a value
for each (x, y, z)-coordinate, but use a procedure to compute a value based
on the coordinate and thus are called procedural textures.

Perturbing stripes can result in a texture with a marbled
appearance.

Making a wood-grained object begins with a three-dimensional
texture of concentric rings (Peachey, 1985). By using noise to vary the ring
shape and the inter-ring distance, we can create reasonably realistic wood.
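A toy procedural wood texture in this spirit. The sine term stands in for a real noise function (Perlin noise, say), and every constant here is an illustrative choice, not from the slides.

```python
import math

def wood(x, y, z, ring_spacing=0.1, noise_amp=0.2):
    """Toy procedural wood: concentric rings around the z-axis, with a
    pseudo-noise term perturbing the radius so the rings are not
    perfectly circular.  Returns the position within a ring in [0, 1),
    which a shader would map to a light/dark grain color."""
    r = math.hypot(x, y)
    # stand-in for real noise: a wobble varying with angle and depth
    noise = noise_amp * math.sin(8.0 * math.atan2(y, x) + 3.0 * z)
    ring = (r + noise * ring_spacing) / ring_spacing
    return ring - math.floor(ring)
```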

Slide 30 : 30 / 32 : What Else About Mapping (3) ?

Bump mapping and Displacement mapping

Storing other shading parameters in maps

Bump mapping : normal variation

We detect bumps in a surface by the relative darkness
or lightness of the surface feature. In local illumination models, this is
primarily determined by the direct diffuse reflection term which is proportional
to N*L. N is the surface normal and L is the light vector. By modifying N,
we can create spots of darkness or brightness.
James Blinn invented the method of perturbing the surface normal, N, by a
"wrinkle" function. This changes the light interaction but not the
actual size of the objects so that bump mapped objects appear smooth at the
silhouette.
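A toy sketch of the idea: perturb N by a "wrinkle" offset, renormalize, and watch the N*L diffuse term change while the geometry stays put. The names and the perturbation representation are assumptions for illustration.

```python
def diffuse(n, l):
    """Direct diffuse term, proportional to N*L (clamped at zero)."""
    d = sum(a * b for a, b in zip(n, l))
    return max(d, 0.0)

def bumped_normal(n, perturbation):
    """Bump mapping in Blinn's sense: add a small wrinkle offset to the
    surface normal and renormalize.  The geometry is untouched; only
    the lighting changes, so silhouettes stay smooth."""
    p = [a + b for a, b in zip(n, perturbation)]
    length = sum(c * c for c in p) ** 0.5
    return [c / length for c in p]
```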

Here is an example of bump mapping. Notice that for both this
image and the one below that the surfaces appear to have bumps or gouges
but that the silhouette of the object is still smooth.

Displacement mapping

In displacement mapping, the surface is actually modified,
in contrast to bump mapping where only the surface normal is modified. This
means that displacement mapped surfaces will show the effect even at the silhouette.

A Pixar RenderMan image with displacement mapping. Notice that the
surface not only appears non-smooth but the silhouette is also non-smooth.

In contrast, displacement mapping alters an object's
geometry (Cook, 1984). Compare the profiles and the shadows cast by these two
objects. Which is bump mapped?

Slide 31 : 31 / 32 : Environment mapping

Environment mapping is a cheap way to create reflections
(Blinn and Newell, 1976). While it's easy to create reflections with a
ray tracer, ray tracing is still too expensive for long animations. Adding an
environment-mapping feature to a z-buffer based renderer will create reflections
that are acceptable in a lot of situations. Environment mapping is a two-dimensional
texture mapping technique that uses a map shape of a box and a map parameter
of a reflection ray.