Simulation of reflective and transparent objects using cube textures

This article describes simple methods for simulating reflective and transparent objects and the physical phenomena related to them. The surface of a reflective object partly or almost completely reflects the environment, while the color of a transparent object is formed by taking the refraction of light rays into account. How can we visualize such objects interactively? The rendering engine must determine what the surface reflects and what is visible through the object after the rays are refracted. Ray tracing methods are accurate, but they are not interactive on modern hardware. The first and easiest solution is the environment mapping method. It is used to simulate reflective and transparent objects and to create non-constant ambient lighting.

Environment mapping

The environment mapping method is based on storing the environment around the reflective/transparent object in a texture. A cube texture can store a panoramic view around the object, plus the views above and below it. A cube texture contains six ordinary square 2D textures. Each of these subtextures represents a view of the world (scene) along or against one of the main axes (X, Y, Z). The texture is called a cube texture because if we map each 2D subtexture onto a face of a cube, we get the complete 3D environment. The 2D subtextures should be seamless: there should be no gaps and no sharp color differences on the border between two adjacent textures.

A cube texture cannot be sampled with ordinary 2D texture coordinates; it is sampled with a direction vector. Imagine that the vector starts at the origin of the coordinate system, and that the cube with the applied cube texture is centered at the origin as well. Now look from the start of the vector toward its end: right in front of you is the texel that will be sampled from the cube texture. In other words, the sampled color equals the color at the intersection point of the vector and the cube. The vector does not have to be normalized before sampling, but it must be transformed from model space to world space. For example, you can sample a cube texture with the normal at a point on a surface:
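To make the "direction vector instead of 2D coordinates" idea concrete, here is a CPU-side sketch of how the hardware picks a face and a texel from a direction. It follows the OpenGL cube-map selection rules: the axis with the largest absolute component chooses the face, and the two remaining components become the 2D coordinates. The struct and function names are made up for this example.

```cpp
#include <cassert>
#include <cmath>

struct CubeSample { int face; float s, t; }; // face: 0..5 = +X, -X, +Y, -Y, +Z, -Z

// Map a direction vector to a cube face and 2D coordinates on that face,
// following the OpenGL cube-map face-selection rules.
CubeSample sampleDirection(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    CubeSample r;
    float sc, tc, ma;
    if (ax >= ay && ax >= az) {          // X axis is dominant
        ma = ax;
        r.face = x > 0 ? 0 : 1;
        sc = x > 0 ? -z : z;
        tc = -y;
    } else if (ay >= az) {               // Y axis is dominant
        ma = ay;
        r.face = y > 0 ? 2 : 3;
        sc = x;
        tc = y > 0 ? z : -z;
    } else {                             // Z axis is dominant
        ma = az;
        r.face = z > 0 ? 4 : 5;
        sc = z > 0 ? x : -x;
        tc = -y;
    }
    r.s = 0.5f * (sc / ma + 1.0f);       // remap [-1, 1] to [0, 1]
    r.t = 0.5f * (tc / ma + 1.0f);
    return r;
}
```

Note that dividing by the dominant component `ma` makes the result independent of the vector's length, which is why normalization before sampling is not required.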

The environment mapping method assumes that all objects stored in the cube texture are infinitely far from the center of the cube. This causes some inaccuracies, for example for objects that are close to the reflective/transparent object, but viewers usually do not notice these errors. Cube textures are ideal for static scenes, and also for scenes where other objects are far from the reflective/transparent object. If the reflective/refractive object moves, or objects move around it, the cube texture should be updated with the current environment. Ideally, each reflective/transparent object in the scene should have its own cube texture (storing the environment relative to the object's center), updated on every change of the environment. But this approach dramatically reduces performance, so in most cases one cube texture is used for all objects in the scene and is updated only after significant changes of the environment.

How to create a cube texture?

You can create a cube texture from a wide panoramic photo. Apply the panoramic photo to a sphere, place a camera inside the sphere, and render the scene six times with different camera orientations (along the X axis, against the Y axis, and so on). Use a perspective projection with a 90-degree field of view and an aspect ratio of 1. When rendering is finished, save the results to six files. Then open the images that correspond to the top and bottom views in a photo editor and blur their centers if there are rendering errors (errors are possible because a rectangular panoramic photo was wrapped around the sphere).

If you have a nice 3D scene, you can create a cube texture from it in much the same way as from a panoramic photo: render the scene six times (no sphere needed!) with different camera orientations, and save the results. In this case there is no need to fix the top and bottom views.
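The six camera orientations mentioned above can be written down as a table. This is a commonly used set that matches the OpenGL cube-map face order (the up vectors are flipped because cube-map faces are stored top-down); the names here are illustrative, not from any particular API.

```cpp
#include <cassert>

// One camera setup per cube-map face: look direction and up vector.
// Combined with a 90-degree perspective projection and aspect ratio 1,
// the six renders cover the full sphere around the camera.
struct FaceCamera { const char* name; float dir[3]; float up[3]; };

const FaceCamera kFaceCameras[6] = {
    { "posx", {  1,  0,  0 }, { 0, -1,  0 } },
    { "negx", { -1,  0,  0 }, { 0, -1,  0 } },
    { "posy", {  0,  1,  0 }, { 0,  0,  1 } },
    { "negy", {  0, -1,  0 }, { 0,  0, -1 } },
    { "posz", {  0,  0,  1 }, { 0, -1,  0 } },
    { "negz", {  0,  0, -1 }, { 0, -1,  0 } },
};
```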

Loading a cube texture

Loading a cube texture is straightforward: we load six 2D images and bind them together into a cubemap. For example, suppose there are six images on the hard drive named env_posx.jpg, env_negx.jpg, env_posy.jpg, env_negy.jpg, env_posz.jpg and env_negz.jpg:

void loadCubeTexture(const QString &path) // path - in this example it is "env.jpg"
{
    // Split "env.jpg" into the base name and the extension, then build
    // the six file names: env_posx.jpg, env_negx.jpg, and so on
    const QString base = path.section('.', 0, 0);
    const QString ext  = path.section('.', 1, 1);
    const QString suffixes[6] = { "posx", "negx", "posy", "negy", "posz", "negz" };

    GLuint textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_CUBE_MAP, textureId);

    // The six cube-map face targets are consecutive constants,
    // starting from GL_TEXTURE_CUBE_MAP_POSITIVE_X
    for (int i = 0; i < 6; ++i) {
        QImage image(base + "_" + suffixes[i] + "." + ext);
        image = image.convertToFormat(QImage::Format_RGBA8888);
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                     image.width(), image.height(), 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, image.constBits());
    }

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

The color of a reflective object is formed from the color of the object's material and from the color of the environment reflected in the object. For a mirror, chrome or similar material, the object's color is almost entirely the color of the reflected environment. In the environment mapping method, the reflected color is calculated as follows. We have the normal at a point on the surface, and a vector from the camera position to that point (the view vector). We calculate the reflected vector with the GLSL function reflect(I, N), where the incident vector I is our view vector V, and the normal vector N is the normal at the point on the surface (calculation of the reflected vector). The light ray that the object reflects at the current point of the surface coincides with the vector R (as shown in the previous image). With the reflected vector R we can sample a color from the cube texture and then mix the reflected color with the diffuse color of the object.

As you can see in the following image, the reflective environment mapping effect requires smooth normals and curved, detailed surfaces. The sphere reflects the environment quite well, but the cat shows strange reflections because it has sharp edges. The environment mapping method also does not reproduce multiple reflections, e.g., when an object reflects its own reflection in another object (actually it can, but that requires a lot of rendering to cube textures).

We can control the reflectivity of the object with a special texture, a reflectivity map. It defines which places on the object are reflective and which are not: parts of the mesh that correspond to white in the texture are reflective, and where the color is gray, the point is partly reflective.
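The GLSL built-in reflect(I, N) computes R = I - 2 * dot(N, I) * N. A CPU-side sketch of the same formula, useful for checking the math (the Vec3 alias and function names are just for this example):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

float dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Same formula as GLSL reflect(I, N): R = I - 2 * dot(N, I) * N.
// I points from the camera toward the surface point; N must be normalized.
Vec3 reflect(const Vec3& I, const Vec3& N) {
    float k = 2.0f * dot(N, I);
    return { I[0] - k*N[0], I[1] - k*N[1], I[2] - k*N[2] };
}
```

For example, a ray falling straight down onto a floor with normal (0, 1, 0) bounces straight back up, and a ray coming in at 45 degrees leaves at 45 degrees on the other side of the normal.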

The color of a transparent object is formed from the object's material color and from the color of the environment that is visible through the object. To calculate the color visible through the object, we need to compute a refracted ray and use it to sample the cube map. The refracted vector follows from Snell's law; in GLSL these calculations are implemented in the refract(I, N, IOR) function. The incident vector I is our view vector V (from the camera to the point on the surface), N is the normal at the point on the surface, and IOR (index of refraction) is the ratio of the refraction index of the first medium to the refraction index of the second medium (more info about calculation of the refracted vector). The refract() function returns a zero vector (with length 0) when total internal reflection occurs. Next, sample the cube texture with the refracted vector to get the refracted color (as shown in the previous image); this is the color that is visible through the object.

Only one refraction of the light ray is taken into account, since environment mapping is only a simple interactive simulation. More precise simulations account for multiple refractions, e.g., when the ray enters the object, when it exits the object, and so on. The quality of refractive environment mapping depends on the smoothness of the model's normals in the same way as for reflections. And as with reflections, it is possible to use a special texture (a refractivity map) to control the refractivity of the object.
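The GLSL refract(I, N, eta) built-in, including the total-internal-reflection case, can be sketched on the CPU like this (Vec3 and dot are redefined here so the snippet stands alone; eta is the IOR ratio from the text, e.g. 1.0 / 1.33 when entering water from air):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

float dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Same formula as GLSL refract(I, N, eta); I and N must be normalized.
Vec3 refract(const Vec3& I, const Vec3& N, float eta) {
    float d = dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - d * d);
    if (k < 0.0f)
        return { 0.0f, 0.0f, 0.0f };   // total internal reflection
    float t = eta * d + std::sqrt(k);
    return { eta*I[0] - t*N[0], eta*I[1] - t*N[1], eta*I[2] - t*N[2] };
}
```

Two sanity checks: a ray hitting the surface head-on passes through unbent for any eta, and a grazing ray leaving a dense medium (eta > 1) past the critical angle yields the zero vector.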

Reflected and refracted vectors can be calculated in the vertex or in the fragment shader. The trade-off is the same as for per-vertex versus per-fragment lighting: calculations in the vertex shader give lower quality but better performance, while calculations in the fragment shader give better quality but a lower FPS. In most cases environment mapping is calculated in the vertex shader, since the method is only a rough approximation anyway.

Fresnel coefficient

A transparent object is more transparent when the angle between the surface normal and the view vector is small than when that angle is large. Consider the surface of water: when we look from above (the view vector is parallel to the surface normal), we can see the bottom through the water, but when we look toward the horizon (the view vector is nearly perpendicular to the normal), we see only reflections of the sky. This happens because light is partly reflected and partly refracted when it reaches the border between two mediums. The ratio of the reflected part of the light to the total light intensity is called the Fresnel coefficient. With this value we can linearly interpolate between the reflected and refracted colors. But the precise calculation of this value is complicated, and for our simple simulation we can approximate it with any formula that gives a high value when the view and normal vectors are perpendicular and a low value when they are parallel. For example:
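One widely used approximation of this kind (chosen here as an illustration; the article's own formulas follow below) is Schlick's approximation, F = F0 + (1 - F0) * (1 - cos θ)^5:

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation of the Fresnel coefficient.
// cosTheta is the cosine of the angle between the normal and the view
// vector; f0 is the reflectance at normal incidence (about 0.02 for water).
float fresnelSchlick(float cosTheta, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}
```

Looking straight down (cos θ = 1) gives the small base reflectance f0, so the refracted color dominates; at a grazing angle (cos θ = 0) the coefficient approaches 1 and the surface becomes a mirror, matching the water example above.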

Results for the first and the second formula:

Chromatic dispersion

Chromatic dispersion of light underlies the phenomenon of white light splitting into a rainbow (a dispersive prism). Dispersion can be described as follows: light waves with different frequencies have different indices of refraction; e.g., blue light waves refract more and red ones refract less. OpenGL uses the RGB color system, so there are only three different light frequencies: red, green and blue (in real life there is an infinite number of frequencies). Simulating chromatic dispersion is easy. We calculate three refracted vectors, each with a different index of refraction, and sample three colors from the cube texture with these vectors. We get three different colors corresponding to the red, green and blue light waves. The final color is calculated as in the following code snippet:
vec3 Tr = refract(I, N, IOR_red);
vec3 Tg = refract(I, N, IOR_green);
vec3 Tb = refract(I, N, IOR_blue);
vec3 finalColor;
finalColor.r = texture(u_envTexture, normalize(Tr)).r;
finalColor.g = texture(u_envTexture, normalize(Tg)).g;
finalColor.b = texture(u_envTexture, normalize(Tb)).b;

How to use a cube texture as a source of light color

Another application of cube textures in environment mapping is as a source of the color and intensity of ambient lighting. Sample a color from the cube texture with the normal vector of the surface, and use this color as the color or intensity of the ambient lighting. The following image shows the result of mixing the sampled color with the model's diffuse color.

If you like this kind of ambient lighting and want to use it, you should decrease the size of the cube texture to 8x8 or 16x16 pixels. Such textures do not contain small high-frequency details, and transitions between different light colors are smooth. Use special software, such as CubeMapGen from AMD, to create the smaller cube textures. If you simply downscale the cube texture in a photo editor, it will contain different colors on the borders between adjacent 2D subtextures. The last image depicts a cube map of reduced size: it contains no small details and no seams on the borders.