In the previous post, you learned how to shade a 3D model. The shading effect was very simple. It merely provided the 3D model with depth perception. In this post, you will learn how to light an object by simulating Ambient-Diffuse-Specular (ADS) Lighting.

Before I start, I recommend that you read the prerequisite materials listed below:

How Does Light Work?

When light rays hit an object, they are either reflected, absorbed, or transmitted. For our discussion, I will focus solely on reflection.

When a light ray hits a smooth surface, the ray is reflected at an angle equal to its angle of incidence. This type of reflection is known as Specular Reflection.

Visually, specular reflection is the white highlight seen on smooth, shiny objects.

In contrast, when a light ray hits a rough surface, the ray is reflected at an angle different from its angle of incidence. This type of reflection is known as Diffuse Reflection.

Diffuse reflection enables our brain to make out the shape and the color of an object. Thus, diffuse reflection is more important to our vision than specular reflection.

Let's go through a simple animation to understand the difference between these reflections.

The animation below shows a 3D model with only specular reflection:

Whereas, the animation below shows a 3D model with only diffuse reflection:

As you can see, it is almost impossible for our brain to make out the shape of an object with only specular reflection.

There is another type of reflection known as Ambient Reflection. Ambient reflection consists of light rays that enter a room and bounce multiple times before reflecting off a particular object.

When we combine these three types of reflections, we get the following result:

Simulating Light Reflections

Now that you know how light works, the next question is: How can we model it mathematically?

Diffuse Reflection

In diffuse reflection, a light ray's reflection angle is not equal to its incident angle. From experience, we also know that a light ray's incident angle influences the brightness of an object. For example, an object will have different brightness when a light ray hits its surface at a 90-degree angle than when light rays hit the surface at a 5-degree angle.

Mathematically, we can simulate this natural behavior by computing the Dot-Product between the light rays and a surface's Normal vector. When a light source vector S is parallel to, and heading in the same direction as, the normal vector n, the dot product is 1, meaning that the surface location is fully lit. Recall that the dot product ranges between [-1.0, 1.0].

As the light source moves, the angle between vectors S and n changes, thus changing the dot product value. When this occurs, the brightness level also changes.

Specular Reflection

In Specular Reflection, the light ray's reflection angle is always equal to its incident angle. However, the specular reflection that reaches your eyes depends on the angle between the reflection ray (r) and the viewer's location (v).

This behavior implies that to model specular reflection, we need to compute a reflection vector from the normal vector and the light ray vector. We then calculate the influence of the reflection vector on the view vector, i.e., we compute their dot product. The result provides the specular reflection of the object.

Ambient Reflection

There is not much to ambient reflection. The ambient reflection depends on the light's ambient color and the material's ambient reflection factor.

The equation for Ambient Reflection is simply:

Ambient Color = Light's Ambient Color x Material's Ambient Reflection Factor

Simulating Light in the Rendering Pipeline

In Computer Graphics, Light is simulated in either the Vertex or Fragment Shaders. When lighting is simulated in the Vertex Shader, it is known as Gouraud Shading. If lighting is simulated in the Fragment Shader, it is known as Phong Shading.

Gouraud shading (AKA Smooth Shading) is a per-vertex color computation. What this means is that the vertex shader determines the lighting for each vertex and passes the lighting result, i.e., a color, to the fragment shader. Since this color is passed to the fragment shader, it is interpolated across the fragments, thus giving the smooth shading.

Here is an illustration of the Gouraud Shading:

In contrast, Phong shading is a per-fragment color computation. The vertex shader provides the normal vectors and position data to the fragment shader. These values are interpolated across the fragments, and the fragment shader uses them to calculate the lighting for each fragment.

Here is an illustration of the Phong Shading:

With either Gouraud or Phong shading, the lighting computation is the same, although the results will differ.

In this project, we will implement a Phong Shading Light effect. For your convenience, I provide a link to the Gouraud Shading Light project at the end of the article.

Setting up the project

Let's apply Lighting (Phong shading) to a 3D model.

By now, you should know how to set up a Metal Application and how to implement simple shading to a 3D object. If you do not, please read the prerequisite articles mentioned above.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "applyingLightFragmentShader" git branch. The link should take you directly to that branch.

Let's start,

Open up the "MyShader.metal" file.

The only operation we do in the Vertex Shader is to pass the normal vectors and vertices (in Model-View Space) to the fragment shader as shown below:
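The project's shader has the exact declarations; a sketch of this pass-through looks roughly as follows (the struct name, buffer indices, and variable names here are illustrative, not the project's exact ones):

```metal
vertex VertexOutput vertexShader(device float4 *vertices [[buffer(0)]],
                                 device float4 *normals [[buffer(1)]],
                                 constant float4x4 &mvMatrix [[buffer(2)]],
                                 constant float4x4 &mvpMatrix [[buffer(3)]],
                                 constant float3x3 &normalMatrix [[buffer(4)]],
                                 uint vid [[vertex_id]]){

    VertexOutput vertexOut;

    //transform the vertex into Model-View space and pass it down
    vertexOut.verticesInMVSpace=mvMatrix*vertices[vid];

    //transform the normal vector with the normal matrix and pass it down
    vertexOut.normalInMVSpace=normalize(normalMatrix*normals[vid].xyz);

    //transform the vertex into clip space
    vertexOut.position=mvpMatrix*vertices[vid];

    return vertexOut;
}
```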

The diffuse reflection is obtained by first computing the diffuse intensity, i.e., the dot product between the normal vector and the light ray vector. The diffuse intensity is then multiplied by the light color and the material's diffuse reflection factor. The snippet below shows this calculation:
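A sketch of that calculation (the light and material variable names, such as lightDiffuseColor and materialDiffuseFactor, are illustrative):

```metal
//compute the diffuse intensity: the dot product between the normal vector
//and the light ray direction, clamped to [0,1]
float diffuseIntensity=max(0.0,dot(normalVectorInMVSpace,lightRayDirection));

//scale the light's diffuse color by the material's diffuse reflection factor
float3 diffuseColor=diffuseIntensity*lightDiffuseColor*materialDiffuseFactor;
```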

To compute the specular reflection, we take the dot product between the reflection vector and the view vector. This factor is then multiplied by the specular light color and the material specular reflection factor.
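A sketch of the specular computation (the shininess exponent and the light/material variable names are illustrative):

```metal
//reflect() expects the incident vector, hence the negation of the light ray direction
float3 reflectionVector=reflect(-lightRayDirection,normalVectorInMVSpace);

//view vector: from the surface point toward the camera (at the origin in view space)
float3 viewVector=normalize(-verticesInMVSpace.xyz);

//raise the dot product to a shininess power to tighten the highlight
float specularIntensity=pow(max(0.0,dot(reflectionVector,viewVector)),materialShininess);

float3 specularColor=specularIntensity*lightSpecularColor*materialSpecularFactor;
```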

How does a GPU apply a Texture?

As you recall, a rendering pipeline consists of a Vertex and a Fragment shader. The fragment shader is responsible for attaching a texture to a 3D model.

To attach a texture to a model, you need to send the following information to a Fragment Shader:

UV Coordinates

Raw image data

Sampler information

The UV coordinates are sent to the GPU as attributes. Fragment Shaders cannot receive attributes. Thus, UV coordinates are sent to the Vertex Shader and then passed down to the Fragment Shader.

The raw image data contains pixel information such as the image RGB color information. The raw image data is packed into a Texture Object and sent directly to the Fragment Shader. The Texture Object also contains image properties such as its width, height, and pixel format.

Many times, a texture does not fit a 3D model. In these instances, the GPU is required to stretch or shrink the texture. A Sampler Object contains parameters, Filtering and Addressing, which inform the GPU how to wrap or stretch a texture. The Sampler Object is sent directly to the Fragment Shader.

Once this information is available, the Fragment Shader samples the supplied raw data and returns a color depending on the UV-coordinates and the Sampler information. The color is applied fragment by fragment onto the model.

UV coordinates

One of the first steps to adding texture to a model is to create a UV map. A UV map is created when you unwrap a 3D model into its 2D equivalent. The image below shows the unwrapping of a model into 2D.

By unwrapping the 3D character into a 2D entity, a texture can correctly be applied to the character. The image below shows a texture and the 3D model with texture applied.

During the unwrapping process, the vertices of the 3D model are mapped into a two-dimensional coordinate system. This new coordinate system is known as the UV Coordinate System.

The UV Coordinate System is composed of two axes, known as the U and V axes. These axes are equivalent to the familiar X-Y axes; the only difference is that the U-V axes range from 0 to 1. The U and V components are also referred to as the S and T components.

The new vertices produced by the unwrapping of the model are called UV Coordinates. These coordinates are loaded into the GPU and serve as reference points as the GPU attaches the texture to the model.

Luckily, you do not need to compute the UV coordinates. These coordinates are supplied by modeling software tools, such as Blender.

Decompressing Image Data

For the fragment shader to apply a texture, it needs the RGBA color information of an image. To get the raw data, you will have to decode the image. There is an excellent library capable of decoding ".png" images known as LodePNG, created by Lode Vandevenne. We will use this library to decode a png image.

Texture Filtering & Wrapping (Addressing)

Textures don't always align with a 3D model. Sometimes, a texture needs to be stretched or shrunk to fit a model. Other times, texture coordinates may fall out of range.

You can inform the GPU what to do in these instances by setting Filtering and Wrapping Modes. The Filtering Mode lets you decide what to do when pixels don't have a 1 to 1 ratio with texels. The Wrapping Mode allows you to choose what to do with texture coordinates that fall out of range.

Texture Filtering

As I mentioned earlier, there is rarely a 1-to-1 ratio between texels in a texture map and pixels on a screen. For example, there are times when you need to stretch or shrink a texture as you apply it to the model. This breaks up any initial correspondence between a texel and a pixel. Because of this, the color of a pixel needs to be approximated from the closest texels. This process is called Texture Filtering.

Note: Stretching a texture is called Magnification. Shrinking a texture is known as Minification.

The two most common filtering settings are:

Nearest Neighbor Filtering

Linear Filtering

Nearest Neighbor Filtering

Nearest Neighbor Filtering is the fastest and simplest filtering method. The UV coordinate is plotted against the texture; whichever texel the coordinate falls in, that texel's color is used for the pixel.

Linear Filtering

Linear Filtering requires more work than Nearest Neighbor Filtering. Linear Filtering works by applying the weighted average of the texels surrounding the UV coordinates. In other words, it does a linear interpolation of the surrounding texels.

Texture Wrapping

Most texture coordinates fall between 0 and 1, but there are instances when coordinates may fall outside this range. If this occurs, the GPU will handle them according to the Texture Wrapping mode specified.

You can set the wrapping mode for each (s,t) component to either Repeat, Clamp-to-Edge, Mirror-Repeat, etc.

If the mode is set to Repeat, it will force the texture to repeat in the direction in which the UV-coordinates exceeded 1.0.

If it is set to Mirror-Repeat, it will force the texture to behave as a Repeat mode but with a mirror behavior; thus flipping the texture.

If it is set to Clamp-to-Edge, it will force the texture to be sampled along the last row or column with valid texels.

Metal has a different terminology for Texture Wrapping. In Metal, it is referred to as "Texture Addressing."

Applying Textures using Metal

To apply a texture using Metal, you need to create two objects: a Texture object and a Sampler State object.

The Texture object contains the image data, format, width, and height. The Sampler State object defines the filtering and addressing (wrapping) modes.

Texture Object

To apply a texture using Metal, you need to create an MTLTexture object. However, you do not create an MTLTexture object directly. Instead, you create it through a Texture Descriptor object, MTLTextureDescriptor.

The Texture Descriptor defines the texture's properties, such as the image width, height, and pixel format.

Once you have created an MTLTexture object through an MTLTextureDescriptor, you need to copy the image raw data into the MTLTexture object.

Sampler State Object

As mentioned earlier, the GPU needs to know what to do when a texture does not fit a 3D model properly. A Sampler State Object, MTLSamplerState, defines the filtering and addressing modes.

Just like a Texture object, you do not create a Sampler State object directly. Instead, you create it through a Sampler Descriptor object, MTLSamplerDescriptor.

Setting up the project

Let's apply a texture to a 3D model.

By now, you should know how to set up a Metal Application and how to render a 3D object. If you do not, please read the prerequisite articles mentioned above.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "addingTextures" git branch. The link should take you directly to that branch.

Let's start,

Declaring attribute, texture and Sampler objects

We will use an MTLBuffer to represent UV-coordinate attribute data as shown below:

// UV coordinate attribute
id<MTLBuffer> uvAttribute;

To represent a texture object and a sampler state object, we are going to use MTLTexture and MTLSamplerState, respectively. This is shown below:
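A minimal sketch of these declarations (the property names are illustrative):

```objc
// Texture object
id<MTLTexture> texture;

// Sampler State object
id<MTLSamplerState> samplerState;
```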

Loading attribute data into an MTLBuffer

To load data into an MTLBuffer, Metal provides a method called "newBufferWithBytes." We are going to load the UV-coordinate data into the "uvAttribute" buffer. This is shown in the "viewDidLoad" method, line 6c.

Decoding Image data

The next step is to obtain the raw data of our image. The image we will use is named "small_house_01.png" and is shown below:

The image will be decoded by the LodePNG library in the "decodeImage" method. The library will provide a pointer to the raw data and will compute the width and height of the image. This information will be stored in the variables: "rawImageData", "imageWidth" and "imageHeight."

Creating a Texture Object

Our next step is to create a Texture Object through a Texture Descriptor Object as shown below:
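A sketch of what this creation might look like, assuming the image was decoded as 8-bit RGBA (the variable names here are illustrative):

```objc
//1. create the Texture Descriptor with the image's pixel format, width, and height
MTLTextureDescriptor *textureDescriptor=[MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm width:imageWidth height:imageHeight mipmapped:NO];

//2. create the Texture object through the descriptor
texture=[mtlDevice newTextureWithDescriptor:textureDescriptor];
```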

The descriptor sets the pixel format, width, and height for the texture.

After the creation of a texture object, we need to copy the image color data into the texture object. We do this by creating a 2D region with the dimensions of the image and then calling the "replaceRegion" method of the texture, as shown below:
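A sketch of this step, assuming a 4-bytes-per-pixel RGBA image (the variable names are illustrative):

```objc
//1. create a 2D region covering the whole image
MTLRegion region=MTLRegionMake2D(0, 0, imageWidth, imageHeight);

//2. copy the raw image data into the texture object
[texture replaceRegion:region mipmapLevel:0 withBytes:&rawImageData[0] bytesPerRow:4*imageWidth];
```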

Creating a Sampler Object

Next, we create a Sampler State object through a Sampler Descriptor object. The Sampler Descriptor filtering parameters are set to use Linear Filtering. The addressing parameters are set to "Clamp To Edge." See snippet below:

//1. create a Sampler Descriptor
MTLSamplerDescriptor *samplerDescriptor=[[MTLSamplerDescriptor alloc] init];
//2a. Set the filtering and addressing settings
samplerDescriptor.minFilter=MTLSamplerMinMagFilterLinear;
samplerDescriptor.magFilter=MTLSamplerMinMagFilterLinear;
//2b. set the addressing mode for the S component
samplerDescriptor.sAddressMode=MTLSamplerAddressModeClampToEdge;
//2c. set the addressing mode for the T component
samplerDescriptor.tAddressMode=MTLSamplerAddressModeClampToEdge;
//3. Create the Sampler State object
samplerState=[mtlDevice newSamplerStateWithDescriptor:samplerDescriptor];

Linking Resources to Shaders

In the "renderPass" method, we are going to bind the UV-Coordinates, texture object and sampler state object to the shaders.

Lines 10i and 10j show the methods used to bind the texture and sampler objects to the fragment shader.

The fragment shader, shown below, receives the texture and sampler data by specifying in its argument the keywords [[texture()]] and [[sampler()]]. In this instance, since we want to bind the texture to index 0, the argument is set to [[texture(0)]]. The same logic applies for the Sampler.
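A sketch of such a fragment function signature (the struct and parameter names are illustrative):

```metal
fragment float4 fragmentShader(VertexOutput vertexOut [[stage_in]],
                               texture2d<float> texture [[texture(0)]],
                               sampler sam [[sampler(0)]]){

    //sample the texture color at the interpolated uv-coordinates
    return texture.sample(sam,vertexOut.uvcoords);
}
```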

Setting up the Shaders

Recall that the Fragment Shader is responsible for attaching a texture to a 3D object. However, to do so, the fragment shader requires a texture, a sampler object, and the UV-Coordinates.

The UV-Coordinates are passed down to the fragment shader from the vertex shader as shown in line 7b.

//7b. Pass the uv coordinates to the fragment shader
vertexOut.uvcoords=uv[vid];

The fragment shader receives the texture at texture(0) and the sampler at sampler(0). Textures have a "sample" function which returns a color. The color returned by the "sample" function depends on the UV-coordinates, and the Sampler parameters provided. This is shown below:
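A sketch of that sampling call (the parameter names are illustrative):

```metal
//sample the texture color at the uv-coordinates, using the sampler settings
float4 sampledColor=texture.sample(sam,vertexOut.uvcoords);

return sampledColor;
```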

In the previous article, you learned how to render a 3D object. However, the object lacked depth perception. In computer graphics, as in art, shading depicts depth perception by varying the level of darkness.

Before I start, I recommend that you read the prerequisite materials listed below:

Shading

As mentioned in the introduction, shading depicts depth perception by varying the level of darkness on an object. For example, the image below shows a 3D model without any depth perception:

Whereas, the image below shows the 3D model with shading applied:

The shading levels depend on the incident angle between the surface and the light rays. For example, an object will have different shading when a light ray hits its surface at a 90-degree angle than when light rays hit the surface at a 5-degree angle. The image below depicts this behavior; the shading effects vary depending on the angle between the light source and the surface.

We can simulate this natural behavior by computing the Dot-Product between the light rays and a surface's Normal vector. When a light source vector s is parallel to, and heading in the same direction as, the normal vector n, the dot product is 1, meaning that the surface location is fully lit. Recall that the dot product ranges between [-1.0, 1.0].

As the light source moves, the angle between vectors s and n changes, thus changing the dot product value. When this occurs, the shading levels also change.

Shading is implemented either in the Vertex or Fragment Shaders (Functions), and it requires the following information:

Normal Vectors

Normal Matrix

Light Position

Model-View Space

Normal vectors are vectors perpendicular to the surface. Luckily, we don't have to compute these vectors. This information is supplied by any 3D modeling software such as Blender.

The Normal Matrix is extracted from the Model-View Space transformation and is used to transform the normal vectors. The Normal Matrix must be transposed and inverted before it can be used in shading.

The light position is the location of the light source in the scene. The light position must be transformed into View Space before you can shade the object. If you do not, you may see weird shading.

We will implement shading in the Vertex Shader (Function). In our project, the Normal Vectors will be passed down to the vertex shader as attributes. The Normal Matrix and light position will be passed down as uniforms.

Setting Up the Application

Let's apply shading to a 3D model.

By now, you should know how to set up a Metal Application and how to render a 3D object. If you do not, please read the prerequisite articles mentioned above. In this article, I will focus on setting up the Shaders.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "shading3D" git branch. The link should take you directly to that branch.

Let's start,

By now, you should know how to pass attribute and uniform data to the GPU using Metal, so I will not go into much detail.

Also, the project is interactive. As you swipe your fingers across the screen, the x and y coordinates of the light source will change. Thus a new shading value will be computed every time you touch the screen.

Open up the "ViewController.mm" file.

The project contains a method called "updateTransformation" which is called before each render pass. In the "updateTransformation" method, we are going to compute the Normal Matrix space and transform the new light position into the Model-View space. This information is then passed down to the vertex shader.

Updating Normal Matrix Space

The Normal Matrix space is extracted from the Model-View transformation. Before each render pass, the Normal Matrix must be transposed and inverted as shown in the snippet below:
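A sketch of this computation using the simd library (matrix3x3_upper_left is a hypothetical helper name for extracting the upper-left 3x3 portion; the project may use its own):

```objc
//1. extract the 3x3 portion of the Model-View transformation
//   (matrix3x3_upper_left is a hypothetical helper)
matrix_float3x3 normalMatrix=matrix3x3_upper_left(modelViewTransformation);

//2. invert and transpose it before sending it to the vertex shader
normalMatrix=matrix_transpose(matrix_invert(normalMatrix));
```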

Since we need to compute the light ray direction, we transform the model's vertices into the same space as the Light Position, i.e., the Model-View space. Once this operation is complete, we can compute the light ray direction by subtracting the surface vertex position from the light position. See the snippet below:

//3. transform the vertices of the surface into the Model-View Space
float4 verticesInMVSpace=mvMatrix*vertices[vid];
//4. Compute the direction of the light ray between the light position and the vertices of the surface
float3 lightRayDirection=normalize(lightPosition.xyz-verticesInMVSpace.xyz);

With the Normal Vectors and the light ray direction, we can compute the intensity of the shading. Since the dot product ranges from [-1,1], we get the maximum value between 0 and the dot product as shown below:

//5. compute shading intensity by computing the dot product. We obtain the maximum value between 0 and the dot product
float shadingIntensity=max(0.0,dot(normalVectorInMVSpace,lightRayDirection));

Next, we multiply the shading intensity value by a particular light color. The shading color is then passed down to the fragment shader (function).
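A sketch of this final step (the white light color here is illustrative):

```metal
//6. multiply the shading intensity by the light color
float4 lightColor=float4(1.0,1.0,1.0,1.0);

//7. pass the shading color down to the fragment shader
vertexOut.shadingColor=shadingIntensity*lightColor;
```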

Perspective-Projection Transformation

This transformation is known as the Model-View-Projection transformation.

And that is it. One of the concepts that differs between rendering a 2D and a 3D object is the set of transformations performed.

Setting up the Application

Let's render a simple 3D cube using Metal.

By now, you should know how to set up a Metal Application and how to render a 2D object. If you do not, please read the prerequisite articles mentioned above. In this article, I will focus on setting up the transformations and shaders.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "rendering3D" git branch. The link should take you directly to that branch.

In computer graphics, a mesh is a collection of vertices, edges, and faces that define the shape of a 3D object. The most popular type of polygon primitive used in computer graphics is a Triangle primitive.

Before you render an object, it is helpful to triangulate it. Triangulation means breaking the mesh down into sets of triangles, as shown below:

It is also helpful to obtain the relationship between a vertex and a triangle. For example, a vertex can be part of several triangles. The relationship between a vertex and a triangle primitive is known as "Indices," and they inform the Primitive Assembly, in the Rendering Pipeline, how to connect the triangle primitives.

The "indices" we will provide to the rendering pipeline are defined below:

We will also use an MTLBuffer to represent our Model-View-Projection uniform:

//Uniform
id<MTLBuffer> mvpUniform;

Loading attribute data into an MTLBuffer

To load data into an MTLBuffer, Metal provides a method called "newBufferWithBytes." We are going to load the cube vertices and indices into the vertexAttribute and indicesBuffer, as shown in the "viewDidLoad" method, Line 6.
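A sketch of these calls (the cubeVertices and cubeIndices array names are illustrative):

```objc
//6. load the cube vertex and index data into their MTLBuffers
vertexAttribute=[mtlDevice newBufferWithBytes:cubeVertices length:sizeof(cubeVertices) options:MTLResourceCPUCacheModeDefaultCache];

indicesBuffer=[mtlDevice newBufferWithBytes:cubeIndices length:sizeof(cubeIndices) options:MTLResourceCPUCacheModeDefaultCache];
```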

Updating the Transformations

The next step is to compute the space transformations. I could have set up the project to show a static 3D object by only calculating the Model-View-Projection transformation in the "viewDidLoad" method. However, we are going to go a step beyond.

Instead of producing a static 3D object, we are going to rotate the 3D object as you swipe your fingers from left to right. To accomplish this effect, you will compute the MVP transformation before every render pass.

The project contains a method called "updateTransformation" which is called before each render pass. In the "updateTransformation" method, we are going to rotate the model, and transform its space into a World, Camera, and Projective Space.

Setting up the Coordinate Systems

The model will be rotated as you swipe your fingers. This rotation is accomplished through a rotation matrix. The resulting matrix corresponds to a new model space.

Linking Resources to Shaders

In the "renderPass" method, we are going to link the vertices attributes and MVP uniforms to the shaders. The linkage is shown in lines 10c and 10d.

//10c. set the vertex buffer object and the index for the data
[renderEncoder setVertexBuffer:vertexAttribute offset:0 atIndex:0];
//10d. set the uniform buffer and the index for the data
[renderEncoder setVertexBuffer:mvpUniform offset:0 atIndex:1];

Note that we set the "vertexAttribute" at index 0 and the "mvpUniform" at index 1. These values will be linked to the shader's argument.

Setting the Drawing command

Earlier I talked about indices and how they inform the rendering pipeline how to assemble the triangle primitives. Metal provides these indices to the rendering pipeline through the Drawing command as shown below:
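A sketch of that drawing command (numberOfIndices and the 16-bit index type are assumptions; check the project for the exact values):

```objc
//set the drawing command: assemble triangle primitives using the indices buffer
[renderEncoder drawIndexedPrimitives:MTLPrimitiveTypeTriangle indexCount:numberOfIndices indexType:MTLIndexTypeUInt16 indexBuffer:indicesBuffer indexBufferOffset:0];
```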

Now that you know the basics of Computer Graphics and how to set up the Metal API, it is time to learn about matrix transformations.

In this article, you are going to learn how to rotate a 2D object using the Metal API. In the process, you will learn about transformations, attributes, uniforms and their interaction with Metal Functions (Shaders).

Let's start off by reviewing Attributes, Uniforms, Transformations and Shaders.

Attributes

Elements that describe a mesh, such as vertices, are known as Attributes. For example, a square has four vertex attributes.

Attributes behave as inputs to a Vertex Function (Shader).

Uniforms

A Vertex Shader deals with constant and non-constant data. An Attribute is data that changes per vertex and is thus non-constant. Uniforms are data that remain constant during a render pass.

Unlike attributes, uniforms can be received by both the Vertex and the Fragment functions (Shaders).

Transformations

In computer graphics, Matrices are used to transform the coordinate system of a mesh.

For example, rotating a square 45 degrees about an axis requires a rotation matrix to transform the coordinate system of each vertex of the square.

In computer graphics, transformation matrices are usually sent as uniform data to the GPU.

Functions (Shaders)

The rendering pipeline contains two Shaders known as Vertex Shader and Fragment Shader. In simple terms, the Vertex Shader is in charge of transforming the space of the vertices. Whereas, the Fragment Shader is responsible for providing color to rasterized pixels. In Metal, Shaders are referred to as Functions.

To rotate an object, you provide the attributes of the model, along with the transformation uniform, to the vertex shader. With these two sets of data, the Vertex Shader can transform the space of the square during a render pass.

Setting Up the Project

By now, you should know how to set up a Metal Application. If you do not, please read the prerequisite articles mentioned above. In this article, I will focus on setting up the attributes, updating the transformation uniform and the Shaders.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "rotatingASquare" git branch. The link should take you directly to that branch.

Loading attribute data into an MTLBuffer

To load data into an MTLBuffer, Metal provides a method called "newBufferWithBytes." We are going to load the square vertices into the vertexAttribute MTLBuffer as shown below. This is shown in the "viewDidLoad" method, Line 6.

Updating the rotation matrix

Now that we have the attributes stored in the buffer, we need to load the uniform data into its respective buffer. However, unlike the attribute data which is loaded once, we will continuously update the uniform buffer.

The project contains a method called "updateTransformation". The project calls the "updateTransformation" method before each render pass, and its purpose is to create a new rotation matrix depending on the value of "rotationAngle."

To make it a bit interactive, I set up the project to detect touch inputs. On every touch, the "rotationAngle" value is set to the touch x-coordinate value. Thus, as you touch your device, a new rotation matrix is created.

In the "updateTransformation" method, we are going to compute a new matrix rotation and load the rotation matrix into the uniform MTLBuffer as shown below:
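A sketch of this update (matrix_from_rotation is a hypothetical helper name; the project may define its own rotation-matrix function):

```objc
//1. create a new rotation matrix about the z-axis using the current angle
//   (matrix_from_rotation is a hypothetical helper)
matrix_float4x4 rotationMatrix=matrix_from_rotation(rotationAngle*M_PI/180.0, 0.0, 0.0, 1.0);

//2. copy the matrix into the uniform MTLBuffer
memcpy([transformationUniform contents], &rotationMatrix, sizeof(rotationMatrix));
```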

Linking Resources to Shaders

The next step is to link the attribute and uniform MTLBuffer data to the shaders.

Metal provides a clean way to link resource data to the shaders: you specify an index value in the Render Command Encoder argument table and link that value to the argument of the shader function.

For example, the snippet below shows the attribute data being set at index 0.

//set the vertex buffer object and the index for the data
[renderEncoder setVertexBuffer:vertexAttribute offset:0 atIndex:0];

The shader vertex function, shown below, receives the attribute data by specifying in its argument the keyword "buffer()." In this instance, since we want to link the attribute data to index 0, we set the argument to "buffer(0)."
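A minimal sketch of such a vertex function (the parameter names are illustrative):

```metal
vertex float4 vertexShader(device float4 *vertices [[buffer(0)]],
                           constant float4x4 &transformation [[buffer(1)]],
                           uint vid [[vertex_id]]){

    //apply the transformation to each vertex
    return transformation*vertices[vid];
}
```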

We need to set the index value for our attribute and transformation uniform. To do so, we are going to use the "setVertexBuffer" method and provide an index value for each item. This is shown in the "renderPass" method starting at line 10.

//set the vertex buffer object and the index for the data
[renderEncoder setVertexBuffer:vertexAttribute offset:0 atIndex:0];
//set the uniform buffer and the index for the data
[renderEncoder setVertexBuffer:transformationUniform offset:0 atIndex:1];

Setting up the Shaders (Functions)

Open the "MyShader.metal" file.

Take a look at the function "vertexShader." Its argument receives the vertices at buffer 0 and the transformation at buffer 1. Note that the "transformation" parameter is a 4x4 matrix of type "float4x4."