Truth be told, this demo is nice. But it isn’t even WebGL yet. It simply demonstrates two principles behind a 3D camera: the perspective transform and the camera rotation effect. It works using the good old regular canvas tag initialized in 2D.

The rest of this chapter explains how these principles work. WebGL, of course, is far more powerful than this. It gives your <canvas> tag direct access to the GPU (your video card). It even makes 2D games faster.

I am an independent author of several software development books. I published this entire WebGL chapter for free to help with exposure for my book WebGL Gems. If you like it, just check it out. It’s a fun subject.

You can preorder my book here until April 10th. Shortly after, you’ll receive it as a PDF in the inbox of your PayPal email address. I also plan a paperback; if you can wait for that, it’ll be available on Amazon.

These principles are fundamental to any 3D transformation, not just when dealing with WebGL. For this reason it almost doesn’t matter what graphics API we implement them with here. I’ll use a regular 2D canvas and demonstrate how 3D points are projected onto a 2D screen using math.

Let’s take a look at a diagram describing the matter at hand. We’ve initialized some star particles in 3D space far enough from the camera.

Notice that in standard 3D coordinate systems, by default the Z coordinate extends away from the camera in the negative direction.

Once initialized at a random distant point, we will move the stars toward the camera by increasing only their Z coordinate, bringing it closer to zero. Their X and Y coordinates will not be touched.

When an individual star gets too close to the camera, we will reinitialize it by pushing it away from the camera again, subtracting a random value from its Z coordinate. This creates an infinite star field effect.

I wrote a small star particle engine to demonstrate these principles on a 2D canvas. Of course, when we work directly in WebGL, the GPU will crunch our matrix operations like a hungry dinosaur. But understanding the basics pays off tremendously when it comes to getting better as a 3D graphics programmer. I think it’s a road worth pursuing.

When watching the stars move on the 2D screen to which the camera projects the final image, we will notice something peculiar: the stars appear to move in the X and Y directions, even though in the 3D coordinate system we’re merely changing the Z coordinate of each star. The movement of each star appears three-dimensional because of how our eyes and brain process visual information.

This transformation from 3D to 2D space is what the Projection matrix is responsible for. It’s what creates the illusion of depth on a flat screen. But in 3D graphics there are two other matrices: Model and View.

How do the Model and View matrices fit in? The Model matrix contains the X, Y, and Z coordinates of each star particle. You can think of each star as being represented by a single vertex, or as a very simplified 3D model consisting of just one vertex coordinate.

The View matrix represents the position of our camera. Before I go into multi-dimensional matrix representations, I will show you a bare-bones example using just the mathematical calculations.

Standard Projection Matrix Calculation

To transfer our 3D coordinates onto a 2D screen, we need to perform some basic calculations that create a camera projection transformation. Essentially, this is what the Projection matrix is for. But in this example we’ll break the calculations down to their bare minimum.

To demonstrate this process, let’s write a quick canvas demo with a basic star particle engine. I’ll keep the code as short as possible. But we’ll see exactly where matrices take their origin when it comes to representing 3D data on a flat 2D display. Knowing these fundamental principles is important if you ever want to truly understand the math behind most 3D operations. And really, most of them are just movement along three axes and rotation around one or more axes.

In the next subchapter you will find the complete source code of the starfield canvas program. There is also a web link to the working example. It’s a basic program that displays stars moving toward the camera, creating the illusion of space travel.

The code here is very basic and the program itself is short; it will fit on about two pages of this book. The purpose of this demo is to show that with just two types of vertex transformations (translation and rotation) we can create a foundation on which to build a mathematical understanding of matrix functions.

The Star Class

First, we’ll create a new Star class representing a single star using JavaScript’s class keyword. It’s not identical to the class keyword in languages such as C++ or Java, but it accomplishes roughly the same task.

This class will contain the X, Y, and Z coordinates of the star particle. It will also have a constructor that takes care of initializing the default position of any given star (random on the X and Y axes).

The class will contain three functions: reset, which initializes a star; project, the core 3D-to-2D projection algorithm that demonstrates what matrices are actually trying to accomplish; and draw, which draws each star at its projected 2D coordinates.

Speaking of which, the Star class has two pairs of coordinates. One represents the actual placement of a star vertex in 3D world space using X, Y and Z coordinates.

But the class will also have an x2d and y2d pair for separately storing the actual pixel coordinates when the star is rendered on the flat canvas view. Finally, the class will store the star’s angle of rotation around the Z axis to demonstrate basic trigonometry operations you’ll often see when dealing with 3D graphics.
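Before looking at the full listing, here is a hedged skeleton of such a class. The method bodies are placeholders, and everything beyond the reset, project, and draw names described above is my own guess at the shape, not the book's exact code:

```javascript
// Sketch of the Star class described in the text. MAX_DEPTH matches
// the value chosen later in the chapter; the rest is an assumption.
const MAX_DEPTH = 10;

class Star {
  constructor() {
    this.x2d = 0;   // projected pixel X on the canvas
    this.y2d = 0;   // projected pixel Y on the canvas
    this.angle = 0; // rotation angle around the Z axis, in radians
    this.reset();
  }

  reset() {
    // Random X and Y between -1.0 and 1.0; Z pushed away from the camera
    this.x = 1 - Math.random() * 2.0;
    this.y = 1 - Math.random() * 2.0;
    this.z = Math.random() * -MAX_DEPTH;
  }

  project() {
    // The 3D-to-2D perspective projection, explained later in this chapter,
    // would fill in this.x2d and this.y2d here.
  }

  draw(context) {
    // Draw the star at (this.x2d, this.y2d) on the 2D canvas context.
  }
}
```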

JavaScript Source Code

Let’s take a look at the source code of the Star class.

I tried my best to edit it for Medium format.

You can also just look it up from the example at the beginning of this article.

I chose a canvas size of 800 by 500 to simulate a somewhat widescreen format. Note that the width and height attributes of our canvas tag must also match these values.

The reset function provides default position values for the “starting point” of a star on the X and Y axes respectively:

this.x = 1 - Math.random() * 2.0;

this.y = 1 - Math.random() * 2.0;

These calculations will produce random values between -1.0 and 1.0 on each axis.

To finish initializing our star, we simply push it away from the camera by a random value that falls somewhere between 0.0 and -MAX_DEPTH.

After fiddling with the parameters, I chose MAX_DEPTH to be 10 units because visually it creates the best results in this scenario.

this.z = Math.random() * -MAX_DEPTH;

Note that the default z value is negative. This is normal: remember that by default a 3D camera faces toward the negative Z axis. It’s a convention we’re adopting here.

This is the standard throughout the OpenGL specification, but here we’re imitating it in software. You could reverse the Z axis if that were your preference; we’re simply adhering to the standard convention.

The x2d and y2d are the final rasterized coordinates in 2D space. Aren’t we doing 3D graphics here? Yes, but the final pixel values are always rasterized to a flat, two-dimensional rectangle.

That’s the whole point of the camera projection algorithm, implemented in the next method of the Star class, project. In just a moment we’ll see how it does that mathematically.

Each star will also have an angle of rotation. Again, I am only including this to demonstrate a principle: in 3D graphics you will do a lot of rotation transformations.

Here, in addition to moving the stars toward the camera, on each frame we will also rotate all of them in a clockwise direction. This is achieved by simply incrementing the Z value of each star along with its angle of rotation.

To rotate a point around the Z axis, we perform operations on its X and Y coordinates. This is the standard trigonometric formula for rotating any vertex around an axis.

For example, swapping the y coordinate with z and plugging that into the formula below will rotate the point around the Y axis. Swapping the x coordinate with z will rotate it around the X axis. In other words, the point rotates around whichever axis is missing from the equation.

Changing the angle from positive to negative will rotate the point in the opposite direction. The general idea remains the same. Here is the pseudocode:

newX = x*cos(angle) - y*sin(angle);
newY = y*cos(angle) + x*sin(angle);
x = newX;
y = newY;

The angle here is the amount by which you wish the point to be rotated per animation frame. Whenever you rotate a 3D object’s vertex, you can be sure that behind all the matrix operations this calculation is taking place in the raw, perhaps optimized by a look-up table.
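A runnable JavaScript version of this rotation might look like the following. One subtlety worth noting: the y result must be computed from the original x, not the freshly rotated one, which is why both components are computed before either is assigned. The function name rotateZ is my own choice, not from the demo:

```javascript
// Rotate a 2D point (x, y) around the Z axis by `angle` radians.
// Both results use the original x and y to avoid a classic bug
// where the updated x leaks into the y calculation.
function rotateZ(x, y, angle) {
  const cos = Math.cos(angle);
  const sin = Math.sin(angle);
  return {
    x: x * cos - y * sin,
    y: y * cos + x * sin,
  };
}
```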

In the starfield demo we rotate each star by 0.005 radians on each frame of animation. Note that JavaScript’s Math.sin and Math.cos functions take the angle in radians, not degrees. Finally, we project the star from its 3D coordinates to 2D on the screen.
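If you prefer thinking in degrees, a tiny conversion helper is handy. This is not part of the demo, just an illustration of the radians point above:

```javascript
// Convert degrees to the radians expected by Math.sin / Math.cos.
function toRadians(degrees) {
  return degrees * Math.PI / 180;
}
```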

While we are still on a 2D canvas, the calculations below transform the coordinate system to what’s shown on the diagram above. In 3D graphics the camera by default looks down the negative Z axis. But what’s more important, X=0 and Y=0 are exactly at the center of the screen, regardless of the screen resolution.

It is for this reason that in 3D graphics we slice the screen into four quadrants. Whichever direction you go, when you hit one of the four bounds of the screen you reach 1.0, the maximum value in that direction.

There is just one more thing. Remember that our screen is wider than it is tall. In other words, this algorithm alone will produce a somewhat skewed effect unless the width and height of our canvas are equal, which is not the case here.

And for this reason we need to correct this distortion by adjusting the X coordinate by the screen’s width-to-height ratio.
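A minimal sketch of such a projection step, under a few stated assumptions: the camera sits at the origin looking down negative Z, the canvas is 800 by 500 as chosen earlier, and the aspect correction is applied by dividing X by the width-to-height ratio before mapping to pixels. This is one common convention, not necessarily the demo's exact code:

```javascript
// Hypothetical 3D-to-2D projection sketch; constant names are mine.
const CANVAS_W = 800;
const CANVAS_H = 500;

function project(x, y, z) {
  // Perspective divide: points farther away (more negative z)
  // shrink toward the center of the screen.
  const px = x / -z;
  const py = y / -z;

  // Correct for the non-square canvas so shapes keep their proportions.
  const aspect = CANVAS_W / CANVAS_H;

  // Map the normalized -1..1 range to pixel coordinates, with (0, 0)
  // in 3D space landing at the exact center of the canvas.
  const x2d = CANVAS_W / 2 + (px / aspect) * (CANVAS_W / 2);
  const y2d = CANVAS_H / 2 - py * (CANVAS_H / 2);
  return { x2d, y2d };
}
```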

This is pretty much the equivalent of the operations performed by the Projection matrix, except the matrix version of this function will also include near and far clipping planes. We will talk about matrix structure and basic matrix functions in just a moment.

By now our source code has projected the 3D star onto the 2D canvas view and rotated each star by 0.005 radians on each animation frame.

Now let’s move the star closer to the camera by 0.0025 units per frame.

// Move star toward the camera
this.z += 0.0025;

I chose 0.0025 by trial and error; it just seemed to produce better visual results. But because I passed a 0 ms delay to setInterval in this particular demo, there is no frame-rate cap, so the animation may not look exactly the same on your computer. The demo will run as fast as your system allows.

Have you ever wondered how our eyes see light? The particle (or wave?) enters through the opening of the pupil. But when the light lands on the back of the eye and hits the retina, the image is projected upside down. Our brain just has a magical way of reversing that information.

Come to think of it, our WebGL 3D camera is just a mathematical representation of this natural phenomenon. Who can tell? Jesus, I have no idea how it actually works. I really don’t. But in our little version of the eye mechanism, since we have control over what happens, we simply need to prevent vertices from falling outside the viewing cone on the Z axis.

Clipping Planes

There is still one important part missing. When a star’s Z coordinate reaches 0 and starts incrementing in the positive direction, our perspective formula will interpret it in reverse. In other words, stars whose z >= 0 will start to appear as if they are moving away from us.

That’s what happens when objects move past the threshold set by the near clipping plane: the results are reversed. But this isn’t what we need. In fact, we don’t have to worry about any vertex data beyond that limit at all.

In our program it is important to set these boundaries. The starfield demo “cheats” a bit here by eliminating all stars that go outside the screen boundary in any direction, which only roughly coincides with them getting too close to the camera. It also never draws any stars beyond the -10.0 boundary. This is only an approximation of what clipping planes do.

A better way would be to exclude vertices based on their Z value directly. Your near clipping plane does not have to be at Z=0; it is usually a small value such as -0.1 or -0.005.
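A hedged sketch of what such a z-based per-frame update could look like. The per-frame increments of 0.0025 and 0.005 and MAX_DEPTH = 10 come from the demo; the NEAR_PLANE value and the function name are my own assumptions:

```javascript
// Hypothetical per-frame star update with a z-based near-plane check.
const NEAR_PLANE = -0.005; // a small negative value, per the text
const MAX_DEPTH = 10;

function updateStar(star) {
  star.z += 0.0025;    // move toward the camera
  star.angle += 0.005; // rotate around the Z axis

  // Recycle the star before it crosses the near clipping plane,
  // where the perspective divide would start reversing the image.
  if (star.z >= NEAR_PLANE) {
    star.z = Math.random() * -MAX_DEPTH;
  }
}
```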

But there is also a far clipping plane, and it can extend out to 250, 500, or 1000 units. In our starfield demo it is only 10. It really depends on how far away you want geometry in your game world to remain visible, and on what a “unit” of space means to your camera.

The good news is that when we use a JavaScript matrix library later on, it will take care of these issues. A perspective projection is usually defined by two clipping planes, and these parameters are integrated into the camera projection matrix as required arguments.

I’m just glad we got these principles down so it’s easier to understand their implementation throughout the rest of the book.

Star Field Demo Results

The final effect will appear roughly as shown on the diagram below. Here I inverted the background color; in the actual demo the background is black. It looks better in a book this way and doesn’t waste black ink in print.

If you don’t have access to a WebGL-enabled browser at this moment (reading this book on a Kindle device, for example), this diagram shows what we’re trying to achieve.

When executed in your browser these calculations will create a hypnotic star travel effect.

The Z of each star is increased. When the projection transform is applied, this creates the illusion that the stars are moving toward the camera, or perhaps that we are traveling forward through space while the stars remain in place.

This dual nature of 3D movement is not a coincidence, and you will run across it often when dealing with the 3D camera. It’s peculiar, but it’s something you just have to get used to: moving the world is the same as moving the camera in the inverse direction. Visually the effect is identical.

We will talk a lot more about the camera and learn how to control it with precision to do pretty much anything we could need in a 3D game, including a mouse-controlled camera or a camera that always follows an object in the world. But let’s get back to our demo for a moment. We figured out the star transformations, but we still need to finish writing the core program.

Here, each star is an individual particle, initialized by a for-loop as you will see in the source code below. In the demo I set the maximum number of stars to 2000.

All of this is done by the remaining part of our JavaScript demo, where we set some basic default parameters, initialize 2000 stars, and use JavaScript’s setInterval timing function to execute each frame of animation without a time delay (as soon as possible).
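That remaining setup might be sketched as follows. Here makeStar() stands in for the Star class constructor described earlier, and the commented-out setInterval call shows where the frame loop would go; only MAX_STARS = 2000 and the 0 ms delay come from the text:

```javascript
// Hypothetical setup sketch for the starfield demo's main program.
const MAX_STARS = 2000;
const MAX_DEPTH = 10;

// Stand-in for the Star class constructor: random X/Y in -1..1,
// Z pushed away from the camera by up to MAX_DEPTH units.
function makeStar() {
  return {
    x: 1 - Math.random() * 2.0,
    y: 1 - Math.random() * 2.0,
    z: Math.random() * -MAX_DEPTH,
    angle: 0,
  };
}

// Initialize all stars with a for-loop, as described in the text.
const stars = [];
for (let i = 0; i < MAX_STARS; i++) {
  stars.push(makeStar());
}

// In the browser, each frame would clear the canvas, then update,
// project, and draw every star, with no delay between frames:
// setInterval(animationFrame, 0);
```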

I hope you have already opened the link to the demo at the beginning of this article. It shows exactly what this code does.

There you can see in action two transformations that are very common in matrix operations: movement along the Z axis and rotation around it. And we’ve already taken a look at their plain mathematical representation, using basic trigonometry, in the source code explained in this chapter.

Congratulations.

You now have a handful of 3D techniques that help you “Think in 3D”.

This WebGL Tutorial was sponsored by Learning Curve book publisher specializing in software documentation.