[[This is Chapter 17(g) from “beta” Volume V of the upcoming book “Development&Deployment of Multiplayer Online Games”, which is currently being beta-tested. Beta-testing is intended to improve the quality of the book, and provides a free e-copy of the “release” book to those who help with improving it; for further details see “Book Beta Testing“. All the content published during Beta Testing is subject to change before the book is published.

As noted at the beginning of this Chapter, please keep in mind that

in this book you will NOT find any advanced topics related to graphics.

What you will find in this chapter is the very basics of graphics – just enough to start reading other books on the topic, AND (last but not least) to understand other things which are essential for network programming and the game development flow.

Bottom line:

if you’re a gamedev with at least some graphics experience – it is probably better to skip this Chapter to avoid reading about those-things-you-know-anyway.

This Chapter is more oriented towards those developers who are coming from radically different fields such as, for example, webdev or business app development (and yes, switches from webdev into gamedev do happen).

Lighting

While those of us coming from other computing fields might not believe it, lighting is one of the most important things when it comes to the visual quality and “feeling” of a 3D scene. In other words, lighting can easily make or break your graphics.

NB: Please note that as with everything else 3D-related, we won’t go into details of lighting, and will just scratch the surface.

Before going into any discussion of lights, let’s quote [GritzdEon] on the inherently linear nature of light processing:

Light transport is linear. The illumination contributions from two light sources in a scene will sum. They will not multiply, subtract, or interfere with each other in unobvious ways.1

This linearity of light is a cornerstone on which the whole of light processing is built. In particular, as soon as we have multiple light sources, we can calculate their contributions separately, and then simply add them up to get the correct result.

1 Save for scenarios when we need to deal with the wave properties of photons, such as interference or diffraction – but I haven’t seen such things in games (yet?)
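As a toy illustration of the linearity property (my own sketch, not from any particular engine): each light’s contribution to a surface point is computed independently, and the final intensity is simply their sum – no multiplication or interference involved.

```python
# Sketch: linearity of light transport. Each light's contribution is
# computed on its own; the final result is the plain sum.

def diffuse_contribution(light_intensity, n_dot_l):
    """Lambertian diffuse term for a single light
    (scalars, one color channel; n_dot_l = dot(N, L))."""
    return light_intensity * max(n_dot_l, 0.0)

# Two hypothetical lights hitting the same surface point:
c1 = diffuse_contribution(0.8, 0.5)   # light #1 contributes 0.4
c2 = diffuse_contribution(0.6, 1.0)   # light #2 contributes 0.6
total = c1 + c2                       # contributions just add up: 1.0
```

The same additive structure is what lets renderers accumulate per-light results in any order (and is why “additive blending” passes work at all).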

Light Sources

To start playing around with lighting, we should define some light sources. In 3D graphics, at least the following types of light sources are traditionally recognized:

Ambient light. A type of light which is the same across the whole 3D scene for any given frame. In other words, while it may change from one area to another, and over time too, for the purposes of rendering one single frame it is the same over the whole 3D scene. Ambient light can be described by one single variable – color.

Example. Think of a room with closed shutters and no lights at twilight – and you’ll get an idea of the ambient light component; note, however, that ambient light will normally be present in any scene – it is just that in other environments it will be much less noticeable due to other light sources.

Directional light. Essentially – parallel light which goes across the whole scene. Directional light is described by a vector and a color.

Example. Sunlight. While technically, sunlight is not strictly parallel, for all rendering intents and purposes we can consider it parallel (well, at least as long as we’re on Earth; realistic rendering on Mercury might be different).

Spot Light. A cone of light. Usually represented by two cones; the inner cone has constant intensity, which then gradually decreases towards the outer cone, where it disappears completely. Described by a point, a vector, inner/outer angles, and a color.

Example. Many lamps and lights exhibit such behavior.

Point Light. The intensity of a point light decreases with distance (usually as 1/R^2). Described by a point, a color, and a characteristic distance (for example, the distance at which intensity goes down by half).

Emissive objects. Some objects in the scene can emit light themselves. Can be implemented using emissive textures.

Examples include all kinds of glowing stuff – from fireflies to “glowing ectoplasm” and beyond.

Combinations of all the above.

Example. In [Gregory] an example from a real-world game is discussed, which combined an emissive texture, a spot light, a translucent mesh, and a projected texture – all of it just to make a flashlight (though a Really Important one for the game).
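To make the taxonomy above more tangible, here is a minimal sketch of these light source types as data structures (all names are my own, and the exact attenuation formula is an assumption – real engines vary; the point-light falloff below is normalized so that intensity halves at the characteristic distance, as described in the text):

```python
# Sketch: the traditional light source types as plain data structures.
from dataclasses import dataclass
from typing import Tuple

Color = Tuple[float, float, float]   # (r, g, b), each 0.0..1.0
Vec3 = Tuple[float, float, float]    # (x, y, z)

@dataclass
class AmbientLight:
    color: Color                     # the single defining parameter

@dataclass
class DirectionalLight:
    direction: Vec3                  # parallel rays across the whole scene
    color: Color

@dataclass
class SpotLight:
    position: Vec3
    direction: Vec3
    inner_angle: float               # constant intensity inside this cone
    outer_angle: float               # intensity fades to zero at this cone
    color: Color

@dataclass
class PointLight:
    position: Vec3
    color: Color
    half_distance: float             # distance at which intensity halves

    def attenuation(self, distance: float) -> float:
        # 1/R^2-style falloff, normalized so that
        # attenuation(half_distance) == 0.5 and attenuation(0) == 1.0
        return 1.0 / (1.0 + (distance / self.half_distance) ** 2)
```

Note the `1.0 +` in the denominator: a raw `1/R^2` would blow up to infinity as the distance approaches zero, so practical attenuation formulas are usually clamped or offset in some such way.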

Reflection model

Ok, we’ve got our light sources, but now we’re facing the next question – how do those light sources interact with the objects within our scene?

In games, more often than not, we’ll be dealing with the so-called Phong reflection model (a.k.a. Phong illumination or Phong lighting, but not to be confused with Phong shading), named after Bui Tuong Phong (who described it in his 1973 PhD thesis).

The idea behind it is that a light reflection from a surface consists of three components (terms):

Ambient, which depends neither on “at which angle the light comes to the surface”, nor on “at which angle we’re looking at the surface”.

Diffuse, which depends on “at which angle the light comes to the surface”, but not on “at which angle we’re looking at the surface”.

Specular, which depends both on “at which angle the light comes to the surface” and on “at which angle we’re looking at the surface”.

Mathematically, for a single light source (and single color component) it can be described as

I = ka*ia + (kd*(N · L) + ks*(R · V)^α)*i

Where:

I is the intensity of the reflected light

ka, kd, ks are the ambient, diffuse, and specular reflection constants of the material, respectively (the latter two are often taken from respective texture maps)

α is the “shininess constant” of the material (may be represented as a single-channel texture known as a “roughness map”)

ia is the intensity of the ambient light

i is the intensity of our single light source

N is the normal to our surface

L is the vector at which light comes to the surface

R is the direction in which the light coming along vector L reflects (can be calculated from the “law of reflection”)

V is the vector indicating the angle from which we’re looking at the surface (i.e. the angle of our camera)

NB: all vectors are assumed to be pre-normalized.
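As a quick illustration, the Phong formula above can be sketched for a single light source and a single color channel (parameter names follow the symbols above; my assumed convention is that L points from the surface towards the light, and V points from the surface towards the camera):

```python
# Sketch: Phong reflection for one light, one color channel.
# All vectors are tuples and are assumed pre-normalized.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(L, N):
    # "law of reflection": R = 2*(N.L)*N - L
    # (with L pointing *towards* the light source)
    d = dot(N, L)
    return tuple(2.0 * d * n - l for n, l in zip(N, L))

def phong(ka, kd, ks, alpha, ia, i, N, L, V):
    R = reflect(L, N)
    diffuse = kd * max(dot(N, L), 0.0)                 # angle light-vs-surface
    specular = ks * (max(dot(R, V), 0.0) ** alpha)     # angle reflection-vs-eye
    return ka * ia + (diffuse + specular) * i
```

With light and camera both straight along the normal (N = L = V), the reflection R coincides with V, so both dot products are 1 and the result collapses to `ka*ia + (kd + ks)*i` – a handy sanity check. The `max(..., 0.0)` clamps are the usual practical addition to stop back-facing light from *subtracting* illumination.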

Of course, the Phong model is not the only reflection model out there – but it2 is widely used in games, and allows achieving very good results. For a further discussion of lighting (in particular, of the more generic BRDF model) – I suggest referring to [VanVerthBishop].

2 and its variations such as Blinn-Phong model

Lighting and MOG

As with pretty much anything within the context of this Chapter, we want to answer the question “how does this feature affect MOGs?” For lighting, the answer is simple – it almost never does. Even if your game calls for effects such as a PC being blinded by a laser reflected from a mirror, for MOGs it is not implemented via thorough rendering of all the lights within your 3D scene, their respective reflections, and the effects of the light on the human eye – but rather at the game logic/physics level, via separately coded blinded-by-laser-beam logic (handling reflections at the same level if necessary).

Camera & Frustum

By this time, we’ve got the whole 3D scene, and have even lit it, but we still haven’t defined a mechanism for seeing it 🙁 . To allow rendering of a 3D scene onto a 2D surface (screen), we need to add a camera – and to start thinking in terms of projections. For the purposes of this book, we’ll discuss just the two most common projections – perspective and orthographic.

Perspective projection

When using perspective projection, the camera is a point from which we’re looking at our 3D world – plus a vector (coming from this point) describing the direction we’re looking in. At some point along this vector, there is a plane (orthogonal to the vector) onto which we’re projecting our 3D world, and a part of this plane is our 2D screen. While this description may sound convoluted, in fact it is simple – just look at the picture above. Mathematically, however, it is indeed rather complicated (perspective projection is neither linear nor even an affine transformation, though a 3D perspective projection can still be described by linear manipulations with 4D matrices, see [[TODO]] section above).
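As a toy example of that “4D matrices” remark (my own sketch, not from the book): the non-linearity of perspective projection hides entirely in the final “perspective divide”. A 3D point is extended to 4D homogeneous coordinates, multiplied by a 4×4 matrix (a perfectly linear operation), and only then are x and y divided by the resulting w:

```python
# Sketch: pinhole perspective projection of a point onto the plane z = d,
# with the camera at the origin looking along +z (assumes z > 0,
# i.e. the point is in front of the camera).

def project_perspective(point, d=1.0):
    x, y, z = point
    # Equivalent to multiplying (x, y, z, 1) by the 4x4 matrix
    #   [[d, 0, 0, 0],
    #    [0, d, 0, 0],
    #    [0, 0, d, 0],
    #    [0, 0, 1, 0]]
    # which yields homogeneous (d*x, d*y, d*z, z):
    xc, yc, wc = d * x, d * y, z
    return (xc / wc, yc / wc)        # the perspective divide

# A point twice as far away projects to half the size:
p_near = project_perspective((2.0, 0.0, 2.0))   # -> (1.0, 0.0)
p_far = project_perspective((2.0, 0.0, 4.0))    # -> (0.5, 0.0)
```

Real graphics APIs use a fuller matrix (encoding field of view, aspect ratio, and near/far planes), but the divide-by-depth step is the same – and it is exactly what makes distant objects smaller.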

With perspective projection, the only things we can see reside within the pyramid which has the camera as its apex, and the lines from the camera through the corners of our 2D screen as its edges. This pyramid is known as the frustum,3 and it has very important implications in 3D – and in MOGs too.

The most important consideration about frustum is that (as a rule of thumb) we don’t need to render objects which are completely outside of the frustum; this is known as frustum culling (also mentioned in [[TODO]] section above) – and (in one or another form) is present in any serious 3D game.

3 strictly speaking, a “frustum” is only the part of the pyramid beyond our screen, but for our purposes the difference is usually negligible
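Frustum culling itself can be sketched in a few lines (my own toy example; real engines represent the frustum as six planes extracted from the view-projection matrix and test bounding volumes against them). Here a frustum is a list of inward-facing planes, and a bounding sphere is culled as soon as it lies entirely outside any single plane:

```python
# Sketch: frustum culling of bounding spheres. A plane is (n, d) with
# normal n pointing *inside* the frustum, "inside" meaning
# dot(n, p) + d >= 0 for a point p.

def sphere_outside_frustum(center, radius, planes):
    for n, d in planes:
        dist = sum(nc * pc for nc, pc in zip(n, center)) + d
        if dist < -radius:           # sphere completely behind this plane
            return True              # -> safe to skip rendering it
    return False                     # intersects or inside -> must render

# Toy "frustum": just a near plane (z >= 1) and a far plane (z <= 100)
planes = [((0, 0, 1), -1.0),         # z - 1 >= 0
          ((0, 0, -1), 100.0)]       # -z + 100 >= 0
```

Note that this test is conservative: a sphere straddling a corner may pass all per-plane tests while still being outside the frustum, which is fine for culling (we merely render a bit more than strictly necessary, never less).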

Orthographic Projection

Another rather popular projection for rendering a 3D world onto a 2D screen is the so-called orthographic projection. Compared to the perspective projection described above, we still have our 3D world, we still have our 2D screen, we still have a vector “where we’re looking from”, and we still have our frustum. However, the lines which project points from our 3D world onto our 2D screen no longer converge to one single point; instead, they run parallel to each other; as a result, our frustum for orthographic projection is no longer a pyramid, but rather a rectangular parallelepiped (a box).

In games, orthographic projection is used mostly for map-like views and for rendering 2D graphics using GPUs (see [[TODO]] section above).

Frustums and MOGs

In addition to having a profound impact on 3D rendering, frustums may have an impact on MOGs too. In particular, as discussed in Chapter III, the frustum might be used for “interest management” (which is commonly used both to optimize traffic and to avoid “see-what-you-shouldn’t-see” cheating; see Chapter III for a detailed discussion).

The idea of using the frustum for interest management goes along the following lines: as we need to render only the part of the world within the frustum, the Client doesn’t really need to know about the world outside of the frustum. However, as discussed in Chapter III, this technique has its problems (in particular, the problem of sharp turns), and as of now, I don’t know of any games which use it in practice. Still, keep your eyes open (and take a look at Chapter III for the discussion).

[[TODO: homogeneous clip space]]

[[To Be Continued…

This concludes beta Chapter 17(g) from the upcoming book “Development and Deployment of Multiplayer Online Games (from social games to MMOFPS, with stock exchanges in between)”. Stay tuned for beta Chapter 17(h), where we’ll continue our very cursory discussion of 3D with a 1000-word crash course on rendering pipelines and shaders.]]