On the Back of a Napkin: Part 3

This article originally appeared in Dev.Mag Issue 4, released in June 2006.

Now that we’ve got a solid foundation to work with, we can start looking at more relevant issues in 3D. This week we’ll explore the idea of texture filtering and why it’s a good thing. We’ll find out what the various types of filtering actually do, how they do it and how much of a frame rate hit each one causes. Read on if you want to know the difference between bilinear, trilinear and anisotropic filtering.

Why filter?

Texture filtering exists because of a little problem in the graphics industry called aliasing. Aliasing artifacts are very easy to spot and can ruin the visual illusion in a game.

Any computer screen is divided into pixels, duh. Each pixel can only be a single colour: it’s impossible to have a pixel start off red on one side and then fade to black on the other. The idea is that any image can be approximated (See, it IS all lies, even your screen) by splitting it into enough individual pixels. Unfortunately that doesn’t always work very well: we notice “blockiness” on diagonal and nearly vertical/horizontal lines very easily.

Aliasing issues

Aliasing gets its name from the process of referring to one thing by several different handles or names. In this case, we’re trying to get the pixels in a rendered image to refer to the pixels in a texture. As each pixel in our image is rasterised (if you have no idea what that means, read the article on Vertices again), interpolation gives us a unique set of texture co-ordinates that tell us where on the texture to fetch the colour our pixel should be. That sounds complex, but it isn’t: the rasteriser starts on a new pixel onscreen -> interpolation gives us the various values that pixel needs (by blending between the values at the vertices) -> the texture co-ordinates give us an x and y point on our texture -> the pixel is made that colour, with some adjustments for lighting and all that jazz.
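
The lookup chain above can be sketched in a few lines. This is a toy illustration, not any real engine’s code: the rasteriser has already handed us interpolated (u, v) co-ordinates in the 0-to-1 range, and “point sampling” (no filtering at all) just grabs the single nearest texel.

```python
def point_sample(texture, u, v):
    """texture is a 2D list of colours; u and v are in [0, 1]."""
    height = len(texture)
    width = len(texture[0])
    # Map UV space onto texel indices, clamping to the texture edges.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A tiny 2x2 "texture": red, green / blue, white.
tex = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(point_sample(tex, 0.1, 0.1))  # lands on the top-left (red) texel
```

Point sampling is exactly where the “blockiness” comes from: every screen pixel snaps to one texel, so neighbouring pixels can jump abruptly from one colour to another.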

Groovy. But textures are also made up of pixels (which we call texels to save on headaches), so they can have aliasing issues of their own. Damn. Here’s a picture of one of the problems:

Aliasing issues on textures

Bilinear filtering

All you have to do to see the effects of bilinear filtering is to run almost any game in software mode and then again with hardware acceleration. Bilinear filtering makes textures “smoother” and less blocky by grabbing the four texels nearest the sample point and averaging their values to get a blended colour for the screen pixel. It’s this “blurring” that smoothes out the textures on the screen and hides aliasing artifacts.
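
Here’s a rough sketch of that four-texel average in Python (illustrative only; real hardware does this in fixed-function units, not code like this). The two weighted blends across, then one blend down, are where the “bi” and “linear” come from:

```python
import math

def bilinear_sample(texture, u, v):
    """Average the four texels around (u, v), weighted by distance."""
    height, width = len(texture), len(texture[0])
    # Continuous texel-space position, offset so texel centres sit on integers.
    x = u * width - 0.5
    y = v * height - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0  # fractional distance to the next texel over

    def texel(tx, ty):
        # Clamp lookups to the texture edges.
        tx = max(0, min(tx, width - 1))
        ty = max(0, min(ty, height - 1))
        return texture[ty][tx]

    def lerp(a, b, t):
        return tuple(ca * (1 - t) + cb * t for ca, cb in zip(a, b))

    top = lerp(texel(x0, y0), texel(x0 + 1, y0), fx)
    bottom = lerp(texel(x0, y0 + 1), texel(x0 + 1, y0 + 1), fx)
    return lerp(top, bottom, fy)
```

Sampling dead centre of a 2x2 texture lands exactly between all four texels, so you get an even mix of all of them — that’s the “blurring” in action.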

Bilinear filtering

There are a few problems with bilinear filtering though. The first and most visible is caused by mipmapping. Mipmapping is a technique used to limit aliasing issues by providing smaller versions of textures that an engine uses when objects are far away. This means there are fewer texels that it’s possible to miss when there are large “gaps” between sample points. Some engines use many levels of mipmaps, especially if it’s possible to see very far into the distance.
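
Building a mipmap chain is simple enough to sketch: each level halves the previous one by averaging 2x2 blocks of texels (a box filter — real tools sometimes use fancier filters, but the idea is the same):

```python
def downsample(texture):
    """Halve a square texture by averaging each 2x2 block of texels."""
    size = len(texture) // 2
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            block = [texture[2 * y][2 * x],     texture[2 * y][2 * x + 1],
                     texture[2 * y + 1][2 * x], texture[2 * y + 1][2 * x + 1]]
            # Average each colour channel across the four texels.
            row.append(tuple(sum(c) / 4 for c in zip(*block)))
        out.append(row)
    return out

def build_mip_chain(texture):
    """Return [full size, half size, quarter size, ...] down to 1x1."""
    chain = [texture]
    while len(chain[-1]) > 1:
        chain.append(downsample(chain[-1]))
    return chain
```

So a 512x512 texture gets a whole pyramid of companions — 256, 128, 64 and so on down to 1x1 — and the engine picks a level based on distance.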

Quake's picmip setting.

Trivia: The famous “picmip 5” setting that Quake3 pros used simply scales down all the textures in the game, dropping five mipmap levels so that a 512×512 texture becomes an effective 16×16 image instead. This blurs all the textures like crazy, but that’s not why the pros did it: they were after the small increase in FPS caused by having smaller textures and fewer texel lookups, and a rather debatable “visibility increase”. Oh what crap textures you have, grandma! All the better to see you with, dear.

So, mipmapping was invented before bilinear filtering as a way to deal with distance aliasing issues. The smaller textures (remember how the U and V texture coordinates only range from 0 to 1? The different sizes of mipmapped textures are one of the reasons for that) allow for fewer “misses” of texels because there are fewer texels in total. But when you’re using bilinear filtering AND mipmapping, the smaller textures are blurred a lot more by the bilinear filter:

Mipmapping/bilinear filtering artifacts

This sudden increase in blur is what we see in games as a horizontal or vertical “line” on floors and walls, especially when moving. It ends up looking like there’s an error that stays a certain distance ahead of us in the game, which can get very frustrating. That’s why there’s the option to turn on trilinear filtering.
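
That visible “line” is the boundary where the engine switches mip levels. The selection itself boils down to something like this (a simplified sketch — real hardware computes the texel footprint from screen-space derivatives, and the names here are made up for illustration):

```python
import math

def pick_mip_level(texels_per_pixel, num_levels):
    """Roughly: level = log2(texels covered per screen pixel), clamped.

    At 1 texel per pixel we want the full-size texture (level 0); every
    doubling of the footprint pushes us one mip level smaller.
    """
    level = math.log2(max(texels_per_pixel, 1.0))
    return min(max(level, 0.0), num_levels - 1)
```

With plain bilinear filtering that fractional level gets snapped to a whole number, so the texture jumps abruptly from one mip to the next — which is exactly the line you see crawling along the floor.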

Trilinear filtering

Just as bilinear filters across two dimensions, trilinear filters across three. Except that the third dimension is the Dimension of MipMapping! This means that where bilinear filtering grabs four texels and averages them out according to an algorithm, trilinear grabs eight texels (four from one mipmap and four from the other) and again averages them out to get a final colour for the pixel on screen.
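
In code terms (again a toy sketch, using single grey values instead of full colours to keep it short), trilinear is just a bilinear sample from each of the two nearest mip levels, blended by how far between them we are:

```python
import math

def bilinear(tex, u, v):
    """Minimal bilinear sample on a square texture of grey values."""
    n = len(tex)
    x, y = u * n - 0.5, v * n - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0

    def t(tx, ty):  # clamped texel fetch
        return tex[max(0, min(ty, n - 1))][max(0, min(tx, n - 1))]

    top = t(x0, y0) * (1 - fx) + t(x0 + 1, y0) * fx
    bot = t(x0, y0 + 1) * (1 - fx) + t(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

def trilinear(mips, u, v, level):
    """Blend bilinear samples from the two mip levels around 'level'."""
    lo = max(0, min(int(math.floor(level)), len(mips) - 1))
    hi = min(lo + 1, len(mips) - 1)
    f = level - lo  # how far between the two mip levels we are
    return bilinear(mips[lo], u, v) * (1 - f) + bilinear(mips[hi], u, v) * f
```

Because the blend factor changes smoothly with distance, the hard mip boundary disappears — that’s the whole point of the third “linear”.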

Trilinear filtering

Anisotropic filtering, the next level

So, both bilinear and trilinear filtering work in texture space to try to calculate the correct colour for a textured pixel. Unfortunately this isn’t always the best approach: it works fine when the textures being filtered are displayed on polygons that face the camera head-on, but it’s a poor approximation for polygons at oblique angles. This is because the shape of a pixel on screen, when projected into texture space, depends on the angle of the polygon the texture is being used on. Wait, that sounds confusing. Here’s a picture:

Projecting pixels on oblique surfaces

Anisotropic filtering takes this difference in mapping into account and uses many samples of the texture in patterns that depend on the projection to calculate the final colour of our on-screen pixel. Unfortunately ATi and nVidia use different patterns and sometimes even different numbers of samples to arrive at their final values, so it’s not really possible to draw a simple snapshot of anisotropic filtering. It is possible to mention that anisotropic filtering uses a lot more texture samples per pixel (obviously), so both card manufacturers decided to give us some say in the bandwidth-versus-visual-quality trade-off via the arbitrarily named 2x, 4x, 8x and even 16x Aniso settings that we can tweak in our drivers.
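
Since the real sample patterns are vendor-specific, here’s only a toy illustration of the idea: take several samples spaced along the pixel’s longer axis in texture space (the stretched direction of that projected footprint) and average them. The function and its parameters are made up for this sketch, again using single grey values:

```python
def aniso_sample(texture, u, v, du, dv, max_samples=8):
    """(du, dv) is the direction/length of the pixel's footprint in UV space."""
    n = len(texture)
    total = 0.0
    for i in range(max_samples):
        t = (i + 0.5) / max_samples - 0.5  # spread samples from -0.5 to +0.5
        su, sv = u + du * t, v + dv * t
        # Point-sample each position (real hardware would bilinear-sample here).
        x = max(0, min(int(su * n), n - 1))
        y = max(0, min(int(sv * n), n - 1))
        total += texture[y][x]
    return total / max_samples
```

The 2x/4x/8x/16x driver settings roughly correspond to the maximum number of samples taken along that axis — hence the memory bandwidth cost climbing with each step.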

But what does that all mean?

Why don’t we take a step back and figure out what all this means for our gaming?

Texture filtering makes games look better by making our textures less dependent on resolution. Of course, we could simply up our resolutions and make our games look smoother and crisper that way. That’s option 1, but it does mean that our whole rendering pipeline is calculating a lot more pixels, so your FPS will depend on the speed of your GPU’s core clock.

If you can stand it, turning filtering off (and living with only point sampling and mipmaps) is the fastest approach in terms of memory bandwidth. It doesn’t look great at all though.

Bi- and tri-linear filtering are the current standards because an average graphics card these days has a memory clock that’s fast enough to allow 4 (for bilinear) and 8 (for trilinear) texture-memory reads per pixel. So, depending on your card, you can probably afford to use either of those filtering methods without taking an FPS hit at all.

Anisotropic filtering ups the memory reads per pixel quite dramatically, sometimes even doing as many as 128 reads on the highest settings! So your memory clock speed is really important if you want to use aniso. Newer cards can handle the lower levels of aniso (4x and lower) without pushing themselves too much, and both manufacturers use optimisation algorithms to apply anisotropic filtering only to those parts of a scene that need it.

If you really hate fuzzy textures, enable the highest level of aniso filtering you can, but it will give you slower fill rates as each pixel reads tons of memory. The slowdown gets worse the higher your resolution, but such a high level of filtering might make it tolerable to take your screen size down a notch or two. That’s all purely personal choice, though.

About dislekcia

Danny Day still enjoys telling people he's a game designer far too
much. He has yet to apologise for accidentally starting Game.Dev all
those years ago (some believe he never will) and currently runs QCF
Design in the hope of producing awesome games that you'll enjoy playing. Don't talk to him about education systems, procedural generation or games as art if you value your time.