Games have trouble covering large surfaces so that they look good both at a distance and up close. This was a huge problem in the past (and basically unsolvable for pre-2000 game engines) that still crops up today under certain circumstances.

Let’s say we have a room. We’ll use our old standby UnrealEd to illustrate. We make a room and adjust the textures so that they look good from a distance. But then they look super blurry when we get close:

I’m using a tiny texture (this one is really for ledges and such) to make the problem more obvious, but this screenshot doesn’t really do it justice. It’s when you’ve got just a few pixels stretched across your entire monitor that it really starts to look awful.

You can fix this by making the texture repeat more times. But then the wall will look detailed up close, and monotonously repetitive at a distance:

It’s a tradeoff, really. The problem is that we have lots of space but we can’t afford to fill that space with unique information. Our poor 1999-level graphics card just can’t hold enough texture data to cover this wall with a single ultra-detail texture. The space requires more information than we can afford to provide.

Let’s say I need to texture a wall, but my texture budget only allows me to use a measly 32×32 texture. I dunno. Maybe my boss hates me or something. Here is the texture I have to work with:

Well, I can be an idiot and just stretch that sucker over an entire wall:

Much too blurry. Or, I can cause the texture to repeat a bunch of times:

The obvious tiling is very bad, and ends up making it look like a magic eye picture. Your eye spots the repetition and rejects it. Sure, brick walls are repetitive, but they don’t have the same three bricks with the exact same shape and the exact same shade and the exact same spacing. You can even see tiling problems like this in modern games once in a while. Take Champions Online:

The tiling here is really obvious in-game, although I’m not sure it will show up in this reduced screenshot. So we’ll zoom in a bit. Take a closer look at the ground:

Even the slight color changes can’t hide the tiling, which makes the world look like a big quilt.

The brute-force solution has been to make the texture maps bigger, and graphics card makers have been only too happy to oblige with ever more powerful cards and ever larger memory pools. But it’s actually not a great solution. The way the human eye works, if you want something to look twice as detailed, you need to double the resolution of the texture, so that it is both twice as wide and twice as tall. But textures take up memory based on their total area. Width times height and such. So to make a texture look twice as good, it needs to consume four times as much memory.
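To put rough numbers on that quadratic cost, here’s a quick back-of-the-envelope sketch. (The 32-bit uncompressed RGBA format is my assumption for simplicity; real engines use mipmaps and compression, which change the constants but not the four-times scaling.)

```python
# Doubling perceived detail means doubling resolution in BOTH
# dimensions, so memory use quadruples with each step up.
BYTES_PER_TEXEL = 4  # uncompressed 32-bit RGBA (assumed)

def texture_bytes(width, height):
    return width * height * BYTES_PER_TEXEL

for size in (256, 512, 1024, 2048):
    print(f"{size}x{size}: {texture_bytes(size, size) // 1024} KiB")
# Each step looks twice as detailed but costs four times the memory.
```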

Making things worse is the fact that games were pushing for more detail in both the close-up and distance scenery. The artist wants to make a telephone where you can actually see the buttons as opposed to a blob of color. And game designers are pushing for ever larger gameworlds, which means we’ll be seeing bigger stuff from even further away. The need for better levels of detail on both ends of the scale, combined with the desire for greater variety, and further combined with the problem of quadrupling the memory use for double the quality, all created a need for so much data that even the preposterous rate of GPU growth couldn’t keep up.

But there turned out to be better solutions than just covering everything in enormous textures. One solution was detail textures. (At least, that’s what they were called in the Unreal Engine. As always, graphics technologies often have different names in different development houses.) Going back to my brick wall: Instead of cranking up the resolution on the bricks in a vain attempt to outpace the ability of the human brain to detect artificial patterns, what if I add just one more 32×32 texture:

Now, I’ll draw the wall twice. Once, I’ll draw it with the brick so that it tiles, then I’ll take that little square of light and dark patterns and stretch it over the entire wall. Blending the two together, we get:

Even though this was made in Photoshop, I didn’t cheat here. This image was made with a pair of 32×32 textures. The bricks make it look good up close, while the detail texture creates just enough variance in the pattern to break it up. The brick texture is tiled 18 times across and 18 times down this wall, so if I had wanted to fill the whole thing with unique detail, I would have needed 324 times as much memory. Instead, I got most of the way there with just twice as much.
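For the curious, the two-layer trick can be sketched in a few lines of Python. This is a toy grayscale version with made-up sampling details (nearest-neighbour lookups and a multiply blend are my assumptions, not Unreal’s actual implementation):

```python
# Blend a tiled base texture with a single stretched detail texture.
# Textures are 32x32 grids of grayscale values in 0.0-1.0.
TEX_SIZE = 32

def sample_tiled(tex, x, y):
    # The base layer repeats: wrap coordinates with modulo.
    return tex[y % TEX_SIZE][x % TEX_SIZE]

def sample_stretched(tex, x, y, wall_w, wall_h):
    # The detail layer is stretched once over the whole wall
    # (nearest-neighbour, no filtering).
    return tex[y * TEX_SIZE // wall_h][x * TEX_SIZE // wall_w]

def shade_wall(brick, detail, wall_w, wall_h):
    # Multiply the two layers together, the simplest blend mode.
    return [[sample_tiled(brick, x, y) *
             sample_stretched(detail, x, y, wall_w, wall_h)
             for x in range(wall_w)]
            for y in range(wall_h)]
```

For a 576×576 wall (the 18×18 tiling described above), the two 32×32 inputs stay fixed; only the sampling changes.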

This is very similar to the problem we have in Fuel. The data needed to cover the entire world would fill over 40 DVDs. Like trying to fill our brick wall with unique data, we just can’t afford to do that. But we can come really, really close by taking small sets of data and mixing them together at different resolutions.

Completely unrelated to the terrain, but I thought this place was interesting. Elsewhere there’s a city half-buried in the sand that you can explore, but this half-submerged one is unreachable. Shame. The skyline makes me want to check it out. Very forlorn. Love the decaying buildings.

I believe the overall shape of the world is probably guided by a large-scale, super-low resolution heightmap that puts the big details into place. (I mean “low” resolution in the sense that the points might be 100 meters apart. Even at that, this would still be a big chunk of data. It would be 1280×1280, which would give us over 1.6 million elevation points.) This heightmap would define stuff like mountain ranges and the big bodies of water. Then patches of repeating detail are added to the big map. In my analogy above, the big heightmap is the light and dark noise, and the patches are the brick texture.
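Since this is all guesswork about how Fuel works, here’s a toy Python version of the layering I’m describing. Every name and number in it is illustrative, not Fuel’s real data:

```python
# Toy two-layer heightmap: a coarse base map supplies the big shapes,
# and a small repeating patch adds local relief on top. Spacing,
# patch size, and scaling are all made up for illustration.
BASE_SPACING = 100   # metres between base heightmap samples
PATCH_SIZE = 16      # detail patch repeats every 16 metres here

def elevation(base, patch, x_m, y_m, patch_scale=1.0):
    # The nearest base sample gives the mountain-range-scale shape.
    bx = min(int(x_m // BASE_SPACING), len(base[0]) - 1)
    by = min(int(y_m // BASE_SPACING), len(base) - 1)
    # The patch tiles endlessly, and patch_scale lets the same ridge
    # reappear elsewhere taller or shallower.
    px = int(x_m) % PATCH_SIZE
    py = int(y_m) % PATCH_SIZE
    return base[by][bx] + patch[py][px] * patch_scale
```

In the brick-wall analogy, `base` plays the role of the light-and-dark noise and `patch` plays the role of the brick texture.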

This means that you can probably spot the repeating topography if you play long enough. Of course, it won’t be exactly the same. You’ll find yourself on a horseshoe-shaped ridge, looking down the steep drop into a valley of snow and rock. Perhaps a kilometer away this horseshoe bit is repeated. Except here the thing is scaled down vertically, so the valley is shallow, ringed with trees, and filled with water in its basin.

I’ve come across areas like this in the game, where a bit of the land will feel vaguely familiar or remind me of someplace else. It’s a different texture with different foliage and the roads might follow different patterns, but I suspect I’m seeing the data repeat. Note that I doubt you’d notice this unless you were looking for it. Which, now that I’ve pointed it out, you will be. Sorry.

So that’s my take on how I think we get 200GB of world out of a gigabyte or so of data. It’s certainly the approach I’d use.

Things that feel vaguely familiar or remind you of someplace else do not a topography break. The real world is filled with places that remind us of other places. If anything, that makes it more realistic, particularly because it gives you a certain feel for how the geology of the world works.

3 Years later: Nope. Repeating identical bricks doesn’t make things more realistic, and it does break the “Topography” (Pretty sure that word doesn’t mean what you think it does). The point is, while things may be similar, or slightly repeated, they have to differ slightly, or else we notice they’re the same. Brick walls should not be the same, patches of sand should not be the same, etc.

Well, to be fair, there is a difference between déjà vu and metagame familiarity. The concept of repeating environments – especially in open worlds – is hardly new, and I’d wager most gamers are gonna become unconsciously aware of it anyway.

I guess the most obvious way to combat this sort of thing is procedural layers of textures, which plays into your last few posts.
Though, being a big fan of Mr. Carmack over at id Software, I feel obliged to point you in the direction of a certain technology he has been working on for id Tech 5.
To summarize, he has taken a hybrid brute-force method to produce what amounts to a single HUGE texture for the entire game world that is streamed in on the fly. You can check out his presentation here: http://s09.idav.ucdavis.edu/talks/05-JP_id_Tech_5_Challenges.pdf

The most amazing ability this technology gives a game is the fact that there are NO tiled textures, and maybe even more importantly – artists can ‘paint’ the game world collaboratively! Which means less development time in an increasingly development-hungry games industry.

There are better methods than this which don’t involve heightmaps at all.

On Warhammer Online (the original, not the Mythic one) our terrain engine defined the world using shapes and terrain modifiers. This data was calculated on the fly to generate the terrain around you as you moved, as well as texturing it according to procedural rules based on altitude, angle, and theme.

The result was an immense area of terrain that fit into a tiny data footprint as all we stored were the shapes and terrain modifiers. Data that amounted to a few bytes each.

It was actually vastly quicker to create terrain this way than using heightmaps or the “Painting” techniques other engines used. I was able to knock out a mountain pass in a morning.

However, at the time this was a very expensive technique. But this was in the days when GeForce 3 cards were high-end and multi-core processors were years away. These days I imagine this approach would be incredibly fast.

Actually, it is reachable if you unlock the vehicles in the last region of the game, Rainier Peak. It’s one of the ‘maverick’ vehicles that appears on your map and that you have to hunt down and ram to acquire.

The downside with the multi-texturing is that it requires either… well… multitexturing, tying up two of the four available texture units (yes, even high-end cards still only allow four) that you could’ve used for other fancy stuff like particles or characters.

Or you can do it with a second rendering pass, which is slow since you’re rendering the scene twice, but it allows you to blend as many textures as you want by adding more rendering passes.
This is the approach I’m using for my stuff.

Still, multitexturing is the way it’s done most of the time. And the easiest way to solve the problem of repeating textures is to have multiple base- and detail-textures.
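The multipass approach the comments above describe can be sketched in Python if we stand in for the GPU with a plain grid of grayscale values (a toy model; real passes run on graphics hardware against a real framebuffer):

```python
# Multipass blending: process the scene once per texture layer and
# combine each layer into the framebuffer. Slower than multitexturing
# (geometry is handled once per pass) but not limited to four units.
def render_pass(framebuffer, layer, blend):
    for y, row in enumerate(layer):
        for x, value in enumerate(row):
            framebuffer[y][x] = blend(framebuffer[y][x], value)

def multipass(layers, width, height):
    fb = [[1.0] * width for _ in range(height)]  # start fully lit
    for layer in layers:
        # One pass per layer, multiplicative blend.
        render_pass(fb, layer, lambda dst, src: dst * src)
    return fb
```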

Though that requires more work from the artist, and as you (Shamus) have pointed out, games already cost a billion trillion dollars to make.

Something changed recently, and I don’t know how. But a while back there was that TED Talk about Photosynth, where the guy displayed multigigabyte pictures at arbitrary resolutions and said “the only thing that ought to limit [this process of showing massive datasets] is the number of pixels on your screen at any given moment”. I’d have naively thought that there would be a host of other limits, like memory and cache size, bandwidth, access times, etc. But what I saw in that demo wouldn’t have worked if these were still primary concerns.

Then Carmack started introducing IdTech5 and touting the idea of arbitrarily large arbitrarily deeply layered textures that were not tiled at all and, according to him, allow the artists to do whatever they want to surfaces without affecting performance in the slightest. He did point out that this breakthrough did nothing to help ease the work of rendering geometry, it only applied to surface textures. But he took care to reiterate that the textures could be arbitrarily large, with no tiling and no (noticeable?) computational burden.

So obviously something has changed, some elegant new methods have been devised for handling large images. I work with fairly large images on my machine, and it slows down noticeably when the images become too large and the memory starts thrashing. But I’ve seen demos of both Photosynth and IdTech5 where absurdly large photos and textures were drawn on the screen with ridiculous ease, immediately responsive to changes both large and small in magnification or distance.

A few years ago I’d have attributed this technological leap to the usual suspect: faster hardware coming out all the time. But these two examples show that something else has changed, something fundamental in how images can be processed. I hope I someday find out what it is.

I’ll admit to being behind on the latest tech. I haven’t even READ about how idTech5 works. I know what it does but I don’t know how. I imagine once I sit down and absorb it I’ll have the same reaction that I did to BSP technology: “I would never have thought of this.”

I thought game developers had learned about aperiodic tilings? I remember being amazed when I read Stam’s paper years ago… I immediately started to think about trying to implement it in hardware.

It’s kind of too bad the first 3Dfx boards didn’t all come with 2 TMUs, then detail textures might have been common; on the other hand, probably most would use the two for trilinear filtering. I don’t think any 3 TMU boards were ever produced, though the hardware was designed to allow it.

An extra thought: detail texturing has been used by Id since Quake! Light maps combined with the underlying texture maps is a form of detail texturing, though in this case, the light maps were lower resolution than the object texture maps.

The MegaTexture principle was designed and implemented by Carmack for Splash Damage’s Enemy Territory: Quake Wars. The demo contains one map, so you could always take a look at that for free, if it’s at all revealing of the technique.

Personally I think we’re past the stage where engineering – even Carmack-level rocket science – is sufficient to be the sole determinant of the impact of a game world.

After seeing some of this I started to fiddle around with Terragen 2 (a fantastic piece of software, by the way) and am now in the middle of recreating the world of our D&D game. I’ve gotten pretty good results, and the most amazing thing is that the island we have is around 550km by 500km and the whole thing takes about 71.6 KB, excluding my mask files (.bmp format at the moment), which are only necessary because 1. I don’t really know how to use it yet and 2. I’m trying to recreate a particular island. If you just need to create a stunning terrain then you only need a few KB. And you can export it in a format that’s compatible with various 3D engines. As soon as CoolBasic V3 comes out we’re going to make an RPG computer game of our current D&D adventure. Maybe. =)

The part that astonishes me the most is definitely the scale of the thing. You can easily go from creating whole planets to the level of small pebbles (I’m not sure if grain-of-sand-level is quite achievable, but it might be), and even in the same scene.

As far as I have worked out the whole system basically just generates fractals and creates the scenery according to that data and can be controlled pretty much at will. Also supports heightfields (naturally).


One Trackback

[…] Oh, also to OP, I don't know if you know this already but I just saw a link to this: Fuel: Terrain – Twenty Sided It's about textures in gaming and stuff. Good luck on the creation. Be sure to let us know when […]