All That Glitters: An Interview With Bungie's Senior Graphics Engineer

[In this feature interview reprinted from the December issue of Game Developer magazine, editor-in-chief Brandon Sheffield speaks to Bungie senior graphics engineer Hao Chen about how Halo creator Bungie plans to solve problems for its new IP and the next generation.]

Halo has been a defining series for the Xbox platforms -- not only as the platform's first major system-seller, but also as a series known for pushing the limits of what the console can do visually. A lot of the visual punch comes from the graphics engineering side, subtly decoupled from the artists. Halo's look is unique and well-defined, but with each iteration the graphics team gives it greater depth and fidelity.

Hao Chen is Bungie's senior graphics engineer, and as the team works on its newest project -- the first to go multi-platform -- it has encountered new challenges, as well as a whole host of older issues that many companies previously considered solved. As Bungie prepares not only for its leap onto non-Microsoft platforms, but also for the transition into the next generation, we spoke in depth with Chen about the future of graphics on consoles.

Let's talk about the pros and cons of megatextures. Do you use them?

Hao Chen: We don't use them right now. There are two main interpretations of "megatexture." One is to store a giant texture, say 16k by 16k, and use a combination of hardware, OS, and client side streaming to allow you to map that big texture to your world.

This is cool, but we already have similar technology in places where that would be needed. For example, this is great for terrain, but we already have a very good terrain solution that uses a great LOD and paging scheme to allow us to do large and high fidelity terrain.
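The indirection Chen describes -- a huge virtual texture backed by on-demand streaming -- can be sketched in a few lines. This is a hypothetical illustration, not Bungie's code; the page size, virtual dimensions, and class names are all assumptions.

```python
# Hypothetical sketch of "megatexture"-style virtual texturing: a giant
# virtual texture (e.g. 16k x 16k) is split into fixed-size pages, and
# only the pages actually touched are made resident.
PAGE = 128            # page size in texels (assumed)
VIRT = 16384          # virtual texture is 16k x 16k

class VirtualTexture:
    def __init__(self):
        self.page_table = {}   # (page_x, page_y) -> resident page data

    def touch(self, u, v):
        """Map a texel coordinate to its page, streaming it in on demand."""
        key = (u // PAGE, v // PAGE)
        if key not in self.page_table:
            # A real engine would stream this page from disk; we fake it.
            self.page_table[key] = bytearray(PAGE * PAGE)
        return key

vt = VirtualTexture()
vt.touch(200, 50)          # residency grows only as texels are touched
vt.touch(16000, 16000)
print(len(vt.page_table))  # 2 resident pages out of 16384 possible
```

The OS/hardware-assisted versions Chen mentions do this same bookkeeping with page tables and a texture-space feedback pass, rather than a Python dictionary.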

The other interpretation of megatexture, or the so-called sparse texture, is great for things that only have valid data in small parts of a texture. For example, if you paint a mask on terrain, then you will only have valid data where the paintbrushes touched, and everywhere else it is useless data.

Sparse textures allow us to represent these textures very efficiently without wasting precious storage and bandwidth on useless data. Another cool application of sparse texture that we are excited about is shadows. If we can render to sparse textures, then it's possible that we can render very high resolution shadows to places that need them and still be efficient.
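The painted-mask example above can be made concrete with a small sketch: tiles are allocated only where the brush touched, and everywhere else a default value is returned without any storage cost. The tile size and names here are illustrative assumptions, not a real engine API.

```python
# Sketch of a sparse terrain mask: valid data exists only where the
# paintbrush touched, so dense tiles are allocated on demand and a
# default value is returned for untouched regions.
TILE = 64  # tile size in texels (assumed)

class SparseMask:
    def __init__(self, default=0.0):
        self.default = default
        self.tiles = {}                      # (tx, ty) -> dense tile

    def paint(self, x, y, value):
        key = (x // TILE, y // TILE)
        tile = self.tiles.setdefault(key, [self.default] * (TILE * TILE))
        tile[(y % TILE) * TILE + (x % TILE)] = value

    def sample(self, x, y):
        tile = self.tiles.get((x // TILE, y // TILE))
        if tile is None:
            return self.default              # untouched area: zero storage
        return tile[(y % TILE) * TILE + (x % TILE)]

m = SparseMask()
m.paint(10, 10, 1.0)
print(m.sample(10, 10), m.sample(5000, 5000), len(m.tiles))
# 1.0 0.0 1
```

Rendering *into* a sparse texture, as Chen proposes for shadows, is the same idea in reverse: the GPU only commits memory for the shadow-map tiles the camera can actually see.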

A disadvantage of megatextures is that on a console, we typically already know exactly what the hardware is doing, and we have very tight control over where the resources go. So we can already do a lot of things on consoles [that are similar to the benefits provided by] megatextures, things you can't do on PC. So I guess there isn't really a disadvantage; it's just that the coolness of having an automatically managed texture is less relevant on console than it is on PC.

I've certainly seen how even today people are having a lot of trouble rendering shadows without a lot of blockiness or dithering.

HC: That's kind of the problem with computer game graphics these days. A lot of things people consider solved problems are actually quite far from being solved, and shadows are one of them. After all these years we don't have a very satisfactory shadow solution. They're improving -- every generation of games they're improving -- but they're nowhere near the perfect solution that people thought we already had.

What do you think might be the answer? Your potential megatexture solution, or something else?

HC: We are still far from seeing perfect shadows. Shadows are a byproduct of lighting. All frequency shadows (shadows that are hard and soft in all the right places) are a byproduct of global illumination, and these things are notoriously hard in real time.

There's just not enough machine power, even in today's generation or the next generation, to be able to capture that kind of fidelity. There are also inherent limitations to the current techniques, such as shadow maps, for example. When the light is near the glancing angle of a shadow receiver, then it is impossible to do the correct thing.

With the current state-of-the-art shadow techniques we can manage the resolution much better, and we can do high quality filtering, but we still have a long way to go to get where we need to be, even if we're just talking about hard shadows from direct illumination.
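The hard-shadow test Chen is describing, and the glancing-angle failure he mentions, can be sketched as a simple depth comparison with a slope-scaled bias. This is a generic illustration of shadow mapping, not Bungie's implementation; the bias constants are assumptions.

```python
# Minimal sketch of the shadow-map test: a point is lit if its depth
# from the light is no farther than the stored shadow-map depth plus a
# bias. The bias must scale with the surface's slope relative to the
# light (tan of the angle), so as n_dot_l -> 0 (glancing angle) the
# required bias explodes -- the failure case Chen points out.
import math

def shadow_test(receiver_depth, map_depth, n_dot_l,
                const_bias=0.001, slope_scale=0.01):
    n_dot_l = max(n_dot_l, 1e-4)
    # tan(acos(n_dot_l)) grows without bound as the surface turns edge-on.
    bias = const_bias + slope_scale * math.sqrt(1.0 - n_dot_l ** 2) / n_dot_l
    return receiver_depth <= map_depth + bias

print(shadow_test(0.500, 0.500, 0.9))  # facing the light: lit
print(shadow_test(0.520, 0.500, 0.9))  # behind an occluder: shadowed
```

Too little bias gives self-shadowing "acne"; too much detaches the shadow from its caster ("peter-panning") -- which is why no single bias setting does the correct thing near glancing angles.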

I think megatextures could help, but fundamentally there are things you cannot solve with current shadow maps. And until performance supports real-time ray tracing and global illumination, we're going to keep seeing hack after hack for rendering shadows. Every year we see a few new hacks on current techniques, and with each hack, we see a little bit of improvement in the quality.