
May 30, 2011

I consider myself a practical graphics programmer. I believe mathematical correctness is less important than what looks right (or at least okay) to gamers.

I recently saw an interesting lens-flare technique in a game called HomeFront that goes along with this belief.
In this game, the lens-flare effect is a mere full-screen overlay of a bubble-patterned image, which is revealed only on the pixels where bright lights are.

Look at my awesome picture below:

So in the top-left picture, let's say the yellow part is where the bright light is. (And chances are you already have some kind of HDR buffer to do effects like bloom.) The technique then uses the luminance of each pixel as the blend factor for the lens-flare bubble texture, so the final scene reveals the bubble pattern on those bright pixels.
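A minimal pixel-shader sketch of my speculation. The texture names, the luminance weights, and the threshold are my own assumptions, not anything taken from the actual game:

```
// Hypothetical full-screen pass: reveal a bubble-flare texture on bright pixels.
// sceneTex, flareTex, and the threshold below are illustrative assumptions.
float4 LensFlareOverlayPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 scene = tex2D(sceneTex, uv).rgb;   // HDR scene color
    float3 flare = tex2D(flareTex, uv).rgb;   // full-screen bubble pattern
    // Perceptual luminance of the scene pixel.
    float lum = dot(scene, float3(0.2126f, 0.7152f, 0.0722f));
    // Only pixels brighter than some threshold reveal the flare.
    float blend = saturate(lum - 1.0f);       // threshold of 1.0 assumes HDR values
    return float4(scene + flare * blend, 1.0f);
}
```

Any smooth ramp of luminance would do here; the point is just that the flare texture's contribution goes to zero everywhere except the bright spots.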

I found that this lens-flare technique looks good enough as long as the high-luminance area is small. The only time it looked a bit weird was when a large light, such as a campfire, covered a lot of screen space, revealing too many bubbles at once. It almost made me feel like I was taking a bubble bath. Hah! But I won't complain.

Given that HomeFront was made by our sister studio, Kaos, I could probably ask them whether my speculation is correct, but if I did, I wouldn't be able to write this blog post without going through our legal team. So let's just leave it as my own speculation.

The reason I added this feature at work was that our artists wanted a sharpening filter on mipmaps. This feature was present in the original NVTT 1, but removed from NVTT 2. Given that a sharpening filter is a simple 3x3 or 5x5 convolution filter, I decided to add generic convolution filter support that can take arbitrary coefficients. With this approach, anyone can run almost any convolution-based image processing algorithm.
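For illustration, here is what a 3x3 convolution with a sharpen kernel looks like written as an HLSL pixel shader. (The NVTT change itself is CPU-side C++; the texture name, texelSize constant, and kernel weights here are just assumptions to show the idea.)

```
// Hypothetical 3x3 convolution pass; srcTex and texelSize are assumptions.
float4 Convolve3x3PS(float2 uv : TEXCOORD0) : COLOR0
{
    // A common sharpen kernel: center-heavy, negative neighbors, sums to 1.
    const float kernel[9] = {  0, -1,  0,
                              -1,  5, -1,
                               0, -1,  0 };
    float3 result = 0;
    for (int y = -1; y <= 1; ++y)
    {
        for (int x = -1; x <= 1; ++x)
        {
            float2 offset = float2(x, y) * texelSize;
            result += kernel[(y + 1) * 3 + (x + 1)] * tex2D(srcTex, uv + offset).rgb;
        }
    }
    return float4(result, 1.0f);
}
```

Swap in any other 9 coefficients (box blur, edge detect, emboss) and the same loop does the job, which is the whole appeal of exposing the coefficients generically.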

NVTT Modification

So here's how. It requires only a few lines of change across six files, so I'll just walk you through them.

May 19, 2011

Although I can't deny that posts from a lot of graphics programming blogs help us learn cool new stuff, I often worry about the quality of those posts, especially when people claim something that isn't entirely true based on pure "theorycraft" instead of actual experience. Things that make sense in theory don't necessarily make sense in reality, that is.

If you are a decent graphics programmer, you should take only empirical results as truth.

Me: "Uh... but look at this. I've already implemented it in our engine 2 years ago, and it was very trivial."

A: "OMG." -looks puzzled-

Okay. So I explained to him how I did it. And I'm gonna write the same thing here for the people who might be interested. (I think the original blog post wanted to say supporting various lighting models is not easy in a deferred context, which is actually a valid point.)

First, if you don't know what Oren-Nayar is, look at this amazing free book. It even shows a way to optimize it with a texture lookup. My own simple explanation of Oren-Nayar: it's a diffuse lighting model that additionally takes roughness into account.

Second, for those people who don't know what a Light Pre-Pass renderer is, read this.

K, now the real stuff. To do Oren-Nayar, you need only one additional piece of information. Yes, roughness. Then how can we do Oren-Nayar in a Light Pre-Pass renderer? Save the roughness value in the G-Buffer, duh~. There are multiple ways to save roughness in the G-Buffer, and this is probably where the confusion came from.

It looks like most light pre-pass approaches use an R16G16 render target for the G-Buffer to store the XY components of normals. So to store additional information (e.g., roughness), you would need another render target = expensive = not good.

Another approach is to use 8 bits per channel to store the normal, but then you will see some banding artifacts = bad lighting = bad bad. But, thanks to the Crytek guys, you can actually store normals in three 8-bit channels without quality problems. It's called best-fit normals. So once you use this normal storage method, you have an extra 8-bit channel that you can use for roughness. Hooray! Problem solved.
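Decoding a best-fit normal at lighting time is cheap, which is part of why the method is so attractive: the expensive part (finding the length scale that minimizes quantization error) happens once at G-Buffer write time, typically via a lookup texture, and the read side only has to unpack and renormalize. A hedged sketch of the decode side:

```
// Unpack a best-fit normal stored in three 8-bit channels.
// The encode pass scaled the unit normal so its quantized value has
// minimal angular error; decoding only needs to renormalize.
float3 decodeNormal(float3 packed)
{
    return normalize(packed * 2.0f - 1.0f); // [0,1] -> [-1,1], then renormalize
}
```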

But my actual implementation was a bit more involved, because I needed to store specular power, too. So I thought about it and realized we don't really need 8 bits for specular power. (Do you really need any specular power over 127? Or do you really use any specular power less than 11?) So I'm using 7 bits for specular power and 1 bit as a roughness on/off flag. Then is roughness just on or off? No, it shouldn't be. If you think about it a bit more, you will realize that roughness is just an inverse function of specular power. Think of it this way: a rougher surface will scatter light more evenly, so its specular power should be lower, and vice versa.
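The packing side of this scheme can be sketched like so; the function name is hypothetical, but the bit layout follows the 7-bit-power-plus-flag idea above:

```
// Hypothetical packing of specular power (7 bits) + roughness flag (1 bit)
// into one 8-bit G-Buffer channel.
float packSpecPowerAndRoughness(float specpower, bool useRoughness)
{
    float packed = clamp(specpower, 0.0f, 127.0f); // low 7 bits: specular power
    if (useRoughness)
    {
        packed += 128.0f; // top bit: roughness-enabled flag
    }
    return packed / 255.0f; // store as a normalized 8-bit channel value
}
```

Note that no separate roughness value is stored; when the flag is set, roughness is derived from the same 7 bits at lighting time, exploiting the inverse relationship between the two.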

With all these observations, and some hackery hack functions, this is what I really did in the end.

G-Buffer Storage

RGB: Normal

A: Roughness/Specular Power fusion

Super Simplified Lighting Pre-pass Shader Code

float4 gval = tex2D(Gbuffer, uv);

// decode normal using Crytek's best-fit method
float3 normal = decodeNormal(gval.xyz);

float specpower = gval.a * 255.0f;
float roughness = 0;
if (specpower > 127.0f)
{
    specpower -= 128.0f;
    roughness = someHackeryCurveFunction(127.0f - specpower);
}

// Now use these parameters to calculate correct lighting for the pixel.
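From there, the diffuse term per light can be evaluated with the standard qualitative Oren-Nayar model. This is the textbook formulation, not my engine's exact code, and treating my decoded roughness value directly as sigma is an assumption:

```
// Qualitative Oren-Nayar diffuse term (textbook form, assumed mapping
// from the G-Buffer roughness value to sigma).
// N = surface normal, L = direction to light, V = direction to eye.
float orenNayarDiffuse(float3 N, float3 L, float3 V, float roughness)
{
    float NdotL = saturate(dot(N, L));
    float NdotV = saturate(dot(N, V));
    float sigma2 = roughness * roughness;
    float A = 1.0f - 0.5f * sigma2 / (sigma2 + 0.33f);
    float B = 0.45f * sigma2 / (sigma2 + 0.09f);
    // Cosine of the azimuth angle between L and V projected onto the surface.
    float3 Lproj = normalize(L - N * NdotL);
    float3 Vproj = normalize(V - N * NdotV);
    float cosPhi = max(0.0f, dot(Lproj, Vproj));
    float alpha = max(acos(NdotL), acos(NdotV));
    float beta  = min(acos(NdotL), acos(NdotV));
    return NdotL * (A + B * cosPhi * sin(alpha) * tan(beta));
}
```

With roughness = 0, A goes to 1 and B to 0, so this degenerates to plain Lambert, which is a handy sanity check.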