Wednesday, 5 June 2013

The surge in interest in Physically-Based Rendering has brought with it a corresponding surge in interest in area light sources. There is, however, something of a dearth of information available for the aspiring implementer.

Introduction

The goal with polygonal area light sources is to allow an arbitrary polygon in space to emit light. I'm going to make two simplifications here: firstly, that the polygon emits a constant colour across its surface, and secondly, that we are not considering shadowing.

The earliest source I was able to find on this subject is James Arvo's 1994 paper on polyhedral light sources. John Snyder's work is another excellent reference. The approach taken here is to compute a vector field such that at any point in space r, we may obtain the direction of the light. This vector field is a function of the position r and of the polygon's vertices. This direction can then be substituted into your BRDF equation for a particular surface.

Arvo's algorithm is simple: for each edge of the polygon he computes a normal, gamma, and a weight, theta. Gamma is the normal of the triangle formed by the polygon edge and the point r; theta is the angle between the pair of vectors from r to the edge's two vertices. He forms the weighted sum of these normals, divides it by 2*PI (to normalise the weights returned by acos) and scales it by the light's intensity. This is all you need to perform basic area lighting.
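As a minimal CPU-side sketch of the sum described above (in C++ rather than shader code; the Vec3 type and its helpers are my own scaffolding for illustration, not from Arvo's paper):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3   operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3   operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3   operator*(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3   normalize(Vec3 v) { return v * (1.0 / std::sqrt(dot(v, v))); }

// Arvo's vector field at point r: for each polygon edge, take the normal of
// the triangle formed by the edge and r (gamma), weight it by the angle the
// edge subtends at r (theta), sum, and divide by 2*PI to normalise the weights.
Vec3 lightFieldAt(const std::vector<Vec3>& poly, Vec3 r)
{
    Vec3 sum = {0.0, 0.0, 0.0};
    std::size_t j = poly.size() - 1;
    for (std::size_t i = 0; i < poly.size(); j = i++) {
        Vec3 d0 = poly[i] - r;
        Vec3 d1 = poly[j] - r;
        double c = dot(normalize(d0), normalize(d1));
        if (c >  1.0) c =  1.0;                // guard acos against rounding
        if (c < -1.0) c = -1.0;
        double theta = std::acos(c);           // angle subtended at r
        Vec3 gamma = normalize(cross(d1, d0)); // normal of triangle (edge, r)
        sum = sum + gamma * theta;
    }
    return sum * (1.0 / (2.0 * 3.14159265358979323846));
}
```

For a square light directly above the shading point, the x and y contributions of opposing edges cancel by symmetry and the resulting field lies along the quad's normal.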

There are a couple of twists. Firstly, you typically need to negate the computed vector to get it into a form ready to dot against your surface normal. Secondly, you must clip the polygon against the plane of the point being shaded, since it is perfectly possible for a valid light source to straddle that plane.

Basic HLSL code

Here is the HLSL code to perform the basic calculation and obtain the light vector for a polygon at a given point. I process quads here; this is not a restriction of the algorithm, but purely to minimise buffer traffic.

// Derive quad vertices in world-space
float3 quadCentre = g_quadLights[lightIndex].pos;
float3 quadAxisX  = g_quadLights[lightIndex].axisX * 0.5f;
float3 quadAxisY  = g_quadLights[lightIndex].axisY * 0.5f;

float4 quadVertices[4] =
{
    float4(quadCentre - quadAxisX - quadAxisY, 1.0f),
    float4(quadCentre + quadAxisX - quadAxisY, 1.0f),
    float4(quadCentre + quadAxisX + quadAxisY, 1.0f),
    float4(quadCentre - quadAxisX + quadAxisY, 1.0f),
};

// Accumulate the weighted edge normals over the quad's four edges
float3 lightField = 0.0f;
uint polyVertex1 = 3;
for (uint polyVertex0 = 0; polyVertex0 < 4; ++polyVertex0)
{
    float3 delta0 = quadVertices[polyVertex0].xyz - worldSpacePos;
    float3 delta1 = quadVertices[polyVertex1].xyz - worldSpacePos;
    float3 edge0 = normalize(delta0);
    float3 edge1 = normalize(delta1);

    float theta = acos(dot(edge0, edge1));
    // Note you can't just cross edge0 and edge1 to save the normalize -
    // that gets you the wrong result!
    float3 gamma = normalize(cross(delta1, delta0));

    lightField += theta * gamma;
    polyVertex1 = polyVertex0;
}

// Note that we don't divide by 2*PI as in the paper. That is just there to
// account for the sum of the theta values. We don't need it since we're
// doing a normalise - it's just a uniform scale.

Clipping

We must now deal with the problem of clipping. This is a particularly awkward operation, especially when it may need to be done per-pixel. Firstly, we must establish whether we actually need to clip. Since the quads are formed using a quad centre and two axes, this may be handled using something very similar to an OBB vs plane test:
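That early-out can be sketched like so (CPU-side C++ again; the function name is hypothetical, though the centre/half-axis representation mirrors the quad setup above). The quad's extent along the plane normal is the sum of the absolute projections of its two half-axes, just as in an OBB-vs-plane test, only with two axes instead of three:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Classify a quad light (centre plus two half-axes) against the plane of the
// shaded point, given the plane normal n and a point p on the plane.
// Returns +1 if fully on the normal's side (no clip needed), -1 if fully
// behind (the light contributes nothing), 0 if it straddles (must clip).
int classifyQuadAgainstPlane(Vec3 centre, Vec3 halfAxisX, Vec3 halfAxisY,
                             Vec3 n, Vec3 p)
{
    // Signed distance of the quad centre from the plane
    double dist = dot(n, {centre.x - p.x, centre.y - p.y, centre.z - p.z});
    // Projected "radius" of the quad along the plane normal
    double radius = std::fabs(dot(n, halfAxisX)) + std::fabs(dot(n, halfAxisY));
    if (dist >  radius) return +1;
    if (dist < -radius) return -1;
    return 0;
}
```

Only the straddling case needs the full per-pixel clip; the other two cases let us skip it entirely.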