Archive for the ‘C++’ Category

I finally extended my OpenGL-based Eulerian fluid simulation to 3D. It’s not as pretty as Miles Macklin’s stuff (here), but at least it runs at interactive rates on my GeForce GTS 450, which isn’t exactly top-of-the-line. My original blog entry, “Simple Fluid Simulation”, goes into much more detail than this post.

I thought it would be fun to invert buoyancy and gravity to make it look more like liquid nitrogen than smoke. The source code can be downloaded at the end of this post.

Extending to three dimensions was fairly straightforward. I’m still using a half-float format for my velocity texture, but I’ve obviously made it 3D and changed its format to GL_RGB16F (previously it was GL_RG16F).

The volumetric image processing is achieved by ping-ponging layered FBOs, using instanced rendering to update all slices of a volume with a single draw call and a tiny 4-vert VBO, like this:

glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numLayers);

Here are the vertex and geometry shaders that I use to make this work:
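In sketch form, the pair looks like this (variable and uniform names here are illustrative, not necessarily what's in the demo):

```glsl
-- Vertex shader
in vec4 Position;
out int vInstance;

void main()
{
    gl_Position = Position;
    vInstance = gl_InstanceID;   // which slice this instance will update
}

-- Geometry shader
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in int vInstance[3];
out float gLayer;   // Z coordinate for sampling the source volume

void main()
{
    gl_Layer = vInstance[0];         // route the triangle to slice N of the layered FBO
    gLayer = float(gl_Layer) + 0.5;  // sample at slice centers
    for (int i = 0; i < 3; i++) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```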

Since the fragment shader gets evaluated at pixel centers, I needed the 0.5 offset on the layer coordinate to prevent the fluid from “drifting” along the Z axis.

The fragment shaders are otherwise pretty damn similar to their 2D counterparts. As an example, I’ll show you how I extended the SubtractGradient shader, which computes the gradient vector for pressure. Previously it did something like this:
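As a sketch (uniform names like Pressure, Velocity, and GradientScale are stand-ins), the 2D core looked something like this:

```glsl
// 2D version (sketch): central differences of the pressure field
ivec2 T = ivec2(gl_FragCoord.xy);
float pE = texelFetchOffset(Pressure, T, 0, ivec2( 1, 0)).r;
float pW = texelFetchOffset(Pressure, T, 0, ivec2(-1, 0)).r;
float pN = texelFetchOffset(Pressure, T, 0, ivec2( 0, 1)).r;
float pS = texelFetchOffset(Pressure, T, 0, ivec2( 0,-1)).r;
vec2 oldV = texelFetch(Velocity, T, 0).xy;
FragColor.xy = oldV - GradientScale * vec2(pE - pW, pN - pS);
```

The 3D version is the same idea with a sampler3D and ivec3 offsets: add the +Z/-Z neighbors (pU and pD) and the result becomes a vec3.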

Downloads

I’ve tested the code with Visual Studio 2010. It uses CMake for the build system. If you have trouble building it or running it, let me warn you that you need a decent graphics card and decent OpenGL drivers.

With Tron: Legacy hitting theaters, I thought it’d be fun to write a post on volumetric line strips. They come in handy for a variety of effects (lightsabers, lightning, particle traces, etc.). In many cases, you can render volumetric lines by drawing thin lines into an offscreen surface, then blurring the entire surface. However, in some cases, screen-space blur won’t work out for you. Maybe you’re fill-rate bound, or maybe you need immediate depth-testing. You’d prefer a single-pass solution, and you want your volumetric lines to look great even when the lines are aligned to the viewing direction.

Geometry shaders to the rescue! By having the geometry shader emit a cuboid for each line segment, the fragment shader can perform a variety of effects within the screen-space region defined by the projected cuboid. This includes Gaussian splatting, alpha texturing, or even simple raytracing of cylinders or capsules. I stole the general idea from Sébastien Hillaire.

You can apply this technique to line strip primitives with or without adjacency. If you include adjacency, your geometry shader can chamfer the ends of the cuboid, turning the cuboid into a general prismoid and creating a tighter screen-space region.

I hate bloating vertex buffers with adjacency information. However, with GL_LINE_STRIP_ADJACENCY, adjacency incurs very little cost: simply add an extra vert to the beginning and end of your vertex buffer and you’re done! For more on adjacency, see my post on silhouettes or this interesting post on avoiding adjacency for terrain silhouettes.

For line strips, you might want to use GL_MAX blending rather than simple additive blending. This makes it easy to avoid extra glow at the joints.

Here’s a simplified version of the geometry shader, using old-school GLSL syntax to make Apple fans happy:
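In that spirit, here's a sketch that emits the four long faces of a cuboid around each segment; end caps and the adjacency-based chamfering are omitted, and the uniform names are mine. (With old-school syntax, max_vertices is set from the API rather than in the shader.)

```glsl
#version 120
#extension GL_EXT_geometry_shader4 : enable
// Input: GL_LINES in eye space; output: triangle strips forming a cuboid.

uniform mat4 Projection;
uniform float Radius;

vec4 corners[8];

void emitFace(int a, int b, int c, int d)
{
    gl_Position = corners[a]; EmitVertex();
    gl_Position = corners[b]; EmitVertex();
    gl_Position = corners[c]; EmitVertex();
    gl_Position = corners[d]; EmitVertex();
    EndPrimitive();
}

void main()
{
    vec4 a = gl_PositionIn[0];
    vec4 b = gl_PositionIn[1];
    vec3 axis = normalize(b.xyz - a.xyz);

    // Build a frame around the segment; beware the degenerate case
    // where the axis is nearly parallel to the reference vector.
    vec3 side = normalize(cross(axis, vec3(0.0, 0.0, 1.0))) * Radius;
    vec3 up = normalize(cross(axis, side)) * Radius;

    corners[0] = Projection * (a + vec4(-side - up, 0.0));
    corners[1] = Projection * (a + vec4( side - up, 0.0));
    corners[2] = Projection * (a + vec4(-side + up, 0.0));
    corners[3] = Projection * (a + vec4( side + up, 0.0));
    corners[4] = Projection * (b + vec4(-side - up, 0.0));
    corners[5] = Projection * (b + vec4( side - up, 0.0));
    corners[6] = Projection * (b + vec4(-side + up, 0.0));
    corners[7] = Projection * (b + vec4( side + up, 0.0));

    emitFace(0, 2, 4, 6);   // -side
    emitFace(1, 3, 5, 7);   // +side
    emitFace(0, 1, 4, 5);   // -up
    emitFace(2, 3, 6, 7);   // +up
}
```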

Glow With Point-Line Distance

Here’s my fragment shader for Tron-like glow. For this to link properly, the geometry shader needs to output some screen-space coordinates for the nodes in the line strip (gEndpoints[2] and gPosition). The fragment shader simply computes a point-line distance, using the result for the pixel’s intensity. It’s actually computing distance as “point-to-segment” rather than “point-to-infinite-line”. If you prefer the latter, you might be able to make an optimization by moving the distance computation up into the geometry shader.
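A sketch of that shader (Color and Radius are placeholder uniforms, and the exact falloff curve is a matter of taste):

```glsl
#version 120
varying vec2 gEndpoints[2];  // segment endpoints in screen space, from the GS
varying vec2 gPosition;      // this fragment's screen-space position
uniform vec4 Color;
uniform float Radius;

// Point-to-segment distance (not point-to-infinite-line)
float SegmentDistance(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return distance(p, a + t * ab);
}

void main()
{
    float d = SegmentDistance(gPosition, gEndpoints[0], gEndpoints[1]);
    float intensity = exp(-d * d / (Radius * Radius));  // Gaussian-ish glow
    gl_FragColor = Color * intensity;
}
```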

Raytraced Cylindrical Imposters

I’m not sure how useful this is, but you can actually go a step further and perform a ray-cylinder intersection test in your fragment shader and use the surface normal to perform lighting. The result: triangle-free tubes! By writing to the gl_FragDepth variable, you can enable depth-testing, and your tubes integrate into the scene like magic; no tessellation required. Here’s an excerpt from my fragment shader. (For the full shader, download the source code at the end of the article.)
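The gist of it looks like this (a sketch; names like rayOrigin, Center, and Axis are my own, and real code would also need to cap the cylinder):

```glsl
// Intersect the eye-space ray with an infinite cylinder of radius
// Radius around the axis (Center, Axis), shade with the surface
// normal, and write depth so the tube depth-tests against the scene.
vec3 D = normalize(rayDirection);
vec3 V = rayOrigin - Center;
vec3 dp = D - dot(D, Axis) * Axis;   // components perpendicular to the axis
vec3 vp = V - dot(V, Axis) * Axis;

float a = dot(dp, dp);
float b = 2.0 * dot(dp, vp);
float c = dot(vp, vp) - Radius * Radius;
float disc = b * b - 4.0 * a * c;
if (disc < 0.0)
    discard;                          // ray misses the cylinder

float t = (-b - sqrt(disc)) / (2.0 * a);  // near intersection
vec3 hit = rayOrigin + t * D;
vec3 axisPoint = Center + dot(hit - Center, Axis) * Axis;
vec3 normal = normalize(hit - axisPoint);  // use this for lighting

vec4 clip = Projection * vec4(hit, 1.0);
gl_FragDepth = 0.5 * clip.z / clip.w + 0.5;  // NDC z -> [0,1] depth
```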

Motion-Blurred Billboards

Emitting cuboids from the geometry shader is great fun, but it's overkill for many tasks. If you want to render short particle traces, it's easier to just emit a quad from your geometry shader. The quad can be oriented and stretched according to the screen-space projection of the particle's velocity vector. The tricky part is handling degenerate conditions: velocity might be close to zero, or velocity might be Z-aligned.

One technique for dealing with this is to interpolate the emitted quad between a vertically-aligned square and a velocity-aligned rectangle, basing the lerp factor on the magnitude of the velocity vector projected into screen space.

Here’s the full geometry shader and fragment shader for motion-blurred particles. The geometry shader receives two endpoints of a line segment as input, and uses these to determine velocity.
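A sketch of the geometry shader's core (Width and the gFade varying are my own names, and the blend of the two orientations is one of several reasonable choices):

```glsl
#version 150
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float Width;
out float gFade;   // lets the fragment shader fade the trailing end

void main()
{
    // Endpoints in NDC; their difference approximates screen-space velocity.
    vec2 a = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
    vec2 b = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;
    vec2 v = b - a;
    float speed = length(v);

    // Lerp between a vertical square and a velocity-aligned rectangle;
    // this tames the degenerate cases (zero or Z-aligned velocity).
    // Beware: the blend can vanish if dir is exactly anti-vertical.
    float t = clamp(speed / Width, 0.0, 1.0);
    vec2 dir = (speed > 1e-5) ? v / speed : vec2(0.0, 1.0);
    dir = normalize(mix(vec2(0.0, 1.0), dir, t));
    vec2 n = vec2(-dir.y, dir.x) * Width * 0.5;
    vec2 len = dir * max(speed * 0.5, Width * 0.5);
    vec2 mid = 0.5 * (a + b);

    gFade = 0.0; gl_Position = vec4(mid - len - n, 0.0, 1.0); EmitVertex();
    gFade = 0.0; gl_Position = vec4(mid - len + n, 0.0, 1.0); EmitVertex();
    gFade = 1.0; gl_Position = vec4(mid + len - n, 0.0, 1.0); EmitVertex();
    gFade = 1.0; gl_Position = vec4(mid + len + n, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}
```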

Diversion: Hilbert Cubed Sphere

You might’ve noticed the interesting path that I’ve used for my examples. This is a Hilbert curve drawn on the surface of a cube, with the cube deformed into a sphere. If you think of a Hilbert curve as a parametric function from R1 to R2, then any two points that are reasonably close in R1 are also reasonably close in R2. Cool huh? Here’s some C++ code for how I constructed the vertex buffer for the Hilbert cube:

Some of my previous entries used the geometry shader (GS) to highlight certain triangle edges by passing new information to the fragment shader. The GS can also be used to generate long, thin quads along those edges; this lets you apply texturing for sketchy effects. In the case of silhouette lines, you can create an anti-aliased border along the boundary of the model, without the cost of true multisampling.

In this article, I’ll show how to generate silhouettes using GLSL geometry shaders. At the end of the article, I provide the complete demo code for drawing the dragon depicted here. I tested the demo with Ubuntu (gcc), and Windows (Visual Studio 2010).

Old School Silhouettes

Just for fun, I want to point out a classic two-pass method for generating silhouettes, used in the days before shaders. I wouldn't recommend it nowadays; it does not highlight creases, and "core" OpenGL no longer supports smooth/wide lines anyway. This technique can be found in Under the Shade of the Rendering Tree by Jeff Lander:

Draw front faces:

glPolygonMode(GL_FRONT, GL_FILL);
glDepthFunc(GL_LESS);

Draw back faces:

glPolygonMode(GL_BACK, GL_LINE);
glDepthFunc(GL_LEQUAL);

Ah, brings back memories…

Computing Adjacency

To detect silhouettes and creases, the GS examines the facet normals of adjacent triangles. So, we’ll need to send down the verts using GL_TRIANGLES_ADJACENCY:
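Assuming an indexed mesh, the draw call just switches the primitive type (the count and index type here are illustrative):

```cpp
// Six indices per input triangle instead of three:
glDrawElements(GL_TRIANGLES_ADJACENCY, indexCount, GL_UNSIGNED_INT, 0);
```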

Six verts per triangle seems like an egregious redundancy of data, but keep in mind that it enlarges your index VBO, not your attributes VBO.

Typical mesh files don’t include adjacency information, but it’s easy enough to compute in your application. A wonderfully simple data structure for algorithms like this is the Half-Edge Table.

For low-level mesh algorithms, I enjoy using C99 rather than C++. I find that avoiding STL makes life a bit easier when debugging, and it encourages me to use memory efficiently. Here’s a half-edge structure in C:
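Something along these lines (the field names are my sketch of the shape; the demo's struct may differ):

```c
#include <stddef.h>

/* Each undirected edge in the mesh is split into two directed "twins".
   Following Next three times loops around a triangle; hopping to Twin
   crosses into the neighboring triangle. */
typedef struct HalfEdgeRec {
    struct HalfEdgeRec* Twin;  /* oppositely-directed half-edge; NULL on a boundary */
    struct HalfEdgeRec* Next;  /* next half-edge around this triangle */
    int Vert;                  /* index of the vertex at this half-edge's tip */
} HalfEdge;
```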

If you’ve got a half-edge structure for your mesh, it’s a simple linear-time algorithm to expand an index buffer to include adjacency information. Check out the downloads section to see how I did it.

By the way, you do need an associative array to build a half-edge table when loading model data from a traditional mesh file. In C++, this is all too easy thanks to std::unordered_map and helpful libraries like Google's Sparse Hash. For C99, I find that Judy arrays provide a compelling way of creating associative arrays.

Basic Fins

Now that we’ve got preliminaries out of the way, let’s start on the silhouette extrusion algorithm. We’ll start simple and make enhancements in the coming sections. We’ll use this torus as the demo model. The image on the left shows the starting point; the image on the right is our goal.

Silhouette lines occur on the boundary between front-facing triangles and back-facing triangles. Let’s define a function that takes three triangle corners (in screen space), returning true for front-facing triangles. To pull this off, we can take the cross product of two sides. If the Z component of the result is positive, then it’s front-facing. Since we can ignore the X and Y components of the result, this reduces to:
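In GLSL, a sketch of that test:

```glsl
bool IsFront(vec2 A, vec2 B, vec2 C)
{
    // Z component of cross(B - A, C - A); positive means front-facing
    return 0.0 < (B.x - A.x) * (C.y - A.y) - (B.y - A.y) * (C.x - A.x);
}
```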

Incidentally, the length of the cross-product is equal to twice the area of the triangle. But I digress…

The next function emits a long, thin quadrilateral between two points. To do this, we’ll add an extrusion vector to the two endpoints. In the following snippet, N is the extrusion vector, computed by taking the perpendicular of the normalized vector between the two points, then scaling it by half the desired line width:
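A sketch of the function (HalfWidth is a placeholder uniform):

```glsl
uniform float HalfWidth;

void EmitEdge(vec2 P0, vec2 P1)
{
    vec2 V = normalize(P1 - P0);
    vec2 N = vec2(-V.y, V.x) * HalfWidth;  // perpendicular extrusion vector

    gl_Position = vec4(P0 - N, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(P0 + N, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(P1 - N, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(P1 + N, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}
```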

Antialiased Fins

First let’s add some antialiasing to those fins. To pull this off, we’ll attach a distance-from-center value to each corner of the quad. In the fragment shader, we can use these values to see how far the current pixel is from the edge. If it’s less than a couple pixels away, it fades the alpha value. The EmitEdge function now becomes:
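One way to attach that value, as a sketch (the gDistance varying is my own name):

```glsl
uniform float HalfWidth;
out float gDistance;   // signed distance from the quad's centerline

void EmitEdge(vec2 P0, vec2 P1)
{
    vec2 V = normalize(P1 - P0);
    vec2 N = vec2(-V.y, V.x) * HalfWidth;

    gDistance = -HalfWidth; gl_Position = vec4(P0 - N, 0.0, 1.0); EmitVertex();
    gDistance = +HalfWidth; gl_Position = vec4(P0 + N, 0.0, 1.0); EmitVertex();
    gDistance = -HalfWidth; gl_Position = vec4(P1 - N, 0.0, 1.0); EmitVertex();
    gDistance = +HalfWidth; gl_Position = vec4(P1 + N, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}
```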

Next is the fragment shader, which uses the tipLength variable to represent the length of the desired alpha gradient. We leverage GLSL’s built-in fwidth function to prevent it from looking fuzzy when zoomed in:
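A sketch of that shader, matching the gDistance varying from the previous snippet:

```glsl
in float gDistance;
out vec4 FragColor;
uniform vec4 LineColor;
uniform float HalfWidth;

void main()
{
    // fwidth keeps the gradient a constant width in *pixels*,
    // no matter how far you zoom in.
    float tipLength = 2.0 * fwidth(gDistance);
    float d = abs(gDistance);
    float alpha = 1.0 - smoothstep(HalfWidth - tipLength, HalfWidth, d);
    FragColor = vec4(LineColor.rgb, LineColor.a * alpha);
}
```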

The Spine Test

We extruded in both directions, so why are we seeing only half of the fin? The issue lies with depth testing. We could simply disable all depth testing during the silhouette pass, but then we’d see creases from the opposite side of the model. The correct trick is to perform depth testing along the centerline of the quad, rather than at the current fragment. This method is called spine testing, and it was introduced by a recent paper from Forrester Cole and Adam Finkelstein.

In order to perform custom depth testing, we’ll need to do an early Z pass. An early Z pass is often useful for other reasons, especially for scenes with high depth complexity. In our demo program, we generate a G-Buffer containing normals and depths:
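Here's a sketch of that FBO setup (the formats and the width/height variables are my assumptions):

```cpp
GLuint fbo, normals, depth;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color attachment for eye-space normals:
glGenTextures(1, &normals);
glBindTexture(GL_TEXTURE_2D, normals);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, normals, 0);

// Depth attachment, sampled later during the spine test:
glGenTextures(1, &depth);
glBindTexture(GL_TEXTURE_2D, depth);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depth, 0);
```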

The normals are used for per-pixel lighting while the depths are used for spine testing. The fragment shader for G-Buffer generation is exceedingly simple: it just transforms the normals into the [0,1] range and writes them out:
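In sketch form:

```glsl
in vec3 vNormal;
out vec4 FragColor;

void main()
{
    vec3 n = normalize(vNormal) * 0.5 + 0.5;  // [-1,1] -> [0,1]
    FragColor = vec4(n, 1.0);
}
```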

When rendering silhouette lines, the fragment shader will need texture coordinates for the spine. The geometry shader comes to the rescue again. It simply transforms the device coordinates of the quad’s endpoints into texture coordinates, then writes them out to a new output variable:
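Roughly like so (gSpine is a name of my choosing):

```glsl
out vec4 gSpine;   // spine endpoints in texture space (xy = P0, zw = P1)

// In EmitEdge, with P0 and P1 in device coordinates:
// map [-1,+1] device space into [0,1] texture space.
gSpine = vec4(P0, P1) * 0.5 + 0.5;
```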

Here’s the result. Note that the fin now extends below the contour, allowing you to see the AA on both ends.

Mind the Gap!

Next let’s fill in those gaps, creating the illusion of a single, seamless line. Some researchers create new triangles on either end of the fin, using vertex normals to determine the shape of those triangles. In practice I found this tricky to work with, especially since vertex normals are usually intended for 3D lighting, not screen-space effects. I found it easier to simply extend the lengths of the quads that I’m already generating. Sure, this causes too much overlap in some places, but it doesn’t seem to hurt the final image quality much. The percentage by which to extend the fin length is controlled via the OverhangLength shader uniform in the following snippet:
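The change to EmitEdge is tiny; as a sketch:

```glsl
uniform float OverhangLength;   // fraction by which to extend each fin

void EmitEdge(vec2 P0, vec2 P1)
{
    // Push both endpoints outward before extruding the quad; the
    // resulting overlap at joints hides the gaps between fins.
    vec2 E = OverhangLength * (P1 - P0);
    P0 -= E;
    P1 += E;
    // ...then extrude the quad exactly as before.
}
```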

Further Reading

These are some of the resources that gave me ideas for this blog post:

There’s a nice article at Gamasutra about silhouette rendering with DirectX 10, written by an old colleague of mine, Josh Doss.

Achieving AA by drawing lines on top of the model is called edge overdraw, which was written about in this 2001 paper by Hugues Hoppe and others.

My silhouette lines are centered at the boundary, rather than extending only in the outwards direction. To achieve this, I use the “spine test”, which was introduced in a nice paper entitled Two Fast Methods for High-Quality Line Visibility, by Forrester Cole and Adam Finkelstein.

To learn how to generate texture coordinates for fins (e.g., sketchy lines), and a different way of dealing with the gap between fins, check out a paper entitled Single Pass GPU Stylized Edges, by Hermosilla and Vázquez.

Downloads

The demo code uses a subset of the Pez ecosystem, which is included in the zip below.