Fluid simulation is one of the most active research areas in computer graphics. However, it remains difficult to obtain measurements of real fluid flows for validation of the simulated data. In this paper, we take a step in the direction of capturing flow data for such purposes. Specifically, we present the first time-resolved Schlieren tomography system for capturing full 3D, non-stationary gas flows on a dense volumetric grid. Schlieren tomography uses 2D ray deflection measurements to reconstruct a time-varying grid of 3D refractive index values, which directly correspond to physical properties of the flow. We derive a new solution for this reconstruction problem that lends itself to efficient algorithms and works robustly with relatively small numbers of cameras. Our physical system is easy to set up, and consists of an array of relatively low-cost rolling-shutter camcorders that are synchronised with a new approach. We demonstrate our method with real measurements, and analyse precision with synthetic data for which ground truth information is available.
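The reconstruction step amounts to solving a large linear system. Below is a toy sketch of that idea only — the grid size, random ray model and Tikhonov solver are all invented for illustration and are not the paper's actual method:

```python
import numpy as np

# Each simulated measurement is a weighted sum of unknown refractive-index
# values along a ray, giving one row of the linear system A x = b.
rng = np.random.default_rng(0)
n = 8                                    # 8x8 grid of unknowns
x_true = rng.normal(size=n * n)          # synthetic ground-truth field

m = 200                                  # simulated ray measurements
A = (rng.random((m, n * n)) < 0.1).astype(float)  # sparse ray/voxel weights
b = A @ x_true                           # noiseless synthetic measurements

# Tikhonov-regularised least squares keeps the solution stable when the
# number of rays (i.e. cameras) is small relative to the grid size.
lam = 1e-3
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n * n), A.T @ b)

err = float(np.max(np.abs(x_rec - x_true)))
print(err)
```

With noiseless synthetic measurements the field is recovered almost exactly; this is the same kind of synthetic-ground-truth check the abstract describes using for precision analysis.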

Course projects

Mesh segmentation remains a difficult task, complicated by the fact that our notion of what constitutes
a meaningful segmentation is domain-specific. The recently introduced variational partitioning scheme
provides an attractive and efficient framework in which to perform segmentation. However, much scope remains
for generalisation (for example, decomposing into convex, rather than planar segments). A segmentation
framework based on these ideas has been implemented here, which allows for arbitrary proxy types and metrics
to be added with relative ease. Initial evaluations with planar proxies demonstrate that it produces useful
segmentations with very little additional code above and beyond the basic framework.
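The pluggable-proxy idea can be sketched as a Lloyd-style loop alternating between fitting proxies and reassigning elements. The `PlanarProxy` class and `partition` function below are hypothetical illustrations (operating on point clouds rather than mesh faces, with a deliberately crude initialisation), not the project's actual interface:

```python
import numpy as np

class PlanarProxy:
    """Least-squares plane fit with a squared point-to-plane metric."""
    def fit(self, pts):
        self.c = pts.mean(axis=0)
        # Normal = direction of least variance (smallest singular vector).
        _, _, vt = np.linalg.svd(pts - self.c)
        self.n = vt[-1]

    def error(self, pts):
        # Squared point-to-plane distance for each point.
        return ((pts - self.c) @ self.n) ** 2

def partition(pts, k, proxy_cls=PlanarProxy, iters=10):
    # Crude initialisation: split the index range into k chunks.
    # (Empty clusters and careful seeding are not handled in this sketch.)
    labels = (np.arange(len(pts)) * k) // len(pts)
    for _ in range(iters):
        proxies = []
        for j in range(k):
            p = proxy_cls()
            p.fit(pts[labels == j])
            proxies.append(p)
        # Reassign every point to the proxy that explains it best.
        labels = np.stack([p.error(pts) for p in proxies]).argmin(axis=0)
    return labels, proxies

# Two noisy parallel planes at z = 0 and z = 5 are cleanly separated.
rng = np.random.default_rng(1)
plane = lambda z: np.c_[rng.random((100, 2)), np.full(100, z)]
pts = np.vstack([plane(0.0), plane(5.0)]) + rng.normal(scale=0.01, size=(200, 3))
labels, _ = partition(pts, k=2)
```

Swapping in a different proxy (e.g. a convexity-based one, as mentioned above) only requires another class with `fit` and `error`, which is the kind of extensibility the framework aims for.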

A common workflow in digital photography involves viewing high resolution images on LCD displays -
either computer monitors or camera viewfinders. To minimise cost and complexity, most cameras use
a single sensor overlaid with a Bayer filter to selectively sample various wavelengths at interleaved
locations. Missing data must then be interpolated, which is known to introduce artefacts. In order to
show the image on a display with a lower resolution, it must then be downsampled, which introduces
more artefacts. This process usually does not take into account the fact that LCD displays are
composed of horizontally displaced subpixels, which can be individually addressed to achieve a higher
effective horizontal resolution. A new downsampling algorithm is proposed, which works directly within
the Bayer domain and exploits subpixel resolution to produce images of higher detail than the
naïve demosaicking and downsampling combination.
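A minimal 1-D sketch of the subpixel principle follows. It is not the proposed Bayer-domain algorithm — it assumes an RGB-stripe panel, a grayscale input row, and simple box filtering — but it shows how addressing subpixels individually localises detail to a third of a pixel:

```python
import numpy as np

def subpixel_downsample_1d(row, factor):
    """Downsample a 1-D grayscale signal onto RGB-stripe subpixels.

    row: intensities in [0, 1]; length must be a multiple of 3 * factor.
    Returns shape (len(row) // (3 * factor), 3): each output pixel's R, G
    and B values are box averages taken at that subpixel's own horizontal
    position, rather than one average shared by the whole pixel.
    """
    out_w = row.shape[0] // (3 * factor)
    return row[: out_w * 3 * factor].reshape(out_w, 3, factor).mean(axis=2)

# A sharp step edge falling inside output pixel 1: only its rightmost
# (blue, on an RGB stripe) subpixel sees the bright side.
row = np.r_[np.zeros(10), np.ones(14)]
out = subpixel_downsample_1d(row, factor=2)
print(out[1])   # → [0. 0. 1.]
```

A whole-pixel box filter would instead give pixel 1 the single value 1/3, smearing the edge; exploiting the colour fringe this introduces versus the detail it recovers is exactly the trade-off subpixel rendering navigates.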

Computer vision techniques now make it possible to begin designing more natural user interfaces. A
system that can detect exactly where the user is in relation to the display could use this information in
numerous ways to improve interactivity. Robust tracking is a necessary first step of such a system, and
has been extensively studied in the literature. One particularly efficient tracking algorithm has been
implemented here and under testing has proven to be capable of reliably tracking human faces as long as
the motion is slow enough. Making the simplifying assumption of treating the target as a planar surface
allows a simple affine model to capture changes in translation, rotation and scale of the face. Automatic
acquisition is a desirable property of any tracker, and for the restricted case of one user sitting in front of
the camera, a simple background subtraction-based scheme is able to detect targets with sufficient
accuracy to initialise the tracker.
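The acquisition step can be sketched roughly as follows. This is an illustrative numpy-only toy, not the project's implementation: maintain a running-average background, threshold the per-pixel difference, and hand the bounding box of the foreground region to the tracker as its initial target:

```python
import numpy as np

class BackgroundSubtractor:
    def __init__(self, alpha=0.05, thresh=0.2):
        self.alpha = alpha      # background adaptation rate
        self.thresh = thresh    # foreground difference threshold
        self.bg = None

    def detect(self, frame):
        """Return (x0, y0, x1, y1) of the foreground blob, or None."""
        if self.bg is None:
            self.bg = frame.astype(float)   # first frame bootstraps the model
            return None
        mask = np.abs(frame - self.bg) > self.thresh
        # Slowly adapt the background to gradual lighting changes.
        self.bg += self.alpha * (frame - self.bg)
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Empty scene first, then a bright square "enters" the frame.
sub = BackgroundSubtractor()
empty = np.zeros((48, 64))
sub.detect(empty)                  # learn the background
scene = empty.copy()
scene[10:20, 30:40] = 1.0          # the target
print(sub.detect(scene))           # → (30, 10, 39, 19)
```

For the restricted one-user-at-a-desk setting described above, even this simple scheme yields a region accurate enough to seed an affine tracker.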

Older projects

Every graphics student has at some point written a raytracer. This is mine. What makes it slightly
different from the usual is that it's written entirely in Smalltalk (my second year project for a programming
languages course). It's fairly simple and very slow, but was fun to play with.

This was a project I did in my spare time during first year. As far as rendering of
particle systems goes, it was fairly basic - nothing more than a few screen-aligned quads sent to DirectX.
The interesting part was the scripting system, which parsed text files to set properties for the system. It's
slightly more advanced than a simple property-setting interface though - a limited control structure
allows you to define events that act on particles over time. It was based on a chapter in Mason
McCuskey's book: "Special Effects Game Programming with DirectX".
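The event idea looked roughly like this sketch (class and field names are invented here; the original parsed its events from the script text files and followed the book's design):

```python
# A scripted event is active during a time window and mutates any particle
# it is applied to; the particle system just runs all events every frame.

class FadeEvent:
    def __init__(self, start, end):
        self.start, self.end = start, end   # active time window (seconds)

    def apply(self, particle, t):
        if self.start <= t <= self.end:
            # Linearly fade alpha from 1 to 0 across the window.
            particle["alpha"] = 1.0 - (t - self.start) / (self.end - self.start)

particles = [{"alpha": 1.0} for _ in range(3)]
events = [FadeEvent(0.0, 2.0)]
for t in (0.0, 1.0, 2.0):                  # simulated frame times
    for ev in events:
        for p in particles:
            ev.apply(p, t)
print(particles[0]["alpha"])               # → 0.0
```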