
Videos: the best of Siggraph 2017’s technical papers

If you follow the Siggraph YouTube channel, you may already have seen the trailer above, previewing some of the papers due to be presented at Siggraph 2017, which takes place in LA from 30 July to 3 August 2017.

As usual, it provides a good overview of the conference’s key themes, but for the detail, you need to turn to the full videos accompanying the papers – which are steadily appearing online as the show approaches.

Below, we’ve picked out our own favourites, including several not featured in Siggraph’s own round-up, covering everything from fluid simulation and fur rendering to a new method for modelling tentacles.

The resulting 15 videos showcase some of the most innovative, the most visually compelling – and sometimes, just the plain weirdest – research being done in computer graphics anywhere in the world.

Simulation

As ever, simulation proved a rich field for new research – and increasingly, not just the simulation of isolated physical phenomena, but of the interactions between them.

This paper, from researchers at Columbia and the University of Waterloo, explores the interaction between fluids and hair, including the way liquid is caught between strands, and the way it flows down them.

In the demo, the strands look rather coarse – almost more like wire than hair – but the fluid flow looks good, as does the way liquid drips from the hair: achieved by converting the flow to APIC particles.
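The APIC (Affine Particle-In-Cell) particles mentioned above carry an extra affine term per particle, so that local velocity variation survives the round trip between particles and grid. A toy 1D illustration of the idea (not libWetHair's actual code; all variable names are ours):

```python
import numpy as np

# Toy 1D APIC (Affine Particle-In-Cell) transfer. Illustration of the
# idea only, not libWetHair's actual code. Each particle carries an
# extra affine term c_p so that local velocity variation survives the
# particle/grid round trip.

dx = 1.0
grid_x = np.arange(5, dtype=float) * dx   # grid node positions
px = np.array([1.3, 2.7])                 # particle positions
pv = np.array([1.0, -0.5])                # particle velocities
pc = np.zeros_like(pv)                    # affine (velocity-gradient) terms
pm = np.ones_like(pv)                     # particle masses

def weight(xi, xp):
    """Linear hat weight between node xi and particle xp."""
    return max(0.0, 1.0 - abs(xi - xp) / dx)

# Particle-to-grid: scatter momentum, including the affine contribution
grid_mass = np.zeros_like(grid_x)
grid_mom = np.zeros_like(grid_x)
for p in range(len(px)):
    for i in range(len(grid_x)):
        w = weight(grid_x[i], px[p])
        grid_mass[i] += w * pm[p]
        grid_mom[i] += w * pm[p] * (pv[p] + pc[p] * (grid_x[i] - px[p]))
grid_v = np.divide(grid_mom, grid_mass, out=np.zeros_like(grid_mom),
                   where=grid_mass > 0)

# Grid-to-particle: gather velocity and rebuild the affine term
for p in range(len(px)):
    v_new = num = den = 0.0
    for i in range(len(grid_x)):
        w = weight(grid_x[i], px[p])
        v_new += w * grid_v[i]
        num += w * grid_v[i] * (grid_x[i] - px[p])
        den += w * (grid_x[i] - px[p]) ** 2
    pv[p] = v_new
    pc[p] = num / den if den > 0 else 0.0
```

The affine term is what distinguishes APIC from plain PIC: without it, the gather step averages away angular and shearing motion, which shows up as excessive damping.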

The simulation framework, libWetHair, is open-source, and available for Windows, Linux and OS X, while the demo itself uses Houdini for surface reconstruction and rendering, so you can try it for yourself.

The sand is modeled as an elastoplastic material whose cohesion varies with water saturation, and the return mapping for sand plasticity avoids the volume-gain artefacts of the traditional Drucker-Prager model.
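For reference, the conventional Drucker-Prager projection, the baseline whose volume-gain artefacts the paper's modified mapping avoids, can be sketched in principal (Hencky) strain space roughly as follows. This is a simplified illustration; the parameter names are ours:

```python
import numpy as np

# Simplified sketch of the conventional Drucker-Prager return mapping in
# principal (Hencky) strain space. mu/lam are Lame coefficients, alpha
# the friction coefficient of the yield cone. Parameter names are ours.

def drucker_prager_return(eps, mu, lam, alpha):
    """Project a trial Hencky strain (vector of principal values) back
    onto the Drucker-Prager yield cone."""
    d = len(eps)
    tr = eps.sum()
    if tr > 0.0:                          # expansion: project to cone tip
        return np.zeros_like(eps)
    dev = eps - tr / d                    # deviatoric part
    dev_norm = np.linalg.norm(dev)
    if dev_norm == 0.0:                   # hydrostatic compression: elastic
        return eps
    dg = dev_norm + alpha * (d * lam + 2.0 * mu) / (2.0 * mu) * tr
    if dg <= 0.0:                         # inside the cone: elastic
        return eps
    return eps - dg * dev / dev_norm      # project onto the cone surface
```

States inside the cone are left untouched (purely elastic), while yielding states are projected back by shrinking the deviatoric, shape-changing part of the strain.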

The results look good, as do the chances of them being used in entertainment work: Jiang and Teran have formed their own company, Jixie Effects, while co-authors Gergely Klár and Ken Museth are at DreamWorks.

Jiang and Teran also contributed to a second paper on elastoplasticity, this time using a novel version of the Material Point Method to simulate contacts between cloth or hair and other materials.

They use a lot of striking test cases, including a jumper being torn apart and a bag being filled with slime.

However, the most eye-catching demo is probably the one right at the start of the video: 7 million grains of coloured sand flow over a sheet of cloth, then fall to the ground to form the Siggraph logo.

There is a similar demo from this paper on self-illumination in simulated explosions: in this case, 200,000 multicoloured point lights cascading over a solid version of the Siggraph logo.

The simulation is intended to demonstrate a new, more efficient means of calculating illumination within clouds of smoke: converting the original volumetric lighting data into large numbers of point lights.

The authors use a lighting grid hierarchy to approximate volumetric illumination at different resolutions, focusing on temporal coherency to avoid flicker, with results visually indistinguishable from path tracing.
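The core idea of converting a volume into levels of point lights can be caricatured in a few lines. This is an illustrative toy only, not the paper's implementation: the paper blends levels smoothly and enforces temporal coherency, where this sketch uses hard distance bands:

```python
import numpy as np

# Illustrative toy, not the paper's implementation: collapse a 3D
# emission grid into a hierarchy of point-light levels, then shade a
# point using fine lights nearby and coarser ones further away.

def build_levels(emission, dx, n_levels=3):
    """Each level halves the resolution of a 3D grid; a cell becomes one
    point light at its centre carrying the summed emission."""
    levels, grid, cell = [], emission.astype(float), dx
    for _ in range(n_levels):
        pos = np.indices(grid.shape).reshape(3, -1).T * cell + cell / 2
        power = grid.reshape(-1)
        keep = power > 0
        levels.append((pos[keep], power[keep]))
        s = [n // 2 * 2 for n in grid.shape]        # trim odd edges
        g = grid[:s[0], :s[1], :s[2]]
        grid = sum(g[i::2, j::2, k::2]              # 2x2x2 sum-pool
                   for i in (0, 1) for j in (0, 1) for k in (0, 1))
        cell *= 2
    return levels

def shade(point, levels, near=2.0):
    """Crude distance-banded evaluation: level l covers a band of
    distances twice as far out as level l-1 (inverse-square falloff)."""
    total = 0.0
    for lvl, (pos, power) in enumerate(levels):
        d = np.linalg.norm(pos - point, axis=1)
        lo = 0.0 if lvl == 0 else near * 2 ** lvl
        hi = np.inf if lvl == len(levels) - 1 else near * 2 ** (lvl + 1)
        mask = (d >= lo) & (d < hi)
        total += np.sum(power[mask] / np.maximum(d[mask], 1e-3) ** 2)
    return total
```

The payoff is the usual hierarchy trade-off: distant regions of the explosion are represented by a handful of bright aggregate lights, so cost stays manageable as the light count grows.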

Again, it’s a technique that could quickly see use in production: Can Yuksel is another veteran of DreamWorks Animation, and is currently senior FX TD at Industrial Light & Magic.

Nineteenth-century German mathematician Alfred Clebsch makes an unexpected reappearance at Siggraph this year in the form of the eponymous Clebsch maps, used in the paper above to encode velocity fields.

The method is used primarily as a means of visualising fluid flow: at 02:00 in the video above, you can see a very beautiful visualisation of the vortices shed from a hummingbird’s wings.

However, it also has potential applications for simulation work: in the paper, the researchers note that it can be used to enhance sims through the introduction of subgrid vorticity.

If you’re feeling that solids have taken a back seat to fluids so far, this paper from researchers at Stanford and the University of British Columbia should go some way towards redressing the balance.

Whereas standard rigid body sims assume that dynamics are governed by a single, global constant, the coefficient of restitution, their method allows the value to vary across the surface of colliding objects.

These one-body values are then combined to approximate the two-body coefficient of restitution for each impact.

When used in dynamics sims, such ‘bounce maps’ result in more complex, visually richer behaviour, with an object rebounding in quite different ways according to the exact position of the impact on its surface.
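The basic plumbing is easy to picture: store one restitution value per face of each object, look both up at the impact point, and combine them. A hedged sketch, where the per-face map and the product used as the combination rule are illustrative choices of ours, not necessarily the paper's:

```python
import numpy as np

# Hedged sketch: a per-face restitution value (the 'bounce map') looked
# up at the impact point on each body. The product combination rule and
# the randomly generated map are illustrative choices, not the paper's.

def make_bounce_map(n_faces, base=0.5, variation=0.2, seed=0):
    """A hypothetical per-face restitution map, clamped to [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(base + variation * rng.standard_normal(n_faces), 0.0, 1.0)

def impact_restitution(map_a, face_a, map_b, face_b):
    """Two-body coefficient built from the two one-body values."""
    return map_a[face_a] * map_b[face_b]

def post_impact_normal_velocity(v_rel_n, e):
    """Newton's impact law: the relative normal velocity reverses,
    scaled by the coefficient of restitution e."""
    return -e * v_rel_n
```

Because `e` now depends on where each body is struck, the same pair of objects can bounce energetically from one impact and die almost dead on the next.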

Modeling

One of the perks of working at Pixar is that you get to use its assets, so part of the fun of Fernando de Goes and Doug L. James’ demo video is seeing characters from Finding Dory deformed into strange shapes.

Their technique for 3D sculpting and 2D image editing is based on the response of real elastic materials to the forces generated by common modelling operations like grab, scale, twist and pinch.

Being physically plausible, the method avoids the artefacts generated by traditional modelling tools, such as the changes in volume created by grab brushes: you can see a comparison with Maya’s Grab Tool at 00:55.

The result? Hank stays looking like an octopus, no matter into what strange shapes you twist him.
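The brushes are built on closed-form responses of an elastic medium to a localised force, which the paper calls regularized Kelvinlets. A minimal sketch of the grab-brush displacement, based on our reading of the published formula, with parameter defaults of our own choosing:

```python
import numpy as np

def kelvinlet_grab(r, f, eps, mu=1.0, nu=0.45):
    """Displacement at offset r from the brush centre for a grab force
    f, following the regularized Kelvinlet formula as we read it from
    the paper. eps is the brush's regularisation radius, mu the material
    stiffness, nu the Poisson ratio (nu -> 0.5 approaches volume
    preservation). Defaults are illustrative."""
    a = 1.0 / (4.0 * np.pi * mu)
    b = a / (4.0 * (1.0 - nu))
    r_eps = np.sqrt(np.dot(r, r) + eps * eps)     # regularised distance
    return ((a - b) / r_eps) * f \
        + (b / r_eps ** 3) * np.dot(r, f) * r \
        + (a * eps * eps / (2.0 * r_eps ** 3)) * f
```

Because the displacement falls off smoothly with distance and is derived from an elastic response, dragging a point deforms its neighbourhood plausibly instead of simply translating vertices the way a naive grab brush does.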

‘Skippy’, a new algorithm from a team at Purdue University and Adobe Research, lets you draw curves around complex 3D objects without ever having to adjust your camera view.

The method divides 2D strokes drawn from a single viewpoint into continuous segments, duplicating those that could fall in front of or behind other objects, then finding an optimally smooth 3D path connecting them.

The result is a 3D stroke that hugs the surface of existing geometry: the demo shows an unfortunate ship being engulfed by the tentacles of a kraken, and snakes wrapping around the head of a medusa.

You can even add temporary geometry to the scene specifically to guide the curves, then delete it once they have been created, leaving the snakes coiling through empty air.

Unlike traditional methods, this crowd simulation does not break down as the size of the time steps used in the calculation increases, and the results mimic some interesting behaviours of real crowds, like lane formation.

You can see how it holds up in large sims towards the end of the video, where virtual characters navigate a maze, with crowds from five different starting positions mingling to form an orderly queue at a single exit.

Capture technologies

By enabling users to change settings after footage has been captured, light field video cameras open up new possibilities for VFX artists, but their cost puts them beyond the reach of all but the largest studios.

This paper from a team at Berkeley and UC San Diego offers an ingenious low-cost solution: one of Lytro’s consumer cameras captures light-field data at 3fps, while a standard DSLR captures 2D video at 30fps.

The two can then be used to reconstruct the missing light-field frames via a learning-based approach, using flow estimation to warp the input images, then appearance estimation to combine the warped images.
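The warping step is standard image resampling: pull each pixel of a frame from its flow-displaced position with bilinear interpolation. An illustrative sketch of that step alone; the paper's learning-based flow and appearance estimation networks are not reproduced here:

```python
import numpy as np

# Illustrative sketch of the warping step only: pull each pixel of a 2D
# frame from its flow-displaced position using bilinear sampling. The
# paper's learning-based flow/appearance networks are out of scope.

def warp(image, flow):
    """image: (H, W) array; flow: (H, W, 2) array of (dx, dy) offsets."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = np.clip(xs + flow[..., 0], 0, w - 1)
    src_y = np.clip(ys + flow[..., 1], 0, h - 1)
    x0 = np.floor(src_x).astype(int)
    y0 = np.floor(src_y).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = src_x - x0, src_y - y0
    return (image[y0, x0] * (1 - fx) * (1 - fy) +
            image[y0, x1] * fx * (1 - fy) +
            image[y1, x0] * (1 - fx) * fy +
            image[y1, x1] * fx * fy)
```

Applied per sub-aperture view, warps like this let the sparse 3fps light-field frames borrow detail from the dense 30fps DSLR footage.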

The result is low-cost, 30fps light-field video, enabling users to change the focal point of a shot in real time, or even move the camera during playback – by a few degrees either way, at least.

Lighting and rendering

Ever wondered why leather has a certain zing in offline renders that it lacks in games? It may be down to thin-film iridescence: the subtle rainbow hues generated by the film of natural grease on its surface.

Until now, real-time engines simply couldn't reproduce the effect. This paper combats the underlying problem – aliasing in the spectral domain – by antialiasing a thin-film model, incorporating it into microfacet theory, and integrating it into a real-time engine.

The result? Realistic leather, car paint and soap bubbles that render at 30fps. Co-author Laurent Belcour works for Unity Technologies, so the chances of actually seeing the system in use in games are good.
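The physics underneath is the classical Airy reflectance of a single thin film, which oscillates with wavelength and so tints the reflection. A sketch at normal incidence, with illustrative refractive indices; the paper's actual contribution, antialiasing this spectrum inside microfacet theory, is not reproduced here:

```python
import numpy as np

# Classical Airy reflectance of a single thin film at normal incidence,
# the physical model underlying thin-film iridescence. The layer
# indices below (air / grease-like film / substrate) are illustrative.

def thin_film_reflectance(wavelength_nm, thickness_nm,
                          n1=1.0, n2=1.5, n3=1.33):
    """Reflectance of a film of index n2 and given thickness between
    media n1 (outside) and n3 (substrate), at normal incidence."""
    r12 = (n1 - n2) / (n1 + n2)       # Fresnel amplitude, first interface
    r23 = (n2 - n3) / (n2 + n3)       # Fresnel amplitude, second interface
    phase = np.exp(1j * 4.0 * np.pi * n2 * thickness_nm / wavelength_nm)
    r = (r12 + r23 * phase) / (1.0 + r12 * r23 * phase)
    return np.abs(r) ** 2

# Reflectance oscillates across the visible range: the rainbow hues
print(thin_film_reflectance(np.linspace(380.0, 780.0, 5), 300.0))
```

Sampling this oscillating spectrum at a single wavelength per colour channel is exactly the spectral aliasing the paper antialiases.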

Standard DCC software treats hair and fur as if they were identical. But in reality, the fibres that make up animal fur have a distinct medulla – an inner core – that human hairs lack.

This paper, from a team at Berkeley and UC San Diego, builds on the authors’ existing double-cylinder model for fur fibres, refining the way in which light scattering through the medulla is calculated.

The method preserves the standard Marschner R, TT and TRT scattering modes used for rendering hair, making it easy to integrate into existing software, but adds two new extensions for fur: TTs and TRTs.

Further practical optimisations enable smooth transitions when switching between rendering near and far objects, ensuring that your CG cats render realistically, no matter how close to the camera they are.

We’ve covered style transfer – the transfer of the colour palette and fine geometric forms from one image to another – on CG Channel before. But we’ve never featured a method specifically aimed at facial animation.

The video above shows the visual style of a range of natural media, including oil paintings, watercolours, pencil sketches, and even a bronze statue, transferred to video footage of live actors.

The eyes on the resulting animated statue look a bit creepy, but all of the other results are seamless, opening up new possibilities for visually stylised animation generated automatically from reference footage.

Several of the authors work at Adobe, so cross your fingers for related tools in the firm’s future releases.

Just plain amazing

If you followed last year’s Adobe Max conference, you may already be familiar with VoCo, the firm’s text-based system for editing recorded speech, but the demo is too compelling not to include here.

Whereas current tools enable editors to rearrange recorded speech by cutting and pasting existing words inside a text transcript, VoCo lets you type entirely new words, and have the software synthesise them.

The result sounds eerily like the original speaker’s voice: there’s sometimes an odd change of emphasis at the start of a new word, but picking one of the variant intonations that VoCo generates usually fixes it.

Adobe claims that VoCo generates better results in a second than a human audio engineer can manage in 20 minutes of painstaking splicing – and on the evidence of the demo above, we’re inclined to believe them.

More research online

That's it for this round-up – although it represents only a small sample of the research being presented at Siggraph 2017.

As usual, graphics researcher Ke-Sen Huang has compiled a list of the other papers and demo videos currently online. Check it out via the link below, and nominate your own favourites in the comments section.