Rendering: The Shape of Things to Come

VFX/animation companies and suppliers are constantly raising the bar when it comes to rendering. Janet Hetherington takes a look at advancements in rendering, concerns about scalability and how full spectral rendering could affect the shape of things to come.

In the movie Stealth, futuristic warplanes fly at supersonic speed and perform dizzying aerial stunts. "Those are CG planes throughout the whole film," boasts Darin K. Grant, director of technology for Digital Domain. "Actors were seated in a motion base with a cockpit before a greenscreen," Grant says. "We provided effects, including tracking each turn of the head to make sure the plane's canopy and pilot's helmet visor did not reflect green."

From creating complicated 3D machinery to touching up a frame to ensure a component looks consistent and real, computer rendering, too, seems to be flying ahead at supersonic speed. What might have been a "wow" effect five years ago is now often considered the norm, and companies are constantly raising the bar.

"Today's complex scene is tomorrow's 'toy scene,'" offers Larry Gritz, chief architect, Gelato Rendering System, NVIDIA. "The trend is to make things more and more complex."

Today's render artists have exciting new tools at their fingertips to make those complex scenes a reality: faster, more powerful computers complemented by top-performing rendering solutions such as RenderMan, mental ray, Brazil and Gelato, and new software such as Maxwell Render.

There are more rendering options. There are more renderers. "High-end rendering solutions are becoming favored as more artists try to achieve complex results," observes Stephen Regelous, founder and product manager, Massive Software, whose Oscar-winning AI-based crowd simulation system makes visual effects scenes involving hundreds of thousands of digital characters a practical reality. "Also, there are more RenderMan-competitive rendering solutions, such as those from Air and 3Delight."

"The availability and use of high-end rendering has changed what people are doing with rendering as well," Regelous continues. "Global illumination has gone from being esoteric to fairly standard. Whereas only two years ago, people were writing papers about how you could do subsurface scattering, now many facilities have developed a subsurface scattering technique."

"Animation and visual effects are now expected to have very 'expensive' features (ray tracing, global illumination, ambient occlusion, etc.) in practically every shot," adds Gritz. "By 'expensive,' we are referring more to the time and compute power required, not a pure dollar cost. But as you can imagine, the cost of systems used in a studio environment can be high, and the salaries paid to their employees even higher."

Ironically, in the world of VFX, the best effects are often the ones not immediately noticeable on the big or little screen. "Our goal is to make work look invisible," suggests Scott Kirvin, CEO of Splutterfish, makers of the Brazil rendering system. "Brazil is good at that: the natural lighting tools are easy to use and the lights behave like lights."

Despite better software, skilled animators and more powerful computers, render times are not necessarily getting faster. "The bar is rising all the time. Finding Nemo raised the bar," says Chris Ford, business director, RenderMan, Pixar Animation Studios. "And look at the Star Wars movies, which require staggering complexity. Work will expand to capacity." It can still take an hour, or two, to render one frame.

Blame It on Blinn

"Rendering times are still high because any time renderers or hardware speed up, users just make scenes more complex," observes David Wilton, product manager, NVIDIA professional software. Blinn's Law was coined years ago to explain this situation: the average studio typically has an established threshold for the time it takes to render a frame. As hardware becomes more powerful, these studios use this power to make their frames look better, not faster.
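Blinn's Law can be illustrated with a toy calculation. The sketch below assumes, for simplicity, that render time scales linearly with scene complexity; the 60-minute threshold and the speedup figures are hypothetical numbers, not data from any studio.

```python
# A toy illustration of Blinn's Law: as hardware gets faster, studios
# raise scene complexity so the per-frame render time stays roughly
# constant. All numbers here are hypothetical.

TARGET_MINUTES_PER_FRAME = 60  # the studio's established threshold

def affordable_complexity(hardware_speedup, base_complexity=1.0):
    """Complexity a studio can 'spend' while holding frame time fixed.

    If hardware is N times faster and render time scales linearly with
    scene complexity (a simplification), the studio can afford N times
    the complexity for the same wall-clock budget.
    """
    return base_complexity * hardware_speedup

def render_minutes(complexity, hardware_speedup):
    """Render time for a frame: complexity scaled by hardware speed."""
    return TARGET_MINUTES_PER_FRAME * complexity / hardware_speedup

for speedup in (1, 2, 4, 8):
    c = affordable_complexity(speedup)
    print(f"{speedup}x hardware -> {c:.0f}x complexity, "
          f"{render_minutes(c, speedup):.0f} min/frame")
```

Every row prints the same minutes-per-frame figure, which is the point: the speedup is spent on complexity, not on turnaround.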

In addition, companies are always coping with scalability when it comes to keeping pace with changing technology. "With rendering, scalability is the ability to handle increasingly complex data sets," explains Pixar's Ford.

"It's fairly easy to create a renderer that will handle small scenes or perform very specific functions well at the expense of doing others poorly," adds Gritz. "It's very hard to design a renderer that will handle huge scenes across the entire range of requirements found in a studio environment. Most frustratingly, tools that handle the most difficult cases usually are much slower than tools that only handle small cases, because they must be more general and robust. So often users are faced with a choice of fast or scalable."

Grant of Digital Domain touts the current big push from 32-bit to 64-bit processors. "It seems companies have to upgrade every year," he says.

"Scalability involves both managing the number of machines and the amount of data," adds Regelous. "We need smarter ways of representing data than what the industry has been focused on to date: writing out big files."

"In Massive, we've addressed scalability by providing a highly efficient RenderMan-compatible rendering pipeline that is able to handle hundreds of thousands of film-quality humanoid characters in a scene. With GPU rendering, we are now also making the creation of these types of large-scale scenes accessible to smaller studios, who until now have not had a way to render such things."

Harnessing the GPU

"We see a definite trend in using GPU [video card] processors to help speed up the rendering," notes Brazil's Kirvin.

"Professional graphics processing units (GPUs) have been around for quite some time," echoes Gritz. "NVIDIA Quadro products have been available for almost six years. In the past, an artist would leverage the GPU when doing much of the preview and animation work before sending off their image to final render. This final render was done entirely on the CPU. In fact, many artists would pass the time waiting for a render by playing a game, which would run on the GPU."

"Gelato was engineered from the ground up to leverage both powerful processors in the computer: the CPU and the GPU. By offloading tasks normally all handled by the CPU to the GPU, you can create final rendered images faster. While the results vary by the scene, we are seeing, on average, about a 2x speedup on final rendering when comparing Gelato to CPU-only film renderers," Gritz stresses.

In July, Massive Software announced graphics processing unit-accelerated rendering for its Massive 3D animation system. Available for use with products including NVIDIA Quadro graphics boards, the GPU-accelerated rendering support enables visual effects artists to render huge-scale Massive shots at film quality without a render farm.

Large facilities can take advantage of the new GPU-accelerated rendering to free up rendering resources and produce Massive simulations while maintaining quality and speed. Visual effects studios of any size can use Massive and render out large-scale Massive scenes using their typical production setups. Complex crowd scenes and closer foreground scenarios can be rendered on the GPU.

"Massive is currently running on a variety of popular boards from NVIDIA, and we feel GPU-accelerated rendering will be a terrific addition to our current capabilities in RenderMan, 3Delight, Air and, soon to be added, mental ray," Regelous says.

"GPU-accelerated rendering is a technology whose time has come because it has the support of the manufacturers," Regelous adds. "It's a huge advantage with our new product, Massive Jet. This will help bring our software to those who would not otherwise have the resources to render Massive files."

Unified Assets

Another method under discussion to speed up render times (or, rather, to avoid reinventing the rendering wheel) is to share assets and models. "It's a common question whether assets can be repurposed for games, for example," Ford says. "It's a different dynamic to render for a videogame, which needs to be in realtime, scaled to fit performance. While videogames obviously want to be high quality, they are not as high-quality as movies." Ford notes that there's always a jump up in videogame quality when a new console is introduced, allowing for a richer image.

However, the opportunity still exists to share existing assets, especially when the same studio (or parent company) is responsible for creating both a movie and a corresponding videogame. "While the game assets would almost certainly end up being less complex than what ultimately might be found in the film, there are plenty of companies looking at how to make this a reality," observes Gritz. "In addition to the potential cost savings, consider the financial benefits of being able to produce both movie and game in tandem, such that when moviegoers attend the film they could walk out and immediately buy the game. Not to mention that this would certainly lead to the game characters looking a lot more like the characters in the film."

"There are two key hot spots, in my opinion," notes Regelous. "First is the complexity of the scene. How you handle the amount of data for a crowd scene, for example, is something we've been addressing in production with Massive for the past seven years. These days, it's necessary to not just write out big chunks of data but to find better ways to represent the data."

"The second, and probably most important, hot spot is asset management. There is currently no commercially available asset management/render queue program, in the sense of a solution that can deal with both in the same database. Pipeline, in development by Jim Callahan, promises to be a step towards better management for rendering."

Managing assets across the workflow is a hot spot for Gelato too. "NVIDIA demonstrated an example of this at SIGGRAPH, where an image was rendered using Gelato and then the same image data was passed to a workstation running IRIDAS software, where the image was then color corrected," Gritz says.

"There was no need to 'export' the image out of Gelato and then 'import' the image into the IRIDAS application," Gritz explains. "By maintaining the 'native' file, you are able to maintain the highest levels of precision and quality throughout the entire workflow. This has huge implications, as studios are able to integrate other production tools in a similar fashion."

Data management is definitely important to getting things done in a timely manner. "There were 220 VFX shots in the Stealth film," says Grant. "In order to manage the data, the Stealth planes were rendered into 22 layers. If you have 100 things in a scene, you need to manage the data in layers that the renderers can handle. You need to achieve a balance."
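What makes rendering in layers workable is that separately rendered layers can be recombined afterward. A minimal sketch, using the standard Porter-Duff "over" operator on premultiplied-alpha pixels (the pixel values below are made-up examples, not from Stealth):

```python
# Separately rendered layers with premultiplied alpha can be recombined
# with the Porter-Duff "over" operator, so a shot split into many layers
# still flattens to a single correct image.

def over(fg, bg):
    """Composite premultiplied-alpha pixel fg over pixel bg."""
    fr, fgrn, fb, fa = fg
    br, bgrn, bb, ba = bg
    k = 1.0 - fa  # how much of the background shows through
    return (fr + br * k, fgrn + bgrn * k, fb + bb * k, fa + ba * k)

def flatten(layers):
    """Composite a front-to-back list of layers into one pixel."""
    result = (0.0, 0.0, 0.0, 0.0)  # start fully transparent
    for layer in reversed(layers):  # work back to front
        result = over(layer, result)
    return result

# e.g. a half-covered plane layer over a fully opaque terrain layer
plane   = (0.2, 0.2, 0.25, 0.5)
terrain = (0.4, 0.3, 0.1, 1.0)
print(flatten([plane, terrain]))
```

Because "over" is associative, layers can also be pre-combined in groups, which is what lets a renderer chew through a 22-layer shot without holding everything in memory at once.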

Digital Domain's own software, EnGen (formerly Terragen), was used heavily on Stealth. "With planes flying around at Mach speeds, and with no physical camera rig able to be attached to a plane to film that kind of terrain flying by, we had to create photorealistic computer-generated terrains that would hold up when seen clearly or when whizzing by," Grant explains. "EnGen is the software we created to build and render those environments." Grant adds that, through a combination of real-world elevation maps and procedural texturing and shading, EnGen is able to recreate environments that look very similar to specific environments in the world, or brand new environments. Its unique tessellation techniques allow it to render worlds from 10,000 feet up down to 100 feet off the ground, all within the same scene.

Full Spectral Rendering (FSR)

If a key goal of rendering is to create lighted objects as realistically as possible, then how does Full Spectral Rendering (FSR) factor into the future? Not so long ago, global illumination simulation, which involves control over light scattering and the effects of indirect light in a 3D scene, was a research project rather than a regular effect.

FSR means handling full-frequency effects of light, rather than approximating the behavior of light with only red, green and blue channels. The renderer considers light as an electromagnetic wave, and images are therefore computed using a frequency spectrum as opposed to conventional RGB color spaces. The result is incredibly realistic images with physically accurate color tones that would otherwise, in most instances, have to be reproduced with the inclusion of colored lights.
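The distinction can be sketched in a few lines: a spectral renderer carries a sampled spectrum per light path and only projects it down to RGB at display time. The Gaussian response curves below are crude illustrative stand-ins, not real CIE color-matching functions, and the wavelength grid is arbitrary.

```python
import math

# Instead of three RGB samples, a spectral renderer carries power per
# wavelength and projects the spectrum onto display channels at the end.
# The Gaussian "sensitivities" here are toy curves, not CIE data.

WAVELENGTHS = list(range(400, 701, 10))  # nm, visible range

def gaussian(x, mean, width):
    return math.exp(-((x - mean) ** 2) / (2 * width ** 2))

def spectrum_to_rgb(spd):
    """Project a sampled spectral power distribution onto R, G, B.

    spd maps wavelength (nm) -> power; missing wavelengths count as 0.
    """
    r = g = b = 0.0
    for wl in WAVELENGTHS:
        p = spd.get(wl, 0.0)
        r += p * gaussian(wl, 600, 40)  # "red"-ish response
        g += p * gaussian(wl, 550, 40)  # "green"-ish response
        b += p * gaussian(wl, 450, 40)  # "blue"-ish response
    return r, g, b

# A narrow-band source near 550 nm lands mostly in the green channel.
laser = {550: 1.0}
r, g, b = spectrum_to_rgb(laser)
print(f"R={r:.3f} G={g:.3f} B={b:.3f}")
```

The payoff of working spectrally is that effects which depend on wavelength (dispersion, iridescence, metamerism) fall out naturally before this final projection, instead of being faked with colored lights.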

Maxwell Render is among the first of this new breed of renderers. "Maxwell Render is a render engine based on the physics of real light; there are no tricks in Maxwell. We have been working on this project for three and a half years now. Once we decided to go forward with the project, we gathered together a development team in order to create a new solution that the market had been demanding for some time," says Oscar Monzon, sales & marketing manager, Next Limit Technologies, of Madrid, Spain. "We are now in the beta stage," he says, "and the 1.0 full release version will come out by the end of October."

Maxwell Render offers the interesting feature of being able to specify a render time, in which case the renderer will produce the highest quality image it can within that given amount of time. Because the rendering is unbiased, given sufficient render time the solution will always converge on the correct result without the introduction of any artifacts. The ability to specify a physically accurate sky means a render artist can select any country in the world, any latitude and longitude, any day and any time, and Maxwell will compute the appropriate lighting conditions for that location.
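The idea behind a fixed render-time budget in an unbiased, progressive renderer can be shown with a generic Monte Carlo sketch (this is an illustration of the general technique, not Next Limit's implementation). Estimating pi by random sampling stands in for estimating a pixel's radiance: each sample is unbiased, so stopping at any deadline leaves noise but no systematic error, and more time always means a cleaner result.

```python
import random
import time

def render_with_budget(sample_fn, budget_seconds):
    """Average unbiased samples from sample_fn until the budget expires.

    Returns (estimate, sample_count). At least one sample is always taken.
    """
    deadline = time.monotonic() + budget_seconds
    total, n = 0.0, 0
    while time.monotonic() < deadline or n == 0:
        total += sample_fn()
        n += 1
    return total / n, n

def pi_sample():
    # An unbiased estimator of pi: 4 if a uniform point in the unit
    # square lands inside the quarter disc, else 0.
    x, y = random.random(), random.random()
    return 4.0 if x * x + y * y <= 1.0 else 0.0

estimate, samples = render_with_budget(pi_sample, budget_seconds=0.2)
print(f"{samples} samples -> estimate {estimate:.4f}")
```

Doubling the budget roughly halves the variance's square root, which is why "render longer, look better" holds without any quality knobs to tune.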

"Maxwell Render has multiple applications in different areas, as our customers have shown in the many works they have sent us," Monzon adds. "We have already released plug-ins for 3ds Max, Viz, Maya, Cinema 4D, SolidWorks, Rhino, LightWave, SketchUp and ArchiCAD, and have plans for many others."

"There are lots of different types of rendering, and I can see how it could be used for visualization and design work for, say, architecture and cars, where you need that photorealism," offers Grant. "While you may need to exactly match a car paint for an auto design, creating visual effects for film and TV is still an art form. You're trying to create a mood, not just ensure color accuracy."

"FSR is analytically correct lighting, and is useful to lighting designers and architects who have to reproduce lighting exactly," echoes Ford. "Traditional methods like using RenderMan are quite adequate for film render."

Even if an animator had access to FSR, chances are studios currently do not have the capability to output the resulting image.

"We don't really see this as an area of major importance to our customers, at least as we understand the definition," comments Gritz. "There are comparatively few effects that require it, and currently nearly all output devices (CRTs, LCDs, film, digital projectors) are RGB-oriented and couldn't display full spectral data even if the renderer computed it."

On the other hand, Gritz continues, "High-dynamic-range (HDR) lighting is a trend we see growing in popularity. We think it's much more likely that people will be interested in HDR display. But most renderers work in floating point anyway, so this is really a change in output devices, not in renderers themselves."
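Gritz's point, that renderers already compute unbounded floating-point radiance and it is the display step that squeezes it into range, is easy to demonstrate. The sketch below uses the Reinhard global operator L/(1+L), one standard tone-mapping choice (picked here as a simple example; it is not tied to any renderer mentioned in this article):

```python
# Renderers output unbounded floating-point luminance; a tone-mapping
# operator compresses it into the limited range a display can show.
# The Reinhard global operator L / (1 + L) is one standard choice.

def reinhard(luminance):
    """Map an unbounded HDR luminance into [0, 1) for display."""
    return luminance / (1.0 + luminance)

# HDR pixel luminances spanning several orders of magnitude all land
# in displayable range, with highlights compressed rather than clipped.
for hdr in (0.01, 1.0, 10.0, 1000.0):
    print(f"HDR {hdr:>8.2f} -> display {reinhard(hdr):.4f}")
```

A true HDR display would need less of this compression, which is why Gritz frames the change as one in output devices rather than in renderers.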

Monzon counters, "We are defining new file formats, called MXS and MXI, that will be compatible with these formats, but they will also contain more information than you can imagine regarding each image. One of the main characteristics of Maxwell is that it is user-friendly software. That is because it works like a reflex camera, which makes its use very intuitive. We can add that Maxwell Render not only achieves images of great quality but also provides real information, such as lumens per pixel."

Training Day

One way to speed up render times is to have a knowledgeable person at the controls. "Somebody good with the tools can make it run very fast," says Brazil's Kirvin. He notes that, in addition to studios, much of Brazil's market is made up of individual artists, so they need efficient software that's easy to learn and to use.

RenderMan, whose industry-standard RIB file format was developed for high-quality movie rendering, is also reaching out beyond studio use with a new, more accessible RenderMan for Maya plug-in. The new version, which will be available for download by artists for $995 at the end of the summer, features deeper integration with Maya.

Gritz, meanwhile, says that most of the initial targets for Gelato are the larger studios. The development team all come from years of working in the top studios, including DreamWorks/PDI, ILM and Pixar, so they are intimately familiar with the types of rendering tools these studios use and have created Gelato in a way that integrates easily into those workflows.

As for the individual artists and smaller studios, there are currently Gelato plug-ins for Alias Maya and Autodesk 3ds Max. The NVIDIA Digital Film Group is currently working with a number of schools to make sure they have everything they would need to offer training to students interested in using Gelato.

In August, Massive Software announced a new autonomous-agent 3D animation application called Massive Jet that enables the creation of large-scale, believable digital crowd shots out of the box, with high quality and a low learning curve. A full-functioning package is priced at under $6,000.

"We wanted to build a product everyone can use," says Regelous. "Massive Jet offers a low learning curve and the power to produce Massive crowds at a cost within reach of all animation professionals. Using a single license of Massive Jet and agents from our Ready-to-Run Agent Library, you can easily fill a stadium, send a thousand people down the block or stage a huge medieval battle."

The Search for the Holy Grail

Despite all the advancements, there are still rendering challenges to be conquered, especially when it comes to organic bodies. Creating realistic objects, machines or effects is one thing, but rendering a single convincing person is another. "Creating a photorealistic CG human is the Holy Grail of the work we do," comments Grant. "People have tiny facial movements. Eyes jitter around. Skin is very porous, and you have to capture the way it reflects. Skin is translucent, and pigment comes from the blood. It all has to be reflected in the rendering."

"There are still new ways to make timeless effects," Grant says, "and we're constantly learning."

Janet Hetherington is a freelance writer based in Ottawa, Canada, where she shares a studio with artist Ronn Sutton.