Russ Andersson on the state of the CG Industry

Introduction

On this occasion we have the pleasure of presenting a view of the evolution, current state, and future of the CG industry from the perspective of an expert in the camera tracking and visual recognition field. Russ Andersson, founder of Andersson Technologies LLC and developer of the SynthEyes camera tracking (matchmoving) software, shares insights from his years of working in the CG industry. Enjoy!

The Interview

What first attracted you to writing graphics software, as opposed to another field of computing?

My background is originally in robotics and computer vision for robotics. I segued into some video game work at EPCOT that presaged the Wii and Natal, and from there into camera tracking work. Robotics has been stuck a bit because the mechanisms haven’t kept up with advances in computing; the film/TV post industry is able to push ahead to new technologies without being hampered the same way. I do miss the visceral impact of bugs in my robotics software, though: crunching metal and flying pieces of plastic.

What features in camera tracking software development have been demanded by the CG industry?

The most recent has been stereo, and with the commercial success of Avatar, I expect to see even more stereo projects. SynthEyes has been used on stereo projects for some years now by advanced users, even though it had no specific stereo features. I’d originally anticipated adding stereo features about two years ago, but when I asked around, it seemed there wasn’t a strong readiness at that time. A year later, when I went ahead and added specialized stereo capabilities, those features went instantly from my lab into "secret projects" worldwide, which turned out to be Avatar and perhaps some things we haven’t seen yet. The industry was ready, and we’ve seen that the audience was ready too.

What changes has camera tracking brought to the film/VFX production process?

I think it has allowed directors to plow ahead to reach their artistic vision during shooting, with less and less regard for the details of how that vision will ultimately be achieved in post-production. That can be a full-employment plan for post-production, but the added flexibility while shooting offers the opportunity to achieve a better end product.

What changes have you seen in camera-tracked shots through the years, how have they evolved? Which shots or uses of camera tracking surprised you?

The rise of "reality" and mock-documentary shooting styles, such as on Battlestar Galactica, has meant that there are plenty of shots that are quite wild, not just the more traditional crane and dolly shots. There are more challenging long shots, and of course the resolution requirements have gone up.

The most unusual uses I’ve seen have been for image analysis of Antarctic ice-boring footage (tracking air bubbles in the ice), and for the analysis of colonoscopy images. So it’s the same idea in very different locales!

What is your view of the current state of the CG field? Do you feel it is as exciting as it was 10-20 years ago, or has innovation basically slowed down these past few years?

It’s only at the simplest level that you might think innovation is slowing. Sure, it’s not like when you saw smooth shading or antialiasing for the first time. But there are still a lot of things happening, a lot of people working in the area, and feature sets continue to explode. The steady increase in computer power, currently driven by multi-core technology, means that many more things will become practical. Speed is important not only for end users but also for developers, who need it to identify and develop commercially viable technologies. Taking advantage of all the cores is more difficult, but I’ve been doing that for many years now; multi-processor systems have always been necessary in robotics.