This is the latest in a number of adaptations from the new Inspired series published by Premier Press. Comprising four titles and edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Lighting and Compositing.

A Brief Introduction

Jim Berney is currently a visual effects supervisor working with Sony Pictures Imageworks in Culver City, California. Jim holds two undergraduate degrees, in computer science and economics, from the University of California, Irvine, where he focused on artificial intelligence research funded by NASA. He then completed his master's degree in computer science at California Polytechnic, San Luis Obispo, with studies centering on the research and development of a new global illumination paradigm. Jim also spent one year at the Royal Institute of Technology in Stockholm, Sweden, studying computer architectures. He then spent three years working for DARPA (the Defense Advanced Research Projects Agency) as an Ada programmer for a large software engineering consortium.

In making the move to visual effects, Jim worked as part of the software development team at MetroLight Studios. While at MetroLight, Jim authored flocking software as well as procedural natural-phenomenon lighting software. In 1996, Jim joined Imageworks and served an integral role in developing the costuming technology, versioning and publishing, and lighting and rendering tools for the production pipeline. Jim's film credits include Batman Forever, Under Siege 2, Mortal Kombat, Anaconda, Starship Troopers, Contact, Godzilla, Stuart Little and Harry Potter and the Sorcerer's Stone. When Jim is not creating stunning visual effects or working on a new global illumination paradigm, he enjoys motocross racing.

David Parrish: Can you tell me a little bit about how you got started in the computer graphics industry?

Jim Berney: I did my undergraduate studies in computer science, with my emphasis in artificial intelligence. When I graduated, I started doing research with a software engineering consortium called Arcadia that was funded by DARPA. I was basically writing code that would analyze large-scale code, for metric gathering and such, so it was incredibly boring and dry. I did that for a couple of years, and it was nice because I was actually on the UCI (University of California, Irvine) campus. It really wasn't the real world, even though I was getting paid. One day I was at a book fair and I was just walking by a table. There was a book on the table about ray tracing, and it was one of the standard references at the time. I bought it. It was a $70 hardback on sale for $20. I thought, "This stuff's really cool. This looks fun." I started looking at doing my master's, and I looked at Cal Poly San Luis Obispo, where there was a professor, Chris Buckalew, who was doing really interesting stuff on global illumination paradigms. I ended up going there and getting my master's with an emphasis in computer graphics. My thesis was actually developing a whole new paradigm for global illumination. While I was there, I still didn't know what I would do for a living. I didn't know how to make money with this knowledge. At the time, I looked at getting out of computers and becoming a baker. I actually apprenticed as a baker for a while because I was just so tired of computers. One day I was talking to an undergrad who shared the same advisor as me. He had done an internship at MetroLight, and he was talking about this whole industry that was a complete unknown to me. I understood the underlying theories of animation and computer graphics and rendering, but nothing really about compositing. I remember at the time watching television, seeing a commercial, and still having no idea how that got onto the screen. This kid had already done all the footwork, though.
He gave me his list of all the companies and their addresses, so all I had to do was just write a resumé and send it to these people. I did that, and of course no one really got back to me. A couple of people asked if I had a demo reel, and my response was, "What's that?" I had no reel, and the only people who really gave me a chance were MetroLight Studios. I was hired there as what they call an RTD. That's a research technical director, and the position kind of straddles software and the artist pool. They gave me a shot, and my first gig there was writing flocking software for Batman Forever. I wrote some really cool flocking software for that movie, and then I actually implemented it within the shots. I remember seeing the film for the first time and finding out then that my shots were cut from the movie and my name was cut from the credits. I thought to myself, "Oh, this is great." To top it off, my truck got stolen that week as well. So that was my welcome to L.A. MetroLight was a small group, and they only had a couple of people who were really experts at RenderMan. One of them left right when we started doing Under Siege 2, so I really quickly ramped up on shader writing in RenderMan. I started rendering and lighting shots for Under Siege 2, and then I did some work for Mortal Kombat. We were doing some commercial stuff as well, but those were the two movies. I was there only a year before I moved here to Sony Imageworks.

DP: What were some of the challenges you found in making the transition from your programming background into the world of film effects?

JB: Acquiring a good eye for the aesthetics was a challenge. Everything I did before was pure computer coding. It was math. The programming helped, though, because of the need for efficiency and organization. When I switched over to film effects, my background helped me take on multiple shots, troubleshoot and problem-solve.

DP: How did you develop your aesthetic sensibility?

JB: It comes with time and the experience of doing all these shots. Since I've been here, I've been able to work with Ken Ralston, John Dykstra, Rob Legato, Jerome Chen and quite a few others. You start learning from all these guys about what looks good and what doesn't.

DP: What was programming like at an effects company compared to your previous research experience?

JB: The projects I was working on before starting in the effects industry were huge in scale, and things had to be more formalized. The reason for the formalization was to allow somebody else to be able to pick it up at any time. The consortium was large and based out of multiple cities, so theoretically somebody from another town might pick up the code and need to easily understand it and use it. Everything had to be very well written, so to speak. Coming into effects, since I wasn't necessarily part of software, it was more like meatball surgery. You came in and you coded some stuff up on the spot. It was in the heat of production, and it wasn't like we had six months to write a program and test it out thoroughly before using it. I would just put things together and test them in practice on a shot. If it breaks, you fix it, and it's a process that constantly repeats as you continue expanding the scope of the program. With really good code, like what we wrote in my engineering days, there's a huge requirement spec written for a software project. That stage is skipped when a tool needs to be written for a production. If I need something to do a particular task, then I create it. As the needs change, this thing takes on a life of its own. It kind of morphs and usually becomes a little less stable.

DP: How does optimization work into the equation in the effects industry?

JB: Optimization is tough. Usually it's hard to do in the heat of production. You end up optimizing during down time. A very big case of this is BIRPS, our rendering system. That has slowly been evolving since I've been here. I remember a guy came over just before I did, and he started writing the idea of having a sprib, or a single rib, with all of the per-frame information baked out. That was being implemented while I was here, and I was actually writing the pipeline around it. We used it on Anaconda, and it worked incredibly well for that. I hacked some stuff together for Contact and Starship Troopers, but then we had some down time and really started to think about the design a little more. We were able to implement those changes for use on Godzilla, but, again, it was kind of rough at first. We were just trying to get it ready for Godzilla and didn't really have time to fully develop it. When Godzilla ended, the rendering system got bigger, and we started thinking about inheritance and multiple material files. We wanted to have the capability of creating standards that people could share. We were starting to implement that at the beginning of Stuart Little, and it was necessary because of the scale of that show (see Figure 1). Instead of five to 10 artists, now you've got almost 50 artists sharing these materials. The addition of material standards made a big impact.

DP: What was your progression through the ranks at Sony Imageworks to your current position of visual effects supervisor?

JB: I started here as a TD and was hired to work on Anaconda, as well as a test for Dinotopia with Ken Ralston. I was working with Steve Rosenbaum, and I did the Dinotopia test as an artist. On Anaconda, because I had written the pipeline, I assumed the role of a lighting lead. When I finished those two projects, I ended up splitting time between Starship Troopers and Contact as an artist. After completing those, I'd only been here a year. By then it was time for my review, and I was angling toward a senior TD position. I guess Rosenbaum and the others had a different idea, and they were pushing me into a CG supervisor role. When the people I had worked with gave their reviews of me to the department head, they said CG supervisor was the position for me. I had no idea. Basically, I remember having a conversation with Stan [Szymanski, head of Sony's digital department], and I said that if right now you threw me on a project as a CG supe, I'd say you're crazy. He said, "Oh really? Because that's the plan." I want to be confident as well as competent and know how to do it before I jump into it. So I said I could see being a CG supe on a smaller project if I have a good mentor, so I'm not without a net. Jerome Chen and I ended up doing a couple of little tests for Dr. Dolittle and something for Paulie, a talking bird, and one day he came in and said, "Put all that down, we're going to do Godzilla." I thought, great, now it's a big project without a net. At that point I became a CG supe within my first year of being with Imageworks. I CG-suped Godzilla, and when that wrapped, the same group went on to Stuart Little. Our titles didn't change, but what we needed to do got upped a little bit. We were CG supes, but we also did a little bit of the work of a digital effects supe. We were on the live-action set and making a lot of creative calls. There was a lot to do on that side of things, and we were also writing code and building the pipeline.
When Stuart ended, I went on to Harry Potter, and there they upped me to digital effects supervisor. The idea was that Jerome was going to be the visual effects supervisor for Imageworks, even though Rob Legato was the production's visual effects supe [Rob Legato was employed by the studio, Warner Bros.]. Then Jerome went on to Stuart Little 2, so that left me at the top for Imageworks. Since there was now no visual effects supe, how could you have a digital effects supe? So kind of by default I was moved up to visual effects supervisor. I moved up pretty fast. I was only here a year and they already had me in as a CG supe. I did that for about a year, and then I jumped to visual effects supe quickly.

DP: What does it take to successfully integrate a CG character into a live-action scene?

JB: If you can do more and more iterations quicker, that's obviously easier. The counterpart, which is kind of where we are if you have a full-furred creature, is a first pass with one light taking 20 minutes just to process before you actually see anything. That's really cumbersome. It's weird, because right now it seems like 90% of it is technical hurdles and 10% is the aesthetics for the artists. If lighting were infinitely fast, you could move lights around in real time, and if it were completely seamless and easy to use, then you could almost take the same paradigm they have on set. There you have a DP [director of photography], a lighting director and grips moving the stuff around. If the CG were that efficient, the TDs would be scaled down to a role similar to a grip. You'd have lighting directors coming through with multiple TDs who could just sit there in real time and move stuff around. As things get easier, it will go that way. When I first came in, you had to be highly technical because lighting was just script hacking. You had to be a programmer to do photo-real stuff. We didn't have specific shader writers, because the people who wrote the shaders were the only ones who could do the lighting. It was heavily code based, so you had to be a programmer to do the shader writing and the lighting. As tools get easier, it's literally just like moving a physical light around. Then it all becomes aesthetics. We're getting there as the computers and RAM are getting cheaper and faster. The code now could probably be much faster, because we're really not completely utilizing concurrent programming. We're not really parallel computing massive arrays of matrices and whatnot. If we can get into that, we can actually get faster and faster and faster. As things become increasingly faster, then it's just like the real world. You don't need a highly experienced, educated, highly paid person to move the light from A to B.
You just need somebody with an eye to tell 100 people to move the lights. If you have one experienced person giving instructions, you could be lighting a lot of shots quickly. It'll be like on the set, where you have a director and a DP. It's their vision. It's not the actors' and it's not the grips'. It's not anybody's vision except the director's, the DP's and the people who are paying for it. That's just how it is. It works great now because of the way it's structured. I like people like yourself who are creative and have a good eye and can do that, because it's so cumbersome right now and I can't sit there and hand-hold them. If I say, no, the rim is hitting him right in the face, then you go through and make the technical and aesthetic calls to correct it. If it were incredibly fast, though, you could have another type of person who sits there and directs people in real time. Then you render it out and walk away, and we'd see it on film.

DP: Does your eye for detail come naturally, or is it something that has developed over your time as a supervisor?

JB: It's both. It's not natural to the point where it's unconscious. It's actually an effort, and it comes from experience. In an extreme case, I had a project where there were two visual effects supes, and one of them would come by and say one thing and the other would say another. Sometimes you work with supes who don't necessarily know what they want. One day they want it red, and you make it red, and then they want it blue. Then you make it blue and they want it green, and then all of a sudden you're back at red and you're going in circles. On the other hand, I noticed working with Jerome on Godzilla that he has a target. He says, "This is what I want." As long as you hit that target, you are done. I just knew that's how you get things done. I never wanted to jack people around. I never wanted to sit there and go one direction and then come back in a circle. That is a conscious effort every time I talk to an artist. Whenever we start a shot, I try to figure out what I want and how to attain it, and then feed the artist the information at the time he needs it. I never completely flood them with every little detail. I start in broad strokes with what we want, and that's enough information for the beginning. As we progress, I slowly get to more fine-grained specifics of what should be done.

DP: So when you look at a show, do you evaluate every shot and develop a vision for what each shot will be and how it will get there?

JB: Yes, to an extent, and you should. It's a lot of work, but not relative to 100 artists spinning in circles for three weeks per shot instead of the one-week target. Then you're talking about thousands of man-hours instead of me just taking maybe 10 and thinking it out. It's just stopping, thinking and planning. You've gotta have a plan.

DP: I know from our work together on Harry Potter that in addition to having a plan, you always have a positive and encouraging attitude. How do you maintain a positive attitude for yourself and your crew?

JB: It never helps to have people who are completely stressed, negative, freaking out, and causing more pressure than there needs to be. It's just putting it in perspective. This is a job. It's a lot of work and we should take pride in it, but all in all it is a job. We're making donuts, so to speak. When I first started, it seemed like it was the pressure of a brain surgeon. It's like someone's life is depending on this filmout. It's not. We'll do it again tomorrow. We'll get it done, and it always does get done. Work is not my whole world, and it shouldn't be everybody else's whole world. There need to be things outside of work that you're interested in, having a good time with, and can spend energy on. That way you don't just get bogged down with work 100% of your time. Personally, I've got to get out of the city. I like the mountains and the desert. That's why I like riding motorcycles. I get out. On Saturdays when I ride at the track, it's out in these crappy areas and it's hot as hell, but it just feels good. It feels good just to be out in the baking sun.

DP: Do you have any methods, tips, or tricks you could share for lighting and compositing a scene and creating believable computer graphics elements?

JB: There are a million tricks. One is that you just need to stop. Stop and smartly set things up. People want instant results, so they won't spend a day correctly setting something up with no image to show for it. They'll go straight for an image without setting things up properly. Then they'll bang their head for the next two weeks trying to make it look right, but the lights just aren't correctly set up. They'll try to massage the paradigm they've got together, whatever the rig is, for two weeks. You just need to stop, move that light, create another light, and set it up properly from the beginning. A lot of it has to do with the difficulty of the tools. It takes time to create another light with another shadow. That's why we're trying to make shadows much easier. I noticed a long time ago that some people didn't want to put shadows in their lights, or even create another light, because it was a pain in the butt. It used to be even harder. The idea is, and eventually it will be, that lights will be shadowing. That should be a given. All lights should shadow. You shouldn't sit there and have to worry about where shadow map files are going to go and the names, and so on. That's actually why we set the pipeline up the way it is, with all the naming conventions and standards. You don't have to worry about where things are and how they're named.

I was on a show recently and we were trying to do a particular type of animation. It wasn't character animation, but rather long, animated camera moves. We were struggling with it, and we'd ask for these very small changes that wouldn't come. We fought it for weeks, and it turned out to be a result of the camera and parenting system being set up incorrectly. In the beginning, I said, "I don't care if I don't see anything for three days. Just set it up correctly." You know, they went straight to the image to quickly produce stuff. Instead of stopping and setting it up correctly for a day, we fought it for two weeks.

As for specific tricks, there are hundreds of them. If you're trying to sit there and see how a new light works, don't render a fully furred creature at 2K with motion blur [see Figure 2]. You have to optimize for testing. It's a good idea to start with quick renders that are cropped, fast, and low quality, just so you can see where the lights are. Start with a sphere. I've asked that of people for a while. I don't want to see the troll. I don't want to see a furred dog. I don't want to see anything but a ball. Just give me a sphere that looks integrated, and then put the dog in. Of course, the dog won't look right, but it's a quick way of initially setting up lights. I've noticed that when people resist doing things, it's because they're difficult, so you just need to make it easier, or even automatic. I'll sit there and say it needs a little something, like bringing the rim a bit this way. I come back the next day and the rim light's still not moved. I'll ask every day just to bring the rim light around. It turns out all they were doing was just upping the intensity, because that's the easiest thing to do. It needs to be brought around. You need to move it. The entire process was an effort by the artist to avoid a re-render of the shadow maps. Well, the simple answer is the light has to be moved and the shadow maps need to be re-rendered, because it's not going to look right otherwise. If you just stop and analyze the situation, you'll realize it only takes about two hours of processor time to render new shadows. Everyone wants instant results, though, so they go for the quick fix even if it's wrong. You just can't achieve instant image creation right now. I ask to move a light, and the artist wants to avoid a re-render of the shadow maps. They think maybe making the falloff a little different or adjusting the wraparound will fix it. It doesn't work the next day or the next day, and three days later it still doesn't look right. OK, now you've wasted three days and three iterations of film because you didn't want to stop and spend one or two hours in the first place. Even if it's a whole day, it's worth it.

Outside of the technical aspects, there are some little things that are also important. Steve Rosenbaum taught me that if a supervisor's talking, you should be writing. It instills confidence that all the comments I make will get done. A lot of times people don't write, so I won't even throw all of my ideas out, because I know they'll fall through the cracks. They'll just address the big ones. Just because I talk about one thing for 10 minutes and say another thing quickly, it doesn't mean they're not equally important. That little thing I just threw out there will stop the shot from finalling just like the thing I went on about for 10 minutes. If a supervisor's talking, write it down. You won't remember. You'll say, "Oh yeah, yeah, that's obvious," and later you won't remember. It's easier to have it all listed, and you just go through and check it off. The supervisor's going to remember, and a list makes things so much easier. Another thing is to try not to make excuses. I hate it when you come by and an artist says this didn't work, and this, and that, and this, or that, and it's this big long story. Then I ask what they have to show, and they say nothing. I understand the process. If I ask if you did it and you say you'll have it later, I'll say OK. I'll respect you. You don't need to make excuses.

DP: What do you think the role of image-based lighting and global illumination will be in the near future?

JB: I studied the hell out of that. I was looking at some of the high dynamic range things. It works for certain instances, but it's not flexible in terms of moving a light. Say you have an environment and you use it to automatically create the lighting, and then you want to stick a little creature in it, like a Stuart Little (see Figure 3). Well, when they lit the set, they didn't light the mouse, because the mouse wasn't there. So the set's lit well, but it's like when we did the Centaur on Harry Potter. OK, the set's lit, but there's no Centaur, and we didn't even light the set as if the Centaur were there. If there were a real creature there, the DP would add specials that don't affect the surroundings, but that light him. That's totally omitted in this scenario. So now, when we come back to the digital effects house and we're starting to put our creatures into the scene, the VFX supe kind of plays that role. He plays the role of DP. We talked to a guy who's doing a lot of research with HDR (high dynamic range) and a ball and eight pictures. He's doing some great research, and we talked to him for a while because he had made some really beautiful pictures. We asked, "What if I want to put a creature in and use that?" He said you can do that. So then we asked, "What if I want to give it a little rim?" That's when it got really difficult. That doesn't mean it's dead or it doesn't work. I'm just saying there are many more considerations. Even if I duplicate the exact values of the scene, that's not the lighting I want for my subject. It's a great starting place, as long as you can go from there, add your specials and flag lights off. You just need to think it out. We wanted to do something like that on Potter, but what had been implemented in the past was what we would have gotten. You set up the ball (chrome reference sphere) in an environment, shoot your pictures, and then take those energy readings. That's your lighting, period. We also needed to be able to sit there and add and manipulate lights.

I love the silver balls and the information they provide. I wrote code on Anaconda that took the image of the ball, cropped it in as a little texture, focused in on the highlights, and gave you a vector of where the light came from. It didn't give you position, but it gave you the direction the light came from. We used distant lights in RenderMan, with parallel rays, and the lighting in Anaconda took about 10 minutes. Those were our lights, period. We never moved them. We did balance the intensities, since the evaluations did not include intensity or color. We could have included those as well, but didn't have time to automate it. It gave you the automatic lights, and then I just sat there and balanced it. Most of the lighting was in the shader. We kind of got caught up on things with Potter, but I had recorded images on set with two silver balls in each scene. One ball gives you a vector, but with two balls and two vectors, their intersection gives you light position. It's easy. I could code that now, but we just never did. On the set of Potter, I shot two balls, but we just never got the chance to code it up. Otherwise, we could have gotten the positions. That actually integrates into the existing pipeline we have now.
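The two-ball geometry described above can be sketched in a few lines. This is a hypothetical illustration, not the Imageworks code: it assumes you have already located, for each chrome sphere, the highlight's surface normal and the viewing direction toward it. Mirror-reflecting the view direction about the normal recovers the light direction, and because two such rays from two balls rarely intersect exactly, the midpoint of their closest approach serves as the light-position estimate.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(x / n for x in v)

def light_direction_from_highlight(view_dir, normal):
    """Mirror-reflect the viewing direction about the sphere normal at the
    highlight (r = d - 2(d.n)n). The result points from the sphere surface
    back toward the light, as with a distant/parallel-ray light."""
    d = _normalize(view_dir)
    n = _normalize(normal)
    k = 2.0 * _dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

def light_position_from_two_rays(p1, d1, p2, d2):
    """Estimate a light position from two direction rays (origin + direction,
    one per reference ball) as the midpoint of the shortest segment between
    them, since measured rays are almost never exactly coplanar."""
    d1, d2 = _normalize(d1), _normalize(d2)
    b = _dot(d1, d2)
    w = tuple(a - c for a, c in zip(p1, p2))
    e1, e2 = _dot(d1, w), _dot(d2, w)
    denom = 1.0 - b * b  # zero only if the rays are parallel
    t1 = (b * e2 - e1) / denom
    t2 = (e2 - b * e1) / denom
    q1 = tuple(p + t1 * d for p, d in zip(p1, d1))
    q2 = tuple(p + t2 * d for p, d in zip(p2, d2))
    return tuple(0.5 * (a + c) for a, c in zip(q1, q2))
```

For example, two rays fired from ball positions (0, 0, 0) and (10, 0, 0) toward a light at (1, 2, 3) recover that position to floating-point precision. A single ball feeds a distant light (direction only); the second ball upgrades it to a positioned light.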

DP: Do you have any bits of advice for those interested in lighting and compositing as a career?

JB: It seems like attitude is everything, and our industry is one where you can't just rest on your laurels. You can't just get settled in. Regardless of how long you've been at it, things are always changing, and you have to keep your skills up to date. That's the scary thing, since you can easily get left in the dust if you don't keep your skills current. There are always new kids coming out of school who are willing to do the hours and learn the new tools. I always like it when the artists themselves take the initiative to actually help improve the process. That helps a lot. Instead of just complaining about it, they ask how they can make it better. A lot of the tools and improvements we have here are because of people who have taken that initiative. Some of the lighting tools that you aren't even aware of happened that way. There are artists whose job it was to get the shot out, and they stopped and figured out they could be more efficient if they were able to create a new way. They did it, and we put it in the pipeline.

We're getting close. It's going to be incredibly different five, let alone 10, years from now. People could become less skilled because of what we want. We want faster, easier tools. OK, if it's faster and easier, then you probably don't need people who are as smart and experienced. There will probably be a shift, and there already has been, since ten years ago you needed programmers. Now we're getting more of a true artist pool, and that's a good thing. I think people are tired of banging their heads on the box right now. When you look at a movie script and the required special effects, the technical hurdles are always in your head. You ask, "Can we do that, and can we do it for the money?" When I talk to directors, I don't even want to hear that. At some point, the director will be able to tell us what he wants to see, and we'll have the tools to quickly figure out a way to do it. There'll be nothing we can't do, and it won't cost a billion dollars. We'll have the technology and the speed. Every show we figure one more thing out. Before Stuart, somebody figured out hair, and then on Stuart, we included hair and made it work with our pipeline. We made a lot of improvements to hair and different hairstyles on Potter. With cloth, it's the same thing. Five years ago, every CG character was in spandex and shaved. Skin is not perfect, but by the next project with a digital human, it'll start to get perfect, and it'll start to get faster and cheaper. Fire is probably not perfect. Water's getting better. Nature is built up of all these elements that we're slowly nailing down one by one. Fifteen years ago it was marble or something simple. Twenty years ago it was plastic and steel. As we start to add all of these to our list of accomplishments, we'll be able to make whatever we want. Then it'll be completely creative. The question can simply be asked, "What do you want to do?"
Maybe we want to go off in some weird galaxy and see some creatures and phenomena that only exist as a vision in a director's mind. People could do whatever they want and express themselves any way they want. They wouldn't have to worry about budgetary constraints, getting it done on time, or even being able to do it at all. That's coming around the corner. We're getting there. On Stuart Little 2 we had to do a bird, so now there are feathers. There's a whole different bag of problems. Now that's done, and as long as we don't have fish or something, and the next film is the exact same thing, then that would be relatively cheap to do and we would be able to accurately schedule and budget it. Then you could potentially come down to a 40-hour workweek. Why not? When you sit there and try to figure out a show like Potter, it's a huge challenge. We had to do 14 digital kids with flowing robes and hair and this and that. We had no way of doing flowing, moving, dynamic hair, specific hairstyles or 14 simulated robes at once. We had to budget time and money for all of these tools that we didn't even know how we were going to make. So, if you take all of that out of the way, it becomes so much easier. Each show you have less and less to figure out, hopefully, and eventually in ten years it'll all be figured out. We'll continue to optimize, and once it gets completely optimized, it gets less technical and more creative. Then it'll be fun.

To learn more about lighting and compositing and other topics of interest to animators, check out Inspired 3D Lighting and Compositing by David Parrish; series edited by Kyle Clark and Michael Ford: Premier Press, 2002. 266 pages with illustrations. ISBN 1-931841-49-7 ($59.99). Read more about all four titles in the Inspired series, and check back to VFXWorld frequently to read new excerpts.

David Parrish went straight to work for Industrial Light & Magic after earning his master's degree from Texas A&M University. During the five years that followed, he worked on several major films, including Dragonheart, Return of the Jedi: Special Edition, The Lost World: Jurassic Park, Star Wars: Episode I -- The Phantom Menace, Deep Blue Sea, Galaxy Quest and The Perfect Storm. After five years with ILM and a short stay with a startup company, he was hired by Sony Pictures Imageworks to work on Harry Potter and the Sorcerer's Stone.

Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in film, video and computer animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools, including the San Francisco Academy of Art College, San Francisco State University, the UCLA School of Design and Texas A&M University.

Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and the San Francisco Academy of Art College.