Tag Archives: vfx

The Matrix (1999) is an American science fiction action film directed by Larry and Andy Wachowski. The Matrix is set in a future where reality as perceived by humans is actually the Matrix, a simulated reality created by sentient machines to pacify and subdue the human population while their bodies’ heat and electrical activity are used as an energy source. When computer programmer Neo learns of this, he is drawn into a rebellion against the machines, alongside other people who have been freed from the ‘dream world’ into reality.

The film is best known for popularizing a visual effect known as ‘bullet time’, a shot in which the action progresses in slow motion while the camera appears to move through the scene at normal speed. The directors’ approach to the action scenes drew upon their admiration for Japanese animation and martial arts films, and the fight choreography and wire fu techniques borrowed from Hong Kong action cinema were influential on subsequent Hollywood action film production.

Each camera is a still-picture camera, not a motion picture camera, and it contributes just one frame to the video sequence. When the sequence of shots is viewed as a movie, the viewer sees what are in effect two-dimensional ‘slices’ of a three-dimensional moment. Watching such a ‘time slice’ movie is similar to the real-life experience of walking around ‘in the scene’ and viewing it from different angles. The still cameras can be positioned along any desired smooth curve to produce smooth-looking camera motion in the finished clip, and the firing of each camera can be delayed slightly so that the scene’s motion unfolds across the sequence.

For The Matrix, the cameras’ positions and exposures were previsualized using a 3D simulation. Instead of firing the cameras simultaneously, the visual effects team fired them fractions of a second apart, so that each camera captured the action as it progressed, creating a super slow-motion effect. When the frames are put together, the resulting slow-motion effect approaches the equivalent of 12,000 frames per second, as opposed to the normal film speed of 24 fps. The cameras at each end of the row were standard movie cameras that picked up the normal-speed action before and after. Because the cameras on the far side of the circular rig were visible in each shot, computer technology was used to edit them out of the background.
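As a back-of-the-envelope sketch of the timing trick (the 12,000 fps and 24 fps figures come from above; the camera count here is hypothetical), spacing the firing times of the still cameras a fraction of a second apart produces the super slow-motion playback:

```python
# Sketch of the bullet-time firing schedule. The 12,000 fps effective
# rate and 24 fps playback rate are from the text; NUM_CAMERAS is a
# made-up rig size for illustration.

PLAYBACK_FPS = 24          # standard cinema projection rate
EFFECTIVE_FPS = 12_000     # effective capture rate of the finished shot
NUM_CAMERAS = 120          # hypothetical number of still cameras in the rig

# Each successive camera fires this much later than the previous one,
# so adjacent frames sample the action 1/12,000 s apart.
delay_between_cameras = 1.0 / EFFECTIVE_FPS
firing_times = [i * delay_between_cameras for i in range(NUM_CAMERAS)]

real_duration = firing_times[-1]                  # real time the action spans
screen_duration = NUM_CAMERAS / PLAYBACK_FPS      # how long it runs on screen
slowdown_factor = EFFECTIVE_FPS / PLAYBACK_FPS    # 12,000 / 24 = 500x slow motion

print(f"{real_duration * 1000:.1f} ms of action plays back over {screen_duration:.1f} s on screen")
```

With these numbers, roughly a hundredth of a second of real action stretches to several seconds of screen time, which is why the effect reads as impossibly smooth slow motion.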

The bullet time effect is used to illustrate the characters’ exertion of control over time and space in the movie.

Marvel’s The Avengers was a highly anticipated blockbuster and, no doubt, a big hit in cinemas when it was released last year. One of the main characters in the movie was the Hulk, played by Mark Ruffalo, who turns from a normal human into a buff green shirtless killing machine. Industrial Light and Magic (ILM) was responsible for the CGI, and they did such a great job that they were recently nominated for the Academy Award for Best Visual Effects.

Here’s roughly how they did the digital double. ILM used motion capture to catch the emotions Mark Ruffalo portrayed on screen. Every bit of Hulk stems directly from Mark, from the pores on his skin, to the grey hair at his temples, right down to using a dental mold of Mark’s teeth as the basis for Hulk’s teeth. Their strategy was to work out rendering and texture issues on the Banner (Hulk’s human form) digital double until it looked indistinguishable from Mark Ruffalo.

The realism of this digital double is astounding!

As Banner and Hulk share the same topology, ILM was able to transfer textures, material settings, and the facial animation library between them. This gave them a decent base to start from, but with the characters’ significantly different proportions, there was a lot of retargeting work that needed to be done. They tried to be economical with their poly counts, but with Hulk they made a conscious decision that he was going to be extremely dense in resolution for a better mesh. By working like this, they never came up short on resolution for the close-ups and detailed shape work required to represent the anatomy under Hulk’s skin. They then invested in a robust multi-resolution pipeline so that the model remained manageable for the artists to work with.
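To illustrate what shared topology buys you (the function and numbers below are my own toy sketch, not ILM’s pipeline): because both meshes have the same vertex count and ordering, a facial shape authored on the Banner mesh maps one-to-one onto Hulk’s vertices, with a crude scale factor standing in for the proportion retargeting:

```python
import numpy as np

# Hypothetical sketch: reusing a facial blendshape authored on the Banner
# mesh for the Hulk mesh. Shared topology means the per-vertex deltas
# line up one-to-one; `scale` crudely retargets them to Hulk's proportions.

def transfer_blendshape(banner_neutral, banner_shape, hulk_neutral, scale=1.0):
    """Apply a Banner facial shape to the Hulk mesh via shared topology."""
    deltas = banner_shape - banner_neutral     # per-vertex offsets, shape (N, 3)
    return hulk_neutral + scale * deltas       # same vertex order on both meshes

# Toy 3-vertex "meshes" just to show the data lines up.
banner_neutral = np.zeros((3, 3))
banner_smile = banner_neutral + np.array([[0.0, 0.1, 0.0]] * 3)  # lift vertices in y
hulk_neutral = np.full((3, 3), 2.0)

hulk_smile = transfer_blendshape(banner_neutral, banner_smile, hulk_neutral, scale=1.8)
```

Real retargeting between such different anatomies is of course far more involved than a uniform scale, but the shared vertex ordering is what makes any of it tractable.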

Canadian director James Cameron directed The Terminator (1984). He is well known for his use of cutting-edge visuals and effects technology, and The Terminator was his first groundbreaking sci-fi blockbuster in the visual effects arena. He pushed the boundaries of special effects with the film, during a period when Hollywood was experimenting with new kinds of visual effects through films that fused the genres of science fiction and horror.

Seven years later, Cameron came back to direct Terminator 2: Judgment Day, which was even bigger than before in terms of CG. It was the first film to feature a computer-generated main character, and its VFX were completely top notch for the period. Not only was there a CGI Terminator, it also morphed and regenerated body parts, and on top of that it could turn into a mercury-like liquid metal that seeped through little cracks. The movie paved the way for all the VFX-laden movies that followed.

Most of the effects were provided by ILM, and the creation of the visual effects took 35 people altogether, including animators, computer scientists, technicians, and artists. It took ten months to produce, for a total of 25 man-years. And despite the large amount of time spent, the CGI sequences totaled only about five minutes of screen time. All this work was worth it, though, because the visual effects team won the 1992 Academy Award for Best Visual Effects.

For the scene featuring Sarah Connor’s nuclear nightmare, the people from 4-Ward Productions constructed a cityscape of Los Angeles using large-scale miniature buildings and realistic roads and vehicles. After having studied actual footage of nuclear tests, they simulated the nuclear blast by using air mortars to knock over the cityscape, including the intricately built buildings. 4-Ward also created a large layered painting of the city augmented with a radiating blast dome and disintegrating buildings created with an Apple Macintosh program called Electric Image. They also contributed a number of shots showing molten steel spilling out of a trough onto the floor, and used real mercury directed with blowdryers to create the eerie shots of the shattered T-1000 pieces melting into droplets and running back together.

Davy Jones stars as the antagonist in the second installment of the Pirates series. He is completely CGI, and everything about him is so believable it’s crazy! Of course, the team responsible for this had to be none other than Industrial Light and Magic.

The production shot real actors on set and digitally replaced them. To do this, each actor was scanned and modelled, then wore a motion capture suit that enabled them to be replaced in post-production. ILM could not rely on traditional MoCap or hand animation because of multiple issues: traditional MoCap has to be done in special studios with multiple cameras, the cameras and tracking markers are expensive specialized equipment usable only in a calibrated environment, and the data stream contains both noise and errors, so it needs tremendous clean-up. The whole process is complex to set up, expensive, and highly specialized, so it wasn’t used. Instead, ILM created an innovative new system called Imocap that allowed on-set and on-location motion capture, eliciting the most believable look and performance possible out of actor Bill Nighy.

He wore a pair of gray ‘pajamas’ with reference dots placed around the suit and his face, and his performance was captured entirely on set as he interacted with the other actors. This improved the other actors’ performances, as they had someone ‘real’ to interact with, and it also gave the animators a highly detailed reference.

Being ILM, they made a breakthrough with Imocap: they only had to film with a single on-set film camera instead of the multiple cameras traditional MoCap requires. A single camera removes many of the restrictions the motion capture process imposes, so motion capture could be done on set. The approach was to model the actor’s range of motion, then use an elaborate system to fit that range of possible motions to the data from the single camera source.

Besides Imocap, the other challenge ILM faced with the character of Davy Jones was his 46 flopping tentacles. ILM wanted the tentacles’ curling and movement to reflect Davy Jones’ mood, not just bob around lifelessly, but they didn’t want an animator to have to manually manipulate each and every one frame by frame. To solve this, their programmers added a sort of inter-tentacle motor to move them around automatically. Mathematical expressions and/or keyframe motion fed to motors in the joints between the cylinders making up Davy Jones’ 46 tentacles caused them to bend, curl, writhe, and perform in lifelike ways. ‘Stiction’ kept the tentacles from sliding.

Because the computer knows what the actor’s limbs can do from any one frame to the next, it can ignore a lot of mathematical possibilities and converge on a solution. Once the solution is constrained by this virtual range of possible motion, a single camera can produce a very powerful motion capture data stream. While the motion capture system worked extremely well, the lip sync was not done this way and was instead hand animated.
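A toy, one-joint version of that constraint idea might look like this (everything below is a made-up simplification of the principle, not ILM’s actual solver): only search over angles the joint could plausibly reach since the last frame, and pick the one whose projection best matches the single camera’s observation.

```python
import math

# Hypothetical sketch: constrained single-camera pose fitting for one
# 1-DOF joint. The projection model and step limit are illustrative.

def project(angle, limb_length=1.0):
    """Project the limb endpoint onto a side-on camera (x coordinate only)."""
    return limb_length * math.cos(angle)

def fit_joint_angle(observed_x, prev_angle, max_step=0.3, samples=200):
    """Fit a joint angle to one camera's observation, searching only
    angles physically reachable from the previous frame's pose."""
    best_angle, best_err = prev_angle, float("inf")
    for i in range(samples + 1):
        # Candidates are confined to [prev_angle - max_step, prev_angle + max_step].
        angle = prev_angle - max_step + (2 * max_step) * i / samples
        err = abs(project(angle) - observed_x)
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle
```

Constraining the search like this is what lets a single 2D observation, which is otherwise ambiguous, yield a usable 3D pose estimate.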

For the tentacles, an articulated rigid body dynamics engine was used to achieve the desired look. Each tentacle was built as a chain of rigid bodies, with articulated point joints serving as the connections between them. This simulation was performed independently of all other simulations, and the results were placed back on an animation rig that would eventually drive a separate flesh simulation.
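A minimal sketch of the per-joint motor idea described above (the wave expression, stiffness, and joint count are all hypothetical stand-ins, not the production setup): each joint is driven toward a target angle, and feeding a time-varying expression to the targets makes the whole chain writhe without keying every joint by hand.

```python
import math

# Toy "joint motor" for one tentacle: a chain of joints, each nudged
# toward a target curl every frame. A travelling-wave expression on the
# targets produces a coordinated, lifelike writhe.

NUM_JOINTS = 8

def motor_step(angles, targets, stiffness=0.2):
    """Move each joint angle a fraction of the way toward its target."""
    return [a + stiffness * (t - a) for a, t in zip(angles, targets)]

def curl_targets(time, amplitude=0.5, wavelength=3.0):
    """A simple travelling wave along the chain as the target expression."""
    return [amplitude * math.sin(time + j / wavelength) for j in range(NUM_JOINTS)]

angles = [0.0] * NUM_JOINTS
for frame in range(24):                       # simulate one second at 24 fps
    angles = motor_step(angles, curl_targets(frame / 24.0))
```

The stiffness term plays the role of the motor’s strength; a mood change could be expressed simply by swapping in a different target expression.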

Forrest Gump is undoubtedly one of America’s most loved movies. Tom Hanks is such a brilliant actor! Chroma key technology was used for the film: archival footage of many famous moments in American history was combined with new footage to place Tom Hanks’s character into those scenes. Special effects artist Larry Butler developed the chroma key process for the film The Thief of Bagdad, for which he won an Academy Award. Back then, chroma keying was a chemical process performed on film negatives; today it is all done digitally.

Tom Hanks was first shot against a blue screen, along with reference markers so that he could be lined up with the archival footage. The voices of the historical figures weren’t from the original footage; they were hired voice doubles. To ensure the voices matched as the people spoke, special effects were used to alter their mouth movements.

Above is an example of a blue screen. Besides using chroma keying for the historical scenes, there was a character, Lt. Dan (Gary Sinise), whose legs were amputated in the movie. The actor wore a pair of long blue socks, and they simply keyed out the blue; that’s how they made the legs vanish. Industrial Light and Magic was hired to create this sequence. Aren’t they amazing!!!
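The digital version of keying out the blue can be sketched in a few lines (the threshold and “blueness” measure here are a deliberately crude simplification; production keyers handle spill, edges, and partial transparency far more carefully):

```python
import numpy as np

# Minimal digital chroma key: build a matte from how "blue" each pixel
# is, then composite the background plate wherever the matte is set.

def chroma_key_blue(fg, bg, threshold=60):
    """Replace strongly blue pixels of fg (H, W, 3 uint8) with bg."""
    signed = fg.astype(np.int16)              # avoid uint8 wraparound
    blueness = signed[..., 2] - np.maximum(signed[..., 0], signed[..., 1])
    matte = blueness > threshold              # True where the screen (or socks) shows
    out = fg.copy()
    out[matte] = bg[matte]
    return out

# Toy 2x2 frame: only the pure-blue pixel gets replaced by the background.
fg = np.array([[[200, 50, 40], [10, 20, 250]],
               [[90, 90, 90], [30, 200, 60]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 128, dtype=np.uint8)
result = chroma_key_blue(fg, bg)
```

The Lt. Dan trick works on exactly this principle: the blue socks key out just like the blue screen, so whatever is behind the actor’s lower legs shows through.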

The big deal about The Hobbit: An Unexpected Journey, the first installment of Peter Jackson’s trilogy, is that it was shot at 48 frames per second on the Red Epic camera in full 5K resolution. It was shot digitally, not on film, onto memory cards of about 128 gigabytes each. So why shoot at 48 frames per second, you may ask. Cinema films are usually shot and projected at 24 fps, while The Hobbit runs at twice that. When projected at 48 fps, the result looks like it’s playing at normal speed, but the image has hugely enhanced clarity and smoothness.

According to Peter Jackson, “Looking at 24 frames every second may seem okay – and we’ve all seen thousands of films like this over the last 90 years – but there is often quite a lot of blur in each frame, during fast movements, and if the camera is moving around quickly, the image can judder or ‘strobe.'” A higher frame rate gets “rid of these issues” and makes the image “much more lifelike.” He also notes that filming at 48 fps makes the 3D images less taxing to watch.
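The arithmetic behind that blur claim is simple to sketch (assuming, hypothetically, a fixed 180-degree shutter; the film’s actual shutter settings aren’t stated here): doubling the frame rate halves how long each frame is exposed, so fast motion smears across half the distance.

```python
# Exposure time per frame for a rotary-shutter camera. With the shutter
# angle held fixed, going from 24 fps to 48 fps halves the per-frame
# exposure and therefore the motion blur.

def exposure_time(fps, shutter_angle=180.0):
    """Seconds each frame is exposed: (shutter_angle / 360) / fps."""
    return (shutter_angle / 360.0) / fps

blur_24 = exposure_time(24)   # about 20.8 ms of smear per frame
blur_48 = exposure_time(48)   # about 10.4 ms: half the motion blur
```

Less blur per frame is also why some viewers find 48 fps footage unusually sharp, almost video-like.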

The Red Epic is an epic camera, and for the making of The Hobbit they needed two of them per camera rig, as they were shooting in 3D. The problem they faced was that the lenses they used were so large that they could not get an interocular distance similar to that of human eyes. So what they did was shoot through a mirror on a rig: the left camera shoots through the mirror, while the right camera shoots the reflection off the mirror, so that the two filmed images can be overlaid on screen.

They hired specialist firm 3ality to build a rig that enabled them to change the interocular and the convergence point as they were shooting. There were various rigs for all the different types of shooting, e.g. a crane rig and a handheld rig. The handheld one, also known as the TS5, was small and light, and it allowed Peter Jackson to shoot in tight, cramped corridors or caves. Altogether, they had 48 Red Epic cameras on 17 3D rigs.
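A quick sketch of why an adjustable interocular and convergence point matter (this is a standard simplified stereo model with illustrative numbers, not 3ality’s rig math): the on-screen parallax of a point, which controls how far in front of or behind the screen it appears, depends on both settings.

```python
# Simplified stereo parallax model: parallax = interocular * (1 - C/D),
# where C is the convergence distance and D the point's distance.
# (Ignores screen magnification; units match the interocular's units.)

def parallax(interocular, convergence_dist, point_dist):
    """Horizontal parallax of a point for converged stereo cameras."""
    return interocular * (1.0 - convergence_dist / point_dist)

# A point at the convergence distance sits exactly on the screen plane;
# nearer points pop out (negative parallax), farther points recede.
on_screen = parallax(0.065, 3.0, 3.0)
near = parallax(0.065, 3.0, 1.5)     # negative: in front of the screen
far = parallax(0.065, 3.0, 30.0)     # positive: behind the screen
```

Being able to dial both knobs mid-shot lets the stereographer keep the depth comfortable as framing and subject distance change.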

Though the Red Epic is epic, it naturally desaturates the footage, so on set they had to over-exaggerate the colours to counter the desaturation that would happen on screen. Above is an example of the forest scene on set. Besides the forest, they did some colour tests before filming and realised that if there wasn’t enough red, skin would turn really yellow and react differently from normal skin with blood running through it. To counter the problem, they had to add a lot of red tones to the actors’ make-up. Though it looks reddish off camera, on screen while filming it looks like normal flesh tone.

Here’s an interesting behind the scenes video on the 3D rigs and cameras they used!

I recently re-watched Battleship, and what I thought was pretty cool about the film was the rigid body and water simulation! There’s a scene where the aliens come out of the water and destroy everything in their path; it looked pretty cool, and I know it takes a ton of hard work to do. Of course, the awesome people behind the VFX of Battleship were none other than Industrial Light & Magic (ILM). There were over a thousand VFX shots to be done, and a good part of them involved some sort of water simulation.

Three years before Battleship’s release, ILM had already started discussions on the water sims pipeline. They had a well-developed pipeline which had been used on Poseidon as well as Pirates of the Caribbean. However, they had to step it up a notch and reinvent their water sims due to time constraints. Thus, ILM started what they would internally call the Battleship Water Project. Together with their R&D team, they came up with a new water pipeline and advanced tools to improve their workflow. The following picture shows the layer breakdown.

It would probably bore you if I explained exactly how they built the water sims pipeline, so I won’t go into too much detail. Basically, their problem was that since it was an open-sea scene, a large water simulation using a level set, particle-based process was needed, so they broke everything down into grids and optimized them. However, when the millions of cells of the alien ship interacted and collided with the water geometry, a lot of the fine detail in the complex water structures was lost from the simulation, as each grid cell covered perhaps only a two-foot square in real-world terms.

To counter this problem, ILM added on top a FLIP/PIC solver for particle-based simulation, which allowed a more finely detailed solution and gave the traditional approach room for wider scales. Each of these particle groups would then have a grid placed around it. Developed by ILM themselves, the secondary grids added to the particles are adaptive in size, calculated based on how close the camera is to the particle simulation. With this secondary particle solution, the imagery could be resolved down to pixel resolution.
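The camera-adaptive grid idea can be sketched very roughly like this (the scaling rule and every constant below are hypothetical illustrations, not ILM’s actual values): particle groups near the camera get fine simulation cells, while distant ones stay coarse, so detail is spent where the pixels are.

```python
# Toy camera-adaptive cell sizing for secondary water-sim grids: cell
# size grows linearly with distance from the camera, clamped between a
# fine close-up limit and the coarse base grid.

def cell_size(camera_dist, base_cell=0.6, ref_dist=10.0, min_cell=0.05):
    """Grid cell size (world units) for a particle group at camera_dist."""
    scaled = base_cell * camera_dist / ref_dist
    return max(min_cell, min(base_cell, scaled))   # clamp to [min_cell, base_cell]

# Groups close to camera resolve finely; far-away groups stay coarse.
sizes = [cell_size(d) for d in (0.5, 5.0, 10.0, 40.0)]
```

The clamping is the key design point: close-ups never run out of resolution, and the background never wastes simulation time on detail the camera can’t see.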