SKYSCRAPER: Jason Billington – VFX Supervisor – Method Studios
Fri, 10 Aug 2018

At the beginning of this year, Jason Billington explained to us the work of Method Studios on BRIGHT. Today he talks to us about his work on SKYSCRAPER.

How did Method get involved on this show?
We had previously worked with Craig Hammack, Petra Holtorf-Stratton and ILM on DEEPWATER HORIZON. Due to the similarities between the two films and the visual effects work required, it was a natural fit to do this project together.

How was the collaboration with director Rawson Marshall Thurber?
We had the opportunity to work directly with Rawson on the very first shot of the movie where the Legendary logo turns into snow, and the camera pans down over the cabin in the woods. As it was the very first shot of the movie, Rawson had an attachment to it and wanted to work closely with us on it. While he knew what he wanted, he left it to us to realise his vision. We added CG falling snow to the entire length of the shot, shattered glass and bullet hits to the cop cars, and an entirely CG landscape for the background set extension.

How was the collaboration with VFX Supervisor Craig Hammack?
We previously worked with Craig Hammack on DEEPWATER HORIZON, so we knew each other quite well from that project. We have a great relationship and both know what we are after in the final product. We always seemed to be on the same page, which made for an effective and enjoyable collaboration.

Can you explain in detail about the creation of the crane and Hong Kong?
The production designer had a design in mind for what the crane should look like, and this is what the principal photography was based on. The on-set production crew created small sections of the crane for Dwayne Johnson to interact with on a green screen. We then extended the plate photography of the crane in every shot to show the full crane. In most of the shots, the crane was entirely CG.

Our model supervisor visited a local crane supplier and took photos of the various parts for textures and reference. He also spoke with crane operators to find out exactly how a crane operates and moves, specifically when on top of a building. This translated really well into the final crane asset, which was shared among the various vendors. Everything about the crane was built to be fully functional – all the cables, pulleys and wheels were rigged for animation and moved correctly based on the wind and real-world physics you would expect at a turbulent height.

A team was sent to Hong Kong to get photography for the creation of cycs and background imagery. We created two cycs for the two different heights of the building – one for the height of the crane sequence and the other for the top of The Pearl, at approximately 500m and 1000m respectively. Because the film is set at night, we found that a lot of the shots had a dark sky background. This wasn't conducive to representing the height and vertigo of the situation. The audience needed some sort of reference point and variation in the skyline, so we added the Hong Kong skyline into many of the shots. We also added light pollution and a gradient into the dark night sky. The horizon line, and especially other building reference points around Hong Kong, really help in giving size, scale and location to the audience.

Can you tell us more about the digi-double creation of Dwayne Johnson?
The digi-double of Dwayne was a shared asset between Method and ILM. We needed to develop it further for a few close-ups of Will Sawyer's leg, as seen in the bathroom scene towards the start of the film. Dwayne was shot wearing a green sock on his real leg; he would tuck that leg out of the way, and a prosthetic residual limb was attached to his knee area so he had something to interact with. We later replaced this practical residual limb with CG renders that we sculpted and textured from real amputee references.

How did you handle the reflective aspect of The Pearl?
A reflected dark night sky can be quite boring and plain – there is no complexity to it. We needed to force some of the street and surrounding building reflections into the windows. Once the audience sees something recognisable in the reflection, it immediately reads as glass. It also helped to have some furniture in the rooms. For the storyline, the rooms from about halfway up and higher were all supposed to be empty. Adding furniture to the interior of the rooms gave a real sense of scale and texture to the building and the glass exterior.

For consistency across the sequences, we built several 360° and partial cycs for lighting and comp, including the massive undertaking of a patchwork stitched background done by our comp and digital matte painting teams. They put together moving helicopter footage that covered all the angles of the city from where The Pearl was supposed to be, then we projected it onto proxy geometry of Hong Kong, which gave us the correct reflections of traffic, buildings, signs, animated lights and the city's flickering lights.

Can you explain in detail about the FX work and especially the fire?
We created a wide range of effects elements, mainly involving explosions, fire, smoke, destruction, sparks and embers… always embers. By shot count, most of the FX work we had on SKYSCRAPER consisted of adding smoke simulations and burning embers to exterior shots around the crane and The Pearl. Because of the quantity of shots, it was very important to create elements that could easily be populated, or setups that were procedural enough that we could quickly render out passes as soon as we received input from the client. For the shots that involved fire, we started with a base setup that has been developed across a number of shows now, which gave us a solid foundation. Creating variation in detail size, speed and scale was the next step to add visual complexity. Crafting the helicopter destruction sequence was a challenging task in FX; there were many elements ensuring that the intensity of the shot builds throughout the scene. For the explosions following the helicopter crash, custom Houdini tools were used to create enough detail in the leading edge of the explosion, and an altered blackbody model for shading really pushed the look for us.
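Blackbody shading maps a fire simulation's temperature field to an emission color via Planck's law. As a rough illustration only (not Method's actual shader, whose "altered" model isn't described in the interview), here is a minimal Python sketch that samples Planck's law at three representative wavelengths to produce an RGB ratio:

```python
import math

# Planck's law: spectral radiance of a blackbody at wavelength lam (m), temp T (K).
def planck(lam, T):
    h, c, k = 6.626e-34, 3.0e8, 1.381e-23
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

# Map a fire temperature to an un-tonemapped RGB ratio by sampling Planck's law
# at representative R/G/B wavelengths. Production shaders integrate against the
# CIE color-matching curves; this single-sample version just shows the idea.
def blackbody_rgb(T):
    wavelengths = (610e-9, 549e-9, 468e-9)  # rough R, G, B sample points
    rgb = [planck(lam, T) for lam in wavelengths]
    m = max(rgb)
    return tuple(v / m for v in rgb)

# ~1800 K (a sooty flame) gives a deep orange-red; hotter regions shift
# toward white and eventually blue, which is what sells a fiery explosion.
r, g, b = blackbody_rgb(1800.0)
```

Altering the model in a shader typically means reshaping this temperature-to-color curve or remapping its intensity, which lets artists push the look while keeping physically plausible hues.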

Your sequences have a strong feeling of vertigo. How did you enhance this feeling?
Having recognisable features, however small they might be, really helps with the depth and perspective. Things like cars, trees and roads, objects the audience is familiar with aid in this. When the viewer sees these in relation to everything else, it really conveys that vertigo feeling. Adding in some atmosphere helps too, and sound (or lack of sound) solidifies that eerie isolated feeling that can happen at the top of a building like this.

Parallax helps sell the height and depth of the scene, and when the camera was looking down we made the rooftop buildings move separately from the ground, really pushing that parallax. This gave the viewer a sense of depth. We couldn't get away with a 2D cyc in most cases for those shots; we had to have a 3D build of multiple projections for the looking-down angles.
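The layered-parallax idea rests on a basic relationship: for a translating camera, a point's apparent screen-space shift is inversely proportional to its depth. A tiny hypothetical sketch (illustration only, not production code):

```python
# Apparent screen-space shift of a point at a given depth, for a camera that
# translates by camera_move (simple pinhole model, arbitrary units).
def parallax_shift(camera_move, depth, focal=1.0):
    return focal * camera_move / depth

# Looking down from a skyscraper: a nearby rooftop shifts more than the
# distant street, so rendering them as one flat 2D cyc kills the depth cue.
roof = parallax_shift(1.0, depth=100.0)
street = parallax_shift(1.0, depth=500.0)
# roof > street: the layers separate on screen, which is the vertigo cue.
```

This is why the looking-down shots needed a 3D build with multiple projections: each depth layer has to move at its own rate under the shot camera.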

Can you tell us more about the helicopter crash in the hangar?
This was one of the most challenging and complicated sequences that we completed for the film. During the initial blocking, we studied plenty of reference material and were surprised at how quickly things escalate in helicopter crashes. This research was vital in providing us with accurate references to use as we pushed and pulled the timing of the helicopter and framing of the camera. Animation and FX worked closely together on this sequence to deliver the scale and intensity of the impact. Once the animation team had completed their initial blocking and timing of the crash, the FX team were tasked with creating the destruction of the helicopter and environment with all the secondary elements that you would expect to see. When the helicopter crashes into the back railing, the explosion and fiery aftermath of that event were also heavily reliant on the FX. A complex rigid body and constraint setup was made which allowed bending and detachment of panels and most importantly the ability to art direct it. Once that was working it then fed into secondary simulations of crumpling metal, debris, dust and sparks all created within Houdini. To make sure it all lived in the same world, the FX team passed all of their elements to lighting, which at times can be a technical challenge of its own, but really paid off in the final renders.

Which sequence or shot was the most complicated to create and why?
Definitely the helicopter crash sequence. It was a group of shots requiring every team in our studio to collaborate closely to ensure it was perfect. The camera angles did not allow for any ‘cheating’ in the VFX work. It’s a violent crash in which the helicopter rolls into the building, collides with a wall and then blows up. The roll was an unnatural behaviour that we had to give momentum to while still making it look real, and the final product is something that we are all proud of.

What is your favorite shot or sequence?
I love the crane climb/jump sequence. It was our biggest sequence to manage, and for me it really sums up the movie in just that short section: Will Sawyer rescuing his family at all costs, all safe exits blocked, so the only thing left is to attempt the iconic jump. Part of our initial brief for SKYSCRAPER from Rawson was that he really wanted to sell the sense of vertigo and height to the audience. This sequence really delivers on that, and combined with the sound we all get a good sense of what it is like to be approximately 120 stories above the ground in a high-intensity moment.

How long did you work on this show?
It was about 7 months.

What is Method’s VFX shot count?
Approximately 500 shots.

What was the size of the Method team?
We had 170 talented artists and production crew that worked on SKYSCRAPER over the course of the production.

ANT-MAN AND THE WASP: Jelmer Boskma – VFX Supervisor – Scanline VFX
Thu, 09 Aug 2018

In 2017, Jelmer Boskma explained the work of Scanline VFX on GUARDIANS OF THE GALAXY VOL. 2. He talks to us today about his work on ANT-MAN AND THE WASP.

How did you get involved on this show?
We got involved on the show by doing an animation test for the movement and flying style of Wasp. This being a new character in the Marvel cinematic universe, we had a pretty open brief with regards to figuring out how she would navigate herself through the air. Lead animator Mattias Brunosson was assigned the task to come up with a variety of flourishes and signature moves that gave Wasp her unique flavor of movement. He and animation supervisor Eric Petey ended up presenting a 20 second clip of Wasp shrinking and locating a bomb on a helicopter. Her movement had a feeling of military precision in the most elegant of ways, combining skydiving, ballet, and parkour.

How was the collaboration with director Peyton Reed and VFX Supervisor Stephane Ceretti?
Most of our day to day communication with Peyton happened through Stephane and VFX producer Susan Pickett. We would talk to them through cineSync sessions about 3 times a week to review and discuss the work. Peyton was very much involved with the animation process as he had a very clear vision of the performances he expected us to deliver. Our meetings with Stef and Susan were, besides focused and productive, simply a lot of fun. There was always room for a laugh or an open mind to new ideas or changes in direction. In the end we are all trying to create the best film we can, and the creative chemistry between all parties involved really helped with that effort.

What were his expectations and approaches regarding the visual effects?
Stef and Peyton both felt that there was room to explore some new avenues with regards to some of the effects we had seen previously in the first instalment of ANT-MAN. In the first film, the ants were designed to be a little less monstrous looking to help sell the idea that these ‘little’ guys were friendly and on our side. This time around the ants were built with more realistic proportions and detail to help sell their believability. The ‘disco trail’ effect we see when the characters shrink had been visualized pretty clearly in the first movie, and we ended up mostly sticking to what had been established already. We did not have to emphasize or explain it as much, and thus it was used in a more subtle way generally. It was mainly an effort to ensure that seamless integration and photorealism were achieved in every single shot.

How did you organize the work with your VFX Producer?
My producer, Marcus Goodwin, is the man who kept us on schedule and within budget. If it were up to me, we would be polishing and refining our work to no end. It is not just helpful but a necessity to have someone there who can reel things in, keep us moving forward and ensure we finish on time. Marcus brought on Digital Production Manager Olivia Goh to help break down the work into a schedule we could stick to and assign lead artists to the various sequences we were going to be working on.

How did you split the work amongst the Scanline VFX offices?
The bulk of the work was done by our Vancouver team. All modeling, lookdev, rigging, animation and most shot lighting was done by our artists there. Compositing tasks were split between teams in Vancouver and Los Angeles on a sequential basis. Brent Prevatt supervised the compositing team in LA, whilst Comp Supervisors Michael Porterfield and Micah Gallagher oversaw the compositing work for sequences done in Vancouver.

What are the sequences made by Scanline VFX?
We were awarded the Restaurant and Kitchen Fight sequences, the Waterfront and San Francisco Bay scenes, shots of the giant ant in Scott’s apartment, the post-credits Quantum Realm shots and a variety of one-offs scattered throughout the film. Besides that, we also built the CG digi-doubles for both Ant-Man and the Wasp.

Can you explain in detail about the creation of Ant-Man, The Wasp and The Ghost?
The challenge with both Ant-Man and the Wasp was figuring out the moving components of their suits. Both characters wear helmets that unfold from and retract into a compact little mechanical piece sitting on the back of their necks. Figuring out how to fit the much larger helmets into these small packages was one of the challenges for our modeling department, led by modeling supervisor Magnus Skagerlund. For Wasp we designed the final look of her unfolded wings as well as the mechanism for how they unfold from her backpack. In our initial test, Wasp’s wings had a very organic look to them, almost like dragonfly wings, with lots of iridescent colors. The final wings ended up having a much more hi-tech, fabricated mechanical look, fitting in with the design language of the rest of the costume. We tried to come up with a system of sliding plates and telescoping parts to make the unfolding process look as believable as possible, especially because we knew it had to happen in slow-motion shots as well. Both models were built using scan data of the actors provided by Clear Angle Studios. We had scans of the actors both in their costume and in a tight-fitting bodysuit, to give us the closest possible match to their actual body proportions. We used this data to build two underlying body models that drove the dynamic cloth simulations of the suits. Even though we had shots featuring Ghost, that asset was built by our friends over at DNEG. We ingested their asset and they received our Ant-Man and Wasp assets.

How did you create the various shaders and textures?
We received excellent high resolution texture photography from the crew on set, which we utilized heavily to extract all the data we needed for our textures. Both characters’ costumes were fabricated with rather complex patterned fabrics, displaying a variety of reflective properties. Both costumes featured 3D printed components which looked fantastic on set, but were quite tricky to mimic precisely in CG. Cleanly extracting, touching up and rebuilding all that data was a meticulous process done by hand by our brilliant texture artist Jami Gigot and asset lead/texture artist Ken Lee, who were given the task of taking care of both the textures and lookdev for Ant-Man and The Wasp.

Can you tell us more about their rigging and animation?
I touched on this a little in the previous question regarding their build, but getting clean animation out of the complex unfolding components was the main challenge with regards to the rigging of the characters. The final process ended up being a close collaborative effort between our rigging and modeling departments, providing animated alembic caches of unfolding wings and helmets. Another interesting thing to mention about their rigs is that our rigging supervisor Jim Su introduced a rather time-saving element of baking down dynamic cloth simulations into the rig. We selected a group of key poses from our general Range of Motion setup and ran dynamic cloth simulations on those. The result of those sims was then broken down into a few frames per pose and piped back into the rig as key-driven blendshapes. The initial setup was time-consuming, but in the end it saved us an enormous amount of simulation time, especially for shots with minimal movement, where we got great dynamic-looking cloth deformations without having to spend hours simulating them every time.
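The baking workflow described above can be sketched abstractly: store the per-vertex difference between each simulated key pose and its rigged counterpart, then blend those deltas back onto the rig by pose weight. A hypothetical NumPy illustration (the actual rig used key-driven blendshapes inside their rigging package; the function names and toy data here are invented):

```python
import numpy as np

def bake_deltas(key_poses, simmed_poses):
    """Per-pose corrective deltas: simulated cloth result minus the rigged pose."""
    return {name: simmed_poses[name] - key_poses[name] for name in key_poses}

def apply_correctives(rig_mesh, deltas, weights):
    """Blend the baked deltas back onto the rig mesh by pose weight (0..1)."""
    out = rig_mesh.copy()
    for name, w in weights.items():
        out += w * deltas[name]
    return out

# Toy data: a 4-vertex "mesh" and a single key pose, fully active.
rest = np.zeros((4, 3))
key = {"arm_up": rest}
sim = {"arm_up": rest + np.array([0.0, 0.02, 0.0])}  # cloth sags slightly
deltas = bake_deltas(key, sim)
posed = apply_correctives(rest, deltas, {"arm_up": 1.0})
```

The one-time cost of simulating each ROM pose is paid up front; at shot time the rig just interpolates stored deltas, which is why shots with minimal movement got dynamic-looking cloth essentially for free.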

During the restaurant fight, The Wasp is constantly changing sizes. How did you manage this challenge?
We shot clean plates for every single shot in the movie. The shooting crews that work for Marvel are so well acquainted with the needs of VFX (be it sometimes with a little encouragement from Stef) that we don’t have to ask or remind them much. Having those clean plates allowed us to reconstruct the environment seen through the shot camera, without the actors in frame. Whenever we did a digidouble takeover or a full digital performance replacement, we were able to swap out the actor for our digital character. The key here is that your lookdev and lighting have to be spot on for these transitions to work. Mimicking cloth folds is especially tricky. Sometimes a combination of 2D blends and 3D matchmove was needed to make these transitions happen cleanly and invisibly. Obviously every time Wasp or Ant-Man shrinks we switch to a digidouble, but there have also been quite a few full replacements, where the character was CG for the entire shot.

There are also many slow-motion and macro shots. How did you approach and create these shots?
The macro shots brought an interesting new aspect to the work we had laid out in front of us. When you go as small as we had to, you are essentially in a completely new environment. Where, for instance, the kitchen at regular size is one environment build, at ant size every grain of salt becomes the size of a fire hydrant and every bottle of vinegar becomes the size of a building. For these relatively simple assets to hold up in the macro world, a lot more attention to detail had to be spent by our layout and asset teams. We did sometimes benefit from the heavy depth of field a macro lens introduces to elements not within the focal range, but anything in focus had to be built to a high level of detail. On set we had a macro unit shooting plates and reference photography, which helped us to a certain extent. Often, though, we ended up rebuilding the shot fully in CG to allow flexibility with our layout cameras and animation.

How do those two aspects affect your work?
Primarily from a planning and scheduling point of view, there are things to consider. Normally when we build a simple asset like a salt shaker or a bottle of vinegar, we would not allocate too much time to it. In this case we had to treat even the simplest of elements as hero assets on every level, from modeling and texturing to shading and lighting. As far as slow-motion shots are concerned, the workflow isn’t affected much, other than that there are more frames to animate and render.

How did you design and create the phasing effect for The Ghost?
We did not design the phasing effect. DNEG led the charge on that, through a combination of alternate animation takes, and complex procedural Houdini distortions of the digi-double. We created a similar system and matched their look once it was approved by the studio.

Can you explain in detail about the creation of Giant-Man?
Giant-Man is essentially exactly the same asset as both life-size and tiny Ant-Man. The issue you run into, though, is one of believability. Once you literally scale up a regular-sized human model to be 70 feet tall, there will be details missing that you’d expect to see at that size. For Giant-Man we had to introduce some additional small wear and tear and break up some edges that were simply too perfect and straight. Even when the idea is that Giant-Man is the exact same character scaled up, it just doesn’t feel right or believable when you approach the look in that way. Having that smaller breakup is necessary if you want the character to look believable at that size.

How does his massive size affect your texture and animation work?
Our textures weren’t the problem; other than introducing the odd additional scuffs and scratches, they held up fine. As far as the animation is concerned, you have to be careful with the speed at which you animate the character. Especially when Giant-Man resurfaces from the water and returns to the pier, he is enormous – somewhere around 90 feet in some shots. A character of that size needs to have its movement slowed down substantially to come across as believable. Slowing down is obviously relative here, as the distance Giant-Man covers when moving or waving his arms is still rather large, even though it feels much slower compared to the movement of the people in the shots.
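A common rule of thumb for this kind of retiming (not stated by the interviewee as his exact method, so purely illustrative) comes from dynamic similarity: under gravity, ballistic motion time scales with the square root of the length scale, so a character roughly 15× taller should move about √15 ≈ 3.9× slower:

```python
import math

# Dynamic-similarity retime: if lengths scale by s, gravity-driven motion
# time scales by sqrt(s). base_height_ft is an assumed human reference.
def retime_factor(char_height_ft, base_height_ft=6.0):
    scale = char_height_ft / base_height_ft
    return math.sqrt(scale)

# A ~90 ft Giant-Man would be animated (or his mocap retimed) roughly
# retime_factor(90.0) ~= 3.9 times slower than a 6 ft performer.
factor = retime_factor(90.0)
```

In practice animators eyeball on top of this, since arcs, weight shifts and overlap also carry scale, but the square-root rule gives a sensible starting slowdown.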

Can you tell us more about the lighting work with this huge character?
The biggest challenge, not just with Giant-Man but with any shots featuring Ant-Man and the Wasp, is that the helmets they wear are essentially giant spherical chrome mirrors. Especially for the Giant-Man shots, you have an asset that is very hard to light, as its surfaces are so reflective. For these shots it was more about building the environment behind and around the camera, which would be reflected in his helmet, rather than positioning lights to bring out his form and integrate him into the plates. It’s not easy to predict where reflections will land on a spherical object. Sometimes hand-placing the elements that would be reflected, such as clouds and buildings, was the only way for us to complement the form of the asset and make it look good in shot.

How did you handle the various water simulations and interactions of Giant-Man and the ferry?
At Scanline there’s a deep history of doing complex water simulations for a wide variety of projects, long before I joined the company. I very much rely on and am constantly impressed by the artists that work with our proprietary simulation software Flowline, to calculate and render these simulations. From a creative standpoint I try to oversee the general look and direction the work needs to be going, but it’s the talents of the Flowline artists that make it all happen. For Giant-Man the trick was finding the right scale for the water. A single droplet won’t show up, but a larger stream of water can look too forced or hurt the scale. Continuity in terms of the behavior and look of the water on a sequence level was also something we had to keep in mind. We did not want to keep emitting water, but rather aim for a more toned back and realistic look, where Giant-Man ended up having perhaps a few areas where water stuck to his suit rather than having a continuous flow of water coming off of him.

Can you explain in detail about the creation of San Francisco and its pier?
Most of San Francisco and the pier above the water was live photography into which we integrated our CG Giant-Man, tiny Ant-Man, the ants and seagulls. Just as we did for the macro shots in the restaurant, we recreated a macro close-up portion of the pier for a handful of shots. The seagulls were fully CG, keyframe animated, and were rigged with a fully functional feather groom that included body and wing feather flutter features. The underwater portion of the pier sequence we see when Giant-Man falls into the water was a full CG environment we built. Those underwater shots are 99% CG, with the only exception being Wasp’s eyes, which we projected from footage captured with the facial capture camera array rig.

How did you populate the pier and the ferry?
For a few macro shots we ended up rotoscoping background extras from other plates, projecting them on cards and inserting them into the shots to liven them up a little. Other than our CG seagulls, it was all mostly captured in camera. The exception being that many of the shots had their skies altered to work in continuity with the rest of the sequence. This meant either adding or removing clouds on a shot-per-shot basis.

Can you explain in detail about the creation of the Quantum Realm?
The Quantum Realm ‘Tag’ sequence came to us pretty late in the day, in the last month of our production schedule. We had seen parts of the Quantum Realm before, both in this movie and in the first instalment, but this was supposed to be a brand new area – very colorful and full of energy. We had artists explore the look in both 3D and 2D to help find our color scheme and the general mood for the sequence. Claas Henke, who is both a compositor and visual effects art director, was especially instrumental. The final shots are a combination of mostly clever 2D trickery and carefully timed animated light sources. The tricky part was the complexity and reflective qualities of Ant-Man’s suit. The look of the suit changes quite dramatically depending on the environment it’s in, so we had to tweak it slightly to make it all work.

Which sequence or shot was the most complicated to create and why?
Some of the most challenging shots for us were the shots of Giant-Man interacting with Burch and the people on the ferry. We filmed all shots for the pier and bay scenes in the same week in San Francisco, and the weather changes we witnessed during that one week were quite extreme. What ended up happening was that we shot most of our pier shots in the first two days with clear blue skies and direct sunlight. We then moved on to do the shots of Burch on the ferry. By sheer misfortune, that scene was shot on a completely grey, overcast day. The task fell to us and the colorists in the DI to figure out a way to blend these radically different-looking shots together and make it feel like they were all shot within more or less the same hour of the day. Altering the lighting in a plate is some of the trickiest work to do right and is usually rather time-consuming. Finding the right ‘in-between’ look for all shots and then massaging them into shape wasn’t easy. Because we were working directly with the DI, this also meant we were turning work around more quickly and more frequently than usual. In the end I think the transformation these shots went through was very successful, but it would never show how much work really went into them.

Is there something specific that gives you some really short nights?
Not really, thankfully! The work I described in the previous question comes closest, but in the end we managed to never deviate much from schedule, and therefore my sleep didn’t suffer.

What is your favorite shot or sequence?
The work we did for the film has been so varied, I really can’t pin it down to a single shot, but I do have a few favorite shots. I really love how some of the closeups of Giant-Man on the ferry turned out. His initial resurfacing from the water, which ended up being featured in about every single piece of marketing the studio put out, is another one I think turned out well. I love the slow-motion shot of Wasp running on the knife in the kitchen fight, as it summarizes so much of the journey we went on with regards to finding her style of movement and making her wings work. And then there are a few underwater shots that I am rather fond of, mostly because of their mood and the lighting quality.

What is your best memory on this show?
I very much enjoyed the general atmosphere and level of creativity on the show. This being a slightly more lighthearted comedy made it especially fun. In the end, the relationship between a VFX house and the studio can really make or break the experience you have working on a film. Thankfully, working with Stef and Susan was a blast, and I cherish the laughs we had and the result we ended up crafting together.

How long have you worked on this show?
Almost a year, 11 months or so. (July ’17 to June ’18)

What’s the VFX shots count?
During the entirety of production we worked on just over 400 shots, but with an edit ever in flux, I believe 320 was the number of shots we ultimately delivered.

What was the size of your on-set team?
Scanline had a small presence on set for this show. The ‘team’ being myself and on-set photographer Tim Donlevy. In house we had the help of about 250 people to get it all done.

Cinesite acquires Trixter!
Thu, 09 Aug 2018

Big news! A new acquisition inside the visual effects industry: Trixter is now part of Cinesite!

Here is the official press release:

TRIXTER, GERMANY’S LEADING VFX AND ANIMATION STUDIO PARTNERS WITH CINESITE

The deal strengthens Cinesite’s leading position in visual effects and feature animation for the independent privately-owned studio

London, UK – August 9th, 2018 – Cinesite announced today that it has reached an agreement to partner with TRIXTER, a leading German and European provider of visual effects (VFX) and feature animation to premier international film, broadcast and streaming clients. This initial partnership will lead to Cinesite acquiring 100% of the TRIXTER business once regulatory and legal work is completed.

The new partnership will allow the existing senior TRIXTER team to lead the business in Germany and continue to build on its amazing reputation to grow the operations with the benefit and support of a larger parent company. As with Cinesite’s 2015 acquisition of Vancouver-based VFX studio Image Engine Design, TRIXTER will retain its brand and creative centre.

Cinesite group CEO Antony Hunt said: “The TRIXTER team has a fantastic reputation for producing high quality concept art, character design alongside complex VFX and feature animation. In partnering with TRIXTER, we are executing our strategic objective of enhancing our market position in both visual effects and animation and getting the benefit of an amazing creative team of people in Munich and Berlin.

“The skills transfer, technology collaboration, and shared resources and approaches across our international studios bring benefits to all our teams and the quality of the work they create. This is borne out by the success of the Cinesite group, which has continued to grow its market share and has seen its revenue increase 40% year on year since 2014.”

Founded two decades ago by Simone Kraus Townsend and Michael Coldewey, Professor at the Munich Filmschool (HFF), TRIXTER is Germany’s leading VFX and animation studio, creating stunning high-end VFX work and character animation for film, broadcast and streaming media platforms. The studio, which has capacity for 220 professionals in Munich and Berlin, has collaborated with Marvel Studios to bring characters like Iron Man, Black Panther, Rocket and Baby Groot to life. The company also contributed to many other projects, including SPIDER-MAN: HOMECOMING for Columbia Pictures, THE FATE OF THE FURIOUS for Universal Pictures, along with episodes for Netflix’s LOST IN SPACE and AMC’s THE WALKING DEAD.

“I am incredibly proud of our people and the business and brand equity we have all built,” said Christian Sommer, CEO, TRIXTER. “By joining forces with Cinesite we will benefit from both their global infrastructure and a broader range of clients to further strengthen our position in the international market.”

TRIXTER co-founders Simone Kraus Townsend and Michael Coldewey commented: “This is an incredibly exciting time for everybody involved and there are huge opportunities for all of us. We are pleased to have found a supportive and forward-thinking partner in Cinesite and are eager to share TRIXTER’s talents and expertise with the group.”

2018 has been a busy year for the Cinesite group, having secured a $70 million financial capacity plan with asset manager Pemberton Capital. The studio’s technicians and artists have been hard at work crafting some of the year’s biggest blockbusters. Recent VFX credits include ANT MAN & THE WASP, SKYSCRAPER, AVENGERS INFINITY WAR, JURASSIC WORLD: FALLEN KINGDOM, HBO’s GAME OF THRONES and THE COMMUTER with further standout projects coming soon such as MARY POPPINS RETURNS for Disney, ROBIN HOOD for Lionsgate and Warner Brothers’ FANTASTIC BEASTS: THE CRIMES OF GRINDELWALD. Financial details of today’s agreement will not be disclosed.

About Cinesite

Established in 1991, Cinesite is one of the world’s most highly respected independent digital entertainment studios, producing award-winning animation and visual effects for film, broadcast and streaming media platforms. Alongside its global VFX services, its feature animation division works with IP creators and filmmakers to create high-end animated features, based out of Cinesite’s Montreal and Vancouver facilities. Cinesite continues to forge new partnerships and collaborations with leading studios and filmmakers to deliver stories that resonate with a global audience.

Cinesite is headquartered in London with studios in Montreal and Vancouver with capacity for over 1,300 artists and filmmakers. For more information, visit https://www.cinesite.com/

About TRIXTER

Founded two decades ago by Simone Kraus Townsend and Michael Coldewey, TRIXTER is one of Germany’s leading VFX studios, creating stunning high-end VFX work and character animation for film, broadcast and streaming media platforms. At the leading edge of the industry, employing computer scientists, film aficionados and incredibly talented artists, TRIXTER is always pushing the boundaries and is currently developing content and technical solutions for VR and 360°.

TRIXTER is located in Munich and Berlin with capacity for over 220 artists and filmmakers. For more information, visit https://www.trixter.de/

ANT-MAN AND THE WASP: Andrew Hellen – VFX Supervisor – Method Studios
Tue, 07 Aug 2018

Last year, Andrew Hellen explained to us the work of Method Studios on THOR: RAGNAROK. He talks to us today about his work on ANT-MAN AND THE WASP.

How did you get involved on this show?
I came on board after Method was already involved. The show commenced with Hamish Schumacher at the helm, but he ended up moving to another project.

How was the collaboration with director Peyton Reed and VFX Supervisor Stephane Ceretti?
Stef was great at guiding us through the creative challenges; he has a lot of experience creating unique surreal environments, as you can see in other films he has supervised, like DOCTOR STRANGE and GUARDIANS OF THE GALAXY VOL. 2. We had to come up with something unique and Stef had great insights to help us achieve that. I didn’t have any communications with Peyton directly.

What were his expectations and approaches regarding the visual effects?
In general terms, Marvel likes to try and keep FX work grounded in reality: obviously there is some creative freedom in certain areas like the Quantum Realm given there is no real world reference.

How did you split the work amongst the Method Studios offices?
The work was done in Vancouver.

What are the sequences made by Method Studios?
We had three key scenes: Ant-Man’s suit malfunction at Cassie’s school, Scott’s dream sequence in the Quantum Void, and the creation of the Quantum Realm itself.

Can you explain in detail about the design of the Quantum Realm?
The challenge was to come up with something we haven’t seen before. We had to invent the Quantum Realm based on some fairly loose concepts provided by production, then take a crash course in quantum physics for some inspiration. The concept provided was very loose and heavily treated in 2D, with heavy film grain, lens breathing, very shallow depth of field and lots of lens flares. The idea behind it was that a camera in the Quantum Realm would struggle to capture a clean image, so everything gets heavily distorted.

One of the ideas in the Quantum Realm was that the environment reacted to what was happening not only from a physical sense but also from an emotional POV. When the pod crashes, the environment reacts. When Janet finds Hank, the tone shifts and the environment reacts. Some of the reactions were covered by the geometry reacting, some was just the colour of the environment shifting.

Everything is moving constantly in this environment. Did you use procedural tools for that?
The geometry was all simulated in Houdini and rendered in Mantra. We used animated textures also created in Houdini to drive colour and displacement, generating around 30 render passes and mattes to give comp control. We worked this way knowing that we’d do multiple versions of the colour and performance of the environment in finding the right tone. We set up a workflow and worked with production to approve simulations before they were coloured and comped, which meant we could try lots of different looks in comp without having to go back to 3D.
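The payoff of that pass-heavy workflow is that the final frame is effectively a linear combination of its passes, so colour can be re-dialled entirely in comp. A minimal sketch of the idea in Python (the pass names and values are invented for illustration, not Method's actual setup):

```python
def comp(passes, mattes, tints):
    """Per-pixel RGB: a matte-weighted, tinted sum of render passes.
    Changing `tints` regrades the frame without re-rendering any 3D."""
    return tuple(
        sum(mattes[name] * passes[name][c] * tints[name][c] for name in passes)
        for c in range(3)
    )

# One pixel's worth of two hypothetical passes, each with its own matte.
passes = {"geo": (0.5, 0.4, 0.3), "emission": (0.9, 0.7, 0.2)}
mattes = {"geo": 1.0, "emission": 0.6}

warm = comp(passes, mattes, {"geo": (1.1, 1.0, 0.9), "emission": (1.2, 1.0, 0.8)})
cool = comp(passes, mattes, {"geo": (0.9, 1.0, 1.1), "emission": (0.8, 1.0, 1.2)})
```

In production this happens per pixel across dozens of passes and mattes inside the compositing package, but the principle is the same: the expensive simulation is approved once, and every look variation is a cheap re-weighting downstream.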

How long have you worked on this show?
Six months.

What’s the VFX shots count?
Around 130 shots in the final cut.

What was the size of your on-set team?
Our team size varied based on where we were in production, but we had about 100 artists at our peak.

What is your next project?
Nothing I’m able to share publicly yet.

A big thanks for your time.

// WANT TO KNOW MORE?

Method Studios: Dedicated page about ANT-MAN AND THE WASP on Method Studios.

ANT-MAN AND THE WASP: Kevin Souls (VFX Supervisor), Brendan Seals (VFX Supervisor) and Raphael A. Pimentel (Animation Supervisor) – Luma Pictures
Mon, 06 Aug 2018

At the beginning of the year, Kevin Souls, Brendan Seals and Raphael A. Pimentel explained the work of Luma Pictures on BLACK PANTHER. Brendan and Raphael then worked on A WRINKLE IN TIME. They talk to us today about their work on the new Marvel movie, ANT-MAN AND THE WASP.

How did you get involved on this show?
Kevin Souls (KSO) // We have a long relationship with Marvel, and were a big part of the first ANT-MAN. They also know us as a creative team that can iterate quickly to find solutions, and for this movie we had a whole new set of challenges.

How was the collaboration with director Peyton Reed and VFX Supervisor Stephane Ceretti?
KSO // They are both really fun, collaborative people, and super open to ideas. But it was both a compliment and a challenge when they would ask “What do you think?”, because they would listen. And it was our opportunity to make a difference.

What were their expectations and approaches regarding the visual effects?
KSO // Peyton is so full of ideas and gags that it just keeps things growing and refining. Stephane is a very sophisticated VFX supervisor, and really an artist at heart. There is always a sense that you are pushing envelopes to find that special technique or idea that will push the shot. Either way, things constantly evolve until the last second, and it always looks better.

How do you balance director’s vision when creating VFX for any movie?
KSO // Our goal is to always exceed the director’s expectations, and to do that we rely on a mixture of art direction and intuition. Marvel movies are always fast paced, and that sometimes requires us to extrapolate from the notes and predict what the director will want. You always have to design your setups with the maximum amount of flexibility, so you can turn on a dime when the story requires it.

What are the sequences made by Luma Pictures?
BSE // We worked on six sequences: Cassie’s School; Ghost Lair; the New Prologue, which is the missile silo launch sequence; Prison Break, where Ant-Man helps The Wasp and Dr. Hank Pym escape; Scott Channels Janet; and Scott House Arrest.

What were the most technical challenging sequences of ‘Ant-Man and the Wasp’? How did your team overcome it?
KSO // Each sequence presented different technical challenges, so I wouldn’t say any one was more technically challenging than the others, but the “Flashback” sequence had the largest scope of work, requiring a full CG environment built to match a real set. Additionally, we had to develop the Quantum Tunnel effects, which would eventually reach critical mass and destroy the building, both inside and out.

Going in, we knew all the parameters of the effects needs, so we spread the work across departments to solve the different problems. First, the asset team was tasked with creating an asset that would look photoreal but have all the technical requirements the FX department would need for the destruction, even art directing the broken pieces of the wall. Next, the animation team took ownership of the Quantum Tunnel animation and destruction, knowing that we could achieve all of the shaking and collapse with rigging, blend shapes and animation. With those parts tasked out, FX was free to develop the tunnel effects, volumetrics and destruction simulations. Finally, compositing and lighting were brought in to develop the final look of the tunnel and pull all the elements together.

Much of Ant-Man’s signature VFX lies in the complexity of detail when he shrinks. How did you achieve seamless work across shots at various perspectives with accurate depth?
KSO // Without a doubt, you need to start with a real camera and real lensing. It’s incredible how sensitive we are to field of view and scale. When something is wrong, people can just tell. So it was important to us that the camera speed and motion be accurate to the scale of Ant-Man. Next, we would layer in levels of detail to the assets and textures, adding geometric features that would hold up to scrutiny at close distance. The lighting also plays a huge role in depicting the scale of a scene, with the softness and fall-off of the shadows needing to be precise. Finally, carefully utilizing depth of field to guide the eye, and add that photographic feel, is absolutely essential.
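The interview doesn't give numbers, but "camera speed accurate to the scale" is usually grounded in the classic miniature-photography rule: gravity-driven motion time-scales with the square root of length, so a world at a given scale reads correctly when the camera is overcranked by the square root of the inverse scale. A quick illustration (the frame rates are hypothetical, not from the film):

```python
import math

def overcrank_factor(scale):
    """Frame-rate multiplier for shooting a world at `scale`
    (subject size / full size) so its motion reads full-sized:
    gravity-driven timing goes as sqrt(length)."""
    return math.sqrt(1.0 / scale)

base_fps = 24
for scale in (1 / 4, 1 / 16, 1 / 100):
    fps = base_fps * overcrank_factor(scale)
    print(f"1/{round(1 / scale)} scale -> shoot at {fps:.0f} fps")
    # 48, 96 and 240 fps respectively
```

The same logic runs in reverse for a CG camera at insect scale: the virtual camera's motion and shutter have to be retimed so the physics of the tiny world still feels truthful at 24 fps.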

How did you manage tight integration of the post-production pipeline across your physically separated studios?
KSO // Our two facilities share the same pipeline for all departments. So while we don’t have live parity of renders and setups, we can easily sync something back and forth between the locations and get the exact same result. This makes sharing super easy and, with the time difference, it’s like having a 24-hour facility – one just picks up where the other leaves off.

Let us know about the core team of Luma Pictures who worked on ‘Ant-Man and the Wasp’:
KSO // I was the VFX supervisor in Los Angeles. Brendan Seals was the VFX supervisor in Melbourne. Alex Cancado is the CG Supervisor in LA and Andrew Zink is the CG supervisor in Melbourne. Raphael Pimentel is the overall Animation Supervisor. Additionally, we had a crew of over 100 production and support staff spread across Los Angeles and Melbourne.

Luma Pictures supervisors were also on set for some sequences of the movie. Can you tell us about that in detail, both during the shoot and in post-production?
KSO // Jamie Hallett assisted Stephane Ceretti with supervision during principal photography in Atlanta, providing texture acquisition and production data. I assisted Jesse James with the additional photography and reshoots back here in Los Angeles.

How did you organize the work with your VFX Producer?
KSO // We split the work 50/50 across our Los Angeles and Melbourne studios.

How did you create the various shaders and textures for Ant-Man and the Wasp?
Brendan Seals (BSE) // Textures were ingested from Scanline VFX, adapted to our pipeline and factored into shader networks, with look development done in Katana. We have a standardized approach to calibrated, neutralized look development using our own custom-built, real-world studio light rig. With this rig we have captured a variety of real-world materials for reference, such as various metals and plastics, and built these into our library. This greatly enhanced our ability to replicate the metals and fabrics of the prop costumes alongside the captured references of the suits provided by production.

Can you tell us more about their rigging and animation?
Raphael Pimentel (RPI) // Having worked on the first ANT-MAN film, we came into the project with an understanding of what was needed to achieve realism from a rigging and animation standpoint. All character animation shots began with motion capture data and were hand-keyed for final polish. Cloth simulations were also added on all shots for wrinkle and wind flutter.

How did you handle the flying sequences challenge?
BSE // For the Ghost Lair infiltration, for example, the trick was maintaining a sense of locality and awareness of the environment whilst respecting the macro nature of the lensing required to capture Ant-Man and Wasp. Stef was always a big proponent of letting the character’s distance from camera dictate the depth of field; that way we could meaningfully art direct the blurriness of the background environment so that the audience always had context for where they were, while watching shots with a realistic approach to lensing. Having that realistic approach to lensing where possible also meant letting the focal plane breathe in and out; it would be impossible for a DP to perfectly track flying characters, even more so at this macro level, so we allowed the characters to subtly drift in and out of focus in many of our flying sequences.
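Letting subject distance dictate depth of field lines up with plain thin-lens optics, where the in-focus band collapses rapidly as the subject nears the lens. A sketch with hypothetical lens values (not Luma's actual camera data):

```python
def dof_limits(f, N, c, s):
    """Near/far acceptable-focus distances (thin-lens approximation).
    f: focal length, N: f-number, c: circle of confusion, s: focus
    distance -- all in millimetres."""
    H = f * f / (N * c) + f                  # hyperfocal distance
    near = H * s / (H + (s - f))
    far = H * s / (H - (s - f)) if s < H else float("inf")
    return near, far

# Hypothetical macro setup: 100 mm lens at f/2.8, circle of confusion 0.025 mm.
for s in (500.0, 2000.0):
    near, far = dof_limits(100.0, 2.8, 0.025, s)
    print(f"focus at {s / 1000:.1f} m: sharp from {near:.0f} mm to {far:.0f} mm")
```

With these numbers the in-focus band at half a metre is only a few millimetres deep, versus several centimetres at two metres, which is exactly why macro-scale characters drift in and out of focus so readily.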

How does the macro aspect of the shots affect your work?
BSE // The main aspects of our work that were affected were developing macro environment details to hold up when characters stood or ran across close-up surfaces, and respecting the lighting and depth of field of the on-set photography. Many times, where the depth of field was too heavy in camera and locked to inanimate objects, we would rebuild the background either with 2.5D methods or with full CG environment builds, fabricating a new plate that allowed us the control to establish a different or animated rack focus. It’s so important to ensure the depth makes sense for that macro scale and that the bokeh kernel and aberrations are all replicated for realism.

Can you tell us more about the wings animation for the Wasp and the Ant?
RPI // We began by studying wasp and flying ant reference, then replicated flying, taking off, landing or just roaming around. We essentially built our characters’ wing rigs to mirror behaviour in nature. Once animators were able to match wing motion as seen in nature, we knew they were ready to be put into shots.

How did you create the missile and the silo?
KSO // The Missile Launch was a really fun sequence and, because it was fully CG, it gave us complete freedom to create the world – except for one little catch – parts of the new sequence had to directly match to the missile sequence from the first film. In fact, some of our new shots would inter-cut directly with older shots.

We began by ingesting the original assets for Hank and Janet. Not only did we need to match the shading and textures exactly, but also the rigging and cloth simulation – down to the way the elbows and waist folds moved!

For the launch we researched Russian missile silos and were able to find really great reference photography, and even YouTube clips, of the silo design, opening, and launching. With production’s blessing we went about modeling and texturing a highly detailed asset. The missile was ingested and converted from the first film; we added geometric detail and enhanced textures to serve the staging of the new shots. Volumetrics and blowing leaves were added to give a sense of scale and force. Once they were airborne we shifted over to the match-to environment. We started by studying the sky and clouds, eventually building new volumetric clouds and layers to match the look and framing of the original shots.

At the end of the sequence Janet shrinks down to enter into the missile computer. To achieve this new environment, we used a combination of fully CG computer parts, mixed with macro photography of real circuit boards. The Depth of Field was carefully dialed to give a sense of scale but also to guide the eye through the busy frame.

Can you explain in detail about the Quantum explosion of the warehouse?
KSO // The Quantum Tunnel sequence contained some of our biggest FX challenges in ANT-MAN AND THE WASP. The Quantum Tunnel was meant to represent an opening into the Quantum Realm. There were actually two of them in the movie; the one in our sequence was an older model that was much less stable. Stephane Ceretti wanted this effect to have a unique feel, and although we were meant to destroy the building, it was important that it feel like caustic energy and optical distortion rather than an explosion.

We started by looking at the previs and the pulsing rings that they had done, which had great timing and energy. Using that as inspiration, we designed and built a series of passes to be used in compositing to create lens-flare-like optical distortions, refracting the surroundings. The rings would travel down the length and eventually burst out of the tunnel. These “bursts” were more refractive than concussive, but had the power to smash through the thick brick walls of the warehouse.

We had to destroy the building inside and out, and it all needed to intercut with photography. To achieve this, we built photoreal replicas of the interior and exterior of the warehouse. The interior would be a mix of photography and CG, while the exterior was always entirely virtual. The modeling had to be done carefully, with closed surfaces, so that it could be simulated for the destruction. Intermixed in the blast were enormous refractive rainbow caustic waves of quantum energy. On top of that we ran massive volumetric simulations to add scale and texture.

The villain is Ghost. Can you explain in detail her phasing effect?
BSE // Ghost was a delicate character effect that had to be carefully controlled so it didn’t get in the way of her performance. We received a setup from DNEG that had some really interesting procedural extrusions in Houdini to produce a more tangible glitch, complemented by various transparency, refraction and aberration treatments in Nuke. We analyzed the effect progression arc that our shots repeatedly called for and decided to create a custom Ghost gizmo in Nuke. The gizmo would trigger the sequential effect, utilizing the Houdini renders and the Nuke treatments on a frame set by the compositors. The setup then had the flexibility to edit and change the individual aspects of her look, as well as the overall timing, to suit the performance cues in the shot.
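The gizmo itself is proprietary, but the staged-trigger idea described here, with each treatment ramping in on a compositor-chosen frame, can be sketched as a simple per-frame weight function (the stage names and ramp length are invented for illustration):

```python
def ghost_weights(frame, trigger, ramp=6):
    """Hypothetical progression for a sequential glitch effect: each
    treatment stage ramps 0 -> 1 over `ramp` frames, starting one ramp
    after the previous stage finishes."""
    stages = ("extrusion", "refraction", "aberration")
    weights = {}
    for i, name in enumerate(stages):
        t = (frame - trigger - i * ramp) / ramp
        weights[name] = min(1.0, max(0.0, t))
    return weights

# A compositor sets the trigger frame per shot; the stages then cascade.
print(ghost_weights(112, trigger=100))
```

The appeal of driving the look from a single trigger parameter is that retiming the effect to a new performance cue is a one-knob change rather than re-animating every treatment by hand.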

In a sequence, Scott Lang is really tiny. How did you approach this sequence?
BSE // The idea behind the sequence was to use an old technique called forced perspective, aided by modern technology, to achieve the visual gag of the giant-sized Ant-Man literally bursting at the seams of a small broom closet. To accomplish this, all the plates were designed to be shot independently and then assembled in the compositing process. Ant-Man was shot in a green screen scale model of the room interior and The Wasp was shot using reference props to simulate interaction. The room interior itself was captured as a plate but also scanned in 3D, so we could easily recreate the shots that required a virtual camera move and manipulate the ceiling when Ant-Man slams into it. The pieces were individually tracked and match-moved, while another camera was created to re-film the scene and compensate for the different fields of view of each acquisition camera. Luma replaced pieces of Ant-Man’s body with a high-resolution full CG asset. The mix of photography and CG was a key tool that helped trick the eye and maintain all the subtle comedic performances.
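The compensating re-film camera works because forced perspective rests on a single invariant: an object's apparent size depends only on the ratio of its size to its distance from the lens. A minimal check (the dimensions are hypothetical):

```python
import math

def angular_size(height, distance):
    """Angle subtended at the lens by an object of `height` at `distance`."""
    return 2 * math.atan(height / (2 * distance))

# A half-scale set piece at half the distance subtends exactly the same
# angle as the full-size original, so on camera it reads as full scale.
full_size = angular_size(2.4, 4.0)   # 2.4 m object seen from 4 m
half_scale = angular_size(1.2, 2.0)  # 1.2 m model seen from 2 m
print(math.degrees(full_size), math.degrees(half_scale))
```

Once each plate is tracked, re-filming through a common virtual camera amounts to choosing one field of view and letting each element's scale/distance pair fall where this invariant says it must.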

What is your favorite shot or sequence?
BSE // I’m really fond of our hero Ant-Man shot in Ghost’s Lair where he jumps down from the ant onto the table. It’s not your typical huge action shot but it was the culmination of all our efforts and has some beautiful subtleties to it.

What is your best memory on this show?
BSE // Having Stef ask whether the Wasp suit in a particular shot was practical photography when it was in fact CG. Stef was very impressed, and it was really gratifying to see our asset team’s hard work in look dev pay off.

How long have you worked on this show?
BSE // We worked on the show for over 9 months.

What’s the VFX shots count?
BSE // We worked on over 230 shots.

What was the size of your team?
BSE // We had over 100 production crew spread across our Los Angeles and Melbourne studios.