Five of the biggest trends from FMX 2019

This year’s FMX Conference in Stuttgart offered a fascinating look at the latest developments in animation, effects, games and immersive media.

As the main sponsor, the Foundry team was out in force at the event, and in between hosting our own talks we managed to catch many of the other sessions being held.

In this article, we'll report back on some of the biggest industry topics that were discussed, including taking a look at new deep learning research, how virtual production is advancing, and the latest in lightfields.

So without further ado…

Lightfields

Lightfield capture is an area of research at the cutting edge of immersive technologies.

The aim is to capture imagery from every viewable position within a given space. One way is through plenoptic imaging; another, more accessible way is to use a rig of multiple cameras. Either method lets you display imagery that gives the viewer a sense of depth and parallax.

In simple terms, lightfields are like a window into another world. They allow you to look at something from any perspective and see it change, or see behind it, just like you would in real life.

FMX had a whole track on Friday dedicated to this exciting technology, with speakers from eminent universities, as well as some of the latest developments from Google.

One of the talks focused on a project called SAUCE, which Foundry's R&D team has been working on for the past few years as part of a consortium of nine partners.

The project is investigating ways to allow creative industry companies to re-use existing digital assets for future productions, with one thread exploring the possibilities and challenges of integrating lightfield capturing into movie productions.

Virtual Production

An ever-developing technological field, virtual production has been prominent at events for the past few years.

FMX 2019 was no different, with a full day of sessions dedicated to developments in the area.

One of the best-attended looked at how virtual production was key to Digital Domain creating Thanos from Josh Brolin's performance in “Avengers: Infinity War.” This talk gave a fascinating insight into how Josh’s performance was captured live on set and how the data was modified, as well as offering a glimpse into Digital Domain's custom facial pipeline for direct performance transfer from an actor’s helmet cam footage to a character.

A number of other sessions that day illustrated how game engines are being adapted for early stages of the production pipeline, such as pre-visualization.

The real-time interactivity of game engines is perfect for allowing producers and directors to plan out camera angles and shots before filming and post-production take place.

Locational VR

In Thursday’s panel session on Studio Insights, Producer and Executive Chris deFaria specifically called out locational VR as one of the big trends to look out for in the future.

A whole raft of talks on the subject seemed to confirm his speculation that this is one area of the medium that is showing significant promise.

Daniel Arey, director of game design and creative development at Niantic, gave a fascinating insight into what the next generation of Pokémon Go could look like, mooting the possibility of custom-built ‘story islands’ on which augmented games could play out.

Elsewhere, a novel technique that sees rollercoasters augmented with virtual reality headsets gave a glimpse into a potential future of theme park attractions.

By combining the power of Hollywood film IP with this new twist on the traditional format, the attraction transports riders into a virtual world matched perfectly to the twists and turns of the coaster.

The experience mixes real-world sensations, the zero-g drops and the rush of wind on skin, with a virtual environment viewed through a VR headset.

Open Standards

A track dedicated to the open source community looked at the work of the Academy Software Foundation (ASWF) and how studios have been developing and using open source technologies in their pipelines.

One such outfit is Blue Sky Studios, whose talk focused on their shift from proprietary technology to a pipeline based purely on open source collaborations.

Open Shading Language (OSL) has become the de facto standard shading language for VFX and animated features, used across the industry in many commercial and studio-proprietary renderers.

Image Engine hosted a session explaining how they’re pushing the boundaries of OSL, using it as a general-purpose programming language for a variety of VFX and animation processes.

Deep Learning

Deep learning is set to be transformational across a whole spectrum of industries, and its impact on media and production could be huge.

For visual effects, machine learning techniques are being harnessed to both enhance the realism of imagery and produce real-time simulation.

This involves technical directors and programmers writing programs that learn from huge sets of visual data. These programs then use that acquired knowledge to adjust a rendering in real time, with key examples including denoising, ray tracing, and character animation.

Dr. Alexander König showcased an intriguing project called AINIM, focused on developing a machine learning approach to allow animation artists to train a conditional adversarial network on their animations.

The aim is to allow artists to live-produce and control visuals according to their own artistic style using an ML-assisted artistic toolset.

Elsewhere, Niloy Mitra of University College London gave a fascinating talk presenting deep learning algorithms for creating high-quality editable geometry, such as digital buildings and streets.

This could revolutionize how we design, explore, simulate, and eventually fabricate content in games, movies and product design.

Want to hear more about the trends set to shape the VFX, VR and design industries? Sign up to our monthly Trends newsletter below.