
To those who have been following the virtual reality market from the beginning, one very interesting phenomenon is how the hardware development seems to have outpaced both the content creation and the software development. The industry has been in a constant state of excitement over the release of new and improved hardware that pushes the capabilities of the medium, and content creators are still scrambling to experiment and learn how to use the new technologies.

One of the products of this tech boom is the Jaunt One camera. It is a 360 camera that was developed with the explicit focus of addressing the many production complexities that plague real-life field shooting. What do I mean by that? Well, the camera quickly disassembles and allows you to replace a broken camera module. After all, when you’re across the world and the elephant that is standing in your shot decides to play with the camera, it is quite useful to be able to quickly swap parts instead of having to replace the whole camera or send it in for repair from the middle of the jungle.

Another of the main selling points of the Jaunt One camera is the streamlined cloud finishing service they provide. It takes the content creator all the way from shooting on set through stitching, editing, onlining and preparing deliverables for all the different publishing platforms available. The pipeline is also flexible enough to allow you to bring your footage in and out of the service at any point so you can pick and choose what services you want to use. You could, for example, do your own stitching in Nuke, AVP or any other software and use the Jaunt cloud service to edit and online these stitched videos.

The Jaunt One camera takes a few important details into consideration, such as the synchronization of the shutters across all of the camera modules. This prevents stitching abnormalities when fast-moving objects are captured at different moments in time by adjacent lenses.

The camera doesn’t have an internal ambisonics microphone, but the cloud service supports ambisonic recordings made in a dual system or Dolby Atmos. It was interesting to notice that one of the toolset apps they released was the Jaunt Slate, a tool that allows for easy slating on all the cameras (without having to run around the camera like a child, clapping repeatedly) and is meant to automate the synchronization of the separate audio recordings in post.
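A slate-based sync tool like this ultimately reduces to aligning separate recordings on a shared transient, such as a clap. Below is a minimal pure-Python sketch of that idea; the brute-force offset search, the signal values and the sample positions are all illustrative, not Jaunt’s actual method (real tools work on sampled waveforms and use FFT-based correlation).

```python
# Sketch: aligning an external audio recording to camera audio by
# cross-correlating around a shared slate transient (a clap).

def best_offset(reference, recording, max_lag):
    """Return the lag (in samples) that best aligns `recording` to `reference`."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(recording):
                score += r * recording[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A toy "clap": an impulse at sample 40 in the camera track,
# and the same clap at sample 52 in the external recorder.
camera = [0.0] * 100
camera[40] = 1.0
recorder = [0.0] * 100
recorder[52] = 1.0

offset = best_offset(camera, recorder, max_lag=30)
print(offset)  # 12: the recorder track starts 12 samples late
```

Once the offset is known, the external audio can simply be shifted by that many samples before the mix, which is the kind of step a slate app can automate in post.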

The Jaunt One camera shows that the market is maturing past its initial DIY stage and the demand for reliable, robust solutions for higher budget productions is now significant enough to attract developers such as Jaunt. Let’s hope tools such as these encourage more and more filmmakers to produce new content in VR.

The HPA Tech Retreat, run by the Hollywood Professional Association in association with SMPTE, began with an insightful one-day VR seminar, Integrating Virtual Reality/Augmented Reality into Entertainment Applications. Lucas Wilson from SuperSphere kicked off the sessions and helped with much of the organization of the seminar.

The seminar addressed virtual reality (VR), augmented reality (AR) and mixed reality (MR, a subset of AR where the real world and the digital world interact, like Pokémon Go). As in traditional planar video, 360-degree video still requires a director to tell a story and direct the eye to see what is meant to be seen. Successful VR requires understanding how people look at things and how they perceive reality, and using that understanding to help tell a story. Reinforcing the viewer’s gaze with color and sound can help, and these cues may vary with what the viewer is watching; for example, they may be different for the “good guy” and the “bad guy.”

VR workflows are quite different from traditional ones, with many elements changing with multiple-camera content. For instance, it is much more difficult to keep a camera crew out of the image, and providing proper illumination for all the cameras can be a challenge. The image below from Jaunt shows their 360-degree workflow, including the use of their cloud-based computational image service to stitch the images from the multiple cameras.

Snapchat is the biggest MR application, said Wilson, and Snapchat Stories could be the basis of future post tools.

Because stand-alone headsets (head-mounted displays, or HMDs) are expensive, most users of VR rely on smartphone-based displays. There are also some places that allow one or more people to experience VR, such as the IMAX center in Los Angeles. Activities such as VR viewing will be one of the big drivers for higher-resolution mobile device displays.

Tools that allow artists and directors to get fast feedback on their shots are still in development. But progress is being made, and today over 50 percent of VR is used for video viewing rather than games. Participants in a VR/AR market session, moderated by the Hollywood Reporter’s Carolyn Giardina and including Marcie Jastrow, David Moretti, Catherine Day and Phil Lelyveld, seemed to agree that the biggest immediate opportunity is probably with AR.

Koji Gardiner from Jaunt gave a great talk on their approach to VR. He discussed the various ways that 360-degree video can be captured and the processing required to create finished stitched video. For an array of cameras with some separation between them (no common axis point for the imaging cameras), there will be areas between adjacent camera images that must be stitched together using common reference points, as well as blind spots close to the cameras where no images are captured.

If all of the cameras share a single axis, there are effectively no blind spots and no stitching is required, as shown in the image below. Capturing full 360-degree video then requires additional cameras located on that axis to cover the remaining space.
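As a rough illustration of the coverage math for a ring rig: with cameras spaced evenly around a circle, each lens must cover its share of the 360 degrees plus some overlap for stitching reference points. The FOV and overlap figures below are hypothetical, chosen only to show the calculation, not Jaunt’s specs.

```python
import math

def cameras_needed(per_camera_fov_deg, overlap_deg):
    """Minimum cameras on a ring so adjacent fields of view overlap
    by at least `overlap_deg` at every seam for feature matching."""
    usable = per_camera_fov_deg - overlap_deg
    if usable <= 0:
        raise ValueError("overlap requirement exceeds the lens FOV")
    return math.ceil(360 / usable)

# Hypothetical lens: 120-degree horizontal FOV, with 20 degrees
# reserved per seam for stitching overlap.
print(cameras_needed(120, 20))  # 4 cameras cover the horizon
```

Note this only covers the horizontal band; real rigs add up- and down-facing cameras (or wider lenses) to close the poles of the sphere.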

The Fraunhofer Institute, in Germany, has for several years been showing a 360-degree video camera that gives multiple cameras an effective single axis, as shown below. They do this using mirrors to reflect images to the individual cameras.

As the number of cameras is increased, the mathematical work to stitch the 360-degree images together is reduced.

Stitching
There are two approaches commonly used in VR stitching of multiple camera videos. The easiest to implement is a geometric approach that uses known geometries and distances to objects. It requires limited computational resources but results in unavoidable ghosting artifacts at the seams between the separate images.

The Optical Flow approach synthesizes every pixel by computing correspondences between neighboring cameras. This approach eliminates the ghosting artifacts at the seams but has its own more subtle artifacts and requires significantly more processing capability. The Optical Flow approach requires computational capabilities far beyond those normally available to content creators. This has led to a growing market to upload multi-camera video streams to cloud services that process the stitching to create finished 360-degree videos.
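A toy sketch of why the geometric approach ghosts: cross-fading an overlap region where parallax has shifted an object produces two partial copies of that object. This is illustrative pure Python on a 1-D "scanline", not any production stitcher; the pixel values and overlap size are invented for the example.

```python
# Two cameras see an overlapping strip; the geometric approach
# linearly cross-fades the overlap. With parallax, the same object
# lands at different positions in each strip, so the blend
# double-exposes it -- the classic ghost.

def blend_overlap(left, right, overlap):
    """Concatenate two scanlines, cross-fading `overlap` shared pixels."""
    out = list(left[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # fade weight ramps from 0 toward 1
        out.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    out.extend(right[overlap:])
    return out

# An object (value 1.0) sits at different overlap positions in each
# camera because of parallax:
left  = [0, 0, 0, 0, 1.0, 0, 0, 0]   # object near the end of camera A
right = [0, 0, 1.0, 0, 0, 0, 0, 0]   # same object shifted in camera B
merged = blend_overlap(left, right, overlap=4)
print(merged)  # the object appears twice, at partial intensity
```

Optical flow avoids this by first finding the per-pixel correspondence between the two strips and warping them into agreement before blending, which is exactly where the extra computation goes.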

Files from the Jaunt One camera system are first downloaded and organized on a laptop computer and then uploaded to Jaunt’s cloud server to be processed and create the stitching to make a 360 video. Omni-directionally captured audio can also be uploaded and mixed ambisonically, resulting in advanced directionality in the audio tied to the VR video experience.

Google and Facebook also have cloud-based resources for computational photography used for this sort of image stitching.

The Jaunt One 360-degree camera has 1-inch 20MP rolling-shutter sensors with frame rates up to 60fps, a maximum of ISO 3200 and 29dB SNR at ISO 800. Each camera module delivers about 10 stops of dynamic range, a 130-degree diagonal FOV and 4/2.9 optics, and the system captures up to 16K resolution (8K per eye). At 60fps the Jaunt One produces 200GB per minute uncompressed, which can fill a 1TB SSD in five minutes. Compression is required to make currently affordable storage devices practical; compressed, the camera generates 11GB per minute, which can fill a 1TB SSD in 90 minutes.
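A quick sanity check on those data rates, rounding 1TB to 1000GB for simplicity:

```python
# Back-of-the-envelope check on the Jaunt One data rates quoted above.
SSD_GB = 1000                       # a 1TB drive, in round gigabytes

uncompressed_gb_per_min = 200
compressed_gb_per_min = 11

minutes_uncompressed = SSD_GB / uncompressed_gb_per_min
minutes_compressed = SSD_GB / compressed_gb_per_min

print(minutes_uncompressed)         # 5.0 minutes to fill 1TB
print(round(minutes_compressed))    # ~91 minutes (the article rounds to 90)
```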

The actual stitched image, laid out flat, looks like a distorted projection. But when viewed in a stereoscopic viewer it appears as a natural image of the world around the viewer, giving an immersive experience. At any point in time the viewer does not see all of the image, only the restricted region they are looking at directly, shown as the red box in the figure below.

The full 360-degree image can be quite high resolution, but unless special steps are taken, the resolution inside the region being viewed at any moment will be much less than the resolution of the overall scene.

The image below shows that for a 4K 360-degree video, the resolution in the field of view (FOV) may be only 1K, a much lower resolution that is quite perceptible to the human eye.
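The arithmetic behind that figure is straightforward, assuming an equirectangular frame and a roughly 90-degree headset FOV (both are assumptions for illustration; exact headset FOVs vary):

```python
# Why a "4K" 360 video looks soft in the headset: only the slice of
# the equirectangular frame inside the viewer's FOV is on screen.

def pixels_in_fov(full_width, fov_deg):
    """Horizontal pixels landing inside a given field of view,
    assuming an equirectangular layout spanning 360 degrees."""
    return full_width * fov_deg / 360

# A 3840-pixel-wide (4K) frame viewed through a ~90-degree headset FOV:
print(pixels_in_fov(3840, 90))   # 960.0 -- roughly "1K" in view
# The 16K version by the same logic:
print(pixels_in_fov(15360, 90))  # 3840.0 -- a full 4K in view
```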

In order to provide a better viewing experience in the FOV, either the resolution of the entire view must be higher (e.g. the high-resolution Jaunt One version offers 8K per eye, and thus 16K total displayed resolution) or there must be a way to increase the resolution in the most significant FOV of a video, so that at least in that FOV the resolution creates a greater feeling of reality.

Virtual reality, augmented reality and mixed reality create new ways of interacting with the world around us and will drive consumer technologies and the need for 360-degree video. New tools and stitching software, much of this cloud-based, will enable these workflows for folks who want to participate in this revolution in content. The role of a director is as important as ever as new methods are needed to tell stories and guide the viewer to engage in this story.

2017 Creative Storage Conference
You can learn more about the growth in VR content in professional video and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services at the 2017 Creative Storage Conference — taking place May 24, 2017 in Culver City.

Thomas M. Coughlin of Coughlin Associates is a storage analyst and consultant. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide.

It may still be the Wild West in the emerging virtual reality market, but adapting new and existing tools to recreate production workflows is nothing new for the curious and innovative filmmakers hungry for expanding ways to tell stories.

We asked directors at a large VR studio and at a nimble startup how they are navigating the formats, gear and new pipelines that come with the territory.

Patrick Meegan

Jaunt
Patrick Meegan was the first VR-centric filmmaker hired by Jaunt, a prolific producer of immersive content based in Los Angeles. Now a creative director and director of key content for the company, he will also be helping Jaunt refine and revamp its virtual reality app in the coming months. “I came straight from my MFA at USC’s interactive media program to Jaunt, so I’ve been doing VR since day one there. The nice thing about USC is it has a very robust research lab associated with the film school. I worked with a lot of prototype VR technology while completing my degree and shooting my thesis. I pretty much had a hacker mentality in graduate school but I wanted to work with an engineering and content company that was streamlining the VR process, and I found it here.”

Meegan shot with a custom camera system built with GoPro cameras on those first Jaunt shoots. “They had developed a really nice automated VR stitching and post workflow early on,” he says, “but I’d built my own 360 camera from 16 GoPros in grad school, so it wasn’t so dissimilar from what I was used to.” He’s since been shooting with the company’s purpose-built Jaunt One camera, a ground-up, modular design that includes a set of individual modules optimized with desirable features like global shutter, genlock for frame sync and improved dynamic range.

Focusing primarily on live-action 3D spherical video but publishing across platforms, Jaunt has produced a range of VR experiences to date that include Doug Liman’s longer-form cinematic serial Invisible (see VR Post) and short documentaries like Greenpeace’s A Journey to the Arctic and Camp4 Collective’s Home Turf: Iceland. The content is stored in the cloud, mostly to take advantage of scalable cloud-based rendering. “We’re always supporting every platform that’s out there but within the last year, to an increasing degree, we’re focusing more on the more fully immersive Oculus, HTC Vive, Gear VR and Google Daydream experiences,” says Meegan. “We’re increasingly looking at the specs and capabilities of those more robust headsets and will do more of that in 2017. But right now, we’re focused on the core market, which is 360 video.”

Invisible

When out on the VR directing jobs he bids on through Jaunt’s studios, Meegan typically shoots with a Jaunt One as his primary tool and rotates in other bespoke camera arrays as needed. “We’re still in a place where there is no one camera but many terrific options,” he says. “Jaunt One is a great baseline. But if you want to shoot at night or do aerial, you’ll need to consider any number of custom rigs that blend off-the-shelf cameras and components in different types of arrays. Volumetric and light field video are also on the horizon, as the headsets enable more interaction with the audience. So we’ll continue to work with a range of camera systems here at Jaunt to achieve those things.”

Meegan recently took the Jaunt One and a GoPro drone array to the Amazon Rain Forest to shoot a 10-minute 360-degree film for Conservation International, a non-profit organization with a trifold community, corporate partnership and research approach to saving our planet’s natural resources. An early version of the film screened this November in Marrakech during the UN’s Climate Change Conference and will be in wide release through the Jaunt app in January. “I’ve been impressed that there are real budgets out there for cause-based VR documentaries,” he says. “It’s a wonderful thing to infuse in the medium early on, as many did with HD and then 4K. Escaping into a nature-based experience is not an isolating thing — it’s very therapeutic, and most people will never have the means or inclination to go to these places in the first place.”

Pitched as a six-minute documentary, the piece showcases a number of difficult VR camera moves that ended up extending its run. “When we submitted 10-minute cuts to the clients, no one complained about length,” says Meegan. “They just wanted more. Probably half the piece is in motion. We did a lot of cable cams through the jungle, as if you are walking with the indigenous people who live there and encountering wildlife, and also a number of VR firsts, like vertical ascending and descending shots up along these massive trees.”

Tree climbing veterans from shows like Planet Earth were on hand to help set the rigs on high. “These were shots that would take three days to rig into a tree so we could deliver that magical float down through the layers of the forest with the camera. Plus, everything we had to bring into the jungle for the shoot had to fit on tiny planes and canoes. Due to weight limits, we had to cut back on some of the grip equipment we’d originally planned on bringing, like custom cases and riggings to carry and protect the gear from wildlife and the elements. We had to measure everything out to the gram.” Jaunt also customized the cable cam motors to slow down the action of the rigs. “In VR you want to move a lot slower than with a traditional camera so you get a comfortable feel of movement,” says Meegan. “Because people are looking around within the environment, you want to give them time to soak it all in.”

An example of the Jaunt camera at work – Let’s Go Mets!

The isolated nature of the shoot posed an additional challenge: how to keep the cameras rolling, with charging stations, for eight hours at a time. “We did a lot on the front end to determine the best batteries and data storage systems to make that happen,” he says. “And there were many lessons learned that we will start to apply to upcoming work. The post production was more about rig removal and compositing and less about stitching, so for these kinds of documentary shoots, that helps us put more of our resources into the production.”

The future of narrative VR, on the other hand, may have an even steeper learning curve. “What ‘Invisible’ starts to delve into,” explains Meegan, “is how do we tell a more elaborate, longer-form story in VR? Flash back to a year or so ago, when all we thought people could handle in the headset at one time was five or six minutes. At least as headsets get more comfortable — and eventually become untethered — people will become more engaged.” That wire, he believes, is one of VR’s biggest current drawbacks. “Once it goes away, and viewers are no longer reminded they are actually wearing technology, we can finally start to envision longer-form stories.”

As VR production technology matures, Meegan also sees an opening for less tech-savvy filmmakers to join the party. “This field still requires healthy hybrids of creative and technical people, but I think we are starting to see a shift in priorities more toward defining the grammar of storytelling in VR, not just the workflows. These questions are every bit as challenging as the technology, but we need all kinds of filmmakers to engage with them. Coming from a game-design program where you do a lot of iterations, like in visual effects and animation, I think now we can begin to similarly iterate with content.”

The clues to the future may already be in plain sight. “In VR, you can’t cut around performances the way you do when shooting traditional cinema,” says Meegan. “But there’s a lot we can learn from ambient performances in theater, like what the folks at Punchdrunk are doing with the Sleep No More immersive live theater experience in New York.” The same goes for the students he worked with recently at USC’s new VR lab, which officially opened this semester.

“I’m really impressed by how young people are able to think around stories in new ways, especially when they come to it without any preconceived notions about the traditional structure of filmmaker-driven perspectives. If we can engage the existing community of cinematic and video game storytellers and get them talking to these new voices, we’ll get the best of both worlds. Our Amazon project reflected that; it was a true blend of veteran nature filmmakers and young kid VR hackers all coming together to tell this beautiful story. That’s when you start to get a really nice dialog of what’s possible in the space.”

Wairua
A former pro skateboarder, director of photography and post pro Jim Geduldick thrives on high-stakes obstacles on the course and on set. He combined both passions as the marketing manager of GoPro’s professional division until this summer, when he returned to his filmmaking roots and co-founded the creative production and technology company Wairua. “In the Maori tradition, the term wairua means a spirit not bound to one body or vessel,” he explains. “It fits the company perfectly because we want to pivot and shape shift. While we’re doing traditional 2D, mixed reality and full-on immersive production, we didn’t want to be called just another VR studio or just a technology studio. If we want to work on robotics and AI for a project, we’ll do that. If we’re doing VR or camera tech, it gives us leeway to do that without being pegged as a service, post or editorial house. We didn’t want to get pigeonholed into a single vertical.”

With his twinned background in camera development and post, Geduldick takes a big-picture approach to every job. “My partner and I both come from working for camera manufacturers, so we know the process that it takes to create the right builds,” he says. “A lot of times we have to build custom solutions for different jobs, whether that be high-speed Phantom set-ups or spherical multicam capture. It leaves us open to experiment with a blend of all the new technology out there, from VR to AR to mixed reality to AI to robotics. But we’re not just one piece of the puzzle; knowing capture through the post pipeline to delivery, we can scale to fit whatever the project needs. And it’s inevitable — the way people are telling stories and will want to tell them will drastically change in the next 10 years.”

Jim Geduldick with a spherical GoPro rig.

Early clients like Ford Motors are already fans of Wairua’s process. One of the new company’s first projects was to bring rally cross racer Ken Block of the Hoonigan Racing Division and his viral Gymkhana video series to VR. The series features Block racing against the clock on an obstacle course in the Ford Focus RS RX rallycross car he helped design and drove in the 2016 FIA World Rallycross Championship, explaining how he performs extreme stunts like the “insane” and the “train drift” along the way. Part one of Gymkhana Nine VR is now available via the Ford VR app for iOS and Android.

“Those brands that are focused on a younger market are a little more willing to take risks with new content like VR,” Geduldick says. “We’re doing our own projects to test our theories and our own internal pipelines, and some of those we will pitch to our partners in the future. But the clients who are already reaching out to us are doing so through word of mouth, partly because of our technical reputations but mostly because they’ve seen some of our successful VR work.” Guiding clients during the transition to VR begins with the concept, he says. “Often they are not sure what they want, and you have to consult with them and say, ‘This is what’s available. Are you going for social reach? Or do you want to push the technology as far as it will go?’ Budgets, of course, determine most of that. If it’s not for a headset experience, it’s usually going to a platform or a custom app.”

Wairua’s kit, as you might expect, is a mix of custom tools and off-the-shelf camera gear and software. “We’re using GoPro cameras and the GoPro Odyssey, which is a Google Jump-ready rig, as well as the Nokia Ozo and other cameras and making different rigs,” he says. “If you’re shooting an interview, maybe you can get away with shooting it single camera on a panohead with one Red Epic with a fisheye lens or a Sony A7S II. I choose camera systems based on what is the best for the project I’m working on at that moment.”

His advice for seasoned producers and directors — and even film students — transitioning to VR is try before you buy. “Go ahead and purchase the prosumer-level cameras, but don’t worry about buying the bigger spherical capture stuff. Go rent them or borrow them, and test, test, test. So many of the rental houses have great education nights to get you started.”

The shot of NYC was captured by a spherical array shoot on the top of the Empire State Building.

Once you know where your VR business is headed, he suggests, it’s time to invest. “Because of the level that we’re at, we’ve purchased a number of different camera systems, such as Red Epic, Phantom, tons of GoPros and even a Ricoh Theta S camera, which is the perfect small spherical camera for scouting locations. That one is with me in my backpack every time I’m out.”

Geduldick is also using The Foundry’s Cara VR plug-in with Nuke, Kolor’s Autopano Video Pro and Chris Bobotis’s Mettle plug-in for Adobe After Effects. “If you’re serious about VR post and doing your own stitching, and you already use After Effects, Mettle is a terrific thing to have,” he says. A few custom tetrahedral and ambisonic microphones made by the company’s sound design partners and used in elaborate audio arrays, as well as the more affordable Sennheiser Ambeo VR mic, are among Wairua’s go-to audio recording gear. “The latter is one of the more cost-effective tools for spatial audio capture,” says Geduldick.

The idea of always mastering to the best high-resolution archival format available to you still holds true for VR production, he adds. “Do you shoot in 4K just to future-proof it, even if it’s more expensive? That’s still the case for 360 VR and immersive today. Your baseline should always be 4K and you should avoid shooting any resolution less than that. The headsets may not be at 4K resolution per eye yet, but it’s coming soon enough.”

Geduldick does not believe any one segment of expanded reality will take the ultimate prize. “I think it’s silly to create a horse race between augmented reality and virtual reality,” he says. “It’s all going to eventually meld together into immersive storytelling and immersive technology. The headsets are a stopgap. 360 video is a stopgap. They are gateways into what will be and can come in the next five to 10 years, even two years. Yes, some companies will disappear and others will be leaders. Facebook and Google have a lot of money behind them, and the game engine companies also have an advantage. But there is no king yet. There is no one camera or single software that will solve all of our problems, and in my opinion, it’s way too soon to be labeling this a movement at all.”

Jim with a GoPro Omni on the Mantis Rover for Gymkhana.

That doesn’t mean that Wairua isn’t already looking well beyond the traditional entertainment marketing and social media space at the VR apps of tomorrow. “We are very excited about industrial, education and health applications,” Geduldick says. “Those are going to be huge, but the money is in advertising and entertainment right now, and the marketing dollars are paying for these new VR experiences. We’re using that income to go right back into R&D and to build these other projects that have the potential to really help people — like cancer patients, veterans and burn victims — and not just dazzle them.”

Geduldick’s advice for early adopters? Embrace failure, absorb everything and get on with it. “The takeaway for every single production you do, whether it be VR or SD, is that you should be learning something new and taking that lesson with you to your next project,” he says. “With VR, there’s so much to learn — how the technology can benefit you, how it can hurt you, how it can slow you down as a storyteller and a filmmaker. Don’t listen to everybody; just go out and find out for yourself what works. What works for me won’t necessarily work for someone like Ridley Scott. Just get out there and experiment, learn and collaborate.”

Main Image: A Ford project via Wairua.

Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.