
To those who have been following the virtual reality market from the beginning, one very interesting phenomenon is how the hardware development seems to have outpaced both the content creation and the software development. The industry has been in a constant state of excitement over the release of new and improved hardware that pushes the capabilities of the medium, and content creators are still scrambling to experiment and learn how to use the new technologies.

One of the products of this tech boom is the Jaunt One camera, a 360-degree camera developed with an explicit focus on addressing the many production complexities that plague real-life field shooting. What do I mean by that? Well, the camera quickly disassembles, allowing you to replace a broken camera module. After all, when you’re across the world and the elephant standing in your shot decides to play with the camera, it is quite useful to be able to swap parts quickly instead of having to replace the whole camera or send it in for repair from the middle of the jungle.

Another of the main selling points of the Jaunt One camera is the streamlined cloud finishing service they provide. It takes the content creator all the way from shooting on set through stitching, editing, onlining and preparing the different deliverables for all the different publishing platforms available. The pipeline is also flexible enough to allow you to bring your footage in and out of the service at any point so you can pick and choose what services you want to use. You could, for example, do your own stitching in Nuke, AVP or any other software and use the Jaunt cloud service to edit and online these stitched videos.

The Jaunt One camera takes a few important details into consideration, such as synchronizing the shutters across all of its lenses. This prevents stitching abnormalities when fast-moving objects are captured at different moments in time by adjacent lenses.

The camera doesn’t have an internal ambisonics microphone, but the cloud service supports ambisonic recordings made in a dual system or Dolby Atmos. It was interesting to notice that one of the toolset apps they released was the Jaunt Slate, a tool that allows for easy slating on all the cameras (without having to run around the camera like a child, clapping repeatedly) and is meant to automate the synchronization of the separate audio recordings in post.

The Jaunt One camera shows that the market is maturing past its initial DIY stage and the demand for reliable, robust solutions for higher budget productions is now significant enough to attract developers such as Jaunt. Let’s hope tools such as these encourage more and more filmmakers to produce new content in VR.

The HPA Tech Retreat, run by the Hollywood Professional Association in association with SMPTE, began with an insightful one-day VR seminar, Integrating Virtual Reality/Augmented Reality into Entertainment Applications. Lucas Wilson from SuperSphere kicked off the sessions and helped with much of the organization of the seminar.

The seminar addressed virtual reality (VR), augmented reality (AR) and mixed reality (MR, a subset of AR where the real world and the digital world interact, like Pokémon Go). As in traditional planar video, 360-degree video still requires a director to tell a story and direct the eye to see what is meant to be seen. Successful VR requires understanding how people look at things and how they perceive reality, and using that understanding to help tell a story. One technique that can help is reinforcing the viewer’s gaze with color and sound cues that can vary with the viewer; for example, the cues may be different for the “good guy” and the “bad guy.”

VR workflows are quite different from traditional ones, with many elements changing when shooting multiple-camera content. For instance, it is much more difficult to keep a camera crew out of the image, and providing proper illumination for all the cameras can be a challenge. The image below from Jaunt shows their 360-degree workflow, including the use of their cloud-based computational image service to stitch the images from the multiple cameras. Snapchat is the biggest MR application, said Wilson, and Snapchat Stories could be the basis of future post tools.

Because stand-alone headsets (head-mounted displays, or HMDs) are expensive, most users of VR rely on smartphone-based displays. There are also venues that let one or more people experience VR together, such as the IMAX center in Los Angeles. Activities such as VR viewing will be one of the big drivers for higher-resolution mobile device displays.

Tools that allow artists and directors to get fast feedback on their shots are still in development. But progress is being made, and today over 50 percent of VR is used for video viewing rather than games. Participants in a VR/AR market session, moderated by the Hollywood Reporter’s Carolyn Giardina and including Marcie Jastrow, David Moretti, Catherine Day and Phil Lelyveld, seemed to agree that the biggest immediate opportunity is probably with AR.

Koji Gardiner from Jaunt gave a great talk on their approach to VR. He discussed the various ways that 360-degree video can be captured and the processing required to create finished stitched video. For an array of cameras with some separation between them (no common axis point for the imaging cameras), there will be areas between adjacent camera images that must be stitched together using common reference points, as well as blind spots close to the cameras where no images are captured.
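The difficulty with separated cameras comes down to parallax: the same object appears shifted between adjacent views by an amount that depends on its distance. As a rough illustration (not Jaunt's own math; the numbers below are hypothetical), the standard pinhole disparity relation shows why nearby objects misalign at the seams while distant ones line up:

```python
def disparity_px(focal_px, baseline_m, depth_m):
    """Pixel shift of a point between two cameras separated by `baseline_m`
    meters, using the standard pinhole disparity relation d = f * B / Z."""
    return focal_px * baseline_m / depth_m

# Assumed example values: 800 px focal length, 6 cm baseline.
# Nearby objects shift far more than distant ones, which is why
# seams misalign close to the rig and line up at a distance.
print(disparity_px(800, 0.06, 0.5))   # object 0.5 m away -> 96 px shift
print(disparity_px(800, 0.06, 10.0))  # object 10 m away  -> 4.8 px shift
```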

If there is a single axis for all of the cameras, then there are effectively no blind spots and no stitching required, as shown in the image below. Covering the full 360 degrees, however, requires additional cameras located on that same axis.

The Fraunhofer Institute, in Germany, has been showing a 360-degree video camera with an effective single axis for several cameras for several years, as shown below. They do this using mirrors to reflect images to the individual cameras.

As the number of cameras increases, the baseline between adjacent cameras shrinks, and the mathematical work needed to stitch the 360-degree images together is reduced.

Stitching
There are two approaches commonly used to stitch multi-camera VR video. The easier to implement is a geometric approach that uses known geometries and distances to objects. It requires limited computational resources but produces unavoidable ghosting artifacts at the seams between the separate images.
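At its simplest, the geometric approach amounts to a fixed cross-fade over the overlap band between adjacent views. The sketch below is a minimal illustration of that idea, not Jaunt's implementation; `feather_blend` is my own name, real pipelines first warp each view into equirectangular space, and any parallax between the views shows up as ghosting in the blended band:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Minimal 'geometric' stitch of two grayscale views that share
    `overlap` columns: a fixed linear cross-fade over the shared band."""
    h, w = left.shape
    out = np.zeros((h, 2 * w - overlap))
    alpha = np.linspace(1.0, 0.0, overlap)      # fade left out, right in
    out[:, :w - overlap] = left[:, :w - overlap]
    out[:, w - overlap:w] = (alpha * left[:, w - overlap:]
                             + (1.0 - alpha) * right[:, :overlap])
    out[:, w:] = right[:, overlap:]
    return out

# Two flat test views: the seam fades smoothly from 10 to 20.
left = np.full((2, 8), 10.0)
right = np.full((2, 8), 20.0)
print(feather_blend(left, right, 4)[0])
```

The Optical Flow approach described below replaces this fixed blend with per-pixel correspondences, which is where the extra computation goes.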

The Optical Flow approach synthesizes every pixel by computing correspondences between neighboring cameras. This approach eliminates the ghosting artifacts at the seams but has its own more subtle artifacts and requires significantly more processing capability. The Optical Flow approach requires computational capabilities far beyond those normally available to content creators. This has led to a growing market to upload multi-camera video streams to cloud services that process the stitching to create finished 360-degree videos.

Files from the Jaunt One camera system are first downloaded and organized on a laptop and then uploaded to Jaunt’s cloud server, where they are processed and stitched into a 360-degree video. Omni-directionally captured audio can also be uploaded and mixed ambisonically, resulting in advanced directionality in the audio tied to the VR video experience.

Google and Facebook also have cloud-based resources for computational photography used for this sort of image stitching.

The Jaunt One 360-degree camera has a 1-inch 20MP rolling-shutter sensor with frame rates up to 60fps, a maximum of ISO 3200 and 29dB SNR at ISO 800. Each camera module offers 10 stops of dynamic range, a 130-degree diagonal FOV and 4/2.9 optics, with up to 16K resolution (8K per eye). At 60fps, the Jaunt One produces 200GB per minute uncompressed, which can fill a 1TB SSD in five minutes. They are forced to use compression to be able to use currently affordable storage devices; compressed, the camera produces 11GB per minute, which fills a 1TB SSD in 90 minutes.
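The fill-time figures follow directly from the quoted data rates. A quick back-of-the-envelope check (treating 1TB as a round 1000GB):

```python
SSD_GB = 1000  # treating a 1TB SSD as 1000GB for round numbers

def minutes_to_fill(rate_gb_per_min, capacity_gb=SSD_GB):
    """Minutes of recording before a drive of the given capacity is full."""
    return capacity_gb / rate_gb_per_min

print(minutes_to_fill(200))  # uncompressed 200GB/min -> 5.0 minutes
print(minutes_to_fill(11))   # compressed 11GB/min -> ~91 minutes
```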

The actual stitched image, laid out flat, looks like a distorted projection. But when viewed in a stereoscopic viewer it appears as a natural image of the world around the viewer, giving an immersive experience. At any point in time the viewer does not see the whole image, only the restricted portion they are looking at directly, as shown in the red box in the figure below.

The full 360-degree image can be fairly high resolution, but unless special steps are taken, the resolution inside the portion of the scene being viewed at any point in time will be much less than the resolution of the overall scene.

The image below shows that for a 4K 360-degree video, the resolution in the field of view (FOV) may be only 1K, a much lower resolution that is quite perceptible to the human eye.
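The 1K figure follows from simple proportion: a headset's horizontal field of view (assumed here to be 90 degrees, a typical but hypothetical value) covers only a quarter of an equirectangular frame's width. A quick sketch:

```python
def visible_width(total_width_px, fov_deg):
    """Horizontal pixels of an equirectangular 360-degree frame that land
    inside a viewer's horizontal field of view."""
    return total_width_px * fov_deg / 360.0

print(visible_width(3840, 90))   # 4K frame  -> 960 px, i.e. roughly 1K
print(visible_width(15360, 90))  # 16K frame -> 3840 px, true 4K in view
```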

In order to provide a better viewing experience in the FOV, either the resolution of the entire view must be higher (e.g. the high-resolution Jaunt One version has 8K per eye and thus 16K total displayed resolution), or there must be a way to increase the resolution in the most significant FOV of a video so that, at least there, the resolution gives a greater feeling of reality.

Virtual reality, augmented reality and mixed reality create new ways of interacting with the world around us and will drive consumer technologies and the need for 360-degree video. New tools and stitching software, much of this cloud-based, will enable these workflows for folks who want to participate in this revolution in content. The role of a director is as important as ever as new methods are needed to tell stories and guide the viewer to engage in this story.

2017 Creative Storage Conference
You can learn more about the growth in VR content in professional video and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services at the 2017 Creative Storage Conference — taking place May 24, 2017 in Culver City.

Thomas M. Coughlin of Coughlin Associates is a storage analyst and consultant. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide.

Thanks to an expanding rental plan, the Jaunt One cinematic VR camera is being made available through AbelCine, a provider of products and services to the production, broadcast and new media industries. AbelCine has locations in New York, Chicago and Los Angeles.

The Jaunt One 24G model camera — which features 24 global shutter sensors, is suited for low-light and fast-moving objects, and has the ability to couple with 360-degree ambisonic audio recording — will be available to rent from AbelCine. Creators will also have access to AbelCine’s training, workshops and educational tools for shooting in VR.

The nationwide availability of the Jaunt One camera, paired with access to the company’s end-to-end VR pipeline, provides filmmakers, creators and artists with the hardware and software (through Jaunt Cloud Services) solutions for shooting, producing and distributing immersive cinematic VR experiences (creators can submit high-quality VR content for distribution directly to the Jaunt VR app through the Jaunt Publishing program).

“As we continue to open the Jaunt pipeline to the expanding community of VR creators, AbelCine is a perfect partner to not only get the Jaunt One camera in the hands of filmmakers, but also to educate them on the opportunities in VR,” says Koji Gardiner, VP of hardware engineering at Jaunt. “Whether they’re a frequent experimenter of new mediums or a proven filmmaker dabbling in VR for the first time, we want to equip creators of all backgrounds with everything needed to bring their stories to life.”

Jaunt is also expanding its existing rental program with LA-based Radiant Images to increase the number of cameras available to their customers.