Stereo 360 Image and Video Capture

We are proud to announce that in 2018.1 creators can now capture stereoscopic 360 images and video in Unity. Whether you’re a VR developer who wants to make a 360 trailer to show off your experience or a director who wants to make an engaging cinematic short film, Unity’s new capture technology empowers you to share your immersive experience with an audience of millions on platforms such as YouTube, Within, Jaunt, Facebook 360, or Steam 360 Video. Download the beta version of Unity 2018.1 today to start capturing.

How to use this Feature

Our device-independent stereo 360 capture technique is based on Google’s Omni-directional Stereo (ODS) technology, using stereo cubemap rendering. We support rendering to stereo cubemaps natively in Unity’s graphics pipeline, in both the Editor and the PC standalone player. After the stereo cubemaps are generated, we convert them to stereo equirectangular maps, the projection format used by 360 video players.

Capturing a scene in the Editor or standalone player is as simple as calling Camera.RenderToCubemap() once per eye:
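In Unity 2018.1, RenderToCubemap() has overloads that take a Camera.MonoOrStereoscopicEye argument, and RenderTexture.ConvertToEquirect() produces the equirectangular output. A minimal sketch might look like the following; the per-eye render textures must be created with dimension Cube, while the equirect target is a plain 2D render texture:

```csharp
using UnityEngine;

public class Stereo360Capture : MonoBehaviour
{
    // Per-eye render textures: their Dimension must be set to Cube.
    public RenderTexture cubemapLeftEye;
    public RenderTexture cubemapRightEye;
    // Plain 2D render texture for the combined stereo equirectangular output.
    public RenderTexture equirect;

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();

        // Eye separation (interpupillary distance) in meters; ~64 mm is typical.
        cam.stereoSeparation = 0.064f;

        // Render all six cubemap faces (face mask 63 = all faces), once per eye.
        cam.RenderToCubemap(cubemapLeftEye, 63, Camera.MonoOrStereoscopicEye.Left);
        cam.RenderToCubemap(cubemapRightEye, 63, Camera.MonoOrStereoscopicEye.Right);

        // Convert both cubemaps into the stereo equirectangular layout.
        cubemapLeftEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Left);
        cubemapRightEye.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Right);
    }
}
```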

During capture of each eye, we enable a shader keyword that warps each vertex position in the scene using a shader ODSOffset() function, which applies the per-eye projection and offset.

Stereo 360 capture works in the forward and deferred lighting pipelines, with screen-space and cubemap shadows, skybox, MSAA, HDR, and the new post-processing stack. For more info, see our new stereo 360 capture API.

Using the Unity Frame Recorder, a sequence of these equirect images can be captured and written out as frames of a stereo 360 video. This video can then be posted on video sites that support 360 playback, or used inside your app with Unity’s 360 video playback introduced in 2017.3.
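The Frame Recorder is the supported route, but a hand-rolled exporter can also read the equirect render texture back each frame and encode it to PNG. A rough sketch, assuming an `equirect` render texture filled as described above and a hypothetical `outputDir` folder:

```csharp
using System.IO;
using UnityEngine;

public class EquirectFrameSaver : MonoBehaviour
{
    public RenderTexture equirect;          // stereo equirect texture produced each frame
    public string outputDir = "Recordings"; // hypothetical output folder
    int frameIndex;

    void LateUpdate()
    {
        // Copy the render texture into a CPU-readable Texture2D.
        var prev = RenderTexture.active;
        RenderTexture.active = equirect;
        var tex = new Texture2D(equirect.width, equirect.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, equirect.width, equirect.height), 0, 0);
        tex.Apply();
        RenderTexture.active = prev;

        // Encode to PNG and write a sequentially numbered frame to disk.
        Directory.CreateDirectory(outputDir);
        string name = string.Format("frame_{0:D5}.png", frameIndex);
        File.WriteAllBytes(Path.Combine(outputDir, name), ImageConversion.EncodeToPNG(tex));
        Destroy(tex);
        frameIndex++;
    }
}
```

The numbered frames can then be assembled into a video with an external tool, for example `ffmpeg -framerate 30 -i frame_%05d.png -c:v libx264 output.mp4`. Note that per-frame ReadPixels stalls the GPU, so this approach suits offline capture rather than real-time recording.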

For the PC standalone player, you need to enable the “360 Stereo Capture” option in your build (see below) so that Unity generates the 360-capture-enabled shader variants, which are disabled by default in normal player builds.

In practice, most 360 capture work can be done on a PC in the Editor in Play mode.

Technical Notes on Stereo 360 Capture

For those of you using your own shaders or implementing your own shadowing algorithms, here are some additional notes to help you integrate with Unity stereo 360 capture.

We added an optional shader keyword: STEREO_CUBEMAP_RENDER. When enabled, this keyword modifies UnityObjectToClipPos() to include additional shader code that transforms positions with the ODSOffset() function (see the UnityShaderUtilities file in 2018.1). The keyword also lets the engine set up proper stereo 360 capture rendering.

If you are implementing screen-space shadows, there is an additional issue: the world-space position reconstructed from the depth map (which has the ODS offset applied) and the view ray is not the original world-space position. This affects the shadow lookup in light space, which expects the true world position. The view ray is also based on the original camera, not on ODS space.

One way to solve this is to render the scene so as to create a one-to-one mapping between screen-space shadow-map texels and world positions, writing the world positions (unmodified by the ODS offset) out to a float texture. This map then supplies the true world positions for the shadow lookup in light space. You can also use a 16-bit float texture if you know the scene fits within 16-bit float precision, based on the scene center and world bounds.

We’d love to see what you’re creating. Share links to your 360 videos on Unity Connect or tweet with #madewithunity. Also, remember this feature is experimental. Please give us your feedback and engage with us on our 2018.1 beta forum.

Please, please, please can we have the cube-map capture take the camera rotation into account? I set up a nice smooth path for my camera to follow (always looking in the direction of travel) only to find that the resulting 360 capture was always looking in the same direction. And trying to rotate the cube map in post will muck up the stereo separation I believe.

Has anyone figured out how to save out the actual cubemap? When I try to save the raw cubemap to file, I only ever get a single cubemap side, but the equirect version still turns out fine, so the data should be hidden somewhere…?

And is there any way to make the recording take the camera rotation into account?
Rotating the entire scene around the camera would be a lot of effort and a drain on performance, and rotating the sphere that the video will play on would mean that I’d sometimes have the pole distortions right in the center of my view.

I quickly built a sample out of this blog entry. Please note that the render textures for each eye have to have the dimension Cube, the equirect is a simple 2D render texture. To view the result either click on a render texture and view it in editor, or save it somewhere. :)

We need more of a breakdown. To non-coders this doesn’t make much sense… And where do these frames export to? Are we expected to know what kind of code to use to set up the file directory? Please make these breakdowns more friendly to the whole creative community, and not just developers.

Hear, hear. I am a coder, and I appreciate it when I’m fairly spoon-fed exactly what it takes to do what they are saying.

This code appears to create cube and equirect images per-frame. I would venture that ImageConversion.EncodeToPNG() would help, at least with stills. Save them numbered sequentially then assemble with FFMPEG? Is that the intent? Or expect updates from, say AVPro on the Asset store?

Here’s an example stereo 360 capture video from our blog: https://youtu.be/K6uGXtPCjEw
You can use Google Cardboard/Daydream and the YouTube app to view it in stereo, or on Gear VR, use the Samsung Internet app from the Oculus store. Be sure to view with the highest quality setting.

We publish content for planetariums. The way we’ve had to do it is a little hacky – by using 5 stitched fisheye cameras and outputting the video stream using Spout. This has a lot of limitations, like performance and needing a Spout-receiving application to pass the video to the display. Could this be used to enable one camera to output 180° live video directly to the display?