Digital Storytelling: Full, Immersive VR

We’ve looked at using beacons and we’ve begun our discussion of digital storytelling using Augmented Reality. Now we’ll look at a fully immersive experience using Virtual Reality gear. This section is less about code and more about the possibilities inherent in telling stories in this medium using a small set of technologies:

Leap Motion to provide a set of interaction gestures

WebGL 3D environments to build the world we’ll be interacting in

A-Frame

Possible voice interfaces to add further interactivity to the stories we tell

Speech Synthesis API

Unity 3D environment

As we explore these tools we’ll also look at:

Navigation and travel through digital worlds

Bots and other clues for the virtual tourist

We’ll start working with A-Frame, from Mozilla. It does an excellent job of abstracting the underlying Three.js and WebGL code into a tag-based language, similar to HTML. At its simplest an A-Frame scene looks like the code below. We link to the A-Frame library and then write tags to create the content we want:

A scene to hold our content

A box

A sphere

A cylinder

A plane to put the content on

A sky element with a set color
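Something like the following covers all of those pieces. It is based on A-Frame’s hello-world example; the release URL in the script tag is illustrative and should point at whatever version is current:

```html
<!-- Link to the A-Frame library (version URL is illustrative). -->
<script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>

<!-- The scene holds all of our content. -->
<a-scene>
  <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
  <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
  <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
  <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
  <a-sky color="#ECECEC"></a-sky>
</a-scene>
```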

A-Frame does it all in less than 10 lines of HTML-like tags. The creator doesn’t need to know what goes on “under the hood” to see the results of the code.

We can get as complex as we want to. The example below, from the A-Frame site, shows how to do animation in A-Frame and provides a much richer experience without increasing the line count.
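A hedged sketch of what that looks like, using A-Frame’s animation component on the box from the earlier scene (older A-Frame examples express the same idea with a child `<a-animation>` tag instead):

```html
<!-- The box from the hello-world scene, now spinning continuously. -->
<a-box position="-1 0.5 -3" color="#4CC3D9"
       animation="property: rotation; to: 0 405 0; loop: true; dur: 10000">
</a-box>
```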

Starting with code like this we can build any type of structure that we can in Three.js. We can also use lower-level constructs like WebGL shaders, the small GPU programs that make WebGL elements change and do things beyond what stock Three.js materials allow for.

A-Frame’s discussion of materials (one of the two components, along with geometry, that make up a mesh in Three.js) discusses how to add custom shaders to enhance the materials we use in our content.
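A sketch of how that registration works, based on my reading of A-Frame’s `registerShader` API (the shader name `flat-color` and the colors are my own choices):

```html
<script>
  // Register a flat-color fragment shader; the color uniform is fed
  // from the material's schema.
  AFRAME.registerShader('flat-color', {
    schema: {
      color: {type: 'color', is: 'uniform', default: '#EF2D5E'}
    },
    fragmentShader: `
      uniform vec3 color;
      void main () {
        gl_FragColor = vec4(color, 1.0);
      }
    `
  });
</script>

<!-- Use the custom shader through the material component. -->
<a-sphere material="shader: flat-color; color: #4CC3D9" position="0 1.25 -5"></a-sphere>
```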

A-Frame gives us a declarative way to create 3D content and scenes without losing the flexibility of Three.js and shaders when we need them. We’ll see whether declarative markup or procedural code works better.

Creating these environments presents another question: how do we navigate from one scene to another? For example, if our story takes place in a house, how do we move from one room to another, and how do we restrict which places we can and cannot go to?

In an ideal world we’d load all the assets we need as one large application and then transition seamlessly between areas. That works, but it puts a lot of assets on the user’s computer that might or might not be necessary.

The simplest possibility is to tie asset loading to movement across sectors or boundaries. For example, if we set the story in a house we can trigger movement from the house to a different location by loading the destination’s assets when the user touches a door or walks near it, similar to the way World of Warcraft and other MMORPGs load assets for a new section of the world.
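The proximity trigger can be sketched as a small A-Frame component. This is hypothetical: the component name `door-trigger`, the `load-room` event, and the room name are all my own inventions; a listener elsewhere in the application would fetch the room’s assets when the event fires:

```html
<script>
  // Fires once when the camera gets within `distance` meters of the
  // entity this component is attached to.
  AFRAME.registerComponent('door-trigger', {
    schema: {
      room: {type: 'string'},   // which room's assets to fetch
      distance: {default: 1.5}  // trigger radius in meters
    },
    tick: function () {
      if (this.triggered) { return; }
      var camera = this.el.sceneEl.camera.el.object3D.position;
      var door = this.el.object3D.position;
      if (camera.distanceTo(door) < this.data.distance) {
        this.triggered = true;
        this.el.emit('load-room', {room: this.data.room});
      }
    }
  });
</script>

<!-- A door that starts loading the kitchen when the user walks up to it. -->
<a-box door-trigger="room: kitchen; distance: 1.5" position="0 1 -3"></a-box>
```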

Another way to load assets is to load an initial set, where the user begins the experience, and then stream new assets to the user’s machine as needed, with a large splash screen to separate major areas of the world.

The final example I want to show is what a Leap Motion integration looks like in an A-Frame application. I’ve taken the example from the aframe-leap-hands GitHub repository. Technologies like the Leap Motion, Oculus Touch and the Valve controllers help give a fuller virtual experience by tracking hand gestures and, in the controllers’ case, providing haptic feedback for the user.
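A reconstruction of that example from my reading of the aframe-leap-hands README; the component and attribute names may differ between versions of the library:

```html
<a-scene>
  <a-entity camera look-controls position="0 1.6 0">
    <!-- One tracked-hand entity per hand. -->
    <a-entity leap-hand="hand: left"></a-entity>
    <a-entity leap-hand="hand: right"></a-entity>
  </a-entity>
</a-scene>
```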

The new entities create right and left hands for the Leap Motion controller and, once again, hide all the complexity of setting up the Three.js environment.

A-Frame has an audio component that is affected by the position of the object it’s attached to. Combined with events, this opens up a wide variety of additional possibilities for the stories we tell… More research is needed.
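A sketch of the idea using A-Frame’s sound component; the entity id and audio file path are placeholders of my own:

```html
<!-- A positional sound attached to a door; it gets quieter as the
     user moves away from it. -->
<a-entity id="creaky-door"
          position="2 1 -4"
          sound="src: url(assets/creak.mp3); autoplay: false; positional: true">
</a-entity>

<script>
  // Play the sound in response to an event, e.g. when the user clicks the door.
  var door = document.querySelector('#creaky-door');
  door.addEventListener('click', function () {
    door.components.sound.playSound();
  });
</script>
```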

Some of the devices I want to explore require Unity as the development engine and provide additional code and libraries to make the experience easier. At first I was a little concerned about the requirement (yet another language to learn for an uncertain return), but as I’ve started looking at the code it’s become less scary than I expected.

I’m still trying to figure out C# and how best to learn it and, because I’m lazy, how much of the code is actually hidden by the Unity Editor and how much code I have to write.

The idea is that we create the code to go with each marker; then, when a camera, like the one on a mobile phone, looks at the marker, the corresponding AR model (in this case a rotating globe) shows up and animates on the phone.
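The Unity side of that behavior can be as small as a single script attached to the model. This is a hedged sketch (the class name and rotation speed are my own choices); the marker detection itself would be handled by the AR library, which shows or hides the GameObject:

```csharp
using UnityEngine;

// Hypothetical sketch: attached to the globe model that the AR library
// displays when its marker is detected. This script only spins the model.
public class RotateGlobe : MonoBehaviour
{
    // Degrees per second; an illustrative value.
    public float speed = 30f;

    void Update()
    {
        // Rotate around the world up axis, frame-rate independent.
        transform.Rotate(Vector3.up, speed * Time.deltaTime, Space.World);
    }
}
```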