When I was on a field trip to London back in high school, I played my first virtual reality (VR) game: Zone Hunter. I was immediately hooked and I knew I wanted to work in VR! I started my VR career more than 12 years ago working on industrial VR training applications and VR software tools.

I am now the founder and president of a company called “i’m in VR”. We offer tools to simplify the creation of VR applications, such as MiddleVR, a VR middleware that enables 3D applications (built with Unity, for example) to run on any VR system (HMDs, CAVEs, etc.). I’ve been blogging about VR since long before it was cool, and you can also find me on Twitter (@Cb_VRGeek).

Now, you may think creating VR applications is easy: simply add camera rotations using the Oculus Rift tracker and you’re done. This can work for some applications, but it will fail for the vast majority of them.

VR is all about presence in a virtual world. If you can’t keep your player immersed in it, you’re not doing it right. You can trick your brain into thinking it is in another reality, but this is more difficult than it sounds. This feeling of presence is very fragile.

Articles dealing with VR often adopt an overly technical approach. I think VR is first about what’s happening in the user’s mind. In this article I am going to focus on some fundamental points about this presence in another world, and on why it is important to design your application with this goal in mind.

VR in 2013

Virtual reality allows you to immerse people in a 3D environment, with head-mounted displays (HMDs or VR goggles), or other immersive systems. That’s why we often call it immersive VR (iVR) — to differentiate it from virtual worlds like Second Life or World of Warcraft. VR was hyped in the early ’90s, but failed to deliver the experience the public expected.

Training in VR can be much more efficient than in real life: you can control the training environment very precisely, view replays, and actually safely practice real gestures in many different, potentially dangerous, scenarios. This is used for training surgeons, soldiers, policemen, firefighters, dentists and even workers applying coatings on houses! This allows companies to save expensive materials while delivering better feedback about gestures.

All major car manufacturers have VR systems where they can test the design and ergonomics of products that don’t yet exist, and iterate very quickly compared to a physical mock-up. This is now also applied to planes, boats, tractors, production lines, factories, and even kitchens! See the VR applications and systems from Peugeot or Ford.

Communicating around a digital mock-up is very natural: you can get immersed in your future building, or experience urban planning changes years before they are made. See this Enodo demo reel.

It is also a great tool for market research in the retail industry: you get a real feeling for your shop before it is built or rearranged. You can track all of a customer’s movements and record where they look. This is useful for testing the layout of furniture, or for making sure that the design of your product is visible among other products.

Treating phobias in VR is an efficient method: if you suffer from a fear of heights, we can create a virtual cliff and you will actually experience your phobia. A real therapist can then help you deal with it more efficiently than by taking you to a real cliff. The same applies to fear of flying, fear of spiders or dogs, and fear of public speaking, for example. See the Cyberpsychology Lab of Stéphane Bouchard.

And, of course, VR can be used for games! But since the mid-’90s, very few games have been created with this technology; most were developed at research labs or by enthusiasts. Doing so required the skills and hardware to assemble a VR system and program the game themselves. To my knowledge, no commercial VR game has been created in the past 10 years.

Here’s an ongoing list of pre-Oculus VR games. But now, thanks to the arrival of the Oculus Rift, every day is Christmas! We’re just starting to see new VR games and experiences (like the virtual guillotine).

Why (Not) Create a VR Game?

The first question to ask is whether your game would be relevant in VR. It’s like with 3D. Not everything is interesting in 3D, and if it is not appropriate it can get worse in VR!

So why go VR?

The objective of VR is that you feel like you’re present in another reality, whether it is realistic or not. For me, presence is the definition of VR. Without presence, there is no VR!

Obvious game genres that would be great in VR are all the first-person games, like first-person shooters. Imagine Mirror’s Edge or Call of Duty as VR games! Some third-person games like Assassin’s Creed, Splinter Cell, or Gears of War could potentially be converted to first-person, so we can actually be the hero. Of course, I’m sure we will see a revival of puzzle and exploration games. We will also probably see very different VR games in the future: God games? Guitar Hero?

But I think the games that will benefit the most from VR are those that try to generate emotions in the player.

Survival horror games would be extremely intense. Take Heavy Rain, for instance. The game is great; I felt really present, and I experienced a lot of emotions while playing it. However, the experience was sometimes ruined by unnatural interaction, and it lacked half of presence: the physical half. And this is where VR can help!

VR as a New Medium

I should say a word of warning here before continuing: adapting existing games to VR is difficult if they weren’t designed for this from the outset. VR is like radio or TV at their beginnings: radio was only used to broadcast opera, and TV was only used to broadcast theatre plays. Slowly, people started to create content specifically tailored for these new media. Camera movement, zoom, and cuts created a new grammar for film, for instance.

The same will happen with VR! At first, there will be a lot of adaptations of existing games that don’t take full advantage of presence, and might even damage the field: adding VR will only marginally improve immersion, thanks to the display, but awkward controls and gameplay unsuited to VR could potentially make the experience poorer than it originally was.

I’m happy to see that a lot of indie developers are creating new games with VR in mind from the beginning, which is the right way to do it. And why wouldn’t they? VR is the ultimate experience! Those of us with experience will happily provide feedback on your game, so don’t hesitate to contact me.

Presence

As I said, presence is, for me, what defines VR. Without this feeling of actually being somewhere else, your system is just an interactive 3D system, not a true VR system — even if it costs millions of dollars. Trust me, I’ve tested a few of those, and it’s a tragedy.

Once you get presence, your player will experience natural reactions and emotions: if you’re on top of a high cliff, you will experience the fear of heights (guaranteed). If a virtual ball is thrown at you, you will try to catch it. If an avatar saves you from certain death, you might actually smile at him. True story!

Presence is a complex and subtle topic. Mel Slater is one of the scientists conducting some of the most interesting research on presence. In a well-known article, he splits presence into two parts: cognitive (the mind) and perceptive (the senses).

Most people report presence when playing a game, watching a movie, reading a book, or just hearing a story (the roots of VR!). This is actually cognitive presence — where their mind takes them to another world.

Perceptive Presence

All of these experiences lack perceptive presence, which is in fact fooling your senses in a realistic way. Vision, but also sound, touch, smell, proprioception… Keep in mind that humans are not able to perceive the world perfectly: the human brain makes all sorts of simplifications. Knowing the limits of human perception, which is a fundamental part of understanding VR, allows you to create perceptive illusions, such as redirected walking or impossible spaces.

So how do you achieve that?

For me, the most basic way of creating perceptive presence is by using head tracking. Moving your head and, as a result of this movement, seeing the world from a different viewpoint, is the basis for the action/perception loop.

So you need to be able to move, and those moves must have an effect on the virtual world. Your body is engaged: as Antonio Damasio says, “the mind is embodied, not just embrained.”
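This action/perception loop can be sketched in a few lines of code. The snippet below is an illustrative Python mock-up, not any real engine's API: the tracked head pose directly drives the virtual camera each frame, with no smoothing or amplification, so every head movement is immediately reflected in what the player sees.

```python
# Minimal sketch of the action/perception loop (illustrative mock-up,
# not a real engine or tracker API).

class Camera:
    def __init__(self):
        self.position = (0.0, 1.7, 0.0)      # eye height in meters (scale 1)
        self.orientation = (0.0, 0.0, 0.0)   # yaw, pitch, roll in degrees

def update_camera(camera, head_pose):
    """Apply the tracked head pose to the camera, 1:1, once per frame.

    No smoothing, no amplification: filtering adds latency, and amplified
    movements break the link between action and perception.
    """
    camera.position = head_pose["position"]
    camera.orientation = head_pose["orientation"]
    return camera
```

Called once per frame, just before rendering, this keeps the displayed viewpoint in lockstep with the player's real head.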

Breaks in Presence

This also means that if, as a result of your actions, you’re not getting the result you’re expecting, your brain will know something is wrong. This is called a “break in presence” (BIP).

If you have only one goal when creating a VR game, it would be to create and maintain presence. Feeling present in an empty room is VR. Not feeling present in Gears of War is not VR.

Minimal VR System

My recommendation would be to support head tracking (rotations + translations), tracking of at least one hand (rotations + translations), and a joystick with a couple of buttons. From my personal experience, when you have this minimum setup, you cross a threshold, and your brain much more easily accepts this other reality.

This means that, for me, the Oculus Rift by itself is not (yet) a minimal VR platform. It’s missing head position tracking and doesn’t provide any kind of hand tracking. I know you can easily add these yourself with devices such as the Razer Hydra. But until we have a complete VR platform, game developers can’t rely on all players having the same standard hardware.

Latency

The first enemy of VR is latency. If you move your head in the real world and the resulting image takes one second to appear, your brain will not accept that the image is related to the head movement. Moreover, you will probably get sick as a result. John Carmack reports that “something magical happens when your lag is less than 20 milliseconds: the world looks stable!”

Some researchers even advise a 4 ms end-to-end latency from the moment you act to the moment the resulting image is displayed. To give you an idea of what this means: when your game runs at 60 frames per second, it’s 16 ms from one frame to the next. Add to that the latency of your input device, which can range from a few milliseconds to more than 100 ms with the Kinect, and the latency of the display, which also ranges from a few milliseconds to more than 50 ms for some consumer HMDs.

And if you want to run your game in stereoscopy, keep in mind that the game needs to compute the left and right images for each frame. As a game developer, you can’t do much for the input and display latency, but you have to make sure that your game runs fast!
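To make this arithmetic concrete, here is a small back-of-the-envelope latency budget in Python. The component numbers are illustrative assumptions, not measurements of any particular device.

```python
def end_to_end_latency_ms(input_ms, render_frames, fps, display_ms):
    """Sum the motion-to-photon path: input device + rendering + display."""
    frame_time_ms = 1000.0 / fps  # e.g. 60 fps -> ~16.7 ms per frame
    return input_ms + render_frames * frame_time_ms + display_ms

# A 60 fps game with a fast tracker (2 ms) and a fast display (5 ms),
# assuming a single frame of rendering latency:
total = end_to_end_latency_ms(input_ms=2, render_frames=1, fps=60, display_ms=5)
# Close to 24 ms: already above Carmack's 20 ms target, even with
# optimistic hardware. A slow input device or display blows the budget.
```

Notice that the frame time alone nearly consumes the 20 ms target, which is why keeping your game running fast matters so much.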

A Coherent World, Not Necessarily a Realistic One

We have seen that perceptive presence requires you to fool your senses in the most realistic way. Cognitive presence — fooling the mind, not the senses — results from a sense that your actions have effects on the virtual environment, and that these events are credible. This means that you must believe in the “rules” of the simulation. For this, you must make sure that your world is coherent, not necessarily realistic. If a player can grab a particular glass, for example, but can’t grab another one, it will break presence because the rules are not consistent. Once cognitive presence is broken, it’s very difficult to “fix” it. The player is constantly reminded that the simulation is not real, and it will take some time to accept it again as reality.

If you’re targeting a visually realistic environment, it is more likely to generate breaks in presence. This is because your brain will expect many things that we are not yet able to achieve technically: perfect physics, sound, force feedback so that your hand doesn’t penetrate an object, objects breaking into pieces, smell, etc. A non-realistic environment lowers the expectation that everything should be perfect, resulting in a more consistent feeling of presence.

This is why the application I still find the most immersive is “Verdun 1916-Time Machine.” It fools many senses at a time: vision, smell, touch… But the most important point is that, by design of the “experience,” the interactions are extremely simple: you can only rotate your head, because you’re a wounded soldier.

Given that extreme limitation, it’s extremely simple to keep the player from experiencing a break in presence. You can’t move your hand, so it cannot penetrate objects; you aren’t forced to navigate with an unnatural joystick. It has been reported several times that some players smiled at the virtual soldier who came to save them in the simulation!

Measure Presence

The problem is that it’s very difficult to concretely measure whether a player feels present in the world. There are currently no absolute indicators for that. You can measure the heart rate or skin conductance if you want to evaluate anxiety. But this is only relevant for stressful simulations.

What you can try to evaluate, though, is whether the player is responding naturally. We already mentioned a few natural reactions: trying to catch a ball, fear of heights near a cliff, fear for your virtual body if somebody is trying to hurt you, trying to avoid collisions…

Tips for VR Games

Enough with the philosophical considerations, for now. Here are a few practical tips:

Scale 1. The scale of the world has to be realistic. You should feel like you are the right height (unless you want your player to be a child, as in Among the Sleep). Head movements should not be amplified (unless you’re using redirecting techniques).

The easiest way to achieve Scale 1 is to make sure that 1 world unit is 1 meter. The field of view of the virtual camera should exactly match the field of view of your HMD. In an ideal world (or in a big industrial VR system), the distance between your two eyes should also be correctly measured and used. Your brain picks up on all of these cues; if you don’t strictly follow this rule, you might fail to create or maintain presence, and even make people feel sick!
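As one concrete example of matching the virtual camera to the hardware, the vertical field of view can be derived from the display geometry instead of being picked arbitrarily. This is a sketch with made-up numbers, not the specs of any real HMD; real devices usually publish their effective field of view, which you should use directly.

```python
import math

def vertical_fov_degrees(screen_height_m, eye_to_screen_m):
    """Angle the screen subtends at the eye: fov = 2 * atan(h / (2 * d))."""
    return math.degrees(2.0 * math.atan(screen_height_m / (2.0 * eye_to_screen_m)))

# Hypothetical display: 9 cm tall, perceived 5 cm from the eye (through optics).
fov = vertical_fov_degrees(screen_height_m=0.09, eye_to_screen_m=0.05)
# Roughly 84 degrees; this derived value, not an arbitrary one, is what
# the virtual camera's vertical field of view should be set to.
```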

Know your hardware. Know the range of tracking: can your hardware track translations, or only rotations? If the tracker also reports positions, up to what distance? What’s its precision? When does the tracking data stop being usable? Know the field of view: as you are supposed to follow the “Scale 1” rule, you shouldn’t distort the virtual field of view. If the field of view is narrow, the user will have to move her head much more to see around than with a bigger field of view, and might miss some important action in the periphery. Know the resolution: if you want the user to read information, she will have to come much closer with a low-resolution display than with a high-resolution one. As with Android development, your game might end up running on different hardware. We will soon have an HMD war, with lots of HMDs, each with different characteristics. Using tools like MiddleVR will help you support many different VR systems.

Have a consistent viewpoint. If your game is a first person game, avoid playing cinematics or making the player drive a vehicle from a third person view. It breaks immersion.

Break habits. Longtime video game players have bad habits: when they wear an HMD, they will stand still, as if they were seated in front of a TV. Those who are less experienced with games will naturally look around. Gamers need to unlearn the constraints of current games. In a tutorial, you should force the user to look around and move his hands. Your game should also take advantage of those new possibilities. For example, in a recent game prototype I worked on, enemies appeared to the right of, to the left of, and above the player, and there was no joystick or mouse to navigate and look around. This forced the user to look around and aim with his hand to get all the enemies. In another game prototype I worked on, the only interactive object was a candle in a very dark environment. This was a great way to force the player to explore: he naturally took the candle and used it to explore the dark environment, pushing some objects and burning others to solve the puzzles.

Try to keep the player active. In Heavy Rain, for example, you’re almost always playing. There are numerous cutscenes that look like videos, but suddenly you’ll have to perform an action. If you don’t have the game controller in your hands at this moment, you’ll fail the action. This forces you to always be alert and ready.

Another very interesting feature of Heavy Rain is that the game happens in real-time, which means you sometimes have to take decisions and act fast: do I have to shoot this guy before he kills my partner? You’re forced to act quickly — and as in real life, once you make a decision, you’ll never know if it was right or not.

Create realistic puzzles. Still another example from Heavy Rain: You’re in a rush and have to phone a room in a motel. Can you remember, in less than 15 seconds, the room number? Just as in real life, you have to dig it from your memory while experiencing a lot of stress.

Finally, work hard on presence. Presence is not easy to build. Start light, test often. Build presence slowly, make small additions, test again. The experience is what happens in the user’s brain! Your simulation enables the experience; it is not the experience! Presence should be natural. Observe the players’ reactions and modify the game. Don’t throw in every possible gimmick just because it will make a cool video. A lot of cool videos end up being terrible experiences.

Conclusion

There is of course a lot more to be said about developing a VR application, but I hope this article drew your attention to some fundamental points. I leave you with this quote that I hope you will apply:

“Our approach is to treat virtual reality as something quite new with its own unique conventions and possibilities that provide a media where people respond with their whole bodies, treating what they perceive as real.” – Mel Slater