
Simply put: because a program for the Vive needs to run at 90 frames per second on two screens. Let’s see where this number comes from and put it in some perspective.

Your average show on cable TV runs at around 25 frames per second, which is enough to give a credible illusion of movement if you’re just watching. In computer games you’re not just passively watching, you’re also interacting with the game, and so a higher frame rate is desirable. After all, at 25 frames per second any input you provide might take up to 40 ms to have a visible effect, which can make the game feel unresponsive. For most video games, 60 frames per second is enough to make the game feel good.

In Virtual Reality, the game doesn’t only need to deal with button presses, but also with users moving their heads and expecting the image to change in response. In the real world, this response is near instant; the limiting factor is the speed at which our brains can process new imagery. Failing to deliver this responsiveness in VR will not only make the game feel sluggish, it can cause nausea over time and make the entire experience an unpleasant one, an effect called “Virtual Reality Sickness” that bears some similarity to motion sickness.

For this reason, the displays inside the HTC Vive run at a refresh rate of 90 Hz, which means they draw a new image 90 times per second. To take full advantage of this, the program needs to keep up and provide a new image 90 times per second.

This is not all there is to it, though: VR headsets contain two screens, one for each eye. These screens display different images, corresponding to the slightly different positions of our eyes, which is what gives us a sense of depth. Thus, 90 times per second the program has to update the state of the world: where the user is, what the orientation of his head is, where objects in the world are, and so on. But for each of those frames, the program has to render two separate images, each from a slightly different position in the world, and so 180 different images are generated per second. A PC needs quite a hefty CPU and GPU to be able to do this.
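To put concrete numbers on the arithmetic above, here is a quick back-of-the-envelope sketch (plain Python, nothing VR-specific): the time budget available to produce one frame at each refresh rate, and the number of images a stereo headset needs per second at 90 Hz.

```python
# Frame-time budgets at the refresh rates discussed above, plus the
# stereo render count for a two-screen headset running at 90 Hz.

def frame_budget_ms(refresh_hz):
    """Milliseconds available to produce one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

print(f"TV (25 Hz):   {frame_budget_ms(25):.1f} ms per frame")  # 40.0 ms worst-case input lag
print(f"Game (60 Hz): {frame_budget_ms(60):.1f} ms per frame")  # 16.7 ms
print(f"Vive (90 Hz): {frame_budget_ms(90):.1f} ms per frame")  # 11.1 ms

EYES = 2
images_per_second = 90 * EYES
print(f"Stereo images per second at 90 Hz: {images_per_second}")  # 180
```

So the program has roughly 11 ms to update the world *and* render both eye images, every single frame.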

This requirement is not only a challenge to the consumer’s wallet, but also to developers of VR applications, as a lot of work needs to be put into optimizing the program to achieve this frame rate. More on this in a later blog post.

VR is a new medium, one for which we haven’t exactly figured out yet what the best way to do things is. An interesting example is building GUIs (Graphical User Interfaces). In almost any computer program, there is a central area on screen where the actual work happens, and a lot of menus, options, sidebars, etc. littered around the edges of the screen. This is convenient, because they don’t demand a lot of attention if you don’t need them, and it’s easy to focus on them if you do.

In computer games, the HUD (Head-Up Display) is the standard way to present information to the player; check out for instance this screenshot of Counter-Strike: Global Offensive:

There is a bunch of information on the edges of the screen: a minimap, health, ammo, team status.

In VR, this method of presenting information to the user does not work. When users move their head, the screen they look at moves with it, because it is mounted on their heads. This means the only way to focus on pieces of UI at the edges of the screen is to keep your head still and only move your eyes. Try holding your head still, holding up some writing in your peripheral vision, and then reading it. If you can do it at all, it will not be comfortable. Now imagine that wherever you move your head, this piece of writing stays in the exact same spot in your peripheral vision. This will drive you nuts in the long run.

The way to go in Virtual Reality, then, is to somehow integrate the UI into the world that people are immersed in. A poster on the wall could provide information. It would not move, and the user could focus on it by walking up to it and reading it. However, if the user moves away from the poster, the information is no longer accessible, so this method only lends itself to presenting information that is highly situational.

Another option is to tie the UI to the controllers the user is holding. That way, the user can always access the information when he wants to, although it may interrupt whatever he is doing. Consider cooking a meal from a recipe on an iPad: you can consult the recipe whenever you want, but to do so, you have to stop the actual cooking for a moment. Interesting examples of this method are QuiVR, where the current score screen is on the back of your hand, and turning your hand so the back of it faces you magnifies this score screen, and The Brookhaven Experiment, where your current ammo is projected on the side of your gun.

While these are interesting options for presenting UI, one might also ask the question: should we aim to have as much UI as we do in traditional games? VR is supposed to be more realistic than traditional flat-screen gaming, and a lot of the information presented to the user in traditional games is simply not there in real life. There are no minimaps, health bars, ammo indicators, quest logs, inventories, etc.

With any new technology, concepts from existing technologies will be applied in the beginning, until some pioneers find really new and innovative ways to exploit the capabilities of the new piece of technology. Think about the first websites, which were basically electronic leaflets, or smartphones, which started out primarily as devices for placing phone calls and sending text messages. It will be interesting to see how UIs will develop in Virtual Reality, and whether a couple of years from now there will be a standard way of doing things that nobody has thought of yet today.

Me, I’m sticking to a health bar and quest log that are attached to your controller for now.

Unity is a wonderful program that allows people with very different backgrounds and specializations to contribute to the building of a game. And building a game does indeed require a lot of different skills, which I as a software developer don’t necessarily possess:

Game design, ensuring the game is mechanically challenging, balanced, and fun to play.

Sound design, because sound and music do so much for atmosphere.

Graphic design, creating all those buildings, chairs, plants, skeletons, etc. that the player has to look at.

And yes, also software development, to make sure that things behave as intended.

Building a game by yourself means you need at least some superficial knowledge of all these aspects. Thankfully, Unity also provides an Asset Store, where people with different specializations can offer their work for sale to solo developers like me. This means I get to buy asset packs containing, for instance, a whole lot of 3D models of walls and floors and furniture that I can use to build my level.

This week I spent some time looking at the in-game music. As anyone will realize from watching movies, background music is very important in setting the atmosphere, even if you’re not actively aware of it. It is the same in games, where ideally you’d want the music to shift with the action. Only, in a game the music has to react to what’s happening in real time, so you cannot just compose a piece of music that shifts at predefined moments and play it in the background.

Enter Unity’s Audio Mixer, which allows you to fade audio tracks in and out, much like a real mixer does. Except this one can be controlled from code, so that if specific events happen in-game, we can fade in one track and fade out another.

To be able to fade tracks in and out without the transition being very noticeable, they essentially need to be parts of the same piece of music, and be perfectly in sync. Fortunately, I was able to download some music from the Asset Store that was split up perfectly. The entire piece looks like this:

The piece is made up of four parts, which are provided as separate tracks. Each of these tracks can be looped (there’s no noticeable transition when a track ends and starts playing from the beginning), and all of them are of the exact same length. This means that if we just start playing all four tracks when the game starts, they will stay in sync forever. Switching from one track to another is then a simple matter of fading in the track we want to hear and muting the track that was already playing.
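The switching logic itself is simple enough to sketch outside of Unity. Below is a minimal Python sketch of the idea (not actual Unity Audio Mixer code; in Unity you would drive the mixer’s volumes from a script instead, and the layer names here are made up for illustration): all stems start at the same moment so they stay in sync, and a transition is just a volume ramp on each stem.

```python
FADE_SECONDS = 2.0  # how long a crossfade between stems takes

class MusicLayer:
    """One looping stem with a volume that ramps toward a target."""
    def __init__(self, name):
        self.name = name
        self.volume = 0.0   # 0.0 = muted, 1.0 = full
        self.target = 0.0

    def fade_to(self, target):
        self.target = target

    def update(self, dt):
        # Move volume toward target at a rate that completes in FADE_SECONDS.
        step = dt / FADE_SECONDS
        if self.volume < self.target:
            self.volume = min(self.volume + step, self.target)
        else:
            self.volume = max(self.volume - step, self.target)

class MusicController:
    """All layers conceptually start playing together, so they never drift."""
    def __init__(self, names):
        self.layers = {n: MusicLayer(n) for n in names}

    def switch_to(self, name):
        # Fade the chosen stem in and every other stem out.
        for n, layer in self.layers.items():
            layer.fade_to(1.0 if n == name else 0.0)

    def update(self, dt):
        for layer in self.layers.values():
            layer.update(dt)

# Hypothetical stem names, just for the example.
music = MusicController(["explore", "tension", "combat", "boss"])
music.switch_to("explore")
for _ in range(120):          # simulate 120 frames at 60 fps (2 seconds)
    music.update(1 / 60)
music.switch_to("boss")       # the boss fight starts: crossfade
for _ in range(120):
    music.update(1 / 60)
```

Calling `switch_to` from a game event (say, the boss spawning) is all it takes; because every stem loops at the same length, the fade lands on music that is already in time with what was playing.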

Suddenly the boss fight at the end of the first level feels that much more epic!

With development nearly three months underway, it is time to reflect a bit. Why build a VR game? What kind of game will it be, and why?

Why build a VR game?

There are multiple reasons I’m building my own Virtual Reality game, as ambitious as that may sound. First, it’s a way for me to learn Unity and its endless possibilities. Unless you’re part of a large team with the resources to build your own custom engine, using a program like Unity or the Unreal Engine is imperative. It is to building a game what Word is to writing a document; it takes a lot of work out of your hands.

Second, it is my contribution to making Virtual Reality a success. With the introduction of VR devices like the Oculus Rift, Gear VR, PlayStation VR and the HTC Vive, VR has found its way to the consumer market. The success of a new medium like this depends on the amount, quality and diversity of available content. Therein, however, lies a chicken-and-egg problem: content creators are hesitant to turn to a medium that has not yet been adopted by the greater public; the potential audience is simply too small. Hardware creators take large risks developing hardware that still needs content to become successful. And most users will not see the benefit of investing in this new medium until it is of sufficient quality and enough content is available. Thus, turning VR into a successful medium requires ambitious hardware developers, content creators willing to jump in, and early adopters who are excited by the potential of the new medium.

Dungeon Crawler

Legend of the Shadow Crystals (working title) is going to be a dungeon crawler adventure game revolving around magic and casting spells. A lot of VR action games are variations on one of two concepts: wave shooters and first-person castle defense games. In wave shooters, a player in a static position is attacked by successive waves of enemies and has to use his weapons to defeat them. In castle defense games, it is not the player but a castle that is attacked, and it is up to the player to defeat the enemies before the castle is sacked.

It is no surprise that many of these games have popped up: with tracked controllers, we can let players experience shooting laser guns or a bow and arrow much like it would be in real life. On the other hand, locomotion is still an issue: moving a player avatar in VR while the player himself is standing still can cause motion sickness. Teleporting is a viable alternative, but it is a mechanic quite different from what is standard in non-VR games, and this has to be taken into account when designing the game. Wave shooters and castle defenders incorporate the awesome new possibilities without having to worry about locomotion.

While these games serve well to showcase the possibilities of VR, their replay value is limited. One would rather envision a game that takes advantage of the possibilities of VR to create a compelling and immersive world for the player to explore.

Magic

Virtual Reality allows users to experience things that are impossible to experience in real life, and one thing that speaks to the imagination of almost anyone is magic. Magic is a recurring topic in popular culture, and movies like Harry Potter, Lord of the Rings, and various cartoons give most people some idea of what it would be like to wield arcane powers. What could be more awesome than wielding magic powers in a Virtual Reality game and feeling like the heroes of these movies?

Of course, there is a problem here: magic is not real, so the laws that govern it are up to the imagination of whoever creates a universe in which magic plays a role. Wielding magic usually seems to require some verbal component, some gesturing with the hands, and sometimes wielders seem to draw some form of power from within themselves.

The question then is how to translate this to a Virtual Reality game. To do justice to the possibilities of VR, we cannot just let the user press a bunch of buttons to cast spells; rather, we would have to let the user cast them with motions and his voice. This requires voice and speech recognition, as well as recognizing different motions made by the user and differentiating between them, both difficult problems that are the topic of many a PhD thesis. Rather than focus all my effort on trying to implement this, for now it will be on the back burner, hopefully to be tackled later on.