Spent a little time working with a large collection of comics and illustrations in VR, and this was the result. My primary interest is working with large bodies of material and quickly finding a particular point of interest by looking for it visually in a 3D space.

Tuesday, 9 September 2014

I imagine that those who find themselves at the top of the chain at Oculus have their good days and their bad days.

On a good day, it is easy to think that they must live a pretty thrilling existence. They are leading a team and running a company that is currently in a dazzling spotlight, working on technology that holds the promise to change a great deal about the way we interact with computers and the world itself. Sitting down to be interviewed by Wired, you undoubtedly know that you've "arrived". There must be joy in that.

That would be on a good day.

The good days are likely accompanied by some rather bad days. Mornings at 3 AM, unable to sleep and filled with worry.

Since 2007 and Apple's yearly hardware refresh cycle, a new expectation has been placed on hardware companies. You aren't just expected to improve incrementally anymore; there is an expectation that a "hot" hardware company can deliver earth-shattering innovation like clockwork. You need to surprise and delight. Over and over again.

How often? Once a year, at the very least. If you are Samsung, you're pushing for a six-month cycle. Sustainable? No. Not a chance in hell, but that's where the industry is right now.

So, back to bed. Imagine you are running Oculus and wake up one day to find yourself bound to this heavy train of expectation, hurtling down the track as your "year of innovation grace period" speeds on by. You look over your prototypes, all very reasonable devices made of glass, plastic, code, sweat and ingenuity. Is it enough?

Of course not. It never will be.

Somehow, God help you, you need to give the public what they've been told to expect from virtual reality. I'd use Snow Crash, Star Trek and The Matrix as the low-hanging, popular-culture examples. All of them offer a VR experience that is every bit as good as reality in terms of fidelity and freedom. No matter what Oculus comes up with, there will always be a long list of gotchas surrounding each iteration:

I can look around but it doesn't look real

Ok, it looks real but I can't touch anything

Ok, I can touch things but everything feels the same

Ok, things feel real but nothing can touch me

Ok, things look real, I can touch things fine and be touched, which is great, but what about smell and taste? When's the update?

I imagine that over time and trial, Oculus will become less and less likely to say very much about anything, out of fear of fueling expectations. Who could blame them for trying to protect their sanity?

VR development resembles the exploration of outer space: a challenge that threatens to humble even the most formidable minds among us. You are faced with a task of unlimited scope and as much complexity as you are willing to bite off and chew.

So how do you fall back asleep? How do you rest comfortably when burdened by such an undertaking?

Perhaps it could be in the thought that this is a long, immense and shared journey that we've embarked on. One that will certainly be littered with failed experiments, dead ends and endless delays. It will also be filled with some exceedingly beautiful, awe-filled moments that will constantly remind us all of why we are doing this despite the struggle and stress.

If God built the earth in 7 days, can we not afford to give Oculus a few years to build a universe?

If anyone wants to say hi, I can be reached on Twitter at @ID_R_McGregor, or you can link to me via Google+.

If you are heading to Oculus Connect, I want to meet you! Drop me a line here or on Twitter! Email also works: id.r.mcgregor@gmail.com.

Monday, 8 September 2014

"If your simulation is trying to deal with an unrestrained, tracked humanoid you are going to lose, each and every time."

When I was very young, I read a story by Willard Price that featured a scene in which the main characters try to eat lunch while sitting at the bottom of the Pacific Ocean. They learn how difficult even simple tasks are to perform when you're operating in an alien environment:

"Hal and Skink followed his example, and the process was repeated until all the sausages were gone. But there still remained the puzzling problem of how to drink a bottle of Coca-Cola ten fathoms beneath the sea.

When Dr Blake prised off the cap of his bottle a strange thing happened. Since the pressure outside was so much greater than that inside the bottle, sea water immediately entered and compressed the contents. But a little sea water did no harm, and Dr Blake pressed the mouth of the bottle to his lips.

By breathing out into the bottle he displaced the contents which thereupon flowed into his mouth. He drained the bottle. When he took it from his lips the sea water filled it with a sudden thud. Hal and Skink faithfully followed the same procedure."

I am often reminded of this lately when reviewing the current state of VR development. What a tremendous challenge we've chosen to pursue. It makes an underwater meal look very easy in comparison.

Thanks to John Carmack, Michael Abrash, Palmer Luckey, Valve, Oculus and a great many others, I feel that we are well on our way toward where we need to go on the visual side of the equation. We've gathered the steel and forged a sword that cuts - now all that remains is to sharpen the blade.

Meanwhile, many are turning their attention to giving us our hands and feet in virtual space. I'm a little less optimistic about this side of things at the moment.

Here's the simple problem: if you developed an excellent 1:1, low-latency tracking system for hands, imagine how potentially unsatisfying it would be to interact with a one-foot virtual cube. Your real hands, unrestrained by the rules of the virtual world, cannot be forced to stay one foot apart while clutching the sides of the box. This means that your simulated hands will:

Pass through the box. Unpleasant.

Stop being simulated: appear to halt at the surface while your real hand keeps moving. Most unpleasant.

Be subjected to some kind of trickery: perhaps you could simulate the offending hand passing over or under the box as a compromise between the virtual-world simulation and the real-world movement. Perhaps borrowing an idea or two from here, but on a much smaller scale.

I don't feel that any of these options holds a lot of promise; nothing that would feel particularly satisfying, at any rate. You also need to resign yourself to the knowledge that the end user will purposely try to poke holes in your simulation. They won't work with you; they will pick your world apart given the chance.
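To make the second failure mode concrete, here's a minimal sketch of what "the virtual hand stops at the surface" might look like: if the tracked hand penetrates an axis-aligned box, the displayed hand gets projected out to the nearest face. All names here are hypothetical, not from any real SDK.

```python
def clamp_hand_to_box(hand, box_min, box_max):
    """Return the position to display for a tracked hand.

    If the real hand has penetrated the axis-aligned box, project the
    displayed hand out to the nearest face, so it appears to stop at
    the surface while the real hand keeps moving.
    """
    inside = all(lo < h < hi for h, lo, hi in zip(hand, box_min, box_max))
    if not inside:
        return hand  # outside the box: show the hand 1:1

    # Find the face with the smallest penetration depth and snap to it.
    best_axis, best_dist, best_value = None, float("inf"), None
    for axis, (h, lo, hi) in enumerate(zip(hand, box_min, box_max)):
        for face in (lo, hi):
            d = abs(h - face)
            if d < best_dist:
                best_axis, best_dist, best_value = axis, d, face
    displayed = list(hand)
    displayed[best_axis] = best_value
    return tuple(displayed)
```

The unpleasantness the list above describes is exactly the gap this function creates: the displayed hand and the user's proprioception now disagree, and no amount of tuning removes that mismatch.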

If your simulation is trying to deal with an unrestrained, tracked humanoid you are going to lose, each and every time.

So, what to do?

Let's go back to the example of having lunch under the sea for a moment. Humans underwater face a number of challenges:

They must carry and use breathing apparatus to stay underwater: heavy compressed-air tanks on their backs and a regulator in their mouths. This means they cannot talk.

Human eyes see poorly when exposed directly to water. They must wear a face mask at all times in order to focus on the world around them. They view the world through glass.

The human body is buoyant in water; divers feel lighter than they do on land. They cannot readily stand without a weighted belt, and they cannot readily walk due to the water surrounding them. They use flippers to help them move at a decent pace from place to place.

and so on...

Each one of these tools represents a compromise that we've made to operate underwater. We accept that you can't take a stroll underwater, so we adapted: we learned from the fish and adopted fins. Fins are great; they make a lot of sense underwater and allow us to move in a way that works with, not against, our surroundings.

I think, for the present, a similar tack needs to be taken for VR. Lean heavily on its strengths (of which there are many...) and choose your battles wisely in terms of which aspects of reality you are trying to simulate.

As DK2 units have rolled in, we've seen a wave of users equipping their workstations with flight sticks and steering wheels. This is interesting. The last time I saw joysticks this popular was about 20 years ago.

There's a good reason for this: these devices can be represented 1:1 in the VR world. You turn the wheel, and the wheel in VR moves; you pull back on the throttle, and your virtual throttle responds in kind - so much the better if these virtual controls are attached to your virtual hands. There's no cheating here. The wheel feels solid in your hands and it moves as it should in VR. This is inherently very satisfying to the user.
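The 1:1 mapping above is trivially simple, and that is the point: the virtual control just mirrors the physical sensor every frame, so vision and proprioception always agree. A minimal sketch, with entirely hypothetical names:

```python
def sync_virtual_controls(wheel_sensor_deg, throttle_sensor):
    """Drive virtual cockpit controls directly from physical hardware.

    There is nothing to fake: the physical prop really exists, so the
    virtual copy just mirrors its measured state, frame after frame.
    """
    return {
        "wheel_deg": wheel_sensor_deg,                    # same rotation, same frame
        "throttle": max(0.0, min(1.0, throttle_sensor)),  # clamp noise to valid range
    }
```

Contrast this with the hand-tracking case: here the hardware itself enforces the constraints, so the simulation never has to lie.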

Which of course, very quickly brings me to this:

This is the Powerloader from Aliens, as I expect any visitor to the blog to know, and I think it makes a very good target for a credible VR experience if you absolutely insist on attempting to track limbs. Use 3-axis motion sensing to track arm and leg movement, but DON'T attempt to simulate the user's limbs directly interacting with the environment. You need a layer of abstraction between your world and the user's movements.

If they try to move their arm through the floor, they are met with the solid CLANG of the metal arm colliding with the floor. The user will understand and they won't feel cheated; it will feel very real. They are restrained by the limits of the device they are simulated as controlling. You could still feel VERY free as a user of this simulation, but the designer has a "sanity check" when constraining the limits of what the user can do with their limbs.
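The machine-as-abstraction-layer idea can be sketched in a few lines: the user's tracked limb drives the machine's arm, and when the input exceeds the machine's joint limits, the arm simply clamps there and a collision cue (the CLANG) fires. This is an illustrative sketch with made-up names, not code from any real engine.

```python
def drive_machine_arm(tracked_angle, min_angle, max_angle):
    """Map the user's tracked arm angle onto the machine's arm.

    The simulation renders the machine, not the user's body, so a
    tracked angle beyond the joint limit just clamps the arm in place
    and triggers a collision cue instead of breaking the illusion.
    Returns (machine_angle, play_collision_cue).
    """
    if tracked_angle < min_angle:
        return min_angle, True
    if tracked_angle > max_angle:
        return max_angle, True
    return tracked_angle, False
```

The design choice is that the mismatch between real and virtual limb is now *diegetic*: the machine, not the simulation, is what refused to move.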

The user can still work out plenty of ways to attempt to violate the position of their arms with what they are seeing in the simulation, but in the user's mind, the reason it does not work is a limit of the machine they are controlling rather than a fault in the reality of the simulated world they are trying to believe in. You might want to reread that last sentence, because it really sums up what I'm trying to get across here:

As a developer, give yourself a break and build some constraints into your world that fit the narrative of the environment. One of the last things you want to do is simulate unbridled reality. Aim to simulate a tiny slice, and then do it very, very well.

VR users always need a layer of abstraction between them and the environment. Allow them to manipulate a mechanism, be it a car, tank or exoskeleton, but don't dare let them try to interact with the world 1:1 with their own limbs without some kind of mechanism between them and the world.

You are no longer developers; you are magicians. You need to get your audience to suspend their disbelief, and like any good magician, you do this by carefully limiting what they can see and do at all times.

If anyone wants to say hi, I can be reached on Twitter at @ID_R_McGregor, or you can link to me on Google+ if you happen to actually have a Google+ account that you surprisingly use for Google+ things.