
Hands-on with HoloLens: On the cusp of a revolution

It's an amazing device, and now it needs amazing software to go with it.

Video: Playing Young Conker. Filmed by Esy Casey, produced and edited by Nathan Fitch.

Environmentally friendly

RoboRaid makes good use of the environment. The hotel room I was in had a large column in the middle. I could duck behind this to dodge the bullets being shot at me by the aliens, at the price of not being able to see the aliens behind the column. HoloLens occludes the view of the aliens when they're behind a physical object, and this worked remarkably well, with precise clipping along the edge of the column and no parts of the alien poking out when they shouldn't. This ability not merely to use physical movement, as "room scale" VR applications also do, but to actually take advantage of my surroundings and integrate them into the gaming experience was one of the game's real strengths.
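Microsoft doesn't spell out how its renderer achieves this, but clipping of this kind is classically done with a depth test: render the spatial-mapping mesh of the room into the depth buffer, and any hologram pixel that falls behind real geometry fails the test and is discarded. Here's a minimal sketch of that idea in Python; all the buffer names are illustrative, not anything from the HoloLens API.

```python
# Minimal sketch of depth-based occlusion, the general technique behind
# clipping holograms against real geometry. Assumptions: the headset's
# spatial-mapping mesh has been rendered into a per-pixel depth buffer
# (room_depth), and the hologram has its own color and depth buffers.
# None of these names come from the HoloLens API.
import numpy as np

def composite(hologram_rgb: np.ndarray,
              hologram_depth: np.ndarray,
              world_depth: np.ndarray) -> np.ndarray:
    """Keep a hologram pixel only where it is nearer than real geometry."""
    visible = hologram_depth < world_depth          # per-pixel depth test
    return np.where(visible[..., None], hologram_rgb, 0)

# Toy 2x2 frame: the alien sits at depth 2.0 meters; the left column of
# pixels is covered by a real column at 1.5 m, the right by a wall at 4.0 m.
alien_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)
alien_depth = np.full((2, 2), 2.0)
room_depth = np.array([[1.5, 4.0],
                       [1.5, 4.0]])
print(composite(alien_rgb, alien_depth, room_depth)[..., 0])
# left pixels occluded (0), right pixels visible (255)
```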

Sound is a big part of this environmental integration, too; the speakers in the visor offer some of the most dramatic positional audio I've ever heard.

Young Conker was a little more awkward, and while I think it could be the kernel of a fun game, it also highlighted some of the limitations of the ways you can control HoloLens. The squirrel Conker runs around the room to pick up coins and collect other things that are scattered about the place. In this regard, the game also made good use of the physical environment; he could jump onto sofas and window sills, collectibles could hide behind tables, and he'd bang into walls. But to control him, I didn't have a gamepad or anything like that; instead, he ran toward wherever I was gazing. I could also give him voice commands: saying "hurry up" made him burst forward, which was handy for catching the objects I was chasing.
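Mechanically, gaze steering like this boils down to casting a ray from the head pose, intersecting it with the floor, and walking the character toward the hit point. A rough sketch of that idea follows; every name in it is hypothetical rather than anything from the Young Conker or HoloLens code.

```python
# Rough sketch of gaze-directed steering: intersect the user's gaze ray
# with the floor plane and walk the character toward the hit point.
# All names are hypothetical; this is not the actual game's code.
import numpy as np

def gaze_floor_hit(head_pos: np.ndarray, gaze_dir: np.ndarray,
                   floor_y: float = 0.0):
    """Intersect the gaze ray with the horizontal plane y == floor_y."""
    if gaze_dir[1] >= 0.0:                  # looking level or up: no hit
        return None
    s = (floor_y - head_pos[1]) / gaze_dir[1]
    return head_pos + s * gaze_dir

def step_toward(pos: np.ndarray, target: np.ndarray,
                speed: float, dt: float) -> np.ndarray:
    """Move at most speed*dt toward the target, without overshooting."""
    delta = target - pos
    dist = float(np.linalg.norm(delta))
    if dist < 1e-6:
        return pos
    return pos + delta * (min(speed * dt, dist) / dist)

head = np.array([0.0, 1.7, 0.0])            # eye height in meters
gaze = np.array([0.6, -0.5, 0.6])           # looking down and ahead
gaze /= np.linalg.norm(gaze)
target = gaze_floor_hit(head, gaze)
conker = step_toward(np.zeros(3), target, speed=1.5, dt=0.016)
print(target, conker)
```

In a scheme like this, a voice command such as "hurry up" would amount to a temporary multiplier on the speed, which is one reason its timing is harder to judge than a button press would be.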

HoloLens works with mice, keyboards, gamepads, and other input peripherals, and while the gaze, gesture, and voice controls all worked well, in the Conker game I would have liked something a little more precise. For example, I found it hard to say "hurry up" at exactly the right time to pounce on an object. A straightforward button would have been much easier.

My understanding is that there will be some limitations here, at least initially. The gestures that HoloLens supports are handled by the HPU, the headset's dedicated holographic processing unit. This makes them very consistent, since applications don't have to do any recognition themselves, but it also means that apps don't seem to be able to track my hands and fingers to offer their own gestures.
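To make that trade-off concrete, here is a toy model of system-level gesture recognition; every name is invented for illustration and isn't the HoloLens SDK. An app can subscribe to the fixed gesture vocabulary, but the raw hand-joint data a custom gesture would need never reaches it.

```python
# Toy model of system-level gesture recognition, as described above.
# All names are invented for illustration; this is not the HoloLens SDK.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class GestureEvent:
    kind: str          # one of a fixed, system-defined vocabulary
    gaze_target: str   # what the user was looking at when it fired

class SystemGestures:
    """Apps subscribe to recognized gestures; recognition happens below
    them (on the HPU), so raw hand-joint data never reaches the app."""
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[GestureEvent], None]]] = {}

    def subscribe(self, kind: str,
                  handler: Callable[[GestureEvent], None]) -> None:
        if kind not in ("air_tap", "bloom"):     # fixed vocabulary
            raise ValueError(f"unsupported gesture: {kind}")
        self._handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event: GestureEvent) -> None:
        for handler in self._handlers.get(event.kind, []):
            handler(event)

gestures = SystemGestures()
gestures.subscribe("air_tap", lambda e: print(f"selected {e.gaze_target}"))
gestures.dispatch(GestureEvent("air_tap", "rotate tool"))  # simulated input
try:
    gestures.subscribe("knob_turn", lambda e: None)  # a custom gesture...
except ValueError as err:
    print(err)   # ...is simply not expressible in this model
```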

For example, in the 3D model-building HoloStudio app, in which you can build 3D scenes and send them to be 3D printed, common tasks include grabbing objects to move, rotate, and resize them. Currently, you do this either by air tapping the respective move, rotate, or resize tool in the toolbox or by issuing equivalent voice commands to enter move, rotate, or resize mode. I feel that this is crying out for something more, such as holding my hand out to turn a virtual knob to rotate things or using two hands to grab the corners of something to stretch it or crush it to a new size. After all, HoloLens leaves both my hands free, so why not?

Better together

HoloLens impresses immensely when used alone, but what really took it to the next level was using it collaboratively. I did a second development session similar to the one I did last year, but with some added extras. Specifically, the toy apps we all built were network connected: each group of six HoloLens users had their units connect to a server that relayed information between us all. With this, we could all see shared 3D objects.

This seems very clever. The HoloLens units have no system of absolute positioning; they don't know where they are relative to each other. All they know is what the space around them looks like, thanks to their Kinect-like spatial mapping, and how they're being moved around, thanks to their accelerometers. The server application took this spatial data and integrated it so that it could figure out where within the room each HoloLens actually was relative to the others. This enabled two things: first, a 3D object could be placed within our shared space, and we'd all see it in the same location. Second, we each had a little robot flying above our heads and following us around. We could see each other's robots and even throw things at them to knock them out and make them see stars.
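Microsoft hasn't said exactly how the server performs that integration, but the core geometric problem is well known: given landmarks that two headsets both observe, each expressed in its own local frame, estimate the rigid transform (a rotation plus a translation) relating the two frames. The sketch below works through that step with the Kabsch algorithm; treat it as an assumed approach with illustrative names, not Microsoft's implementation.

```python
# Sketch of the coordinate-alignment problem the server has to solve.
# Assumption (not confirmed by Microsoft): each headset reports the
# positions of landmarks all headsets can see, expressed in its own local
# frame; the rigid transform (rotation R, translation t) between two
# frames can then be estimated with the Kabsch algorithm.
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares R, t such that R @ src[i] + t ~= dst[i]."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

# Simulated: the same four landmarks as seen by headsets A and B, whose
# frames differ by a 45-degree turn and an offset across the room.
landmarks_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
theta = np.pi / 4
turn = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]])
landmarks_b = landmarks_a @ turn.T + np.array([2.0, -1.0, 0.5])

R, t = rigid_transform(landmarks_a, landmarks_b)
# Place a hologram in A's frame; map it into B's frame so both headsets
# render it at the same physical spot.
hologram_a = np.array([0.5, 0.5, 0.0])
print(R @ hologram_a + t)
```

Once every headset's frame can be mapped into a common one, placing a hologram "in the same spot" for everyone is just a matter of applying the right transform before rendering.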

This usage, for me, was transformational. As much as I like the single-user experience, it felt a little strange; I could see and respond to these things, but nobody in the room with me had any idea what I was looking at. This is nowhere near as alienating as VR feels to me (AR is still very much grounded in reality: you can make eye contact with the people you're talking to, and you remain in the here and now), but it still feels slightly awkward.

That awkwardness completely evaporated with the collaboration. The dynamic almost flipped around entirely; the Microsoft staff supervising the development session were the odd ones out, because they couldn't see or interact with these objects that seemed so very real to the rest of us.