Hands-on with HoloLens: On the cusp of a revolution

It's an amazing device, and now it needs amazing software to go with it.

The HoloLens in action. Filmed by Esy Casey, produced and edited by Nathan Fitch

Since it was first revealed in Redmond at the start of last year, I've used HoloLens a couple more times. I did a development session that walked through building a simple app for the headset using Unity, and I've also seen Microsoft's touring HoloLens experience that it was offering developers last year. Each time, I was subject to tight controls: all usage of the device was strictly supervised, and I wasn't permitted to take pictures or video of the device, or even take screenshots of the images it produces.

With the HoloLens Developer Edition starting to make its way into developers' hands, Microsoft has at least eased up on the restrictions, and earlier this week I was able to use the device for a couple of hours on my own: not in a tightly scripted, controlled environment, but without any supervision at all. And for the first time, we were allowed to take pictures and video when we did.

The hardware itself hasn't changed a whole lot in the fifteen months since Microsoft unleashed it on the world. Even at that announcement, the company had a model of how the first production units would look (though the prototypes we played with were altogether more bulky and uncomfortable), and when I used the hardware a second and third time, it more or less matched that design. An adjustable band fits around your head, holding the thing in place, allowing the working parts to be adjusted up and down and back and forward to find a comfortable position.

Practice makes perfect

Getting it optimally attached to your head takes a bit of practice. It's tempting to put the adjustable band too low (it should sit on your hairline, more or less) or to rest the visor part on your nose (there are nose-pieces available for the HoloLens, but the 579g/1.28lb weight is more than you want to perch on your conk). Once you have it properly situated, though, it's... well, it's not uncomfortable. It isn't tight, it doesn't pinch or rub anywhere, and nothing digs in. But you do start to feel the weight after extended usage, and I got quite sweaty under the band.

When it's on, though, and you have it set up correctly, the images it produces continue to be astounding. You'll need to use it in a room that's not too brightly lit; against reasonably bright San Francisco sunlight, the images were barely perceptible. Bright scenes also tend to look better than dark ones: the display can only add light, so it can't produce anything darker than whatever happens to be behind its images. Subject to these constraints, the result is spectacular. The 3D imagery it produces looks solid and convincing, and critically, it stays exactly where it should.

Augmented reality (AR), or as Microsoft calls it, mixed reality, blends computer-generated imagery with the real world, and that poses a problem that virtual reality (VR) doesn't have to care about: objects that are positioned in the real world need to maintain their relative position in the real world even as you move around. Keeping them anchored in place is essential to making them seem real.

At this task, the HoloLens is a triumph. The room I was in had various objects pinned to the walls and sitting on tables, and they remained solidly in place throughout my testing. A key part of this is latency; HoloLens apps have to produce 60 frames per second (though this drops to 30fps when recording on the device) to ensure that the application can keep its 3D scene updated to show the right things in the right places.
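That anchoring comes down to a per-frame coordinate transform: each hologram keeps a fixed position in world space, and the app recomputes where that position falls relative to the headset on every frame, so the object appears to stay put as you move. Here's a minimal illustrative sketch (2D, yaw-only, and nothing HoloLens-specific; the function name and geometry are my own invention):

```python
import math

def world_to_view(p_world, head_pos, head_yaw):
    """Transform a world-anchored point into head-relative (view) coordinates.

    A hologram pinned to the wall keeps a fixed p_world; every frame the
    app re-derives where that point falls relative to the headset, which
    is why the image stays put as you move. (Simplified 2D, yaw-only.)
    """
    dx = p_world[0] - head_pos[0]
    dz = p_world[1] - head_pos[1]
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    # Rotate the offset into the head's frame of reference.
    return (c * dx - s * dz, s * dx + c * dz)

# A hologram anchored 2m in front of the starting position.
anchor = (0.0, 2.0)

# Frame 1: head at the origin, facing the anchor; it appears dead ahead.
print(world_to_view(anchor, (0.0, 0.0), 0.0))  # (0.0, 2.0)

# Frame 2: step 1m to the right; the anchor now appears off to the left,
# exactly as a real object would.
print(world_to_view(anchor, (1.0, 0.0), 0.0))  # (-1.0, 2.0)
```

The hologram's stored coordinates never change; only the view-space result does, once per rendered frame.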

But the hardware goes further. The mysterious holographic processing unit (HPU), a custom chip in each device that fuses accelerometer and spatial data from the device's sensors to track the world around you, minutely adjusts the display output at 240 frames per second to ensure that the 3D objects are drawn exactly where they should be.
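One way to picture what that late-stage correction does: the frame was rendered for the head pose the app saw at render time, and just before display the image is nudged by however far the head has actually turned since. This is a toy model only; the real HPU pipeline is proprietary, and the small-angle pixel-shift formula here is an invented approximation:

```python
def reproject_shift(yaw_at_render, yaw_now, fov_rad, image_width_px):
    """How many pixels to shift an already-rendered frame so it lines up
    with the *current* head orientation.

    Toy stand-in for late-stage display correction: the app renders at
    60fps, but the display can be adjusted far more often (240 times a
    second on HoloLens) using the freshest sensor reading. Small-angle
    approximation; invented for illustration.
    """
    delta = yaw_now - yaw_at_render          # how far the head has turned
    return delta / fov_rad * image_width_px  # radians -> pixels

# Head turned 0.01 rad since the frame was rendered; with a 0.5 rad
# horizontal field of view across a 1280px-wide image, nudge ~25.6px.
print(reproject_shift(0.0, 0.01, 0.5, 1280))
```

The point of doing this on dedicated silicon is latency: the adjustment uses sensor data fresher than anything the 60fps render loop could have seen.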

And it really works.

New ways to interact

The HoloLens I was using was the Developer Edition. It runs Windows 10, with a similar sort of tile-based program launcher, which looks familiar but immediately presents you with a twist. HoloLens runs 2D applications, too, but they have to be put somewhere. It's like desktop window management, except this time the windows are 2D panes in 3D space, so you have to put them down where you want them before you can actually use them.

Because this was a production unit, the software experience has some additional polish and features that I've not seen before. There's a tutorial that walks you through the basic gestures and explains how to navigate, and it had some new parts. In past uses of the HoloLens we were taught the "air tap" gesture, where you fold your outstretched index finger as if pressing down on an invisible button. A couple of other gestures are now possible: a pinch and hold, for scrolling, and one that Microsoft calls "bloom": start with all five fingertips brought together, then open your hand, like petals unfurling from a flower. This gesture instantly quits the app you're in and returns you to the Start menu.

Navigation, to move the cursor around, continues to be based on "gaze," or more specifically the direction your face is pointing (there's no eye tracking, so it isn't really your gaze direction, just some approximation of it).
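In other words, the cursor behaves like a ray cast from the head along its forward direction until it hits something. A simplified sketch of that geometry follows, for a flat wall directly in front of the user; the coordinate conventions are my own assumptions, not the actual HoloLens API:

```python
import math

def gaze_cursor(head_pos, pitch, yaw, wall_z):
    """Where a head-direction 'gaze' ray lands on a wall at depth wall_z.

    With no eye tracking, the cursor follows the head's forward vector:
    a ray from the head's position, intersected with whatever surface is
    in front of it. (Invented geometry for illustration: +z is forward,
    and the wall is the plane z = wall_z.)
    """
    # Forward vector from head orientation (yaw about the vertical axis,
    # pitch up/down).
    fx = math.cos(pitch) * math.sin(yaw)
    fy = math.sin(pitch)
    fz = math.cos(pitch) * math.cos(yaw)
    t = (wall_z - head_pos[2]) / fz  # ray parameter where it meets the wall
    return (head_pos[0] + t * fx, head_pos[1] + t * fy)

# Looking straight ahead from the origin, the cursor sits dead center
# on a wall two meters away.
print(gaze_cursor((0.0, 0.0, 0.0), 0.0, 0.0, 2.0))  # (0.0, 0.0)
```

Because the ray originates at the head rather than the eyes, the cursor tracks where your face points, which is exactly the approximation described above.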

Also new is a calibration app that adjusts the device for your interpupillary distance. To optimize the 3D imaging, the device needs to know exactly how far apart your eyes are. Microsoft has always said that it will include some built-in way of doing this, but for the sake of expediency, past demos have measured the distance using a pupilometer (the same kind of device used when getting a pair of glasses made) and dialled the number directly into the software. This time around, with a bit more time to play with, I finally got to use the built-in feature: an app that makes you look first with one eye and then the other, and align your finger with some targets.
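Why do a few millimetres matter? The stereo pair is rendered from two virtual cameras separated by exactly that interpupillary distance, and the angle between the two lines of sight to an object is the depth cue your brain reads. A schematic sketch, where the axis conventions and the ~64mm example value are my own assumptions rather than Microsoft's numbers:

```python
import math

def eye_positions(head_pos, ipd_m):
    """Place the two virtual eye cameras ipd_m apart, centered on the
    head. If the configured IPD is wrong, the rendered stereo pair has
    the wrong separation and the 3D effect degrades. (Simplified: eyes
    lie along the x axis.)"""
    half = ipd_m / 2.0
    left = (head_pos[0] - half, head_pos[1], head_pos[2])
    right = (head_pos[0] + half, head_pos[1], head_pos[2])
    return left, right

def vergence_angle(ipd_m, depth_m):
    """Angle between the two eyes' lines of sight to a point straight
    ahead at depth_m -- larger for near objects, smaller for far ones."""
    return 2.0 * math.atan((ipd_m / 2.0) / depth_m)

# With a typical ~64mm IPD, a nearby hologram subtends a much larger
# vergence angle than a distant one.
print(vergence_angle(0.064, 0.5) > vergence_angle(0.064, 5.0))  # True
```

Measuring your IPD, whether with a pupilometer or the built-in app, is just a way of getting the right separation into that rendering setup.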

I used a handful of applications—watching some video, poking around Settings, browsing in Edge—and two 3D games, Robo Raid (formerly "Project X-Ray") and Young Conker.

Robo Raid is, I think, my favorite of the games I've played on the device. The first step (and, indeed, the first step in many of the gaming experiences) is to scan the room you're in so that the device can build a model of where you are and what obstacles and objects are around you. With this done, a range of robotic aliens punch holes through the wall and either crawl along the walls or fly around the room, and you have to shoot at them. The Clicker peripheral was useful here; it's a little Bluetooth button that performs an air tap each time you click it. The game shows off the 3D very well: you can go up to the holes in the walls and peer inside, and it's a joy every time. It looks like there really is a hole!