Microsoft HoloLens: How Real Is It?

Microsoft introduced the HoloLens to the world just over 100 days ago. How real is the device? At Build, Microsoft brought hundreds of devices so people could get a look; however, there were thousands of people attending. As such, only a limited number of attendees got a chance not just to see the HoloLens, but to put one on. I was lucky enough to do both, and to walk through a roughly sixty-minute hands-on session as well.

The State of the Hardware

The devices are real.

Unlike what was presented 100 days ago, the devices brought to Microsoft Build were untethered devices that seemed to be working fully. Simply put, the hardware looked fantastic.

In short, the HoloLens units appeared to be functioning, “ready to go” devices. Having said that, the presentations and hands-on demos were done in an extremely controlled environment. Not only was there security, but no cameras or other devices were allowed near the room where the interactions took place. Additionally, although we got to touch, code for, and try the HoloLens, it was through a very scripted presentation. Even with all of this, there were glitches.

When working with the demo, there were a few setup issues, such as entering an IP address and setting a value outside of the system to align the device with your eyes. I assume these steps are a result of this being a hardware beta, and that easier ways to calibrate and automate the settings will arrive before the product launches. As mentioned, the tether from previous presentations is gone and the device stands on its own. Like a mobile phone, the HoloLens plugs into USB, which both charges the device and allows programs to be deployed from Visual Studio. The device could easily be unplugged and placed on your head for free movement.

Coding for Microsoft HoloLens

How do you code applications for the HoloLens? As Microsoft mentioned, the HoloLens runs Windows 10, so working with it programmatically was very similar to working with mobile phones. You build your program in Visual Studio and then simply deploy it to the device. Because the HoloLens is a Windows 10 device, the application just runs.

As a side note, the only way we saw applications running on the device was by deploying them directly from Visual Studio in “Run without debugging” mode. We never launched an application directly on Windows 10 running on the HoloLens, nor did we see any interaction on the HoloLens outside of the application we ran directly from Visual Studio. I assume that, because it is a functioning Windows 10 device, you’ll be able to interact with the OS and run a variety of stored applications directly from it. This, however, is an assumption, because nothing was shown regarding this and Microsoft was not answering any questions.

The best starting point for a HoloLens application seems to be Unity. Microsoft has been pushing Unity for a while now as a great game development environment, so it is no surprise that building assets and worlds in Unity is the starting point of what we were shown for building HoloLens applications. In Unity, you simply replace the primary camera with a view from the HoloLens, and then you can create your assets and world. You also can create scripts within Unity for the variety of interactions you’d expect on the HoloLens: sound, gestures, collisions, world edges, and more.
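To give a sense of what such a script looks like, here is a minimal sketch of wiring a gesture and a voice command to an action in Unity. The class name is mine, and the namespaces (UnityEngine.VR.WSA.Input, UnityEngine.Windows.Speech) are from the Unity HoloLens preview builds we used, so they may differ in other versions:

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input;      // gesture APIs in the HoloLens preview builds
using UnityEngine.Windows.Speech;    // keyword (voice) recognition

// Drops the hologram this script is attached to, either on an
// air-tap gesture or on the voice command "drop".
public class DropOnCommand : MonoBehaviour
{
    GestureRecognizer gestures;
    KeywordRecognizer keywords;

    void Start()
    {
        // The air-tap gesture releases the object.
        gestures = new GestureRecognizer();
        gestures.TappedEvent += (source, tapCount, headRay) => Drop();
        gestures.StartCapturingGestures();

        // The spoken word "drop" does the same thing.
        keywords = new KeywordRecognizer(new[] { "drop" });
        keywords.OnPhraseRecognized += args => Drop();
        keywords.Start();
    }

    void Drop()
    {
        // Enabling gravity on the Rigidbody lets Unity's physics take over.
        GetComponent<Rigidbody>().useGravity = true;
    }
}
```

Everything here is ordinary Unity scripting; the HoloLens-specific parts are just the gesture and speech recognizers.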

You also can add the ability to detect items from the real world. In the code we were shown, real-world objects were detected as surfaces so that our holographic objects would react to them. In the first example we worked with, the application had a simple origami paper ball that would drop with a gesture or with a voice command. Without collision detection, the ball simply dropped and disappeared at some point; in the app’s world, it would have kept falling like any object created in Unity. The application was then modified so that objects from the real world were detected as surfaces. This allowed the ball to roll off tables (real tables in the room) and then stop when it hit the floor. The coding for this was relatively straightforward and all done within Unity.
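Conceptually, the spatial-mapping step boils down to giving the scanned room geometry colliders so that normal Unity physics applies. A rough sketch, assuming the spatial-mapping component from the Unity HoloLens preview (the component name and namespace may differ across versions):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA;  // spatial mapping shipped with the HoloLens preview builds

// Attached to an empty GameObject, this continuously bakes the scanned
// room geometry into mesh colliders. Once the room has colliders, physics
// objects such as the origami ball roll off real tables and stop on the
// real floor with no additional code.
public class RoomSurfaces : MonoBehaviour
{
    void Start()
    {
        gameObject.AddComponent<SpatialMappingCollider>();
    }
}
```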

Once the program assets, scripts, and overall world were created in Unity, they were simply built into a Visual Studio project. From Visual Studio, they were compiled and deployed. It was about as straightforward as you could hope.

What’s Next for HoloLens

The coding for HoloLens was relatively simple and straightforward. It was done in Unity and appeared similar to what Unity developers have been doing already. The devices functioned, but did have minor glitches. Even though things seemed to go smoothly for most people in the room, the units I used ended up with glitches that caused them to be replaced immediately. Such glitches included holograms not staying where they were put. This seemed to be a hardware issue rather than a software one. The devices seem a bit fragile overall, but that is the trade-off for being lightweight.

Overall, they seemed well on their way to being ready. I wouldn’t be surprised to see them at Christmas, but then again, given the level of complexity in the device, I wouldn’t be surprised if they didn’t arrive until 2016 either. Microsoft wouldn’t answer any questions, nor give any indication of release dates, cost, or other details outside of the specific presentations and code being shown. As such, there is no way to know what is left to do. Questions such as resolution and battery life remain unanswered.

The one thing I do know about the HoloLens is that I want one!

Additional Comments on HoloLens

In talking about the HoloLens and what I saw, a few things have come up in discussion. Following are some additional comments on those points:

The graphic resolution was “game-like.” You can see demos of the graphics in the Microsoft Build keynotes and other presentations online, and the rendered graphics in those demos match what I saw in the device. Microsoft was not answering questions on resolution; however, it is extremely doubtful that the device can do “Halo-level” graphics at this time, let alone real-world rendering.

Perspective was not 360 degrees. Said differently, the holograms are not always in your view, because the screen doesn’t cover your full field of vision. When you turn so that the hologram is in your peripheral vision, it gets cut off. As such, you need to be looking at where you put the hologram to see it.

Perspective and distances done in Unity and with the HoloLens are set to real world measurements. You can set a meter in your application to be a meter in the real world. As such, I could write an application to have a hologram sit two meters in front of me and it will stay two meters in front of me. I also can set a hologram to a specific spot and, because distance is being tracked, it can stay in that spot. In the demos we created, we were able to set a hologram on a table and then walk around that table and around the hologram to see it from all sides.
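The one-to-one mapping of Unity units to real-world meters makes this trivial to code. A minimal sketch (the class name is mine; everything else is standard Unity):

```csharp
using UnityEngine;

// Unity units map one-to-one to real-world meters on the HoloLens, so
// placing a hologram two meters ahead is just a two-meter offset from
// the camera, which tracks the wearer's head.
public class PlaceTwoMetersAhead : MonoBehaviour
{
    void Start()
    {
        Transform head = Camera.main.transform;
        // Positioned once at startup, the hologram stays pinned to this
        // world-space spot; you can then walk around it and view it from
        // all sides, as we did in the table demo.
        transform.position = head.position + head.forward * 2.0f;
    }
}
```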

Coding gestures, voice recognition, collisions, and the like was very straightforward. An existing developer would have no issue with it.

I don’t recall anything being said regarding networking of the devices and sharing holograms; however, these are Windows 10 devices, so that should all be programmable. Sharing worlds seems like a basic concept for the device.

I want to see someone code a Unity world that is a tour of a virtual environment, such as the Star Trek Enterprise. The graphics work would take quite a bit, but it would be very cool to walk around in! The concept of the Star Trek holodeck is feasible, with the only difference being you’d have to be wearing the HoloLens!
