Looking to the future of mixed reality (Part II)

Thoughts about design. Part I of this series summarized the current and future issues facing mixed reality and its adoption in the mainstream. In this second part, we explore the design challenges that this new and promising medium poses for developers.

Transition phase. We expect that in the near future physical screens will begin to disappear, replaced by displays that blend into the environment. Whether it happens in 5 years, as we predict, or in 10 years, as Mark Zuckerberg thinks, the next vector driving the digital world will be augmented reality glasses. That said, there will be a transition period, both technological and social, and for social reasons we need to design this next phase so that users and non-users can coexist.

The social awkwardness of wearing MR glasses will most likely confine initial use to the home, the workplace, or locations where all users are required to wear them (theme parks, museums, stadiums, planes, etc.). Form factor will be a key tool for mitigating that awkwardness and driving the speed of adoption: the glasses will need to be as stylish and thin as possible for users to accept them into their daily routine. In the short term, we hypothesize that the first models will complement smartphones, mostly to decentralize the computational burden, improve battery life, and minimize the impact of change on the user. For example, a smartphone could act as a trackpad for the glasses in a workplace situation, then revert to being a smartphone when no longer needed or when the glasses' batteries die.

We need to think ‘Human centric’. What does the user really need? This seems like an obvious question to ask; however, much current MR development consists of demos whose sole purpose seems to be presenting floating objects of little actual value to the user. Let’s be honest: no one wants to live in the dystopian future that Keiichi Matsuda warned us about so brilliantly in his short film “Hyper-Reality”. And while MR apps need to become more relevant to the user, mixed reality doesn’t need a ‘killer app’. For comparison, think of the internet: there is no single killer app; the internet itself is the killer app. And just like the internet, it will be a multitude of everyday uses that render MR devices indispensable.

Today, making a useful mixed reality application is clearly a challenge because of the form factor and the limitations of current technology. But in the not-so-distant future, users’ hands will be free, and the glasses will know what the user is looking at, what object they are touching, how they are handling it, and what they want to do with it. So our group is looking into the design considerations that will matter once all of this is established.

Observing and understanding the user will allow us to develop proactive scenarios triggered by the user’s intention, in order to make their life easier, preferably without changing their habits. Sensing technology, like eye tracking, will play a big role in understanding the user’s intent, but design will be even more important in minimizing complexity and avoiding disconnecting people from the real world. Smartphones already disconnect people from the real world and distract them to the point of becoming dangerous; imagine what mixed reality might do if not properly designed. It will be crucial for safety and usability that everyone can experience their own mixed reality while remaining connected to the tangible world around them.

We also need to think ‘Object centric’. Although there are multiple ways of imagining MR, we should focus primarily on the awakening of matter: objects in our surroundings augment themselves to communicate with us.

We are not Tom Cruise! Standing upright and making big gestures is the opposite of our everyday interaction habits. The mouse-keyboard combo has yet to be replaced because it is the most efficient way to execute the maximum number of actions with minimum movement from a seated position. The question becomes: how do we interact with digital matter if input devices such as the keyboard, mouse, and gamepad disappear? The “world as a device” approach not only creates an entry point to contextual services, it also transforms each object into an interactive controller. Physical objects will inherit the attributes, interactions, and behaviors of a virtual replica, a 3D asset that we call a smart object.

For example, a simple business card could both give direct access to an application like LinkedIn and serve as a user interface, sparing the user from taking out their smartphone or using a computer.

Concept video of a business card contextual application.
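The smart object idea can be sketched as a simple behavior registry: a physical object recognized by the MR runtime inherits the behaviors attached to its virtual replica. This is a hypothetical illustration in Python rather than Unity C#; all names (`SmartObject`, `on`, `trigger`) are ours, not an existing API.

```python
# Hypothetical sketch of a "smart object": a virtual replica whose
# attributes and behaviors a recognized physical object inherits.
# All names here are illustrative, not an existing API.

class SmartObject:
    """Virtual replica paired with a physical object."""

    def __init__(self, name):
        self.name = name
        self.behaviors = {}  # gesture name -> action callback

    def on(self, gesture, action):
        """Attach a behavior, e.g. 'tap' -> open a contact profile."""
        self.behaviors[gesture] = action

    def trigger(self, gesture):
        """Called by the MR runtime when the user performs a gesture
        on the matching physical object."""
        action = self.behaviors.get(gesture)
        return action() if action else None


# Usage: a business card that opens a profile when tapped,
# with no smartphone or computer involved.
card = SmartObject("business_card")
card.on("tap", lambda: "open LinkedIn profile")
print(card.trigger("tap"))  # -> open LinkedIn profile
```

The point of the sketch is that the interaction logic lives entirely on the replica; the physical card only needs to be recognized and tracked.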

By being user and object centric, we could accomplish what David Smith tells us makes the traditional mouse and keyboard so successful: intention amplification, where small movements and gestures result in large, significant actions. This magnifies our efforts and lowers fatigue, allowing us to focus on the objects themselves rather than on the movements we use to manipulate them. By centering design around a user’s intent, and by using intelligent objects that respond to that intent by modifying themselves, our thought process is no longer limited by physical laws; we gain a new freedom. And we don’t “pollute” reality with noisy floating gizmos or unnecessary UI.

The world as a playground. With these principles of interaction, it will be very easy to gamify our environment. To gamify any particular object, it would be sufficient to modify its 3D replica within Unity, which would then propagate the game behaviors to the real-world object. Imagine gamifying the world on both a smaller, personal scale and a larger, more public scale; something we refer to as micro- and macro-interactions respectively:

For micro-interactions:

Instead of using a pen on the back of a cereal box to trace the path of a character through a labyrinth, we could use the box itself and have the user pitch and yaw the box to create movement.

Playing board games can be transformed by adding interactions to both the board and cards.

For macro-interactions:

Imagine an Escape Room where tipping a specific book in the library triggers the opening of a dimensional portal in the middle of the room.

Or touching water from a real life fountain to regain life points during life-size role-playing games.
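The cereal-box micro-interaction above comes down to a tiny piece of mapping logic. Here is a minimal sketch in Python (in practice this would live in a Unity script); the function name is hypothetical, and where the post says “pitch and yaw,” we use pitch and roll, the natural axes for a flat tilt maze:

```python
import math

def tilt_to_motion(pitch_deg, roll_deg, speed=1.0):
    """Map the physical box's tilt to a 2D motion vector for the maze
    character: tilting forward/back (pitch) moves it along y, tilting
    sideways (roll) moves it along x, like a classic tilt maze."""
    dx = speed * math.sin(math.radians(roll_deg))
    dy = speed * math.sin(math.radians(pitch_deg))
    return dx, dy

# A level box produces no motion; a 30-degree sideways tilt
# nudges the character along x.
print(tilt_to_motion(0.0, 0.0))   # -> (0.0, 0.0)
print(tilt_to_motion(0.0, 30.0))  # x component ~0.5
```

The interesting part is what is *not* here: no screen, no gamepad; the tracked pose of an ordinary object is the whole input device.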

Illusion and disillusion. Contrary to appearances, augmented reality is far from being a new technology; the literature and research on the subject are extensive. However, the revival of virtual reality has given a boost to technological innovations that will soon allow us to fill the gaps in augmented reality and produce advanced mixed reality solutions, ultimately making alternative realities indistinguishable from the real world.

Unfortunately, at the moment, this is far from being the case. There is a gap between what people perceive through the many demo videos posted on social media and the reality once the phone is in their hands. The gap is even more pronounced for the current generation of AR glasses: users are often underwhelmed by the technology, given their high expectations. To bridge this gap and achieve a feeling of presence (the sense that the virtual object is really there), work remains on at least the following:

In particular, we need to find a way to handle lighting and occlusion on see-through displays.

Indeed, current AR glasses are based on an additive light system: the technology can add light to the scene but cannot remove it, which renders black as transparent and makes content look ghostly in bright light. The videos above, for example, were edited to create the realistic vision; below is an honest rendering of how the same footage would look on the see-through technology available today.
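A toy simulation makes the additive constraint concrete. Assuming per-channel light values in [0, 1] (a simplification; real displays and eyes are not linear), the viewer perceives the sum of the real background and the display's output, clipped at full brightness:

```python
def additive_composite(background, display):
    """What a viewer perceives through an additive see-through display,
    per color channel in [0, 1]: the display can only ADD light to the
    real scene behind it, never remove it."""
    return min(background + display, 1.0)

# Black (0.0) adds nothing, so it appears fully transparent:
print(additive_composite(0.5, 0.0))  # -> 0.5, background unchanged

# In bright surroundings, content clips and looks washed out ("ghostly"):
print(additive_composite(0.9, 0.4))  # -> 1.0, contrast is lost
```

This is why dark virtual content and hard occlusion of real objects are out of reach for current optics: both would require subtracting light.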

Screenshot from the cereal box game showing ghostly content from a see-through device perspective.

This problem clearly must be solved: it is a necessary condition for the adoption of glasses and for ensuring that real use cases live up to users’ expectations and collective imagination. In the meantime, we are developing our hypotheses and scenarios for a future in which these technical constraints have been solved.

Being a tool rather than an end product leaves us the freedom to think holistically rather than limit ourselves to a particular technology as would a manufacturer, or to a particular market segment as would a startup. We therefore plan to regularly post short videos such as those in this article, in order to illustrate use cases that demonstrate a UX design vision.

Stay tuned! In part III we will outline our vision of technologies that could provide the foundations for solutions to the future issues facing mixed reality and its mainstream adoption, along with some inspirational developments we are researching for future MR creators.

3 Comments

Great work Greg and team! It’s comforting to know that Unity is thinking far beyond present day tech.

While you do that, I agree with Jashan Chittesh that it is important to keep the terminologies straight.

I think the stereo rendering feature is one of the best (worst) examples. To enable the stereo features of a Unity Camera component you have to enable VR. Why? Call it what it is, “stereo”, and separate out the VR stuff. I am particularly annoyed by this because I use stereo for video projection. Unity’s stereo is intended for VR and it renders directly to the device. It stops working if you target a RenderTexture. Because of this, I’ve had to roll my own solution with two cameras, which rules out single-pass stereo.

I truly wish people would stop talking about “Mixed Reality”. There’s currently Virtual Reality, with dedicated hardware that works really well for what it’s supposed to be. Of course, this tech will gradually improve over time – but we really are at the point now where it works well enough for what it is supposed to do.

Then, there’s Augmented Reality, which already works really well as an app for mobile devices, like Pokemon Go, and in fighter jets and modern cars. And, let’s be honest, in devices like the MS HoloLens it works about as well as Virtual Reality did 20 or 30 years ago. Of course, this doesn’t mean it will take another 20 or 30 years for “HMD-based AR” to get where current VR is; I’d guess 3-5 years is much more likely.

But more importantly: even though AR and VR share certain requirements and concepts, and even though on paper (in a theoretical, scientific paper, somewhat disconnected from the real world of developing these things ;-) ) there is a continuum from AR adding objects to the real world to capturing the whole view … in practice, for almost all possible games and apps, you need an almost completely different design approach for most aspects of AR compared to VR (or vice versa). There are a few exceptions – but those should never drive design decisions.

One important thing they both always share is tracking. So … yeah, let’s call things related to tracking “tracking”. Tracking is useful even for non AR/VR games and apps, so no need to put all of this under a faulty MR-umbrella.

Any HMD-related app will render two images per frame, one for each eye. Same as 3D TV, which has nothing to do with AR/VR. So, again, everything related to that aspect of AR/VR should rather go under “stereo” and has nothing to do with “MR”. Also, that whole concept of stereo rendering is completely irrelevant for mobile-device-based AR (unless the mobile device is turned into an HMD). In other words: currently, it’s irrelevant for most AR, and when it is relevant, it’s relevant because you’re targeting an HMD, not because it’s AR (or VR).

Finally, VR has objects in a 3D environment. Just like any game or 3D interactive application. AR, on the other hand, adds objects into the real world; often needs to recognize certain real world objects (which is something that’s completely irrelevant for VR), and needs to properly render virtual objects on the canvas of the real world (again, completely irrelevant for VR). The whole way rendering works is completely different in both approaches to AR (mobile display / HMD), vs. VR.

Design is all about understanding the problem. Even using the term “Mixed Reality” will make coming up with a truly elegant design that solves the actual problems at hand very difficult.

“Mixed Reality” currently is usually used either scientifically, for a continuum that doesn’t really exist in the real world, or for greenscreen based VR gameplay videos, which have nothing at all to do with Augmented Reality. And by Microsoft for their new VR headsets (that have nothing to do with Augmented Reality or “Mixed Reality” at all, except that they use the same tracking technology as Hololens).

This is a wonderful direction for Unity to be heading with its further development as a massively cross platform engine.

I truly believe VR is a fad; it just has too many oddities about it, beyond code issues, to last. The Nintendo Power Glove was supposed to be the future too, yeah…

AR, however, I see so many wonderful uses for. It is a perfect immersion system where you still get to interact with the real world. It is the true step up from the smartphone. This is the space you need to be growing alongside! As a side effect, many of the tools you produce for AR development will also be useful for VR, so those who are all-in on it still benefit. Keep it up!