What’s the Difference Between VR, AR and MR?

Now, imagine you can play Tony Stark in Iron Man or the Joker in Batman. That’s virtual reality.

Advances in VR have enabled people to create, play, work, collaborate and explore in computer-generated environments like never before.

VR has been in development for decades, but only recently has it emerged as a fast-growing market opportunity for entertainment and business.

Now, picture a hero, and set that image in the room you’re in right now. That’s augmented reality.

AR is another overnight sensation decades in the making. The mania for Pokémon Go — which brought the popular Japanese game franchise to city streets — has led to a score of games that blend pixie dust and real worlds. There’s another sign of its rise: Apple’s 2017 introduction of ARKit, a set of tools for developers to create mobile AR content, is encouraging companies to build AR apps for iOS 11.

Microsoft’s HoloLens and Magic Leap’s Lightwear — both of which let people engage with holographic content — are two major developments in pioneering head-mounted displays.

It’s not just fun and games either — it’s big business. Researchers at IDC predict worldwide spending on AR and VR products and services will rocket from $11.4 billion in 2017 to nearly $215 billion by 2021.

Yet just as VR and AR take flight, mixed reality, or MR, is evolving fast. Developers working with MR are quite literally mixing the qualities of VR and AR with the real world to offer hybrid experiences.

Imagine another VR setting: You’re instantly teleported onto a beach chair in Hawaii, sand at your feet, mai tai in hand, transported to the islands while skipping economy-class airline seating. But are you really there?

In mixed reality, you could experience that scenario while actually sitting on a flight to Hawaii in a coach seat that emulates a creaky beach chair when you move, receiving the mai tai from a real flight attendant and touching sand on the floor to create a beach-like experience. A flight to Hawaii that feels like Hawaii is an example of MR.

VR Explained

The Sensorama

The notion of VR dates back to 1930s literature. In the 1950s, filmmaker Morton Heilig wrote of an “experience theater” and later built an immersive, video game-like machine he called the Sensorama for people to peer into. A pioneering moment came in 1968, when Ivan Sutherland is credited with developing the first head-mounted display.

Much has changed since. Consumer-grade VR headsets have made leaps of progress. Their advances have been propelled by technology breakthroughs in optics, tracking and GPU performance.

Consumer interest in VR has soared in the past several years as new headsets from Facebook’s Oculus, HTC, Samsung, Sony and a host of others offer substantial improvements to the experience. Yet producing 3D, computer-generated, immersive environments for people is about more than sleek VR goggles.

Obstacles mostly overcome so far include delivering enough frames per second and reducing latency — the delay between a user’s head movement and the display’s response — to create experiences that aren’t herky-jerky and potentially motion-sickness inducing.

Software plays a key role, too. The NVIDIA VRWorks software development kit, for example, helps headset and application developers access the best performance, lowest latency and plug-and-play compatibility available for VR. VRWorks is integrated into game engines such as Unity and Unreal Engine 4.

To be sure, VR still has a long way to go to reach its potential. Right now, the human eye is still able to detect imperfections in rendering for VR.

One approach, known as foveated rendering, reduces the quality of images delivered to the edge of our retinas — where they’re less sensitive — while boosting the quality of the images delivered to the center of our retinas. This technology is powerful when combined with eye tracking that can inform processors where the viewing area needs to be sharpest.
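To make the idea concrete, here’s a minimal sketch of how a foveated renderer might pick a quality tier from a pixel’s angular distance to the gaze point. The function name, the quality tiers and the display density are illustrative assumptions, not any particular headset’s API; real systems get gaze and pixels-per-degree from the headset SDK.

```python
import math

def foveation_level(pixel, gaze, pixels_per_degree=15.0):
    """Pick a rendering-quality tier from angular distance to the gaze point.

    `pixels_per_degree` is an assumed display density; real headsets
    report this value through their SDKs.
    """
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    eccentricity = math.hypot(dx, dy) / pixels_per_degree  # degrees from gaze
    if eccentricity < 5.0:      # foveal region: full quality
        return 1.0
    elif eccentricity < 20.0:   # mid-periphery: half resolution
        return 0.5
    else:                       # far periphery: quarter resolution
        return 0.25
```

In practice the renderer applies such a tier per screen region (for example, via variable-rate shading) rather than per pixel, but the eccentricity-based falloff is the core idea.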

VR’s Technical Hurdles

Frames per second: Virtual reality requires rendering at 90 frames per second. Lower frame rates reveal lags in movement that the human eye can detect, which can make some people nauseous. NVIDIA GPUs make VR possible by enabling rendering that’s faster than the human eye can perceive.

Latency: VR latency is the time span between initiating a movement and a computer-represented visual response. Experts say latency rates should be 20 milliseconds or less for VR. NVIDIA VRWorks, the SDK for VR headsets and game developers, helps address latency.

Field of view: The angle of the visible scene a particular VR headset presents. A wide field of view is essential to creating a sense of presence. For example, the Oculus Rift headset offers a 110-degree viewing angle.

Positional tracking: To enable a sense of presence and deliver a good VR experience, headsets need to be tracked in space with sub-millimeter accuracy, so the correct image can be presented for the user’s position at every moment.

Vergence accommodation conflict: A viewing problem for today’s VR headsets. Your eyes rotate toward or away from each other to fixate on an object (that’s vergence), while at the same time each eye’s lens changes shape to bring the object into focus (that’s accommodation). Because VR goggles display 3D images on a screen at a fixed focal distance, they create a conflict between vergence and accommodation that’s unnatural to the eye, which can cause visual fatigue and discomfort.
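The geometry behind the conflict can be sketched with a little trigonometry, assuming a typical adult interpupillary distance of about 63 mm (the function and its defaults are illustrative, not from any headset SDK):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight when fixating an object
    at `distance_m`. An interpupillary distance of 63 mm is a typical
    adult average (an assumption for illustration)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# A virtual object 0.5 m away demands about 7.2 degrees of vergence,
# while the headset's optics keep accommodation fixed, often at 1-2 m.
```

The mismatch between that distance-dependent vergence demand and the fixed accommodation distance of the optics is exactly the conflict described above.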

Eye-tracking tech: VR head-mounted displays to date haven’t tracked the user’s eyes well enough for systems to react to exactly where a person is looking. Increasing resolution where the eyes are focused would help deliver better visuals.
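The frame-rate and latency figures above translate into a tight rendering budget. A back-of-envelope sketch, where the tracking and display scan-out costs are illustrative assumptions rather than measured figures:

```python
TARGET_FPS = 90
frame_budget_ms = 1000 / TARGET_FPS  # about 11.1 ms to render each frame

# Motion-to-photon latency budget against the ~20 ms ceiling cited above,
# split across assumed, illustrative stage costs:
tracking_ms = 2                      # head-pose sensing and fusion (assumed)
render_ms = frame_budget_ms          # one frame of rendering
scanout_ms = 5                       # display scan-out / pixel switching (assumed)
motion_to_photon_ms = tracking_ms + render_ms + scanout_ms

print(round(frame_budget_ms, 1))       # 11.1
print(round(motion_to_photon_ms, 1))   # 18.1, just under the 20 ms target
```

Miss the 11.1 ms frame budget even once and the frame is late, which is why VR engines fight for every millisecond.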

Despite a decades-long crawl to market, VR has made a splash in Hollywood films such as the recently released Ready Player One and is winning fans across games, TV content, real estate, architecture, automotive design and other industries.

Analysts expect, however, that VR’s consumer momentum will be outpaced in the years ahead by enterprise adoption.

AR Explained

AR development spans decades. Much of the early work dates to universities such as MIT’s Media Lab and to pioneers like Thad Starner — now the technical lead for Alphabet’s Google Glass smart glasses — who, in the technology’s infancy, wore heavy belt packs of batteries attached to AR goggles.

Google Glass (2.0)

Google Glass, the ill-fated consumer fashion faux pas, has since been reborn as a less conspicuous, behind-the-scenes technology for use in warehouses and manufacturing. Today workers use Glass for access to training videos and hands-on help from colleagues.

And AR technology has been squeezed down into low-cost cameras and screen technology for use in inexpensive welding helmets that automatically dim, enabling people to melt steel without burning their retinas.

For now, AR is making inroads with businesses. It’s easy to see why when you think about overlaying useful information in AR that offers hands-free help for industrial applications.

Smart glasses for augmenting intelligence certainly support that image. Sporting microdisplays, AR glasses can act like an indispensable co-worker who can help out in a pinch. AR eyewear can make the difference between getting a job done or not.

Consider a junior-level heavy-duty equipment mechanic assigned to make emergency repairs to a massive tractor on a construction site. The boss hands the newbie AR glasses that display service manuals and schematics for hands-free guidance while troubleshooting repairs.

Some of these smart glasses even pack Amazon’s Alexa service to enable voice queries for access to information without having to fumble for a smartphone or tap one’s temple.

Jensen Huang on VR at GTC

Business examples for consumers today include IKEA’s Place app, which allows people to use a smartphone to view images of furniture and other products overlaid onto a view of their actual home as seen through the phone.

Today there is a wide variety of smart glasses from the likes of Sony, Epson, Vuzix, ODG and startups such as Magic Leap.

MR Explained

Now, imagine the junior-level heavy-duty mechanic waist-deep in a massive tractor and completely baffled by its mechanical problem. Crews sit idle, unable to work. Site managers become anxious.

Thankfully, the mechanic’s smart glasses can start a virtual help session. The mechanic calls in a senior colleague, who appears via VR around a model of the actual tractor. Together they walk through troubleshooting while working on the real machine, pulling up online repair manuals via AR for torque specs and other maintenance details.

That’s mixed reality.

To visualize a consumer version of this, consider the IKEA Place app for placing and viewing furniture in your house. You have your eye on a bright red couch, but want friends and family to weigh in on how it looks in your living room before you buy it.

So you pipe in a VR session within the smart glasses. Imagine this session runs in a VR version of the IKEA Place app, and you invite your closest confidants. First, each of them sits down on any couch they can find. When everyone sits, though, they find themselves in a virtual version of your living room, seated on a red couch just like the IKEA one.

Voilà: in mixed reality, the group virtual showroom session yields a decision. Your friends and family unanimously love the couch, so you buy it.

The possibilities are endless for businesses.

Mixed reality, while in its infancy, might be the holy grail for all the major developers of VR and AR. There are unlimited options for the likes of Apple, Microsoft, Facebook, Google, Sony, Netflix and Amazon. That’s because MR can blend here-and-now reality with virtual and augmented reality in many entertainment and business situations.

Take this one: automotive designers can sit in an actual car seat, with armrests, shifter and steering wheel, and pipe into VR sessions that deliver the dashboard and interior. Many designers can assess designs together remotely this way. NVIDIA Holodeck enables exactly this type of collaborative work.