Connecting Virtual Worlds: Hyperlinks in WebVR

WebVR development continues at a feverish pace. And while much of the work has focused on refining the parts of the API that render content into headsets and manage input devices, one of the most exciting recent developments is the newly added ability to navigate between WebVR experiences.

The story of the Web is really the story of the hyperlink. Without the hyperlink, the Web would not exist: it is the elemental piece that allows web pages to link from one to another, enabling users to move from one interest to the next in an organic and exploratory fashion. It’s how you end up eyeballs deep in some arcane subject that you never knew you were interested in, often way too late into the evening, when you should be sound asleep. It’s why we enjoy the Web.

This is exciting stuff! WebVR content will no longer have to exist as a set of siloed experiences; it can expand into larger, interconnected ones.

In this article, we’ll dig into how this navigation behavior works and see a basic working example.

It’s still super early

Make no mistake; we’re still in the very early stages of figuring out what this will look like. The specification accounts only for the mechanical aspects of how you move from one experience to the next. So no <a href="…"></a> markup, blue underlined text, or outlines just yet, nor has the interaction model been finalized.

For now, this leaves WebVR content free to express what a link looks like and how users interact with it. But more about that later.

Link traversal experience

We’re not going to cover how the scene itself is built (for that, see this); instead, we’ll focus on the code responsible for navigating between pages and for managing the VR rendering during navigation.

Built by developer Erica Layton, this experience uses A-Frame (three.js) with a simple gaze-based cursor: when the cursor targets a sphere and the user clicks, the page navigates to another WebVR scene (by setting window.location.href from JavaScript).
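As a rough illustration of that pattern, here is a minimal sketch of a gaze-triggered link wired up in plain JavaScript. A-Frame’s cursor component emits a synthetic `click` event on the entity being gazed at, so following the link is just an ordinary location change in the handler. The entity id, `data-href` attribute, and destination URL below are all hypothetical.

```javascript
// Pure helper: only navigate if the clicked element actually carries a
// destination (via a hypothetical `data-href` attribute).
function getLinkDestination(el) {
  return (el && el.dataset && el.dataset.href) || null;
}

// Wiring it up in the page, e.g. for:
//   <a-sphere id="portal" data-href="another-scene.html"></a-sphere>
if (typeof document !== 'undefined') {
  var portal = document.querySelector('#portal');
  if (portal) {
    portal.addEventListener('click', function (event) {
      var destination = getLinkDestination(event.target);
      if (destination) {
        window.location.href = destination; // leave for the next WebVR scene
      }
    });
  }
}
```

Keeping the destination lookup in a small helper makes the navigation decision easy to reason about separately from the A-Frame event plumbing.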

In-headset impressions

Moving from page to page takes about 1-3 seconds, during which the headset displays black until the next scene loads and begins rendering. For an early-stage experiment, it’s a serviceable experience, considering everything that must happen before anything appears: the content and the pieces needed to support it (i.e., webvr-polyfill, three.js, A-Frame, etc.) all currently load synchronously, the scene is constructed, the WebGL content is rendered to a canvas, and only then does the WebVR API render the content to the VR headset.

There is still room for improvement, both in how VR content is displayed in the headset and in the timing and performance of page navigations.

We’re just starting to scratch the surface with progressive loading of content, Service Worker caching (using the Cache API), and so forth.
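To make the Service Worker idea concrete, here is a sketch of a worker that precaches the heavy scene dependencies with the Cache API, so a later navigation can serve them locally instead of refetching everything. The cache name and file paths are made up for illustration.

```javascript
// Hypothetical service worker (sw.js) precaching the libraries a WebVR
// scene depends on, so they load instantly on the next navigation.
var CACHE_NAME = 'webvr-scene-v1';
var PRECACHE_URLS = [
  '/',
  '/scene.html',
  '/js/three.min.js',
  '/js/aframe.min.js',
  '/js/webvr-polyfill.min.js'
];

// Pure helper: is this URL part of the precached asset set?
function isPrecached(url) {
  return PRECACHE_URLS.indexOf(url) !== -1;
}

// Only register the handlers inside an actual service worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', function (event) {
    event.waitUntil(
      caches.open(CACHE_NAME).then(function (cache) {
        return cache.addAll(PRECACHE_URLS);
      })
    );
  });

  // Cache-first: answer from the cache when possible, else hit the network.
  self.addEventListener('fetch', function (event) {
    event.respondWith(
      caches.match(event.request).then(function (cached) {
        return cached || fetch(event.request);
      })
    );
  });
}
```

A cache-first strategy is a reasonable fit here because the large library files rarely change between scenes; versioning the cache name is how stale assets get evicted.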

How it works

Let’s dig deeper and see how navigation works in WebVR:

It’s worth noting that, as of this writing, the WebVR interfaces and events (including the exact mechanisms for navigation) are still in flux. Today, though, link traversal does work in Firefox Nightly (i.e., VR presentation continues automatically upon navigation), and it should soon behave consistently across all WebVR-capable browsers.

Here’s a quick walkthrough with pseudo-code snippets:

On initial page load, we check to see if we have navigated from WebVR content by checking the active referring displays and automatically present (without any user-gesture requirement):

// If there was a VR display to which content was previously being rendered,
// use that VR display to continue rendering content upon page navigation
// (assuming the `canvas` and other dependencies have been loaded and are ready).
var canvas = document.querySelector('canvas#vr-canvas');
navigator.vr.getReferringDisplays().then(function (displays) {
  if (!displays.length || displays[0].isPresenting) {
    return;
  }
  return displays[0].requestPresent([
    {source: canvas}
  ]).then(function () {
    enterVR(displays[0]);
  });
});

If there is no active VR display to render to, we enumerate the available displays, set up the VR content, and provide a button on the page for users to enter VR mode.
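That fallback path might look something like the following sketch. It mirrors the in-flux API used in the snippet above; the `getDisplays` call, the `#enter-vr` button id, and the `enterVR` helper are assumptions, not finalized API.

```javascript
// Pure helper: pick the first display that isn't already presenting.
function pickDisplay(displays) {
  for (var i = 0; i < displays.length; i++) {
    if (!displays[i].isPresenting) {
      return displays[i];
    }
  }
  return null;
}

// Enumerate displays, then wait for a user gesture before presenting
// (first entry into VR typically requires a user gesture).
if (typeof navigator !== 'undefined' && navigator.vr) {
  navigator.vr.getDisplays().then(function (displays) {
    var display = pickDisplay(displays);
    if (!display) {
      return; // no usable VR display; leave the page as flat 2D content
    }
    var button = document.querySelector('#enter-vr');
    button.addEventListener('click', function () {
      display.requestPresent([
        {source: document.querySelector('canvas#vr-canvas')}
      ]).then(function () {
        enterVR(display); // hypothetical helper, as in the snippet above
      });
    });
  });
}
```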

Many open questions remain, among them:

Visual design

The visual treatment and affordances that will help users identify links within a scene (possible solutions include concepts already familiar to Web users, such as blue outlines, labels, and cursor interactions)

Interactivity

How users will trigger navigation behavior (e.g., should this always be a standard, consistent interaction?)

Focusable areas of content within the page (e.g., navigating to #anchor could move the camera to and/or expose a <div id="anchor"> element, or a JavaScript event listener could handle changes to the page’s hash [i.e., when hashchange fires])

Moving from place to place within a world using common UX patterns in VR (e.g., teleportation, interacting with objects, walking to/through hotspots/portals, etc.)
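The hash-based focus idea above can be sketched in a few lines: when the URL fragment changes, look up the matching element in the scene and move the camera toward it. The camera movement is left abstract here; `focusCamera` is a hypothetical helper, not an existing API.

```javascript
// Pure helper: turn '#anchor' (or '' / '#') into an element id, or null.
function idFromHash(hash) {
  return hash && hash.length > 1 ? hash.slice(1) : null;
}

// React to in-page navigation by refocusing the camera on the target element.
if (typeof window !== 'undefined') {
  window.addEventListener('hashchange', function () {
    var id = idFromHash(window.location.hash);
    var target = id && document.getElementById(id);
    if (target) {
      focusCamera(target); // hypothetical: animate the camera to the target
    }
  });
}
```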

Identifiability: trustworthy ways for the user agent (browser) to know where users are going before following a link (you would never book a flight ticket on a 2D page from an unknown or untrusted source, so why would you do that in VR?)

Browser-chrome UI to present the destination URL to the user

Sharing

One of the key benefits of the URL is the ability to share the location of a page in a way that is universally understood by Web browsers, regardless of platform and device

Permanency

WebVR will, at least initially (and hopefully only temporarily), inherit one of the weaknesses of links: the destination page can be taken offline or malfunction (potentially, we can work with efforts to distribute and decentralize the Web so there are no single points of failure in WebVR)

No pressure. We’re mostly here to get you excited about moving between WebVR worlds!

It’s a team effort

Here at Mozilla, the VR team has been researching, prototyping, and thinking about VR-first browser experiences and navigating between experiences for years now. It’s a huge deal. We would like to first thank Erica Layton a ton for her work in putting together the demo scenes. We would also like to acknowledge the WebVR implementers, namely Kip Gilbert (Mozilla Firefox), Brandon Jones (Google Chrome), Justin Rogers (Oculus VR), Michael Blix (Samsung WebVR & Gear VR), and Laszlo Gombos (Samsung WebVR & Gear VR) for being part of making this possible.