Building Blocks for AR: A Conversation with Ubiquity6

AR Insider Interviews is an ongoing series that profiles the biggest innovators in XR. Narratives are based on interviews with subjects, but opinions and analysis are those of AR Insider. See the rest of the series here.

One aspect of the AR world we all envision is that it’s persistent and synchronous so that we can experience things together. Popular AR to date is mostly asynchronous by comparison, such as recorded videos of selfie masks on Snapchat, shared and viewed later by a friend.

But synchronous sharing, along with image persistence and other key functions, is the promise of the AR cloud and the next phase of AR. Another way to think about this is as a sort of real-world version of a multiplayer online game like World of Warcraft, says Ubiquity6 CEO Anjney Midha.

“Our goal is to bring people together in real-world spaces for valuable, shared experiences made by any kind of creator,” he told AR Insider. “Today, we’re focused on a product that lets anyone easily create and launch shared AR moments using the mobile camera they have in their pocket.”

This vision will materialize through the company’s spatial browser (user-facing) and Reality Editor platform (creator-facing). By putting deliberately low-friction tools into the hands of users and creators, it hopes to engender a scalable and self-propelled ecosystem for AR experiences.

Democratizing AR

This ecosystem consists of Ubiquity6’s three main constituents: Curators, creators and users. Curators establish and moderate AR experiences. Creators build the AR graphics to populate experiences. And users consume and edit the experiences. These can all overlap.

Ubiquity6 embodies this democratization principle most directly by building the platform around a web stack. That means any web developer can create AR experiences using the skill set they already have, which will be key to jumpstarting the ecosystem by populating it with content faster.

“The barriers to making shared or persistent experiences in real-world spaces are incredibly high, especially for the 20 million+ creatives and developers who make things for the web,” said Midha. “They have to pick a game engine, learn a new language like C#, understand the black magic of computer vision, multiplayer networking, server infrastructure and then duct tape it all together.”

Friction is also reduced for users, given that they only need a web link to launch an experience. Today’s app paradigm conversely won’t work for AR — especially synchronous AR where dynamic “pick-up” experiences are hobbled by requirements that everyone stop and download an app.

Proof of Concept

To seed and accelerate adoption, Ubiquity6 wants to jumpstart AR experiences built on the Reality Editor platform. To do this, it’s designing and deploying shared AR experiences in well-traveled places — a sort of inspiration, or proof of concept, to get the ball rolling.

“We think creators should be easily able to publish experiences for users to find in public spaces,” said Midha. “Users should be able to easily take those experiences with them to their own private spaces when they want to. We support this flow as a first-class citizen.”

The largest-scale effort so far was at SF MOMA in August, where Ubiquity6 created a shared AR experience that let patrons add virtual blocks (literally, “building blocks”) to a digital installation. The piece built up cumulatively and persistently over time from users’ collective contributions.

Beyond the intended effect of inspiring creators and users, the exercise revealed key behavioral and business model indicators. The biggest takeaway was that there can be exponential growth in user engagement and value as the number of players increases — a classic network effect.
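That dynamic can be made concrete with a back-of-the-envelope model. A minimal sketch, assuming a classic Metcalfe-style formulation (my assumption for illustration — not Ubiquity6’s own metric): the number of possible player-to-player interactions in a shared AR space grows quadratically with the number of simultaneous players, so each new participant adds more potential value than the last.

```python
# Illustrative sketch only: a Metcalfe-style model of multiplayer value.
# This is an assumed formulation, not a metric from Ubiquity6.

def pairwise_interactions(players: int) -> int:
    """Possible player-to-player interactions among n players: n*(n-1)/2."""
    return players * (players - 1) // 2

# Each doubling of players roughly quadruples the interaction space.
for n in (2, 10, 100):
    print(f"{n:>3} players -> {pairwise_interactions(n)} possible interactions")
```

Under this toy model, 10 players yield 45 possible pairwise interactions while 100 players yield 4,950 — a superlinear curve consistent with the engagement growth described above.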

For example, the average session length was 45 minutes, which far exceeds typical AR “snacking” use cases. Midha believes this is due to specific game mechanics that arise with large-scale collective play. These include both competitive and collaborative dynamics between players.

“We learned that it’s not about the quality of graphics, depth of gameplay, or clever ways to give people badges or points: The shared experience itself is the reward,” said Midha. “The ability to place a creation, precise to a few centimeters, in a real-world space in a way that appears and updates for hundreds of other viewers in real time was hugely rewarding for people.”

Beyond sociological aspects, there are spatial dynamics. With such experiences, there’s an opportunity to derive value from the fundamental economic principle of scarcity. And users with access to a given AR space feel a sort of exclusivity… potentially to the point of paying for it.

Multiplayer AR network effects. Image source: Ubiquity6, YouTube

Spaces and Faces

This all raises the question of how Ubiquity6 will make money. Midha, a former investment partner who founded Kleiner Perkins’ Edge seed fund (Magic Leap, TheWaveVR), is more pragmatic and finance-minded than most founders. With $37.5 million in funding, he’s forthright about monetization paths.

He’s certain that the business model won’t be advertising. As we’ve heard from 6D.ai and Magic Leap, ad models aren’t aligned with user interests and are a minefield of data-collection conflicts. Instead, Midha wants to tap the innate human desire to personalize our spaces and faces.

This can be seen in everything from fashion (analog) to Fortnite (digital). At the intersection of those phenomena is a likely sweet spot to administer a marketplace for accouterments that adorn AR experiences. This could also incentivize AR creators to build things with Ubiquity6.

Along with lowering technical barriers, that incentive could accelerate a network effect by getting over the classic “chicken & egg” hump of supply-constrained markets. This challenge is rampant in online marketplaces generally, let alone ones built to the physical scale of the inhabitable earth.

It’s a tall order, but Ubiquity6 has the approach and pedigree to pull it off. That starts with building blocks on museum ceilings, then use cases that creators run with. The key is an AR-native toolset to enable that creativity and to connect it with users: “We’re doing it AR-first,” said Midha.