Components

My door component for @aframevr was stuck with the 0.5 version. I finally took some time to fix it, and it is now compatible with the last version of the core (0.8.2). Some changes are explained in the documentation. https://t.co/CuN491u6Vg — Philippe #StargateNow (@Stargayte) December 8, 2018

So here's a little prototype I built during a lazy Sunday: @BeatSaber in a browser, using @aframevr

- Collision tracking is iffy, needs improvement
- Getting a stable framerate was a bit of a challenge
- Made for fun, may demo during meetups, no public release planned

pic.twitter.com/v01aMt60jF — VRuben (@rvdleun) December 2, 2018

Today, we’re making available an early developer preview of a browser for the Magic Leap One device. This browser is built on top of our Servo engine technology and shows off high quality 2D graphics and font rendering through our WebRender web rendering library, and more new features will soon follow.

While we only support basic 2D pages today and have not yet built the full Firefox Reality browser experience and published this into the Magic Leap store, we look forward to working alongside our partners and community to do that early in 2019! Please try out the builds, provide feedback, and get involved if you’re interested in the future of mixed reality on the web in a cutting-edge standalone headset. And for those looking at Magic Leap for the first time, we also have an article on how the work was done.

Firefox Reality 1.1 is now available for download in the Viveport, Oculus, and Daydream app stores. This release includes some major new features, including localization to seven new languages (including voice search support), a new dedicated theater viewing mode, bookmarks, 360 video support, and significant improvements to the performance and quality of our user interface.

We also continue to expand the Firefox Reality content feed, and are excited to add cult director/designer Keiichi Matsuda's video series, including his latest creation: https://youtube.com/watch?v=daK-nrAgCus

Looking ahead, we are exploring content sharing and syncing across browsers (including bookmarks), multiple windows, tab support, as well as continuing to invest in baseline features like performance. We appreciate your ongoing feedback and suggestions — please keep it coming!

Reminder that with the web (#WebXR), creating sites that run quality equivalent to native apps is possible! Video is using the #Exokit Engine to demonstrate high FPS, non-laggy fun with the @aframevr's A-Painter, download Exokit and get building today! https://t.co/52YyRFkfPW — Nick Loomis #Exokit (@NickJLoomis) November 27, 2018

Part 2 of 2 of catching up. Sorry for the delays! I’ve built A-Frame Weekender, a tool for myself to get these out faster without spending hours per week.

Work has started on getting WebXR support into A-Frame, though progress on the spec is still ongoing, browser support is limited, and feature parity with WebVR 1.1 is not yet there. But it's something we have to do going forward. 0.9.0 should release before the end of the year!

Fun times learning responsive controls in @aframevr: on my Go I have two controllers, one floating above the other. On Vive+Firefox I have 1 controller in hand, & 2 hands pinned to the floor. I can teleport, but I have to leave my fingers behind! 😂 https://t.co/G3OGYJY6Cm — Blinded by Headset (@510home) October 25, 2018

Miscellaneous

It was just a teeny, tiny bug fix, but my first ever contribution to @aframevr just got merged! I am disproportionally proud of that fact. — Steve Lewis (@stlewis1121) October 22, 2018

Your first patch merged on an open source project you like is super satisfying. When I'm on auto-pilot I often forget to celebrate more new @aframevr contributions 😕 Thank you all and keep them coming 🙏 — Diego (@dmarcos) October 23, 2018

I love how building with @aframevr and other WebVR development tools makes it so easy to add support for a wide range of devices. Happy to bring broader support for Frame to devices like Google Cardboard, and to users on mobile w/ no headset. #webvr #webxr #gltf https://t.co/1ucFaohUso — Gabriel Baker (@gabrieljbaker) November 1, 2018

Over the past few months, we're continuing to leverage the features of ARKit on iOS to enhance the WebXR Viewer app and explore ideas and issues with WebXR. One big question with WebXR on modern AR and VR platforms is how to best leverage the platform to provide a frictionless experience while also supporting the advanced capabilities users will expect, in a safe and platform independent way.

We recently released an update to the WebXR Viewer that fixes some small bugs and updates the app to iOS 12 and ARKit 2.0 (we haven't exposed all of ARKit 2.0 yet, but expect to over the coming months). Beyond bug fixes, two features of the new app highlight interesting questions for WebXR related to privacy, friction, and platform independence.

First, web browsers can decrease friction for users moving from one AR experience to another by managing the underlying platform efficiently and not shutting it down completely between sessions, but care needs to be taken not to expose data to applications in ways that might surprise users.

Second, some advanced features imagined for WebXR are not (yet) available in a cross platform way, such as shareable world maps or persistent anchors. These capabilities are core to experiences users will expect, such as persistent content in the world or shared experiences between multiple co-located people.

In both cases, it is unclear what the right answer is.

Frictionless Experience and User Privacy

Hypothesis: regardless of how the underlying platform is used, when a new WebXR web page is loaded, it should only get information about the world that would be available if it were loaded for the first time, and it should not see existing maps or anchors from previous pages.

Consider the image (and video) below. The image shows the results of running the "World Knowledge" sample, and spending a few minutes walking from the second floor of a house, down the stairs to the main floor, around and down the stairs to the basement, and then back up and out the front door into the yard. Looking back at the house, you can see small planes for each stair, the floor and some parts of the walls (they are the translucent green polygons). Even after just a few minutes of running ARKit, a surprising amount of information can be exposed about the interior of a space.

If the same user visits another web page, the browser could choose to restart ARKit or not. Restarting results in a high-friction user experience: all knowledge of the world is lost, requiring the user to scan their environment to reinitialize the underlying platform. Not restarting, however, might expose information to the new web page that is surprising to the user. Since the page is visited while outside the house, a user might not expect it to have access to details of the interior.

In the WebXR Viewer, we do not reinitialize ARKit for each page. We made the decision that if a page is reloaded without visiting a different XR page, we leave ARKit running and all world knowledge is retained. This allows pages to be reloaded without completely restarting the experience. When a new WebXR page is visited, we keep ARKit running, but destroy all ARKit anchors and world knowledge (i.e., ARKit ARAnchors, such as ARPlaneAnchors) that are further than some threshold distance from the user (3 meters, by default, in our current implementation).

In the video below, we demonstrate this behavior. When the user changes from the "World Knowledge" sample to the "Hit Test" sample, internally we destroy most of the anchors. When the user changes back to the "World Knowledge" sample, we again destroy most of the anchors. You can see at the end of the video that only the nearby planes still exist (the plane under the user and some of the planes on the front porch). Planes further away (inside the house, in this case) are gone. (Visiting non-XR pages does not count as visiting another page, although we also shut down ARKit after a short time if the browser is not on an XR page, to save battery, which destroys all world knowledge as well.)
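
To make the policy concrete, here is a minimal sketch of the pruning step in plain JavaScript. The names (`pruneWorldKnowledge`, `anchor.position`, and so on) are illustrative assumptions, not the WebXR Viewer's actual internals.

```js
// Hypothetical sketch of the anchor-pruning policy described above; these
// names are illustrative, not the WebXR Viewer's real implementation.
const KEEP_RADIUS_METERS = 3; // the default threshold mentioned above

function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Called when the user navigates to a *different* WebXR page: ARKit keeps
// running, but world knowledge far from the user is destroyed.
function pruneWorldKnowledge(anchors, userPosition) {
  return anchors.filter(
    (anchor) => distance(anchor.position, userPosition) <= KEEP_RADIUS_METERS
  );
}

// Reloading the same page skips pruning entirely, so the session resumes
// with all of its world knowledge intact.
```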

While this is a relatively simplistic approach to this tradeoff between friction and privacy, issues like these need to be considered when implementing WebXR inside a browser. Modern AR and VR platforms (such as Microsoft's Hololens or Magic Leap's ML1) are capable of synthesizing and exposing highly detailed maps of the environment, and retaining significant information over time. In these platforms, the world space model is retained over time and exposed to apps, so even if the browser restarts the underlying API for each visited page, the full model of the space is available unless the browser makes an explicit choice to not expose it to the web page.

Consider, for example, a user walking a similar path for a similarly short time in the above house while wearing a Microsoft Hololens. In this case, a map of the same environment is shown below.

This image (captured with Microsoft's debugging tools while the user is sitting at a computer in the basement of the house, shown as the sphere and green view volume) is significantly more detailed than the ARKit planes. And it would be retained, improved, and shared with all apps in this space as the user continues to wear and use the Hololens.

In both cases, the ARKit planes and Hololens maps were captured based on just a few minutes of walking in this house. Imagine the level of detail that might be available after extended use.

Platform-specific Capabilities

Hypothesis: advanced capabilities such as world mapping, which are needed for user experiences involving persistence and shared content, will need cross-platform analogs to the platform-specific silos currently available if the platform-independent character of the web is to extend to AR and VR.

ARKit 2.0 introduces the possibility of retrieving the current model of the world (the so-called ARWorldMap) that ARKit uses for tracking planes and anchors in the world. The map can then be saved and/or shared with others, enabling both persistent and multi-user AR experiences.

In this version of the WebXR Viewer, we want to explore some ideas for persistent and shared experiences, so we added session.getWorldMap() and session.setWorldMap(map) commands to an active AR session (these can be seen in the "Persistence" sample, a small change to the "World Knowledge" sample above).
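
As a rough sketch of how a page might use these commands (the exact shape of the map object and the session API may differ from what the "Persistence" sample actually does):

```js
// Hypothetical usage of the WebXR Viewer's world-map commands; the exact
// signatures and the shape of the returned map may differ.
async function persistWorldMap(session) {
  // Requires the user to have granted "world knowledge" access (see below).
  const map = await session.getWorldMap();
  localStorage.setItem('worldMap', JSON.stringify(map));
}

async function restoreWorldMap(session) {
  const saved = localStorage.getItem('worldMap');
  if (!saved) return;
  // Hands the saved map back to ARKit so it can try to relocalize against
  // it; as noted below, this resets ARKit and takes an indeterminate time.
  await session.setWorldMap(JSON.parse(saved));
}
```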

These capabilities raise questions of user privacy. ARKit's ARWorldMap is an opaque binary data structure, and it may contain a surprising amount of data about the space that could be extracted by determined application developers (the format is undocumented). Because of this, we leverage the existing privacy settings in the WebXR Viewer, and allow apps to retrieve the world map if (and only if) the user has given the page access to "world knowledge".

On the other hand, the WebXR Viewer allows a page to provide an ARWorldMap to ARKit and try to use it for relocalization with no heightened permissions. In theory, such an action could allow a malicious web app to "probe" the world by having the browser test whether the user is in a certain location. In practice, such an attack seems infeasible: loading a map resets ARKit (a highly disruptive and visible action), and relocalizing the phone against a map takes an indeterminate amount of time regardless of whether the relocalization eventually succeeds.

While implementing these commands was trivial, exposing this capability raises a fundamental question for the design of WebXR (beyond questions of permissions and possible threats). Specifically, how might such capabilities eventually work in a cross-platform way, given that each XR platform is implementing these capabilities differently?

We have no answer for this question. For example, some devices, such as Hololens, allow spaces to be saved and shared, much like ARKit. But other platforms opt to only share anchors, or do not (yet) allow sharing at all. Over time, we hope some common ground might emerge. Google has implemented their ARCore Cloud Anchors on both ARKit and ARCore; perhaps a similar approach could be taken that is more open and independent of one company's infrastructure, and could thus be standardized across many platforms.

Looking Forward

These issues are two of many issues that are being discussed and considered by the Immersive Web Community Group as we work on the initial WebXR Device API specification. If you want to see the full power of the various XR platforms exposed and available on the Web, done in a way that preserves the open, accessible and safe character of the Web, please join the discussion and help us ensure the success of the XR Web.

In this Q&A, independent UX designer and creative catalyst Nadja Haldimann talks about how she approached working with Mozilla on the new Firefox Reality browser for virtual reality (VR). Before launch, Nadja and Mozilla’s Mixed Reality team worked with Seattle-based BlinkUX to do user testing. Here’s what they learned, and the solutions they found, to create a web browser that people can use strapped to their faces.

How difficult is it to design for an immersive, 3D environment, compared to 2D software?

It's not necessarily more difficult – all the same design principles still apply – but it is quite different. One of the things that you have to account for is how the user perceives space in a headset – it seems huge. So instead of designing for a rectangular window inside a rectangular display, you're suspending a window in what looks to be a very large room. The difficulty there is that people want to fill that room with a dozen browser windows, and maybe have a YouTube video, baseball game or stock ticker running in the background. But in reality, we only have these 2-inch screens to work with, one for each eye, and the pixels of just half a cell phone screen. But the perception is it's 1,000 times bigger than a desktop. They think they're in a movie theater.

OK, so here you have this massive 3D space. You can put anything in there you want. What did you create?

That was a really big question for us: what is the first thing people see when they open the browser? We built two things for the Firefox Reality home page. First, we worked with digital artists to create scenes users could choose as the background, because, just like on a 2D desktop browser, we found people want to customize their browser window with themes and images that mean something to them. The goal was to create environments that were grounding and inviting, especially for people who might be experiencing an immersive environment for the first time.

Second, we created a content feed to help people find great new 3D experiences on the web. Immersive media is just getting off the ground, so content is somewhat limited today but growing quickly. The content feed showcases quality, family-friendly content that supports the WebVR API, so it’s easy to view on multiple devices.

What kinds of limitations or challenges did you run into while designing the browser's UI?

In VR, the most important thing is to make the user comfortable. In the past, a significant number of people have had trouble with nausea and motion sickness — and women are more susceptible, according to research. You can avoid that by delivering a smooth, responsive experience, where the display can render the content very, very quickly. The best experience is one where the user actually forgets they're in a VR environment. They're happy spending time there and they want to keep exploring.

The first problem we ran into was that people felt like they were floating above the floor. Part of that was because we had the camera height set to 5’ 6”, which is roughly the height of an adult standing up. But in user testing, people were sitting down. So there was a disconnect between what people were seeing in the headset and where they knew their physical bodies to be. The other part was that we were using colors to indicate floor, without enough texture. It’s textures that let our brains identify distance in VR. We created low poly environments with limited textures, so people could perceive the floor, and that helped people feel more comfortable in the environment.
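
Firefox Reality's UI is not built with A-Frame, but the same camera-height question comes up in any WebVR scene. As a purely illustrative sketch (the heights and element names here are assumptions, not Firefox Reality's code):

```js
// Illustrative only: setting an eye height in an A-Frame scene. 1.68 m is
// roughly 5' 6"; a fixed standing height feels wrong to seated users.
const camera = document.createElement('a-entity');
camera.setAttribute('camera', '');
camera.setAttribute('position', '0 1.68 0'); // standing height

// A seated user's eyes sit closer to ~1.2 m, so a standing default makes
// the floor appear too far away, producing the "floating" effect.
document.querySelector('a-scene').appendChild(camera);
```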

Another surprise was how people perceive an app window size in the immersive environment. In 2D, people talk about making a window “smaller” or “bigger”, and everyone knows how to change that. In 3D, users were more likely to say they wanted to put a window “farther away” or “bring it closer”. It’s the same fix, design-wise: you just give people a way to resize the window. But it’s interesting how differently people relate to objects in 3D. It’s a more tactile, interactive mindset.

Who were you designing this browser for?

That's a good question because, in the beginning, we didn't know exactly. The Firefox Reality browser is one of the first standalone VR browsers that lets people surf the 3D web, and it is built to work with newer standalone headsets that are super-affordable and wireless, devices like the Oculus Go, HTC VIVE Focus, and Lenovo Mirage Solo (Google Daydream). So it's a pretty new market.

Based on business and personal use cases, we came up with personas, most of which were familiar with VR and 3D already: Gamers, architects, students, business travelers, and grandparents. But really the market for this product is extremely wide. We expect that VR will create a new genre of media that I believe will become a new standard. And our testing bore that out: People were interested in watching video in VR, with friends, in a theater-like setting, so it’s interactive. One person was excited to watch in bed, because it was easier to stare straight up to the ceiling with his headset on than it was to mess around with a laptop.

What was the biggest design surprise?

We ran into a lot of issues with having a virtual keyboard in the interface. People complained that the keyboard was too wide and it was awkward to select the letters. It was too difficult to find special characters like umlauts.

We made a bunch of tweaks so the virtual keyboard was easier to use. We also accelerated our timeline for voice input. In the initial release, we added a microphone icon to the URL bar so the user can click on that and talk to the browser, instead of typing in a search query.

What else did you learn from user testing?

People brought up privacy. Could we add profiles, like Netflix has? Can they save programs for later viewing? Could they have a guest account? There's also a need for parental controls, because adult content is a big interest in VR. VR content is still quite limited, but people are already thinking about how to manage access to it in their homes.

What design tools did you use to create a 3D UI?

We're designers, not programmers, and short of learning Unity (which has a steep learning curve), we needed to find some in-VR design tools that allowed us to import 2D and 3D objects and place them in space. The design tools for 2D, like Adobe Illustrator, Photoshop, Sketch, and InVision, don't work for 3D, and there are only a few immersive 3D design tools out there today. We tried Google Blocks, Gravity Sketch, and Tvori before landing on Sketchbox. It's an early-stage in-VR design tool with just enough functionality to help us get a feel for size, distance, and spacing. It also helped us communicate those coordinates to our software engineers.

What's next?

We're now working on adding multi-window support, so people can multitask in a VR browser the same way they do in desktop browsers today. We're also planning to create a Theater Video setting, to give people an option to watch movies in a theater mode: a bigger screen in a large dark room. So it'll be a lot like a physical movie theater, but in a VR headset. In the next 1.1 release, we're planning to add support for 360-degree movies, bookmarks, and repositioning the browser window, while exploring additional voice input options and doing early design work for augmented reality devices. It's a work in progress!

Virtual and Augmented Reality (VR and AR) — known together as Mixed Reality (MR) — introduce a new dimension of physicality to current web security and privacy concerns. Problems that are already difficult on the 2D web, like permissions and clickjacking, become even more complex when users are immersed in a 3D experience. This is especially true on head-worn displays, where there is no analogous concept to the 2D “window,” and everything a user sees might be rendered by the web application. Compounding the difficulty of obtaining permission is the more intimate nature of the data collected by the additional sensors required to enable AR and VR experiences.

To enable immersive MR experiences, devices have sensors that not only capture information about the physical world around the user (far beyond the sensors common on mobile phones), but also capture personal details about the user and (possibly) bystanders. For example, these sensors could create detailed 3D maps of the physical world (either by using underlying platform capabilities, like the ability to intersect 3D rays with a model of the world around the user, or by direct camera access), infer biometric data like height and gait, and potentially find and recognize nearby faces in the numerous cameras typically present on these devices. The infrared sensor that detects when a head-mounted device is worn could eventually disclose more detailed biometrics like perspiration and pulse rate, and some devices already incorporate eye-tracking.

For each sensor, there are straightforward uses in MR applications. A model of the world allows devices to place content on surfaces, hide content under tables, or warn users if they’re about to walk into a wall in VR. A user’s height and gait are revealed by the precise 3D motion of their head and hands in the world, information that is essential for rendering content from the correct position in an immersive experience. Eye-tracking can support natural interaction and allow disabled people to navigate using just their eyes. Access to camera data allows applications to detect and track objects in the world, like equipment being repaired or props being used in a game.

Unfortunately, there are concerns associated with each sensor—a data leak involving users’ home data could violate their right against unreasonable search; height and gait can be used as unique personal identifiers; a malicious application could use biometric data like pupil tracking and perspiration to infer users’ political or sexual preferences or track the location of bystanders who have not given consent and may not even be aware they are being seen. This is particularly worrying when governments may have access to this data.

At Mozilla, our mission is empowering people on the internet. The web is an integral part of modern life, and individual security and privacy are fundamental rights. When there are potential negative consequences, browsers typically request consent. However, as we collect and pass more data over the internet, we’ve fallen behind on ensuring users give informed consent. This trend could have far-reaching impact on users as more and more of their interactions move onto MR devices.

Informed Consent

The idea of informed consent originates in medical ethics and the idea that individuals have the right to exercise control over critical aspects of their lives. The internet is now a fundamental piece of people’s lives and society in general, and at Mozilla we strongly believe that informed consent is a right on the internet as well. Unfortunately, providing informed consent for internet users suffers from similar issues as informed consent in medicine, where users may not understand what they are being told and may not be motivated to consider their choices in the moment. Most importantly, the immersive web must have a foundation of trust to start from.

Obtaining informed consent requires disclosure, comprehension, and voluntariness. In order to be informed, people must have all necessary information, presented in a way they understand; in this context, that includes the data being collected or transmitted and the risks of unauthorized disclosure. To be able to consent, a person must not only be able to understand the disclosed information, but also be able to make a decision free of coercion or manipulation.

Completely and accurately presenting the information required for informed consent is challenging. Permissions have already become too complex to easily communicate to users what data is gathered and the potential consequences of its use or misuse. For example, Pokémon Go uses access to the accelerometer and gyroscope in the phone to align the Pokémon with the player's orientation in the world and to determine whether they might be driving (in which case they shouldn't be playing the game). However, the same sensors can also be used by a bad actor to recover your password. These more subtle risks may be linked to more severe consequences.

Interactions between multiple sensors present an additional permissions challenge—what happens when we combine accelerometer data with biometric data and microphone access? What happens if we add camera access? Individually, these sensors have complex threats; taken together, it is difficult to convey the full breadth of possible risks without sounding hyperbolic.

Given the new challenges of the immersive web, we have an opportunity to rework how we approach permissions and consent to better empower people. While we don’t yet understand what to tell users, we propose four principles as the basis for approaching this problem: permissions should be progressive, accountable, comfortable, expressive (PACE).

Principles

Progressive

The idea of progressive web applications is well understood in the web community: sites are designed to work on a variety of devices, taking advantage of the capabilities of each and becoming progressively more capable as the device's capabilities better match their needs. In Mixed Reality, the capabilities of devices are much more varied, requiring more dramatic changes to sites that want to support as many people as possible. Beyond device capabilities, the intimate (even invasive) nature of AR sensing means that users may not want to grant the full capabilities of their device to all websites.

To both support a diversity of devices and respect user privacy, browsers need to embrace the idea of progressive permissions—giving people better controls over permission granting—by providing context for sensor access and enhancing the capabilities granted to websites gradually. This principle is closely related to the concept of informed consent; by requesting dangerous permissions out of context, sites risk providing incomplete disclosure and impairing comprehension. Today, by contrast, most applications and sites request all necessary permissions at install or startup, then persist those permissions indefinitely.

The idea of providing context for permissions is not new; some mobile apps and websites already present people with an explanation of the permissions they will request, at startup, with a description of why access is required. Users can then approve or deny each permission at that point. If the user later accesses a feature that requires a denied permission, the application can re-present the request.

Part of "progressivity" is responsibly collecting data only when needed and not persisting sensor use when it is not necessary. A person who has accepted microphone access to allow verbal input has not accepted unfettered microphone access for eavesdropping.

Therefore, progressive permissions should also be bidirectional, allowing users to turn permissions on and off repeatedly throughout the lifetime of a web app. In this example, a user might reasonably expect a site to use the microphone during input, and then stop using the sensor when input is complete—even if it still has permission to use it.
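
As a minimal sketch of what this could look like with standard web APIs (assuming a hypothetical voice-input button; this is not a prescribed implementation):

```js
// Sketch of progressive, bidirectional microphone access: the permission
// prompt appears in context, and the device is released after each use.
let micStream = null;

async function startVoiceInput() {
  // The browser prompts here, when the user taps the mic button, rather
  // than at page load.
  micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  // ... feed micStream into speech recognition ...
}

function stopVoiceInput() {
  if (!micStream) return;
  // Stopping the tracks releases the microphone; the site may still hold
  // the permission, but it is no longer listening.
  micStream.getTracks().forEach((track) => track.stop());
  micStream = null;
}
```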

Also consider an application that requests camera access. At home, I grant it. At work, I open the application and it immediately uses the camera, compromising confidential information. We don’t want to keep prompting, but want the user to be aware of, and have control over, when sensor data is available to the application, changing permissions as they desire, depending on their preferences, context and needs (in contrast to current permissions, such as the camera permissions in the figure below). This principle is mutually reinforced by accountability.

Accountable

Accountability pertains to what happens after a permission is granted. All active or granted permissions should be easy to inspect and easy to change. We envision a user interface that is simple to access and that lists:

current permissions

when each permission was approved/denied

data currently collected/monitored by the page

a toggle that allows easy switching between approval/denial of each permission (without requiring page reload)

Revocation should be straightforward, and only impact related features (revoking camera access should only affect features that require the camera, not prevent use of the entire site).

Additionally, when a website uses device resources, such as accessing files, there should be a method to hold the site accountable for resources accessed and/or modified. As browsers adopt new architectures to improve security through techniques like site isolation, identifying which pages are using which resources becomes easier, allowing browsers to report more accurate and granular usage data to users.

Examples of browsers continuing to execute JavaScript even after the browser is closed or the screen is turned off are troubling and violate accountability expectations. Some sensors, including motion and light sensors, aren’t protected by permissions and are exposed to JavaScript. These sensors also represent potential side channels for retrieving sensitive data and should be considered when designing accountability measures.
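
For example, motion data has historically been readable by any page with no prompt at all (some platforms have since begun gating it):

```js
// No permission prompt is required for this listener in most browsers
// (iOS 13+ added DeviceMotionEvent.requestPermission() as a gate).
window.addEventListener('devicemotion', (event) => {
  const { x, y, z } = event.accelerationIncludingGravity;
  // High-rate accelerometer samples like these are the raw material for
  // the keystroke-inference attacks mentioned elsewhere in this post.
  console.log(x, y, z);
});
```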

Comfortable

Users already report fatigue from excessive permissions requests. Embracing progressivity and accountability without taking this fatigue into account runs the risk of disrespecting users' attention and increasing this fatigue. Therefore, permissions must also be comfortable. When we talk about permissions being comfortable, we're explicitly referring to this need to balance user control with reduced friction. Interrupting users' tasks, asking for permission at the wrong times, and excessive permissions requests can lead people to "give up" and automatically accept permissions to "get on with it."

As we increase the amount and variety of information being sensed, we should consider alternatives to simple permission dialogs. For example, in some cases, browsers could use implicit permissions based on user action (e.g., pressing a button to take a picture might implicitly give camera access). In 3D immersive MR, where the user is using a head-worn display, permission requests that are presented in the immersive environment should provide a comfortable UX that is easily identified as being presented by the browser (as opposed to the page). If requests are jarring or visually uncomfortable, users may not take their time and consider them, but quickly accept (or dismiss) them to return to the immersive experience. Over time, we hope the web community will develop a consistent design language for various permissions across multiple browsers and environments.

Approaches to comfort can build on the previous principles: implicitly granting one kind of permission can be balanced by maintaining accountability and visibility of what data the site has access to, and by providing a simple and obvious way to examine and modify permissions.

Expressive

Expressiveness relates to the browser handling different permissions for different sensors differently, instead of assuming one size fits all (i.e., presenting a similar sort of prompt for any capability that needs user permission). The current permissions approach divides sensors into two categories: dangerous (requiring a prompt) and not (generally accessible without additional user input). Unfortunately, interactions between “not-dangerous” sensors, like the accelerometer and the touch screen used for input, can leak data like passwords (by watching the motion of the device when the user types)[1]. In an immersive context, devices have considerably more powerful sensors, resulting in more complex and difficult to predict interactions.

A possible solution to more expressive permissions is permission bundling, grouping related permissions together. However, this risks violating user expectations and could result in a less progressive approach.

Entering immersive mode will automatically require activating certain sensors; for example, a basic VR application will use motion sensors for rendering and be given an estimate of where the floor is so it can avoid placing virtual objects below the floor; from these, an application will be able to infer your height. These sorts of secondary inferences are not always so obvious. Even in a small study of five users, three participants believed that the only data collected by their VR device was either data they provided when creating an account or basic usage data (such as how frequently they use an application). Only two participants were aware that the device sensors collected and transmitted much more data. The richer the application, the more likely one or more of the sensors involved will be transmitting data that can be used to uniquely identify individuals.

One of these three participants explicitly stated that their VR system, an Oculus Rift, could not collect audio data.

Looking Forward

Accurately and completely explaining the data that’s being collected and potential consequences is central to acquiring informed consent, but there’s a danger that permissions prompts will become opaque legal waivers. As we add more sensors to devices and collect more personal and environmental data, it’s tempting to simply add more permission prompts. However, permission fatigue is already a serious issue.

When possible, we should identify opportunities for implicit consent. For example, you don’t have to give permission every time you move or click a mouse on the 2D web. When we do require explicit consent, platforms should provide a comfortable and consistent user experience.

The goal of permissions should be to obtain informed consent. In addition to designing technical solutions, we need to educate the public about the types of data collected by devices and the potential consequences. While this is necessary for making informed choices about permissions, it's not sufficient. We need to combine the three aspects of informed consent (disclosure, comprehension, voluntariness) with the four PACE principles (progressive, accountable, comfortable, expressive) to provide an immersive web experience that empowers people to take control of their privacy on the internet.

The strength of the web is the ability for people to casually and ephemerally browse pages and follow links while knowing that their browser makes this activity safe—this is the foundation of trust in the web. This foundation becomes even more important in the immersive web due to the potential new pathways for abuse of the rich, intimate data available from these devices.

Current events demonstrate the dangers of rampant data collection and misuse of personal data on the web; mixed reality devices, and the new kinds of data they generate, present an opportunity to change the conversation about permissions and consent on the web.

We propose the PACE principles to encourage MR enthusiasts and privacy researchers to consider new approaches to data collection that will inform and empower users while respecting their time and energy. These solutions will not all be technical, but will likely include education, advocacy, and design leadership. As VR and AR devices enter the mainstream tech environment, we should proactively explore the viability of new directions, rather than waiting and reacting to the greater damage that might come from future data breaches and abuse.

[1] In this specific case, and for this reason, the devicemotion API has been deprecated in favor of a new sensor API. ↩︎

This article is part four of the series that reviews the user testing conducted on Hubs by Mozilla, a social XR platform. Previous posts in this series have covered insights related to accessibility, user experience, and environmental design. The objective of this final post is to give an overview of how the Extended Mind and Mozilla collaborated to execute this study and make recommendations for best practices in user research on cross platform (2D and XR) devices.

PARTICIPANTS WILL MAKE OR BREAK THE STUDY

Research outcomes are driven by participant quality, so plan to spend a lot of time up front on recruiting. If you don't already have defined target users, pick a user profile and recruit against that. In this study, Jessica Outlaw and Tyesha Snow of The Extended Mind sought people who were tech savvy enough to use social media and communicate on smartphones daily, but did not require that they own head-mounted displays (HMDs) at home.

The researchers' approach was to recruit for the future user of Hubs by Mozilla, not the current user who might be an early adopter. Across the ten participants in the study, a broad range of professions were represented (3D artist, engineer, realtor, psychologist, and more), which in this case was ideal because Hubs exists as a standalone product. However, if Hubs were in an earlier stage where only concepts or wireframes could be shown to users, it would have been better to include people with VR expertise, because they could more easily imagine its potential.

In qualitative research, substantial insights can be generated from between six and twelve users. Beyond twelve users, there tends to be redundancy in the feedback, which doesn’t justify the extra costs of recruiting and interviewing those folks. In general, there is more value in running two smaller studies of six people at different iterations of product development, rather than just one study with a larger sample size. In this study, there were ten participants, who provided both diversity of viewpoints and enough consistency that strong themes emerged.

The researchers wanted to test Hubs' multi-user function by recruiting people to come in pairs. Having friends and romantic partners participate in the study allowed The Extended Mind to observe authentic interactions between people. While many of them were new to XR, and some were really impressed by the immersive nature of the VR headset, they were grounded in a real experience of talking with a close companion.

For testing a social XR product, consider having people come in with someone they already know. Beyond increasing user comfort, this was also more efficient for the researchers: they completed research with ten people in a single day, which is a lot in user testing.

Summary of recruiting recommendations

Recruit participants who represent the future target user of your product (identifying user profiles is often a separate research project in user-centered design)

The farther along the product is in development, the less technologically sophisticated users need to be

You can achieve important insights with as few as six participants.

To test social products, consider bringing in people in pairs. This can also be efficient for the researchers.

COLLECTING DATA

It’s important to make users feel welcome when they arrive. Offer them water or snacks. Pay them an honorarium for their time. Give them payment before the interviews begin so that they know their payment is not conditional on them saying nice things about your product. In fact, give them explicit permission to say negative things about the product. Participants tend to want to please researchers so let them know you want their honest feedback. Let them know up front that they can end the study, especially if they become uncomfortable or motion sick.

The Extended Mind asked people to sign a consent form for audio, video, and screen recording. All forms should give people the choice to opt out from recordings.

In the Hubs by Mozilla study, the format of each interview session was:

Welcome and pre-Hubs interview on how participants use technology (20 min)

Use Hubs on 3 different devices (40 min)

Closing interview on their impressions of Hubs (30 min)

Pairs were together for the opening and closing interviews, but separated into different conference rooms for actual product usage. Jessica and Tyesha each stayed with a participant at all times to observe their behaviors in Hubs and then aggregated their notes afterward.

One essential point was to give people some experience with the Oculus Go before actually showing them Hubs; this was part of the welcome and pre-Hubs interview in this study. Due to the nascent stage of VR, participants need extra time to learn to navigate the menus and controllers. Before arriving in any XR experience, people are going to need some familiarity with the device. As the prevalence of HMDs increases, taking time to give people an orientation will become less and less necessary. In the meantime, setting this baseline helps users understand where your experience sits in the context of the device's ecosystem.

Summary of data collection recommendations

Prioritize participant comfort

Signal that you are interested in their genuine feedback

Ask participants for consent to record them

Conduct pre-test and post-test interviews with participants to get the most insights

Allow time for people to get used to navigating menus and using the controller on new HMDs before testing your experience.

GENERATING INSIGHTS

Once all the interviews have been completed, it's time to start analyzing the data. It is important to come up with a narrative to describe the user experience. In this example, Hubs was found to be accessible, fun, and good for close conversations, and participants' experiences were influenced by the environmental design. Those themes emerged early on and were supported by multiple data points across participants.

Using people's actual words is more impactful than paraphrasing them or just reporting your own observations, because of the emotional impact of a first-person experience.

There were instances where people made similar statements but each used their own words, which helps bolster the overall point. For example, three different participants said they thought Hubs improved communication with their companion, but each had a different way of conveying it:

[Hubs is] "better than a phone call."

"Texting doesn't capture our full [expression]."

"This makes it easier to talk because there are visual cues."

Attempt to weave together multiple quotes to support each of the themes from the research.

User testing will uncover new uses of your product, and people will likely spontaneously brainstorm new features they want. Expect users to surprise you with their feedback. You may have planned to test and iterate on the UI of a particular page, only to learn in the research that the page isn't desirable and should be removed entirely.

Summary of generating insights recommendations

Direct quotes that convey the emotion of the user in the moment are an important tool of qualitative research

Pictures, videos, and screen captures can help tell the story of the users’ experiences

Be prepared to be surprised by user feedback

Mozilla & The Extended Mind Collaboration

In this study, Mozilla partnered with The Extended Mind to conduct the research and deliver recommendations on how to improve the Hubs product. For the day of testing, two Hubs developers observed all research sessions and had the opportunity to ask the participants questions. Having Mozilla team members onsite during testing let everyone sync up between test sessions and led to important revisions about how to re-phrase questions, which devices to test on, and more.

Because Jessica and Tyesha were outside the core Hubs team, they were closer to the user perspective and could take a more naturalistic approach to learning about the product. Their goals were to represent the user perspective across the entire project and provide strategic insights that the development team could apply.

This post has provided some background on the Hubs by Mozilla user research study and given recommendations on best practices for people who are interested in conducting their own XR research. Get in touch at contact@extendedmind.io with research questions, and try Hubs with a friend. You can access it via https://hubs.mozilla.com/.

This is the final article in a series that reviews user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product. Links to the previous posts are below.

Today we’re thrilled to announce the beta release of Spoke: the easiest way to create your own custom social 3D scenes you can use with Hubs.

Over the last year, our Social Mixed Reality team has been developing Hubs, a WebVR-based social experience that runs right in your browser. In Hubs, you can communicate naturally in VR or on your phone or PC by simply sharing a link.

Along the way, we’ve added features that enable social presence, self-expression, and content sharing. We’ve also offered a variety of scenes to choose from, like a castle space, an atrium, and even a wide open space high in the sky.

However, as we hinted at earlier in the year, we think creating virtual scenes should be easy for anyone, as easy as creating your first webpage.

Spoke lets you quickly take all the amazing 3D content from across the web from sites like Sketchfab and Google Poly and compose it into a custom scene with your own personal touch. You can also use your own 3D models, exported as glTF. The scenes you create can be published, shared, and used in Hubs in just a few clicks. It takes as little as 5 minutes to create a scene and meet up with others in VR. Don’t believe us? Check out our 5 minute tutorial to see how easy it is.

With Spoke, all of the freely-licensed 3D content by thousands of amazing and generous 3D artists can be composed into places you can visit together in VR. We’ve made it easy to import and arrange your own 3D content as well. In a few clicks, you can meet up in a custom 3D scene, in VR, all by just sharing a link. And since you’re in Hubs, you can draw, bring in content from the web, or even take selfies with one another!

We’re beyond excited to get Spoke into your hands, and we can’t wait to see the amazing scenes you create. We’ll be adding more capabilities to Spoke over the coming months which will open up even more possibilities. As always, please join us on our Discord server or file a GitHub issue if you have feedback.

In previous research, The Extended Mind has documented how a 3D space automatically signals to people the rules of behavior. One of the key findings of that research is that when there is synchrony in the design of a space, it helps communicate behavioral norms to visitors. That means that when there is complementarity among content, affordances, and avatars, it helps people learn how to act. One example would be creating a gym environment (content), with weights (affordances), but only letting avatars dress in tuxedos and evening gowns. The contradiction in people's appearance could demotivate weight-lifting (the desired behavior).

This article shares learnings from the Hubs by Mozilla user research on how the different locations participants visited impacted their behavior. Briefly, the researchers observed five pairs of participants in multiple 3D environments and watched as they navigated new ways of interacting with one another. In this particular study, participants visited a medieval fantasy world, a meeting room, an atrium, and a rooftop bunker.

To read more about the details and setup of the user study, read the intro blog post here.

The key environmental design insights are:

Users want to explore

The size of the space influences the type of conversation that users have

Objects in the environment shaped people’s expectations of what the space was for

The rest of the article will provide additional information on each of the insights.

**Anticipate that people will want to explore upon arrival**

Users immediately began exploring the space and quickly taught themselves to move. This might have been because people were new to Hubs by Mozilla and social VR more generally. The general takeaway is that XR creators should give people something to discover once they arrive. Finding something will be satisfying to the user. Platforms could also embrace novelty and give people something new to discover every time they visit. E.g., in Hubs, there is a rubber duck. Perhaps the placement of the duck could be randomly generated so people would have to look for it every time they arrive.
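
As a toy illustration of that idea (this is not how Hubs actually places the duck), an A-Frame component could randomize an object's position each time the scene loads:

```js
// Hypothetical A-Frame component that scatters an entity to a random spot
// on the floor on each visit; Hubs' real duck is not placed this way.
AFRAME.registerComponent('random-placement', {
  init: function () {
    const range = 5; // meters from the room's origin, chosen arbitrarily
    this.el.setAttribute('position', {
      x: (Math.random() - 0.5) * 2 * range,
      y: 0,
      z: (Math.random() - 0.5) * 2 * range
    });
  }
});
// Usage: <a-entity gltf-model="#duck" random-placement></a-entity>
```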

One thing to consider from a technical perspective: participants in this study didn't grasp that moving away from their companion would make it harder to hear that person. They made comments to the researchers and to each other about the spatialized audio feature:

“You have to be close to me for me to hear you”

While spatialized audio has multiple benefits and adds a dimension of presence to immersive worlds, in this case people's lack of understanding meant that they sometimes had sound issues. When combined with the tendency to start exploring immediately after arriving ahead of a companion, this sometimes made it challenging for people to connect with one another. This leads to the second insight, about the size of the space.

**Smaller spaces were easier for close conversations**

When people arrived in the smaller spaces, it was easier for them to find their companion, and they were less likely to get lost. One world that was tested, called Medieval Fantasy Book, was inviting with warm colors, but it was large and people wandered off. That type of exploration sometimes got in the way of people enjoying conversations:

“I want to look at her robot face, but it’s hard because she keeps moving.”

This is another opportunity to consider use cases for any social VR environment. If the use case is conversation, smaller rooms lead to more intimate talks. Participants who were new to VR were able to articulate this insight when describing their experience.

"The size of the space alludes to…[the] type of conversation. Being out in this bigger space feels more public, but when we were in the office, it feels more intimate."

This quote illustrates how size signaled privacy to users. It is also consistent with past research from The Extended Mind on how to configure a space to match users' expectations.

…when you go to a large city, the avenues are really wide which means a lot of traffic and people. vs. small streets means more residential, less traffic, more privacy. All of those rules still apply [to XR].

The lesson for all creators is that the clearer they are on the use case of a space, the easier it should be to build it. In fact, participants were excited about the prospect of identifying or customizing their own spaces for a diverse set of activities or for meeting certain people:

“Find the best environment that suits what you want to do...

There is a final insight on how the environment shapes user behavior: objects change people's perceptions, including around big concepts like privacy.

**Objects shaped people's expectations of what the space was for**

There were two particular Hubs objects that users responded to in interesting ways: the rubber duck and a door. What's interesting to note is that in both cases, participants interpreted these objects on their own, with no one guiding them.

The rubber duck is unique to Hubs and was something that users quickly became attached to. When a participant clicked on the duck, it quacked and replicated itself, which motivated the users to click over and over again. It was a playful fidget-y type object, which helped users understand that it was fine to just sit and laugh with their companion and that they didn’t have to “do something” while they visited Hubs.

However, other objects led users to make incorrect assumptions about the privacy of Hubs. The presence of a door led a user to say:

“I thought opening one of those doors would lead me to a more public area.”

In reality, the door was not functional. Hubs’ locations are entirely private places accessible only via a unique URL.

What's relevant to all creators is that their environmental design is open to interpretation by visitors. Even if creators attempt to scrub out objects and keep environments sparse, that will just lead users to make different assumptions about what the space is for. One pair of participants decided that one of the more basic Hubs spaces reminded them of an interrogation room, and they constructed an elaborate story for themselves that revolved around it.

Summary

Environmental cues can shape user expectations and behaviors when they enter an immersive space. In this test with Hubs by Mozilla, large locations led people to roam, and small places focused people's attention on each other. The contents of the room also influenced the topics of conversation and how private people believed their discussions might be.

All of this indicates that XR creators should consider the subtle messages their environments send to users. There's value in user testing with multiple participants from different backgrounds, to understand how their interpretations vary (or don't) from the creator's intentions. Testing doesn't have to be a huge undertaking requiring massive development hours in response; it may uncover small things that can be revised rapidly. For example, small tweaks to lighting and sound can change how people experience a space: most people don't find dim lighting inviting, and a test could surface that early enough for developers to amp up the brightness before an immersive product launches.

The final article in this blog series will detail how this Hubs by Mozilla research study was executed and recommend best practices for conducting usability research across 2D and VR devices.

This article is part three of the series that reviews the user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product.

To read part one on accessibility, click here. To read part two on the personal connections and playfulness of Hubs, click here.

D4: I finally got #PoseNet running within an @AFrameVR instance! I should be able to puppeteer a 3D avatar with PoseNet now, which will enable me to stream myself coding on SeeClarke via mixed reality…it's going to be epic!

Friday afternoon demo:Hacky AR app which tracks a maker on my head to overlay sagittal MRI images, which are scrolled through with the slider at the bottom. Astoundingly, just javascript & html using @aframevr! pic.twitter.com/cSdu99KVPy— 𝐝𝐚𝐧𝐧𝐲 (@walkerdanny) August 31, 2018

Though I should tweet more about my research: I'm currently adapting a visual search interface (using #dataviz for result representation) into an #VR environment with @aframevr. Screenshot is showing VERY early work. Still need to figure out input and evaluation. Input welcome! pic.twitter.com/jCuDQDIXZv— Maurice Schleußinger (@m_schleussinger) September 6, 2018

I've entered my first #WebXR@Virtuleap hackathon, and my first hackathon ever with my VRBookReader app and platform. It's still in alpha but check it out and vote in case you find it promising or even just interesting. https://t.co/XkFdqaQ6oC— Stefan Petrovic (@stefanpetr_dpi) July 18, 2018

I don't know why I created this but I am sure I gonna use it some where in the game just around 2kb for this (+ aframe lib) - Random color generated and gives as light color to see the colors on the wall ! #gamedev pic.twitter.com/UcDOsvL6bD— Karan Ganesan (@karanganesan) August 22, 2018

Hubs by Mozilla lets people meet in a shared 360-environment using just their browser. Hubs works on any device, from head-mounted displays like the HTC Vive to 2D devices like laptops and mobile phones. Using WebVR, a JavaScript API, Mozilla is making virtual interactions with avatars accessible via Firefox and the other browsers that people use every day.
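As a rough illustration of what "works on any device" means in practice, a WebVR page can feature-detect a headset and otherwise fall back to an ordinary 2D view. The snippet below is a generic sketch of that WebVR 1.1 pattern, not code from Hubs itself:

```js
// Generic WebVR 1.1 feature detection (not Hubs source code).
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then(function (displays) {
    if (displays.length > 0) {
      // A headset is connected; the page can request VR presentation on it.
      console.log('VR display found:', displays[0].displayName);
    } else {
      // WebVR is supported but no headset is attached: render in 2D.
      console.log('No headset connected; using the 2D fallback.');
    }
  });
} else {
  // No WebVR at all: the page still works as a normal 2D site.
  console.log('WebVR unavailable; using the 2D fallback.');
}
```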

In the course of building the first online social platform for VR and AR on the web, Mozilla wanted to confirm it was building a platform that would bring people together in a low-friction, safe, and scalable way. With their years of experience and seminal studies examining the successes and pitfalls of social VR systems across the ecosystem, Jessica Outlaw and Tyesha Snow of The Extended Mind set out to generate insights about the user experience and deliver recommendations for improving the Hubs product.

BACKGROUND ON THE RESEARCH STUDY

In July 2018, The Extended Mind recruited five pairs of people (10 total) to come to their office in Portland, OR and demo Hubs on their own laptops, tablets, and mobile phones. We provided them with head-mounted displays (HTC Vive, Oculus Rift & Go) to use as well.

Users were a relatively tech-savvy crowd and represented a range of professions, from 3D artist and engineer to realtor and psychologist. Participants in the study were all successful in entering Hubs from every device and had a lot of fun exploring the virtual environment with their companion’s avatar. Some of the participants in their early twenties also made a point of saying that Hubs was better than texting or a phone call because:

“This makes it easier to talk because there are visual cues.”

And…

“Texting doesn’t capture our full [expression]”

In this series of blog posts, The Extended Mind researchers will cover some of the research findings about the first-time user experience of trying Hubs. There are surprising findings about how the environment shaped user behavior, as well as best practices for usability in virtual reality to share across the industry.

BROWSER-BASED VR (NO APP INSTALL REQUIRED)

Today, the focus is on how the accessibility of Hubs via a browser differentiates it from other social VR apps, as well as from 2D communication apps like Skype, BlueJeans, and Zoom.

The process for creating a room and inviting a friend begins at hubs.mozilla.com. Once there, participants generated a link to their private room and then copied and pasted that link into their existing communication apps, such as iMessage or e-mail.

Once their companion received the link, they followed the instructions and met the person who invited them in a 360-environment. This process worked for HMDs, computers, and mobile phones. When participants were asked afterward about the ease of use of Hubs, accessibility via link was listed as a top benefit.

“It’s pretty cool that it’s as easy as copy and pasting a link.”

And

“I’m very accustomed to virtual spaces having their own menu and software boot up and whole process to get to, but you open a link. That’s really cool. Simple.”
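For creators building a similar flow, the browser half of "copy a link" is a one-liner with the standard Clipboard API. The sketch below is generic, and the room URL is a made-up example, not a real Hubs address:

```js
// Illustrative only: copy an invite link to the clipboard so the user
// can paste it into iMessage, e-mail, or any other messaging app.
// The URL here is a hypothetical example, not a real room.
const inviteUrl = 'https://hubs.mozilla.com/room/example';
navigator.clipboard.writeText(inviteUrl)
  .then(() => console.log('Invite link copied.'))
  .catch(() => console.log('Clipboard blocked; show the link for manual copying.'));
```

(Note that the Clipboard API requires a secure context and a user gesture, which a click on a share button provides.)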

Some believed that because links are already familiar to most people, they would be able to persuade their less technologically sophisticated friends & family members to meet them in Hubs.

Another benefit of using the browser is that one is already installed on people’s devices. Obstacles to app installation range from difficulty finding apps in the app store to lack of space on a hard drive. One person noted that IT must approve any app she installs on her work computer; with Hubs, she could start right away without jumping that hurdle.

Because Hubs relies on people’s existing mental models of how hyperlinks work, only requires an internet browser (meaning no app installation), and is accessible from an XR or 2D device, it is arguably the most accessible communication platform today. It could be the first digital experience that familiarizes people with the concepts of 360 virtual spaces and interacting with avatars, subsequently launching them into further exploration of virtual and extended reality.

Now that you've got a sense of the capabilities of Hubs, the next blog posts will cover more specific findings about how people used it for conversation and how the environment shaped interactions.

Sorry, I haven’t been able to get these roundups out! Been focusing on building an A-Frame game and working on A-Frame itself. This is part 1 of 2 of catching up. I plan on building a tool to help me get these out faster and more frequently.

Miscellaneous

@supermediumvr made WebVR suddenly viable in my mind, by making it feel like a native desktop experience.

Have started tinkering with @aframevr, super easy to get a scene going, not as easy to get interaction. I've gotten distracted since but will have to pick it up again soon.— Andreas Aronsson (@BOLL7708) October 7, 2018

Firefox Reality 1.0.1 is now available for download in the Viveport, Oculus, and Daydream app stores. This is a minor point release, focused on fixing several performance issues and adding crash reporting UI and (thanks to popular request!) a reclined viewing mode.

We’ve been collecting feedback from users, and are working on a more fully-featured version for November with performance improvements, bookmarks, and an improved movie/theater mode (including 180/360 video support).

Keep the feedback coming, and don't forget to check out new content weekly!

In many user experience (UX) studies, the researchers give the participants a task and then observe what happens next. Most research participants are earnest and usually attempt to follow instructions. In this study, however, research participants mostly ignored instructions and simply started goofing off with each other once they entered the immersive space, testing the limits of embodiment.

The goal of this blog post is to share insights from the Hubs by Mozilla usability study that other XR creators can apply to building a multi-user space.

The Extended Mind recruited pairs of people who communicate online with each other every day, which led to testing Hubs with people who have very close connections: three couples, one pair of roommates, and one set of high school best friends. The Extended Mind recruited relatively intimate pairs because they wanted to understand the potential of Hubs as a communication platform for people who already have good relationships. They also believed this yielded more insight into how people would use Hubs in a natural setting than bringing in one person at a time and asking them to hang out in VR with a stranger they had just met.

The two key insights that this blog post will cover are the ease of conversation that people had in Hubs and the playfulness that they embodied when using it.

Conversation Felt Natural

When people entered Hubs, the first thing they did was look around to find the other person in the space. Regardless of whether they were on mobile, laptop, tablet, or in a VR headset, their primary goal was to connect. Once they located the other person, they immediately gave their impressions of the other person’s avatar and asked what they looked like to their companion. There was an element of fun in finding the other person and then discussing avatar appearances, including one romantic partner sincerely telling his companion:

“You are adorable,”

…which indicates that his warm feelings for her in the real world easily translated to her avatar.

The researchers created conversational prompts for all of the research participants such as “Plan a potential vacation together,” but participants ignored the instructions and just talked about whatever caught their attention. Mostly people were self-directed in exploring their capabilities in the environment and wanted to communicate with their companion. They relished having visual cues from the other person and experiencing embodiment:

“Having a hand to move around felt more connected. Especially when we both had hands.”

“It felt like we were next to each other.”

The youngest participants in the study were in their early twenties and stated that they avoided making phone calls. They rated Hubs more highly than a phone conversation due to the improved sense of connection it gave them.

[Hubs is] “better than a phone call.”

Some even considered it superior to texting for self-expression:

“Texting doesn’t capture our full [expression]”

The data from this study shows that communication using 2D devices and VR headsets has strong potential for personal conversation among friends and partners. People appeared to feel strong connections with their partners in the space. They wanted to revisit the space in the future with groups of close friends and share it with them as well.

Participants Had Fun

Because participants felt comfortable in the space and confident in their ability to express themselves, they relaxed during the testing session and let their sense of humor show through.

The researchers observed a lot of joke-telling and goofiness from people. A consequence of feeling embodied in the VR headset was acting in ways to entertain their companion:

“Physical humor works here.”

Users also discovered that Hubs has a rubber duck mascot that quacks when clicked and replicates itself. Playing with the duck was very popular.

“The duck makes a delightful sound.”

“Having things to play with is good.”

[Image: the rubber ducks multiplying quickly.]

A future research question could be to determine the right balance between giving people a fidget object like the duck versus a formal board game or card game. The lack of formality in Hubs appeared to actually bolster the storytelling that users brought to it. Two users established a whole rubber-duck Law & Order-style TV show, giving the ducks roles:

“Good cop duckie, bad cop duckie.”

People either forgot or ignored the researchers’ instructions to plan a vacation or respond to other prompts because they were immersed in the fun and connection together. However, watching the users tell each other stories and experiment in the space was more entertaining and led to more insights.

While it wasn’t actually tested in this study, there are ways to add media and GIFs to Hubs to further enhance communication and comedy.

Summary: A Private Space That Let People Be Themselves

The Extended Mind believes that the privacy of the Hubs space bolstered people’s intimate experiences. Because people must have a unique URL to gain access, the number of people in the room was limited. That gave people a sense of control and likely led to them feeling comfortable experimenting with the layers of embodiment and having fun with each other.

The next blog post will cover additional insights about how the different environments in Hubs impacted their behavior and what other XR creators can apply to their own work.

This article is part two of the series that reviews the user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product.

To read part one of the blog series overview, which focused on accessibility, click here.

As we covered in our last update, we recently added the ability for you to bring images, videos, and 3D models into the rooms you create in Hubs. This is a great way to bring content to view together in your virtual space, and it all works right in your browser.

We’re excited to announce two new features today that will further enrich the ways you can connect and collaborate in rooms you create in Hubs: drawing and easy photo uploads.

Hubs now has a pen tool you can use at any time to start drawing in 3D space. This is a great way to express ideas, spark your creativity, or just doodle around. You can draw by holding the pen in your hand if you are in Mixed Reality, or draw using your PC’s mouse or trackpad.

The new pen tool shines when combined with our media support. You can draw on images together or make a 3D sketch on top of a model from Sketchfab. You can also draw all over the walls if you want!

You can easily change the size and color of your pen strokes. You can write out text or even model out a rough 3D sketch.
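To make the idea concrete, here is a heavily simplified sketch of mouse-driven 3D drawing in A-Frame: while the button is held, it records a point in front of the camera each frame and rebuilds a line through the recorded points. It is a toy under stated assumptions, not the Hubs pen implementation:

```html
<script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>
<script>
  // Toy pen (not the Hubs pen): record stroke points while the mouse is down.
  AFRAME.registerComponent('simple-pen', {
    init: function () {
      this.points = [];
      this.line = null;
      this.drawing = false;
      window.addEventListener('mousedown', () => { this.drawing = true; this.points = []; });
      window.addEventListener('mouseup', () => { this.drawing = false; });
    },
    tick: function () {
      if (!this.drawing) { return; }
      // Take a point one meter in front of the camera, converted to world space.
      const camera = this.el.sceneEl.camera;
      this.points.push(new THREE.Vector3(0, 0, -1).applyMatrix4(camera.matrixWorld));
      if (this.points.length < 2) { return; }
      // Rebuilding the geometry every frame is wasteful, but keeps the sketch short.
      if (this.line) { this.el.object3D.remove(this.line); }
      const geometry = new THREE.BufferGeometry().setFromPoints(this.points);
      this.line = new THREE.Line(geometry, new THREE.LineBasicMaterial({ color: '#e33' }));
      this.el.object3D.add(this.line);
    }
  });
</script>

<a-scene simple-pen>
  <a-camera></a-camera>
</a-scene>
```

Moving the camera (or, with minor changes, a tracked controller) while holding the button traces a visible stroke through the scene.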

If you’re using a phone, we’ve also added an easy way to quickly upload photos or take a snapshot with your phone’s camera. Just tap the photos button at the bottom of the screen to jump right into a photo picker.

This is a great way to share photos from your library or take a quick picture of something nearby. Selfies can be fun too, but don’t be surprised if people draw on your photo!
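On the web, a photo button like this is typically built on the standard file-input pattern. The snippet below shows that generic pattern with made-up element IDs; it is not the actual Hubs UI code:

```html
<!-- Generic mobile photo-picker pattern (IDs are illustrative, not from Hubs). -->
<input type="file" accept="image/*" id="photo-input" hidden>
<button id="photo-button">Photos</button>
<script>
  const input = document.getElementById('photo-input');
  document.getElementById('photo-button')
    .addEventListener('click', () => input.click());
  input.addEventListener('change', (evt) => {
    const file = evt.target.files[0];
    if (!file) { return; }
    // On phones, accept="image/*" opens the photo library or camera picker.
    // From here the image could be uploaded into the room.
    console.log('Selected photo:', file.name, file.type);
  });
</script>
```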

We hope you have fun with these new features. As always, please join us in the #social channel on the WebVR Slack or file a GitHub issue if you have feedback!