At Leap Motion, we’re making VR/AR development easier with Widgets: fundamental UI building blocks for Unity. This is part 5 of our Planetarium series.

Daniel here again! This time around, I’ll talk a bit about how we handled integrating the UI Widgets into the data model for Planetarium, and what this means for you.

The first iteration of Widgets we released to developers was cut almost directly from a set of internal interaction design experiments. They’re useful for quickly setting up a virtual reality interface, but they’re missing some pieces needed to make them usable in a robust production application. When we sat down to build Planetarium, the need for an explicit event messaging and data-binding layer became obvious.

We made a lot of use of editor fields to make customizing and connecting widgets easier.

There are two ways a developer might want to interact with a UI Widget. The first is detecting interaction events with the widget – like “Pressed,” “Released,” and “Changed.” Events are quick and dirty and they’re great for simple interactions, like kicking off sound effects, or buttons used to open doors in a game level. The other is connecting the Widget directly to a data model, having it always display the current state of the data model, and having any user input to the Widget be reflected in that model. This is the pattern we use when we’re controlling data like asterism opacities and false-color saturation.

Wilbur did a great job of building obvious interaction end-point functions into the original Widgets. There are clearly named, short functions like OnButtonPressed. In the original release, these functions are where developers would add their code detailing what the Widgets controlled. Making life even easier for us, C# has some simple patterns for generating and subscribing to events. I defined a few interfaces that we agreed every Widget would have to implement – ones that required definitions for Start, End, and Change events – and added implementations to the existing widgets. There’s a nice inheritance structure to the Widgets that meant we could implement the events once in classes like ButtonBase and SliderBase, and have them work in our more specialized versions of the Widgets. The events carry a payload of a WidgetEventArg object that wraps the relevant data about the Widget’s new state after the interaction.
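The pattern described above can be sketched in plain C#. This is illustrative only — the interface and member names (IWidgetEvents, StartHandler, and so on) are assumptions for the sketch, not the actual Widgets API:

```csharp
using System;

// Hypothetical payload wrapping the Widget's state after an interaction.
public class WidgetEventArgs<T> : EventArgs
{
    public T CurrentValue { get; private set; }
    public WidgetEventArgs(T currentValue) { CurrentValue = currentValue; }
}

// Illustrative contract every Widget implements: events for the
// start, end, and change of an interaction.
public interface IWidgetEvents<T>
{
    event EventHandler<WidgetEventArgs<T>> StartHandler;
    event EventHandler<WidgetEventArgs<T>> EndHandler;
    event EventHandler<WidgetEventArgs<T>> ChangeHandler;
}

// A base class implements the events once; specialized Widgets
// inherit them, mirroring the ButtonBase/SliderBase structure.
public class ButtonBase : IWidgetEvents<bool>
{
    public event EventHandler<WidgetEventArgs<bool>> StartHandler;
    public event EventHandler<WidgetEventArgs<bool>> EndHandler;
    public event EventHandler<WidgetEventArgs<bool>> ChangeHandler;

    // Interaction end-point functions fire the corresponding events.
    public void OnButtonPressed()
    {
        if (StartHandler != null)
            StartHandler(this, new WidgetEventArgs<bool>(true));
    }

    public void OnButtonReleased()
    {
        if (EndHandler != null)
            EndHandler(this, new WidgetEventArgs<bool>(false));
    }
}
```

A consumer then subscribes in one line — `button.StartHandler += (sender, args) => PlayClickSound();` — which is exactly the kind of quick-and-dirty hookup that events are good for.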

However, when using events while trying to stay in sync with a data model that’s changed by multiple sources or requires data validation, problems tend to crop up where your UI and your data fall out of sync. To solve this, we developed a relatively lightweight data-binding layer to connect generic Widgets directly to the application’s specific data. This involved creating an abstract class for a DataBinder to be implemented by the end-user developer. (Why abstract classes rather than interfaces? To allow for easy integration with the Unity editor, which can’t serialize interfaces or generic types into accessible fields.)
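A minimal sketch of that abstract class, assuming names like DataBinder and GetCurrentData/SetDataModel (the shipped API may differ). In Unity, the abstract class would also derive from MonoBehaviour so concrete subclasses can be dragged into serialized editor fields — the very thing interfaces and generic types can’t do:

```csharp
using System;

// Illustrative abstract data-binder: the end-user developer
// implements only the getter and setter.
public abstract class DataBinder<T>
{
    // Read the current value out of the application's data model.
    public abstract T GetCurrentData();

    // Write a value into the data model (validation lives here).
    protected abstract void SetDataModel(T value);

    // Widgets push user input through this entry point.
    public void SetCurrentData(T value)
    {
        SetDataModel(value);
    }
}

// A trivial concrete binder over a plain field, just to show the
// shape of what a developer writes.
public class FloatFieldBinder : DataBinder<float>
{
    private float backingField;

    public override float GetCurrentData() { return backingField; }

    protected override void SetDataModel(float value)
    {
        // Example validation: clamp to [0, 1], as you might for an opacity.
        backingField = Math.Max(0f, Math.Min(1f, value));
    }
}
```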

With this setup, developers need only implement a getter and setter for the piece of data being interacted with by the Widget. Widgets have open, optional fields in the Unity editor where developers can drag in a data-binder, and from then on the Widget will automatically update its view if the data changes, and update the data if the user modifies the Widget. It handles all the pushing and pulling of state behind the scenes using the accessors that you define.
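The behind-the-scenes push and pull can be sketched like this — a self-contained toy (all names hypothetical) that stands in for a bound slider, with the developer-supplied accessors passed as delegates for brevity:

```csharp
using System;

// Sketch of a Widget's sync loop against developer-supplied accessors.
public class BoundSlider
{
    private readonly Func<float> getData;    // developer-defined getter
    private readonly Action<float> setData;  // developer-defined setter

    public float DisplayedValue { get; private set; }

    public BoundSlider(Func<float> getData, Action<float> setData)
    {
        this.getData = getData;
        this.setData = setData;
    }

    // In Unity this would run every frame in Update():
    // pull model -> view, so the Widget always shows current state.
    public void SyncFromModel()
    {
        DisplayedValue = getData();
    }

    // Called when the user moves the slider: push view -> model,
    // then re-pull so the view reflects any validation the setter applied.
    public void OnUserInput(float newValue)
    {
        setData(newValue);
        DisplayedValue = getData();
    }
}
```

Because the Widget always re-reads through the getter after writing, validation or changes from other sources can never leave the view and the model disagreeing — which is the whole point of the binding layer.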

Having these easy-to-hook-up data-binders connected with the Planetarium data meant that I could work on building new features for the planetarium, while Barrett could work on a new feature in the Arm HUD. We had a well-defined set of expectations about when and how data would flow through the system. When we’d go to have our code meet in the middle, we rarely had to do more than drag-and-drop a few items in the editor, which let us move a lot more quickly than if all our systems were tightly bound to each other’s architectures.

Next up, Wilbur Yu will talk a bit about how the Widgets were structured, which was a big part of what made adding data-binding as straightforward as it was.

Touchscreens, knobs and buttons, motion interfaces – there are lots of ways that human beings and technology can work together. As a developer experience engineer at Leap Motion, Daniel loves to explore new ways that we can communicate with (and through) technology, particularly in large spaces where the personal and public collide.