UPDATE: The speculation didn’t last long. Valve has just released their OpenVR SDK which includes documentation for the Compositor. The actual implementation differs in some interesting ways, but the Use and Features section, below, is still a good summary of what Valve and Oculus are trying to achieve here. More details are at the end of this article.

INTRODUCTION

In March, Valve released a new concept into SteamVR called the VR Compositor. Like everything else at this point, the specification is not yet public. (So insert the standard speculative disclaimers here. If I flubbed something, please be forgiving, but let me know.) It shouldn't be too hard for us to piece together what its function and purpose might be.

VR Compositor:

This is a new component of SteamVR that simplifies the process of adding VR support to an application.

Continues to draw an environment even if the application hangs.

Simplifies handing off from one application to another without full screen context changes by owning the window on the headset.

-Programmer Joe (Valve)

Let’s break that down a bit. The compositor grabs the VR display, owns it, and continues running. When a compositor-aware application wants to use the HMD, it goes to the compositor to request access to the HMD. The compositor hands a buffer to the application and tells the application to render into that buffer.

The buffer that the Compositor hands to the application can be thought of as a layer in Photoshop. I realize that the next analogy may be a little dated, but it is also like using multiple transparent overlays on a classroom overhead projector. Each transparency is a layer that adds something to the overall scene, but also can be independently added or removed at any time.

So you’re probably wondering how layers work for VR. That’s the tricky part. Each layer is rendered and then placed over the previous layer, with the highest priority layer being placed on top. Anything drawn in one layer will cover up anything that was rendered directly below it. If the order of the layers matches the depth of the objects on screen, this works out very well.
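To make the ordering rule concrete, here is a toy sketch in Python (invented data, not Valve's implementation) of priority-based compositing, where transparent texels let the layer below show through:

```python
# A minimal sketch of priority-based layer compositing. Each "pixel" is
# either None (transparent) or a color label; higher-priority layers are
# painted last, so their opaque texels win.

def composite(layers):
    """layers: list of (priority, pixels), pixels being color-or-None values.
    Returns the flattened pixel list."""
    width = len(layers[0][1])
    result = [None] * width
    # Paint lowest priority first; later (higher-priority) layers overwrite.
    for _, pixels in sorted(layers, key=lambda layer: layer[0]):
        for i, color in enumerate(pixels):
            if color is not None:  # transparent texels leave lower layers visible
                result[i] = color
    return result

game_scene = (0, ["sky", "sky", "rock", "rock"])
overlay    = (1, [None, "grid", "grid", None])  # mostly transparent warning layer
print(composite([game_scene, overlay]))
# -> ['sky', 'grid', 'grid', 'rock']
```

Reversing the two priorities would let the lower layer cover the overlay, which is exactly the kind of ordering conflict discussed next.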

If the order of the layers is not respected, then you could have a distant object that covers up part of a closer object. Such a conflict may create a 3D image that the human brain cannot correctly process, resulting in discomfort.

The image on the left is from Valve's HelloVR. The image on the right is also from HelloVR, but the Compositor placed a layer from a completely different program on top of it. The entire scene tracks with any movement of the HMD. (Apologies for the picture quality. Because a second layer does not mirror onto the desktop in SteamVR, each picture was taken inside the left eye of a DK2.)

The Compositor is not a tool to be used casually; layers must be well thought out. Luckily, you can still take advantage of some of the Compositor's other benefits even without carving your display into multiple layers.

It is important to know that the Compositor is ultimately responsible for ordering all of the layers and sending them to the GPU for rendering. The application (or applications) no longer directly task the GPU.

USES AND FEATURES

So, how do we see the Compositor being used? The most well-known example is the Chaperone, Valve's warning system for the HTC Vive, which warns you when you're reaching the physical edge of your working area. Earlier, before you launched your compositor-aware application, the Chaperone had already asked for and received its own layer to work with.

The Chaperone is, in fact, an independent process which runs on your PC. It is already pre-programmed with the boundaries of your working area. In the background, it is regularly monitoring your absolute position via the SteamVR API, and the program is responsible for making you aware of the boundary as you approach it. How does it warn you?

When it detects that you are closing in on a boundary, it simply renders a representation of the boundary into the buffer. Because it uses the Compositor, the image of the boundary is automatically pasted into the scene which is being generated by your other application and is sent to your HMD.
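As a hedged sketch of that monitoring logic (the rectangular room dimensions, coordinates, and warning threshold below are invented for illustration):

```python
# Illustrative sketch of a Chaperone-style boundary check: given a tracked
# position inside a centered rectangular working area, decide whether the
# warning layer should be drawn. All dimensions are made up.

def boundary_distance(x, y, half_w, half_d):
    """Distance from (x, y) to the nearest edge of a centered
    half_w x half_d rectangle (positive while inside)."""
    return min(half_w - abs(x), half_d - abs(y))

def should_warn(x, y, half_w=1.5, half_d=1.0, threshold=0.4):
    return boundary_distance(x, y, half_w, half_d) < threshold

print(should_warn(0.0, 0.0))  # center of the room -> False
print(should_warn(1.3, 0.0))  # about 0.2 m from a wall -> True
```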

What is slightly unusual in this situation is that the Chaperone is likely to be drawing in the same area of your display that your application is. This can result in mismatched depth cues. I have not seen the Chaperone in action, so it is unclear to me how this issue is resolved. I suspect that some degree of transparency (transparency of the boundary markers, or transparency of the scene) would be used to reduce the visual conflict.

The Chaperone starts to illustrate how we can let multiple applications share the HMD at the same time, but having to draw into the same space is unfortunate. What if we could define specific portions of the screen (static or dynamic) that we want other applications to be responsible for?

“The Facebook advertisements are coming through the compositor!” (Just kidding, guys. I don’t think we’ll ever live that down.) But give some thought to what innovative things a multi-application approach might allow you to do.

So, what else does the Compositor do for us? The Compositor’s responsibility for rendering is, by itself, a feature. Instead of the application being responsible for knowing all of the underlying details and optimizations of any particular HMD, that task is offloaded to the Compositor. It is now the Compositor’s job to figure that out for you.

If done correctly, that should allow developers to focus more on content and less on some of the intricate details of a particular display (including a subset of the arcane and evolving optimization tricks that are out there). All of those goodies will now find their home inside of the Compositor.

Worth noting: this means that new performance tuning algorithms can be dynamically added which increase the performance of applications after they’ve already been published. At the same time, this opens the possibility to changes which can break applications after they’ve been published. This isn’t unprecedented. Companies like Nvidia incorporate some very specific performance tweaks into their video card drivers. It works well, as long as it is done carefully.

Joe mentioned two other simple functions which add to the overall quality of the VR experience. If your application hangs, the Compositor is still responsible for processing the scene. As a result, your display won’t lock up on you.

It will still track with your HMD (although there won’t be any new content or outside movement to render). This also means that you’ll still get a Chaperone warning even if the game you are playing has locked up. Perhaps you could still summon a SteamVR overlay and exit out? In any case, the Compositor provides continuity.

The Compositor also provides continuity in a different way. We’re not having to re-initialize the display each time we hand off the HMD to another program. So if you are going from a launcher like SteamVR and into a game, your HMD doesn’t need to black out and come back to life each time that happens. It can be a very smooth experience when you go from one application to another.

As it turns out, Oculus has also been working towards its own compositor. In a March 5th GDC 2015 presentation titled "Developing VR Apps with Oculus SDK," Anuj Gosalia detailed how their VR Compositor would work. (The Oculus VR Compositor "VRC" is anticipated to be available in an upcoming release of the Oculus SDK.)

In his presentation, he gives us yet another use for the Compositor: a single application can use layers to render different parts of the same scene at different resolutions.

In their theoretical example with Elite Dangerous, a small and quality-sensitive element like text could use high sampling and high resolution, while the remainder of a more complex scene is rendered at a lower resolution which promotes a high framerate.

When the small, high-quality text area is pasted over the larger area, you've combined the benefits of what were two mutually exclusive approaches. You are able to mix quality and speed in the same frame. This can work out well for a number of different applications.
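A toy Python sketch of the idea, using single-row "framebuffers" and invented data, shows how a cheap half-resolution scene and a crisp full-resolution text layer combine:

```python
# Sketch of the mixed-resolution idea: render the bulk of the scene at half
# resolution, upscale it with nearest-neighbor, then paste a small
# full-resolution text layer on top. All values are illustrative.

def upscale_nearest(row, factor):
    return [texel for texel in row for _ in range(factor)]

scene_half_res = ["a", "b", "c", "d"]       # cheap: 4 texels for 8 pixels
scene = upscale_nearest(scene_half_res, 2)  # 8 blocky pixels
text_layer = {2: "T", 3: "X", 4: "T"}       # crisp text at full resolution

frame = list(scene)
for x, texel in text_layer.items():
    frame[x] = texel
print(frame)
# -> ['a', 'a', 'T', 'X', 'T', 'c', 'd', 'd']
```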

You can experiment with basic compositor functions in Windows using the VR Workbench, which is part of the SteamVR Beta. It is located in the SteamVR/tools/bin/win32 subdirectory in your local Steam installation.

SUMMARY

In summary, we see that Valve and Oculus are both working towards a VR Compositor. The Compositor can allow for multiple processes to work together to render a single scene. It can be expected to abstract some of the underlying display details and optimizations from the developer. It provides continuity when a program fails or a new program takes control of the HMD. Finally, it can allow different areas of the display to be rendered at different resolutions.

The Compositor looks like it’ll be a great addition to VR. I’ve been hoping for this function for some time, and it dovetails perfectly with last year’s Virtual Home concept (in terms of launching local programs and summoning a utility space). Now if I could just work Carmack’s Time Warp into letting me rewrite that entire article to make more sense…

UPDATE: Now that Valve has released their OpenVR SDK, we can see that their implementation of the Compositor does not currently include the mixed rendering layers that Oculus describes. Instead, Valve supports 2D overlays. It also appears that the function of the Chaperone is integrated into the Compositor itself.

Should an application go non-responsive, the Compositor will continue to process the user’s head tracking. It will also fade into a grid scene to give the user the ability to reorient themselves. (My experimentation has shown that with the Oculus DK2, it draws a white screen with a dark line on the horizon.)

The functions of the Compositor are subject to change in future releases of Valve’s OpenVR or the Oculus SDK.


Is Valve Flirting with Augmented Reality?

April 28, 2015

The story of how Valve let two of its engineers walk away with the company's augmented reality tech is well known to the VR crowd. The impression we came away with was that Valve has shifted all of their attention to virtual reality and hasn't looked back since. Or have they?

In last month’s series of articles on Valve’s Lighthouse, we reviewed what was known about their new tracking technology and covered some potential uses of the tech.

The Lumus DK-32 Wearable Display Development Kit

A curious finding was that not only was Lighthouse compatible with augmented reality, but that it actually helps solve some of the critical problems which continue to plague the fledgling industry. It was hard to think that this fact could have escaped Valve’s notice.

On April 23rd, Valve finally included their Lighthouse driver in the SteamVR beta. While the API remains unpublished, an examination of the new component revealed a very curious set of strings…

We see references to the nVisor ST50 combination AR/VR head mounted display, the Vuzix Star 1200 augmented reality glasses, the Lumus DK-32 augmented reality glasses, the Silicon Micro ST1080 HMD with 10% see-through display, and a number of development units named after microdisplay manufacturers.

There are also “flip” models of various displays including the Oculus Rift. Could these be AR/VR combo devices? Finally, we have what seems to be a reference to one of the AR prototype displays that were created by former Valve employee Jeri Ellsworth.

To recap: with Lighthouse, we have a new technology which has the potential to offer breakthroughs in augmented reality. Listed inside the Windows device driver (which actually implements the technology) are specific models of AR and AR/VR combo devices.

We still don’t have proof, but we have enough pieces to start asking the question: is Valve flirting with augmented reality?

EDITED May 16, 2015: It is possible that the flip models are made to allow the user to easily see and interact with the real world simply by flipping the display out of the way.


A Review of Earlier Articles… and a Return to Metaverse Issues

April 20, 2015

Nine months ago, I wrote my last article on the Metaverse.

It was a short piece, mostly referencing an email from Fabian Giesen, a demoscene coder (and more) who was doing some VR work at Valve as a contractor. I’ll be honest, his message was a real downer for me, and I had my own Notch moment. Why was I working towards something that, if successful, would ultimately be used just to provide value to Facebook?

Over the past nine months, a surprising number of you have told me how those early Metaverse articles had actually been very helpful to you. A few of you said that you had a Metaverse effort going, but most of you were creating multiplayer virtual environments. Thank you all for your feedback and support!

I think the moment that it all crystallized and brought me back to Metaversing was seeing the return of Valve with the HTC Vive. Suddenly, it seemed like there were possibilities once again. Thanks, Gabe. I’m looking forward to learning more about your shared entertainment universe… perhaps a non-traditional Metaverse?

As I look back over last year's body of work, I think most of the pieces have held up well enough. Perhaps the most controversial article was on the Virtual Home. The name, alone, drew an immediate comparison to PlayStation Home (closed in March 2015), which turned out to be wildly unpopular with VR enthusiasts as the basis for a Metaverse implementation.

PlayStation Home was not where I was heading, so I can agree with much of the upset. Still, the article itself was far too ambitious. I tried to compress way too many ideas into a short amount of space. I've learned my lesson; I'll try to keep future articles more contained.

The PlayStation Home, now abandoned by Sony

What many of you may not have realized was that most of the articles from last year formed the discrete parts of a global design for a Metaverse. That Metaverse, ultimately, was never described in its entirety. I still have what appears to be a unique blueprint for a Metaverse that I hope to describe in detail. I'm convinced that this model is not only viable (from multiple vantage points), but that it also has the ability to become wildly successful.

This year I intend to return to my work of laying down more of the design elements and then finally tying it all together. For now, I’ve got to see what happened to some illustrative artwork that was commissioned last year in JanusVR in support of an article I never published. It seems that some of the recent work by Valve (and now Oculus) has made that topic extremely relevant…

For the next few weeks, Alan Yates (Valve Lighthouse expert) is accepting questions about Lighthouse technology. Only questions about Lighthouse, please. No questions about Vive or the controllers will be answered.

If you have a question for Alan, tweet @vk2zay. He is going to let the questions accumulate over the next few weeks, and then respond to them all in a posting on his blog.

While a bit premature until dev kits are out, send me your lighthouse questions and I will compile them into a blogpost in the near future. -Alan Yates (@vk2zay)

Introduction

A famous quote from Gabe Newell concerns a lesson that Valve learned early on when dealing with the Internet. You can find it in Episode 306 of the Nerdist Podcast at 00:12:14.

Don't ever, ever try to lie to the Internet because they will catch you. They will deconstruct your spin. They will remember everything you ever say for eternity. -Gabe Newell

At this year's Game Developers Conference, where Valve announced their virtual reality partnership with HTC, Gabe made an incredible claim about the Lighthouse tracking technology:

So we’re gonna just give that away. What we want is for that to be like USB. It’s not some special secret sauce. It’s like everybody in the PC community will benefit if there’s this useful technology out there. -Gabe Newell (Valve)

The story which accompanies the interview describes Lighthouse as a way of providing infinite input solutions into Virtual Reality. “As long as tracking is there, anything can be brought into VR, like how USB ports enable you to plug (virtually) anything into your computer.”

What the Technology Brings

In the previous two articles, we’ve dug into the technology itself, and it supports what we’ve been told. Spend perhaps $100-150 for two of Valve’s Lighthouse units and mount them in opposite corners of the room. At that point, you can almost forget about them. But any enabled device that you bring into the room can take advantage of:

Rock-solid positional data with high precision and resolution

Rock-solid orientation data with high precision and resolution

Very low additional power use (passive sensors, undemanding electronics)

This support would be available for an arbitrary number of devices, and “at a low enough cost to be incorporated into consumer electronics items such as televisions, headsets, input devices, or mobile devices.”

Given Valve’s ambitions for the technology, it is expected that they will create a complete solution that will feed fully resolved positional and orientation data to an electronic device without the need for additional processing.

That last bit of functionality has yet to be confirmed. Even if it isn't the case, the processing power required to compute position and orientation is extremely lightweight. Valve may also have an additional solution for wireless connectivity back to a PC.
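To give a feel for how lightweight that computation could be, here is an illustrative triangulation sketch. The geometry and conventions are my own assumptions, not Valve's actual math: each base station's two sweep angles define a ray toward the sensor, and two rays from known station positions pin down the sensor's 3D position.

```python
import math

# Hedged sketch of Lighthouse-style triangulation. Each base station's
# horizontal/vertical sweep angles give a ray from the station to the
# sensor; the sensor sits at the (near-)intersection of the two rays.

def ray_direction(h_angle, v_angle):
    """Unit direction from a station looking down +z, given sweep angles."""
    d = (math.tan(h_angle), math.tan(v_angle), 1.0)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def closest_point(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1+t*d1 and o2+s*d2."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = tuple(o + t * k for o, k in zip(o1, d1))
    p2 = tuple(o + s * k for o, k in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

# Two stations two meters apart, both sighting a sensor at (0, 0, 2).
station_a, station_b = (-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)
dir_a = ray_direction(math.atan2(1.0, 2.0), 0.0)   # sensor is right of A
dir_b = ray_direction(math.atan2(-1.0, 2.0), 0.0)  # and left of B
x, y, z = closest_point(station_a, dir_a, station_b, dir_b)
print(round(x, 6), round(y, 6), round(z, 6))
# -> approximately 0.0 0.0 2.0
```

A few dot products and divisions per sensor per sweep is well within reach of a small microcontroller, which is the point made above.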

It is unclear if the default Lighthouse mode will support any identity features, but our review seems to suggest that it would be easy for Valve to enable the following functionality with a user-installed firmware update:

Ability to instantly identify a room and to distinguish it from others

Ability to give the room a unique identity to be used as a database key

More on the significance of these later in this article.

It is important to note that while this technology seems quite promising, it is still being developed. An early developer release is expected in the spring, and consumer release is slated for November of this year.

Commonly Suggested Uses

To be honest, the apparent uses (provided by Valve and speculated by third parties) are quite plausible, but by themselves don’t seem especially compelling:

Ability to find real-world objects in the room while you are still in VR

Solving robotic navigational issues

Now that we have finished our technical review in the previous two articles and have a better idea about the system and its capabilities, why don’t we try our own hand at developing some new features which can take advantage of it?

Room Scanning

If this isn’t going to be an upcoming feature for the HTC Vive, even for novelty’s sake, then the obvious has been missed. The concept of creating a depth map just from two images is very well known.

What would make the process even more robust is combining a camera of well known characteristics with the precision of Lighthouse tracking (providing known position and aim at all times). If not with a unique device built especially for that purpose, then we’re talking about the HTC Vive itself with built-in camera and tracking.

How might it work? It couldn't be simpler: walk around the room and look at everything. The software will merge image stills or video with high resolution position and orientation data for the camera. Once completed, it would process the images, determine the depth of elements which have been seen from multiple angles, reconstruct the entire scene in three dimensions, and display it in virtual reality.
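The core arithmetic behind depth from two views is simple. This hedged Python fragment (with invented numbers) shows the classic pinhole relation; in the scenario above, Lighthouse tracking would be what supplies the precise camera poses and baseline:

```python
# Illustrative stereo-depth arithmetic: with two views a known baseline
# apart and a camera of known focal length, pixel disparity converts
# directly to depth. Values below are made up for the example.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole relation: depth = focal * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

# A feature that shifts 20 px between two views taken 0.1 m apart,
# with a 700 px focal length, sits 3.5 m away.
print(depth_from_disparity(700, 0.1, 20))
# -> 3.5
```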

Worth noting, the internal development version of the HTC Vive appears to have two cameras in front. One cannot help but wonder if they contemplated yet another method of 3D image acquisition, perhaps more appropriate for real-time processing?

Room scanning is something that might play well with Valve's announced room-scale VR, where you actually move around the physical room in tandem with your character moving in virtual reality. If you're going to move around your living room, why not use it as the location for a virtual world at the same time? (Give some thought to how that might work. We'll circle back around to it later in this article.)

What else might room scanning open the door for? Social engagements and playing games with friends and family in a familiar environment. It could serve as a wonderful bridge between virtual reality and augmented reality.

Object Scanning

This is similar to room scanning, but you would indicate to the software a specific item in the room. You would get up close to the item and slowly look all around it while the software reconstructs it before you in real-time. The software could automatically determine any holes in the model and prompt you as needed to inspect specific areas in more detail (or from other angles) to get a more complete picture.

Yet another version might take advantage of a special mode which could be made available in the Lighthouse system. While the first Lighthouse unit provides high resolution tracking information for your head mounted display or camera, the second Lighthouse could temporarily enter a second mode where a carefully strobed and swept infrared laser light assists the camera in constructing a high-definition model of your object.

Once created, your object could be imported into your virtual library, which you could share with others.

Augmented Reality

We touched on this briefly when covering room scanning, but this topic deserves serious consideration by itself. What if it was as simple as walking into a room with a Lighthouse enabled webcam, putting on your Lighthouse-enabled Augmented Reality glasses, and having a conversation with your aunt who is sitting on both your couch and her couch from 200 miles away?

Maybe you are like me and you never liked what you saw with augmented reality. So many startups are quick to promise, yet unable to deliver, pie-in-the-sky aspirational tech demos which are little more than ridiculous techno-fantasies.

There is no way these things could even do the required computer-vision based processing to constantly track the images with the user’s changing head movement, not to mention have any idea where to place objects in the room or how to share the same content with others in the room.

Or is there?

The curious thing is that the Valve Lighthouse solves quite a number of augmented reality problems. Tracking directly solves the viewpoint problem, but what about places to project content or knowing who to share data with? That would be tied to the room identity features mentioned earlier.

Lighthouse-enabled AR glasses would be able to instantly identify the room they are in and distinguish it from others. The next time you or someone else walks into the room, any special information (such as pre-defined areas to project images onto) is referenced and downloaded based on the Lighthouse ID number. When Lighthouse-assisted, your AR device can focus more of its limited resources on communications, content, and graphics.
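A toy sketch of that lookup, with a registry and beacon IDs invented purely for illustration:

```python
# Toy sketch of the room-identity idea: a Lighthouse beacon ID acts as a
# database key for per-room AR content. The registry, IDs, and fields are
# all hypothetical.

ROOM_REGISTRY = {
    "LH-00042": {"name": "living room", "projection_areas": ["north wall"]},
    "LH-00043": {"name": "office",      "projection_areas": ["desk", "door"]},
}

def content_for_beacon(beacon_id):
    return ROOM_REGISTRY.get(beacon_id, {"name": "unknown room",
                                         "projection_areas": []})

print(content_for_beacon("LH-00042")["name"])  # -> living room
print(content_for_beacon("LH-99999")["name"])  # -> unknown room
```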

Take another look at one of those aspirational augmented-reality videos from earlier this year and imagine a Lighthouse in every room. Now that you know more about Lighthouse, doesn’t this look less aspirational and more like a blueprint for something that could be available next year?

Here’s the funny thing: CastAR was founded by two ex-Valve employees that did not want to make the transition from Augmented Reality to Virtual Reality. Valve let them go, but they also let them take their AR technology with them. It might be a good time for someone to ask Jeri Ellsworth and Rick Johnson about Lighthouse.

Commercial Lighthouse Units and Augmented Reality

After making the connection between Lighthouse technology and Augmented Reality, I started to wonder how it would work in the commercial space. I’m not much of a creative type, so I’m going to play this one straight.

As you enter the front door, your pair of Lighthouse-enabled glasses automatically picks up the ID beacon from an in-store Lighthouse unit. You have AR Beacon Roaming enabled, so your glasses look up the beacon's unique ID in an online database and determine that the available scene is compatible with your hardware and consistent with your filter settings. The scene is tied to a specific location in the store.

Curious, you walk over to the indicated area, and give your glasses permission to download and execute the scene over your wireless connection. Within moments, a lifelike, distinguished, tall man with white hair in a gray suit appears in your field of view. He addresses you from the speakers built in the end pieces of your glasses.

Okay, let’s stop there. I’m not going to blow any more of this article’s word budget on this particular scenario, and I think you might have some idea where it can go from there. Yes, such an experience could not only be interactive, but it could also independently complete a transaction with the user.

Lighthouse can mean the ability to authoritatively signal the availability of pre-defined content that is tied to location, and to enable augmented-reality glasses to better take advantage of it (by providing stable tracking that would far exceed what smart glasses might be able to do on their own).

Can you imagine some other uses? Museums, bakeries, real estate, self-service kiosks? Creative technical types might operate a public sandbox for like-minded individuals to come and show off their latest efforts in front of a live audience.

Perhaps this is a world that Valve explored and decided that it was best to leave this to others?

Other Potential Uses

Home automation (visualizing the state of your home and making changes) could benefit greatly from Lighthouse-enabled room scanning or augmented reality.

Devices could be created for the blind which allow them to see objects in a room using depth scanning (and if combined with Lighthouse identity, features and functions of the room could be indexed and tagged in remote databases).

Small sets of freestanding Lighthouse-enabled cameras with network connections could become popular. Two or more in the same room could be used to create movies where the scene can be reconstructed from many different arbitrary angles. With the right processing, an entire room or stage could be broadcast in virtual reality in real time: streaming live performances.

What about using an enthusiast level PC to deliver next-generation augmented reality features in the home or office, with today’s technology? This might deserve an article in its own right, so the description here is going to be brief.

Combine the augmented reality features made available with Lighthouse (such as room identity and presets), PC-based room scanning and depth-mapping, PC-based processing and graphics power, the Vive head-mounted display, and the idea behind one pre-existing Jeri Ellsworth patent assigned to Valve which includes re-rendering a live camera feed with the same perspective as the human eye would see.

What do we have? Just as mobile augmented reality and Lighthouse made the CastAR video look possible, a PC-driven augmented reality system and Lighthouse could make last week’s fantasy “Just another day in the office” high concept demo look like a blueprint for next year’s technology.

The number of different things, both big and small, which Lighthouse enables is staggering. What are some of the uses that you can think of for Lighthouse?

Lighthouse in the Storm. Image Source: wallpaper-kid

Summary

So you run into a case where there's something we think is really important. It is abstract, but it is something we think is really important and we want to push in that direction. The reason why fans haven't arrived at the same conclusions is because they don't have the same data as us. –Erik Johnson (Valve) [55:50]

When Gabe Newell looks at virtual reality, he asks how long will it be stable? How long until a VR display is replaced by direct neural stimulation? “You just want to test to make sure that you’re not investing in something that’s fragile.” –Geoff Keighley interview with Gabe Newell (00:47:48).

When I look at Lighthouse, it is anything but fragile. It solves core issues in Virtual Reality with inputs and tracking and does not seem easily replaced. What I find surprising is that it seems to have solid practical applications that match with Valve’s core mission as much as it has additional applications that go well beyond anything that Valve seems to be interested in.

Is this another USB, a common standard that is picked up and used across the industry? It sure is starting to look that way. If Valve is offering to license the technology for free, there is a lot of promise in this new enabling technology.

Development on this product still needs to continue (as planned), but from all appearances, Lighthouse’s potential as a common technology is a claim that passes the spin test.

March 28th, 2015 – For the next few weeks, Alan Yates of Valve is taking questions on Lighthouse technology.

Edit 3/25/2015 – Corrected a doubled word and also the link for depth mapping. Thanks /u/Boffster.

This is the second article in a series on the Valve/HTC Vive Ecosystem. If you have not already done so, please begin with the first article in the series.

Introduction

Today’s article will provide additional information on the Lighthouse units, explain the Lighthouse sensor system, and take a brief look at the sensor processing which is used to return the absolute position of a tracked device.

Strong Disclaimer

This particular article will try to tread carefully. There’s no way around it, folks. This article is going to contain facts, rumors, innuendos, and outright lies about the operation of Valve’s Lighthouse sensor system.

Why?

We’re working with publicly available information, which is scarce.

There is no documentation.

It is still in development and very subject to change.

There is no need for regular users to understand the underlying details.

Software developers can expect to be given an API that reports position without knowing any of the underlying hardware details.

Finally, for the time being, Valve employees are busy getting this stuff ready, and their time is better spent working on the product than answering all the outside questions. See page #9 of the Valve Handbook for New Employees for more details on how that process works.

We’ll have to assume that we’re on our own, for now.

Back to the Lighthouse for a Moment

I’m going to use the earlier research and development model as a reference.

An earlier model of the Lighthouse. Image source: UploadVR

Toward the upper middle left of the enclosure is a panel mounted with LEDs. The apparent purpose of these LEDs is to emit a wide flash of infrared light covering roughly the same perspective and range as the laser beams.

As outsiders, we don’t actually know what they are used for (see disclaimer section, above). But such a panel could be used for any number of purposes, and the two most relevant suggestions include:

Transmitting a Lighthouse unit ID number

Transmitting a mark to synchronize timing

We will speculate on additional uses of the LEDs in a later article.
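To make the "transmitting an ID" idea concrete, here is one purely speculative way a short Lighthouse identifier could ride along with a flash: vary the flash duration and have the receiver classify it. Every name and number below is invented for illustration; nothing here is a known Lighthouse protocol.

```python
# Pure speculation: encode a Lighthouse unit ID in the flash duration.
# The durations and tolerance are entirely invented values.
FLASH_CODES_US = {65: "A", 83: "B"}  # flash length (microseconds) -> unit ID

def decode_flash(duration_us, tolerance_us=5):
    """Match a measured flash duration to a known Lighthouse ID, if any."""
    for code, unit in FLASH_CODES_US.items():
        if abs(duration_us - code) <= tolerance_us:
            return unit
    return None  # unrecognized flash; ignore it
```

A receiver measuring a 66-microsecond flash would attribute it to unit "A"; anything outside the known windows is discarded as noise.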

Do you remember from the previous article how each Lighthouse unit would do an X sweep (10 milliseconds) and a Y sweep (10 milliseconds), and then go dark for an equal period of time (20 milliseconds)? We believe that pattern is designed to allow two Lighthouse units to tag-team an area. For the timing to work out properly, the two Lighthouses have to be in sync.

Lighthouse A – X Sweep Laser On, Y Sweep Laser Off (10ms)

Lighthouse A – X Sweep Laser Off, Y Sweep Laser On (10ms)

Lighthouse B – X Sweep Laser On, Y Sweep Laser Off (10ms)

Lighthouse B – X Sweep Laser Off, Y Sweep Laser On (10ms)
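If the speculated timing is right, the tag-team schedule above can be sketched in a few lines. The unit names, axis order, and 10-millisecond figure are assumptions drawn from this article, not confirmed specifications.

```python
SWEEP_MS = 10  # each single-axis sweep lasts ~10 ms (speculative figure)

def build_schedule():
    """Return one full 40 ms cycle as (start_ms, lighthouse, axis) events."""
    events = []
    t = 0
    for lighthouse in ("A", "B"):
        for axis in ("X", "Y"):
            events.append((t, lighthouse, axis))
            t += SWEEP_MS
    return events

schedule = build_schedule()
cycle_ms = len(schedule) * SWEEP_MS                     # 40 ms per full cycle
sweeps_per_second = len(schedule) * (1000 // cycle_ms)  # 4 sweeps x 25 cycles
```

Note that each unit is dark for 20 ms of every 40 ms cycle, which is exactly the rest period described earlier, and the pair together delivers 100 sweeps per second.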

How can they coordinate so closely? Early speculation was that the area-flash LEDs were being used to transmit a timing mark to the Vive and the other Lighthouse, but analysis of the video from the previous article did not find any presence of an LED flash. Alan Yates, our Pharologist at Valve, reminds us that the iPhone has a good IR filter, and that we were actually missing some LED flash activity.

Remember earlier, how we said this document will contain facts, rumors, innuendos, and outright lies about the operation of Valve’s Lighthouse? There is room for all sorts of optimizations and variants.

Each beam could be active for only 8ms. The drums could be running slower. The sweeps could overlap and still be usable. Please consider these and any other specifics as plausible but potentially inaccurate examples used for illustrative purposes.

Back to the picture of the enclosure: apart from the LEDs, you’ll also see the two drums on the bottom and right sides. The two laser beams responsible for the horizontal and vertical sweeps are emitted from these spinning drums. It is a sure bet that the speed and position of those drums are set and carefully regulated by the Lighthouse unit itself, and that the speed is kept constant. The Lighthouse also carefully regulates the relative alignment of the two drums, keeping them from facing the room at the same time.

Changes to when the LEDs are powered, when the lasers are powered, and how the sweep motors behave can all be combined into a different mode of operation. A different mode might provide functionality which is slightly enhanced, or even completely different than what we see the Lighthouse being used for today.

You might want to keep that thought in the back of your head. Lighthouses can be reprogrammed to support goals other than the absolute tracking of HMDs and controllers.

You should be picking up on how open-ended that statement is, and some of you might be getting some ideas. There are lots of them — an upcoming article will contain concrete examples. One of them is so stunningly obvious, you’ll wonder how everyone missed it.

Okay! Now Onward to the Sensors!

Any tracked object will have a number of infrared sensors (photodiodes) mounted on the surface. These sensors are particularly sensitive to the same infrared wavelength used by the Lighthouse lasers. We don’t know for sure if they are sensitive to the infrared LEDs or not.

We expect each sensor to be wired into a specialized integrated circuit, which performs some initial signal processing, such as the rejection of false signals and other light sources (such as sunlight). When a valid hit is registered, the chip will know which sensor was hit, and using an extremely precise clock, will time when it was hit (and potentially more).

Using the synchronizing flash from the Lighthouse LEDs, we can adjust the clock inside of our tracked object to match the clock inside of the Lighthouse. That same flash can tell us (or we might be able to determine on our own) exactly when a new cycle starts and the laser is starting a new sweep (at the 0 degree mark).
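As a hypothetical illustration of that bookkeeping (the function name and numbers are invented for this sketch), the tracked device only needs to remember when the last sync mark arrived to know how far into a sweep any later sensor hit occurred:

```python
CYCLE_MS = 20.0  # one full drum revolution, per the article (speculative)

def sweep_phase_ms(hit_time_ms, last_sync_ms):
    """Milliseconds elapsed between the sweep's 0-degree mark and a sensor hit.

    Both timestamps come from the tracked device's own clock, which has been
    re-zeroed against the Lighthouse's synchronizing flash."""
    return (hit_time_ms - last_sync_ms) % CYCLE_MS
```

For example, a hit timestamped at 1012.5 ms against a sync mark at 1000.0 ms puts the beam 12.5 ms into its revolution.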

Why is time so important? Recall that each drum in the lighthouse is spinning at a known rate – one full revolution every 20 milliseconds (50 times a second). If we know when the drum is at the 0 degree mark, how fast the drum is spinning, and how long the drum has been spinning since the 0 degree mark, we know exactly what angle it is facing (up to the precision of our measurements, and the characteristics of the physical components).

Knowing this, inside the tracked device, we can use a precise timer to tell us how long it took the laser’s sweep to hit a photosensor, and what angle that represents. With that basic unit of measurement, you are on your way to determining position and orientation of the entire tracked device. But you still need more data. It just so happens that more data quickly follows.
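Converting that elapsed time into a beam angle is then a single linear mapping. This is a speculative sketch built only on the 20-millisecond revolution figure above:

```python
REVOLUTION_MS = 20.0  # one drum revolution every 20 ms (50 per second)

def beam_angle_deg(elapsed_ms):
    """Angle of the sweeping beam, in degrees, elapsed_ms after 0 degrees."""
    return (elapsed_ms / REVOLUTION_MS) * 360.0
```

A sensor hit 5 ms into a sweep would sit at the 90-degree mark; the real system's angular precision depends on the clock and the stability of the drum.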

SteamVR Controller Image Source: The Daily Dot

Sensor Placement

The outside of a tracked device is covered by a number of these same photosensors. As the laser sweeps across the room, it also sweeps across the tracked device, and the excited sensors quickly yield an exact list of which sensors were hit by the Lighthouse, and at what angle.

The list of exactly which sensor was hit at what time is combined with another set of information: the exact position and orientation of each sensor relative to the body of the tracked object. If you look at the SteamVR controllers above, you’ll see the careful placement of a number of sensors at the top of the controller. Valve has recorded the X/Y/Z position and orientation of every individual sensor.
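A minimal sketch of such a sensor model, with entirely made-up positions and normals: because each sensor's pose is known in the object's own coordinate frame, for any incoming Lighthouse direction we can predict which sensors are even capable of seeing the sweep.

```python
# Hypothetical sensor model: position (meters) and outward-facing normal of
# each sensor, expressed in the tracked object's own coordinate frame.
SENSOR_MODEL = {
    0: {"pos": (0.00, 0.05, 0.00), "normal": (0.0, 1.0, 0.0)},
    1: {"pos": (0.03, 0.04, 0.00), "normal": (0.6, 0.8, 0.0)},
    2: {"pos": (-0.03, 0.04, 0.00), "normal": (-0.6, 0.8, 0.0)},
}

def facing_sensors(light_dir, model=SENSOR_MODEL):
    """IDs of sensors whose normals face toward an incoming light direction.

    light_dir points from the object toward the Lighthouse."""
    visible = []
    for sensor_id, sensor in model.items():
        dot = sum(n * d for n, d in zip(sensor["normal"], light_dir))
        if dot > 0:  # the sensor's face is angled toward the Lighthouse
            visible.append(sensor_id)
    return visible
```

With light arriving from directly above, all three example sensors face it; from the side, only the sensor angled that way does, which is why spreading sensors across curved surfaces matters.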

Placement of sensors is critical. The number of sensors is important… to a point. You do not need to blanket the outside of your tracked object with sensors.

Image: Doctor Miranda Jones from the original Star Trek series. Fashion forward, perhaps, but she went totally overkill on the sensors. Maybe she should have used Valve’s Lighthouse technology?

At the same time, if you use them sparingly, you need to keep them spread out so that even if they are held at an odd angle and partially blocked by a user’s arm, enough of them will still be able to acquire the Lighthouse signal. A sensor must be bathed in the signal from a Lighthouse in order to work.

Remember that picture of the SteamVR controller, back earlier? They designed it like the hilt of a sword. The guard is above the grip (where you shouldn’t be putting your hands), and that is where they placed the sensors.

To initialize tracking, a minimum number of sensors need to be able to see the Lighthouse, but fewer are required to hold tracking. An IMU (inertial measurement unit) inside the tracked unit also reduces the number of sensors required at any time, and increases the tracking resolution. (Again, see the disclaimer. There are a number of different ways that this can be implemented. The IMU is not a required component.)

At this point, if enough sensors on our tracked device are lit up by the Lighthouse, we’re good to go. It becomes a well understood geometry problem, and a matter of performing a computationally light set of trigonometric calculations to arrive at the absolute position and orientation of the tracked device within a room. (There is no big number crunching algorithm to steal time away from processing more important things, like graphics and content.)
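Valve's actual solver is not public, so here is only a toy version of the geometry for a single sensor: each Lighthouse's two sweep angles define a ray from that Lighthouse toward the sensor, and with two Lighthouses at known positions the sensor sits where the rays (nearly) meet. The angle convention and helper names are invented for this sketch.

```python
import math

def ray_from_angles(azimuth_deg, elevation_deg):
    """Unit direction encoded by a horizontal/vertical sweep angle pair."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az),
            math.sin(el),
            math.cos(el) * math.cos(az))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + t*d2."""
    n = _cross(d1, d2)
    w = tuple(b - a for a, b in zip(p1, p2))  # vector from p1 to p2
    t1 = _dot(_cross(w, d2), n) / _dot(n, n)
    t2 = _dot(_cross(w, d1), n) / _dot(n, n)
    q1 = tuple(p + t1 * d for p, d in zip(p1, d1))
    q2 = tuple(p + t2 * d for p, d in zip(p2, d2))
    return tuple((a + b) / 2.0 for a, b in zip(q1, q2))
```

The real system solves a harder problem, recovering a full six-degree-of-freedom pose from many sensors and a known sensor model, but it reduces to this same kind of lightweight trigonometry.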

“And then the magic happens.” I know that some of you are itching for hard technical details, and this answer is unsatisfying. Can they do it with one sweep, or do they have to process them in pairs? What about the relativistic effects of movement? After the initial acquisition process, do they only look at the sensors that they expect to have data? I think we’ll have to wait to find out more. If there is a great answer in the near future, I’m happy to link it into the article.

The specific method that Valve uses is not known at this time, but the method is expected to be not unlike what has been discussed for the Oculus Development Kit 2’s camera-based tracking. Actually, come to think of it, Valve had a hand in that, as well. There is reason to be confident in their solution.

Upcoming

Looks like I’ve blown my word budget for this article. In the next article, I hope to touch a bit more on processing, and to provide some interesting examples of different ways that Lighthouse technology can be used.

Until then, I’ll leave you with a question. Doctor Miranda Jones is blind. What would it take for her to benefit from using Lighthouse technology in her own home? In public spaces?

If you’re reading this article, you’re probably already aware of the Valve/HTC partnership where HTC will manufacture the Vive, a virtual reality head mounted display, powered by Valve’s SteamVR platform.

As part of the reveal, one new piece of technology was introduced to the public: the Lighthouse. This is a brand-new-to-VR technology which will be used as part of a system to track the position and orientation of a user’s head mounted display and controllers throughout an entire room.

With Lighthouse, instead of being limited to using VR in a chair or standing in place, room-scale VR allows you to use the space of an entire room as a stage and physically walk around in a virtual environment.

Disclaimer

This article is based on publicly available information. Be aware that we are trying to explain a system that is unreleased, subject to change, and has very little publicly available information. Some elements of this article may prove inaccurate at a later date.

With any complex system, there are many rules, details, and exceptions to explore. This first article is just going to cover the tech basics (but will still be plenty meaty for many). We’ll consider more detailed issues in later articles.

A Basic Operational Review

The purpose of this first article is to clear up some of the common misconceptions concerning the Lighthouse technology. It will also serve as a starting place for additional articles on Lighthouse and on the various aspects of the HTC/Valve partnership.

By understanding how this one component works, we can understand much more about what HTC and Valve are trying to deliver to consumers. They’re not just cranking out randomly incremental or independent technological solutions here; Valve is running a very deep and highly integrated game plan.

So… we’ve all seen the kind of hand scanner used to read UPC codes off the sides of boxes. It sends a glowing red line out into space which strikes the surface of the box. If we were able to see infrared light, what Lighthouse does to a room would be similar in appearance: it sends a line of laser light out into space, which lands on the objects and walls inside.

The Lighthouse units have been referred to as “dumb” devices, which is partially true. They are not able to see or interpret what they are scanning. By themselves, they are unable to “read” a room. They serve only as a high-tech flashlight, providing a pattern of predictable illumination.

The Lighthouse units are not conventionally networked. They stand alone and they do not plug into your computer. Each unit only has a single wire for a power connection. Still, in a later article, we’ll learn that they’re far more intelligent than you might first believe.

Twenty-five times a second, each lighthouse unit sweeps the room with two infrared laser beams which are invisible to the naked eye. See the illustration below.

This depiction is technically inaccurate, but still demonstrates the concept of room sweeping. A default Lighthouse unit is expected to sweep only one axis at a time, not two at once. Image Source: Reddit user rubixcube6

Unlike the animation above, the Lighthouse does not currently sweep the room with both beams at the same time. It also isn’t this slow: even if our eyes were capable of seeing the infrared laser beam, it sweeps the room so quickly that the eye cannot track it.

The system has been stated to sweep the room 100 times a second. In 10 milliseconds, a single Lighthouse unit will sweep a first beam horizontally across the room. In the next 10 milliseconds, it will sweep a second beam vertically across the room. Finally, it will rest for another 20 milliseconds. That’s a total of 50 sweeps per second.

Because the Lighthouse system consists of two Lighthouse units, a second unit (across the room) is believed to be sweeping while the other unit is resting, and combined, they reach a total of 100 sweeps per second. Some surfaces in the room are swept once, others are swept twice, depending on which Lighthouse can see them.
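The arithmetic checks out, assuming the 10/10/20-millisecond figures above are accurate (they remain speculative):

```python
# Sanity-check of the published sweep timing figures.
x_sweep_ms, y_sweep_ms, rest_ms = 10, 10, 20

cycle_ms = x_sweep_ms + y_sweep_ms + rest_ms        # 40 ms per full cycle
cycles_per_second = 1000 // cycle_ms                # 25 cycles per second
sweeps_per_second = cycles_per_second * 2           # one X + one Y sweep each
combined_sweeps_per_second = sweeps_per_second * 2  # two interleaved units
```

That yields 50 sweeps per second for a single unit and 100 for the interleaved pair, matching the figures quoted above.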

The video, below, shows the operation of the Lighthouse in slow-motion. If you observe carefully, you may notice the pause after every pair of sweeps.

Aside from potentially being able to achieve a higher steady refresh rate, why use two Lighthouses instead of one? With a conventional tracking approach that uses a single camera, if you put a controller behind your back, the computer loses sight of it and is unable to determine exactly where it is. The object you are holding in the virtual world goes dead or disappears from the game.

If you look at the cartoon image at the beginning of this article, you will see the two Lighthouse units are placed high and in the corners of the room. By placing them in opposite corners, it gives them an opportunity to completely surround a person or an object in laser light, making it far more difficult (but certainly not impossible) to foul up the tracking.

Modes

The Lighthouse is expected to be capable of several different modes of operation. What we have described in this article is only one way the Lighthouse might behave, and is based on the behavior of the pre-release units. It is possible that a Lighthouse will ship to you with only a single mode enabled, but the Lighthouse units are user reprogrammable. We can expect other modes to be available for more specific applications, and this topic will be part of another upcoming article.

Coverage

When asked about how much space the Lighthouse covers, the initial answer was 15 feet x 15 feet. This very specific answer caused a lot of unnecessary alarm and confusion. What if I have a smaller room? What if I have a bigger room? What if I can’t dedicate a whole room and I want to sit in a swivel chair at my desk*, or to stand in the middle of my living room?

In response, Valve’s Chet Faliszek clarified this issue at a presentation at EGX Rezzed. “We say 15 feet, which is what a lot of people have heard. That isn’t required; that’s just one version of it. You can be seated, you can be standing, you can have a small room or big room. We like having those options.” He wasn’t walking it back; Valve is offering all those possibilities.

* – It is worth noting that if you intend to use the Vive at a desk, you should place your Lighthouse units where they will have an unobstructed view of your head and arms.

The minimum space for a Lighthouse appears to be enough so that you can sit or stand in place, and freely move your arms about you. Perhaps 6 feet by 4 feet. The maximum space for two Lighthouse units has not yet been defined, but is expected to be greater than the 15’x15′ figure given.

Down the road, they expect to provide the ability to concatenate multiple spaces together with additional Lighthouse units. That is another feature that might work out really well for a specific application.

For in-game tracking, an inspection of the current SteamVR Beta API reveals support for two different methods of positional tracking. The traditional system provides a relative position while seated in a chair. The new system provides absolute position while standing in a room. Developers can use one or both tracking systems as needed.

Other Hardware Manufacturers

You may not think that Valve’s offer, letting third-party hardware manufacturers integrate Lighthouse for free, is very significant, and based on anything that I’ve read so far, I don’t blame you. Perhaps you only thought of a company which wants to sell you a new controller or a competing head-mounted display?

The more we understand about Lighthouse, the better that we can answer the question of why any other kind of company would want to integrate this technology. We’ll work our way back towards answering that specific question.

Upcoming

For you hardcore geeks and pharophiliacs (also known as lighthouse lovers), yes, we’ll get to the good stuff. We have to lay down some more foundation first. The next article will discuss the other half of the tracking system, which includes the sensors that use the predictable patterns from the Lighthouse to compute your absolute position inside of a room.

Competitors with Different Goals: Valve versus Oculus

March 20th, 2015

The recently announced HTC Vive looks to be a strong technology competitor against the highly anticipated consumer release from Oculus in the PC space. While Oculus has long ago stated that they are working to deliver their consumer VR headset at a lower margin, possibly even at cost, HTC/Valve has announced their entry as a premium VR experience.

A Different Focus

What is overlooked by many is that while these two companies compete in VR hardware and software, their focus couldn’t be any more different.

Oculus is coming at virtual reality hardware from both sides: low-cost mass-market [to drive users] and high-end [to drive technology]. Only recently (with the reappearance of Valve) have people begun to question the second leg of that approach.

In the short to medium term, Oculus simply wants to develop the technology and to get enough people on-board. In the medium to long term, on behalf of Facebook, they want to explore other opportunities and to create an avenue for Metaverse based services over the Internet. To put it more amusingly: Facebook is looking to be the next Facebook before their core business starts to atrophy.

Valve is currently coming at it from the angle of PC gaming. (It is unclear where else, if anywhere, that their partner HTC may be wanting to go with this, but I suspect that they may have their own ambitions.) Valve/HTC is claiming the high-end of the feature space, which goes hand-in-hand with the well-known “Glorious PC Gaming Master Race” schtick started by Zero Punctuation.

Really, they’re probably just looking to be competitive… and to differentiate themselves. Did you see their announcement of a price premium? That helps support the perception of Valve providing a superior solution, which works to Valve’s benefit almost as much as increasing the number of users. Judging by the reactions of VR enthusiasts, it was well received.

Valve’s mission of pushing PC gaming forward is something that protects and grows their Steam software distribution platform — they do not want to be marginalized by a single competitor which controls the market. That means that they need hardware. But that also drives their focus into the SteamVR/OpenVR middleware to support third party VR products. To date, they have not communicated any mention of ambitions in mobile VR or the Metaverse, but they’re not excluding it, either. To put it more amusingly: Steam is looking to be the next Steam.

Looking a little further out, I think that there is only so far that Valve can climb the product and technology tree before Oculus catches them and even surpasses them. The high profile recruitment and acquisitions of Oculus speak to this. Yet at some point, it may not matter to Valve, so long as they can entrench themselves as a platform for VR software distribution (and services).

Today, we have two companies that are looking to protect their legacy, and they’re using virtual reality to project their existing business models into the future. In the one space where they collide, PC gaming, the short term will bring both cooperation and competition.

Subtopic: PC Gaming

Oculus: The head start that Oculus has earned with their SDK means that there are going to be Oculus-only titles. There may also be publisher spill-over benefits with easier software ports into mass-market mobile VR. It is also good to be the owner of a PC-based Oculus solution because Valve will want to support your hardware in SteamVR. Why? Because they want to sell you games. Oculus may have started its focus on games, but long-term, it is unlikely to be the bread-and-butter for the company. Still, Oculus is going to have to try hard if they want to lose PC gaming.

Valve: Currently favored to steal the first-mover advantage in PC gaming, but that remains to be seen. They’ve introduced novel technology (Lighthouse tracking and room-scale VR) which means that they’ll have exclusive features which will initially only be available through their hardware, but will be free for other hardware manufacturers to integrate. (We’ll have to see how well publishers target those unique features.) They have an enviable existing marketplace which will be tough to topple.

Ultimately, Valve doesn’t have to win the PC market as a whole, or even the high-end. They only need to offer and support choices (or, as some might spin it, “cause fragmentation”) with their own hardware and by supporting other VR hardware vendors. They need to prevent one company from monopolizing the space and cutting them out of software sales.

Summary

Are both companies on the right path? It would seem so. They’re just working towards different goals. Ultimately, we’re just caught in the middle, and you know what? I like it.

UPDATE: March 22nd, 2015 — I don’t know about Oculus, but I can confidently say that I’ve underestimated the scope of Valve’s efforts in Virtual Reality. I’ve spent the past two days poring over public resources regarding their hardware. With the assistance of other users on Reddit, I believe that I’ve reverse-engineered some of their announced technology, and gained a solid insight into other pieces which have yet to be announced. I hope to share more about this with you soon.

UPDATE: March 20th, 2015 — Underscore_Talagan correctly pointed out that Valve is making their Lighthouse system free for third-party hardware manufacturers to integrate. This has now been noted and cited in the text above.