I'm standing in a small office in Seattle's Belltown neighborhood, just blocks from the gorgeous Olympic Sculpture Park, though solid walls block any view of the surroundings. I've been invited to the darkest, most insulated room at the relatively new Reactor Accelerator, a tech-incubation office space. That’s because, for what I'm about to see, I need to be in a completely controlled environment.

Local engineer and designer Mark Haverstock hands me a headset and asks me to put it on. It looks a lot like the Oculus Rift, a virtual-reality headset that has exploded in the past year thanks to a multimillion-dollar Kickstarter campaign and tons of public appearances. This headset is black and lightweight, too, and it fits nice and snug on my head.

But the Oculus Rift is a single headset tethered to a computer, used mainly while sitting or standing in one place. The team behind Seattle's VRCade, which built its system from scratch, wants to free that experience and make it all-encompassing.

To that end, Haverstock hands me a pair of headphones and a plastic gun, while co-founder and CEO Jamie Kelly queues something up on a nearby PC. Once my headset screen flickers on, I'm standing in a virtual room that recalls the Doom series: a dark, neon-coated space base. I move my head around, and sure enough, my worldview moves along with it as if I'm really on a Mars outpost.

When I ask where the joystick is for moving around, though, Haverstock and Kelly laugh. No, dude. To walk around in the VRCade, you really walk around in the VRCade.

I answer that I'm terrified of running into something in the real world while my vision is obscured, but they insist I'll be fine. It turns out the game space's virtual walls correspond to the real ones in the admittedly small office I occupy in meatspace. I walk a few feet and raise the plastic gun to shoot at targets from various distances. It just plain works; my virtual gun never flickers or warps around in my vision. Even the audio follows me thanks to virtual surround sound.

After that, Kelly generates a stack of crates in the virtual world and asks me to kneel behind them and blind-fire my gun overhead without looking. I do as I’m told, and I can see the red flashes of my laser gun light up the floor in front of me as I do, even though I can't see what I'm shooting at. I'm completely protected down here on the floor. And I'm kind of freaking out. I've felt no 3D motion sickness (something I can't say for the Oculus Rift), and I've felt totally in control of this virtual space. It feels kind of badass to boot.

To really drive home what makes the full-room VRCade experience different, Haverstock takes my gun away while I'm still in headset mode. He walks a little ways away, holds it high in the air, and says, “Come get it.” I only have a floating, virtual version of the gun to guide me, and yet I can still walk to the appropriate point in the room, reach out, and grab the real-life gun where I think it should be.

That was the moment I became VRCade's latest slack-jawed convert.

From Wii remotes to full VR

Those white dots help the cameras track users wherever they go in the playspace.

The Seattle trio of Kelly, Haverstock, and software lead Dave Ruddell has been cooking up VRCade in earnest since 2010, but the project started with some virtual-reality, laser-tag dreams Kelly had back when the Wii launched in 2006. “We were promised this natural controller, something that was very precise,” he says. “But the Wii only has a nunchuk and d-pad. You can't turn and aim independently. That was a nightmare for me.” It was a hacked version of Wii launch game Elebits, modified to let you tie one Wiimote to your head for aiming purposes, that made Kelly hopeful that he could one day pull off his fully immersive, first-person shooting dreams.

Since then, Kelly has bounced between odd jobs, including secretarial work and retail, as well as stints at Nintendo as a software tester and a street team member. Eventually, all three VRCade team members ended up at Seattle's F5 Networks—Kelly as a video producer, Ruddell as a software engineer, and Haverstock as a contracted building engineer.

While Haverstock and Ruddell have specific super-nerdy skills with college degrees to match, Kelly is the company's dreamer, the guy who has tried and failed a few times with wild ideas about how to make virtual-reality gaming work. His first idea, a twist on laser tag, was flat-out nuts: “What if you had a floor that was made up of 1'x1' squares that were pillars on pistons, and you could raise them up, so you could essentially create any level you wanted if you had a couple thousand of them?”

A year after the others shot that idea down, the team tried and failed to come up with an augmented-reality system that could paint real walls with virtual imagery (“we couldn't keep track of things like items and bullets”). After that, they gathered up $2,500 of inertial measurement unit (IMU) sensors to track joints and body movement, “but we quickly realized they were garbage,” Kelly says.

Word of the Oculus Rift’s take on virtual worlds was spreading, but Kelly's dream always revolved more around an arena—“about eight players running and jumping in a game.” His eureka moment came in late 2012 when he started looking into something no other commercial-grade option had considered: motion-capture cameras.

“With those, we could get absolute position and great fidelity, but the capture volume would be limited and the cost would be insane,” Kelly says. Or so he thought. A new system from mo-cap company OptiTrack caught his eye because it promised capture spaces as large as 11,000 square feet at a reasonable cost. The VRCade team purchased 12 of OptiTrack’s lower-end cameras, thanks in part to a $20,000 investment from Haverstock’s dad. The lower-end cameras meant a more limited room size, but they still delivered the lag-free experience the team wanted to test. From there, they’d add trackable white dots to a headset and a prop (sword, gun, etc.), then send that mo-cap data to be simulated live in-game.
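The math behind those white dots is a classic rigid-body fit: the cameras triangulate each marker's 3D position, then match the known layout of the marker cluster on the headset or prop against the observed positions to recover a rotation and translation. As a rough sketch of that step (this is the textbook Kabsch algorithm, not VRCade's actual code):

```python
import numpy as np

def rigid_transform(markers_ref, markers_now):
    """Estimate the rotation R and translation t that move a rigid marker
    cluster from its reference layout to its currently observed pose
    (Kabsch algorithm). Both inputs are (N, 3) arrays of 3D points."""
    ref_c = markers_ref.mean(axis=0)
    now_c = markers_now.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (markers_ref - ref_c).T @ (markers_now - now_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (determinant = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = now_c - R @ ref_c
    return R, t
```

Feeding the recovered rotation and translation straight into the game camera every frame is what lets a real walk across the room map 1:1 onto a virtual one.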

Setting it all up

To walk around in VRCade's virtual world, you simply walk around.

F5 Networks allowed the trio to set up their cameras and computers on a barren floor of the company's Seattle headquarters. From that point, the team needed only half a year to create its current prototype, a working demo arena coded in the semi-open Unity game engine. The ability to actually move around the space makes all the difference in bypassing the motion-sickness issues that plague many Oculus Rift users. (Those issues arise "when you press up on a joystick and your eyes think you're moving forward, but your body thinks otherwise," Kelly explains.)

The backpack that VRCade users wear has little to do with tracking. Instead, it’s the team's first-pass way to house the system's battery and wireless video device. Once Haverstock's new 3D printer arrives, he plans to move those parts to the headset in a shrunken form that doesn’t impact performance. Even at a smaller size, the battery, which will also be swappable, should last long enough for hour-plus sessions. That more than fits the team's arena-arcade vision.

The headset itself houses a 1280x800 video panel “just like the Oculus Rift prototype," Kelly notes (update: the original version of this story misstated the resolution as dual 720p panels. Ars regrets the error), adding that the Rift's original dream video panels aren't even produced anymore. "We can order a few of them off eBay, while [Rift creator Palmer Luckey] can't just order thousands of them." Since VRCade's video screens also receive wireless signals from a PC, the resulting visuals—lag-free, mind you—aren’t dependent on being tethered to a high-powered machine that can produce convincing 3D visuals.

All these selling points go down on VRCade's "pro" list, but they’re countered by a giant con: that $20,000 entry cost, which would only grow with the team's dream of an 11,000-square-foot arena space. As such, they're not aiming their virtual crosshairs at anything like a Kickstarter campaign. Instead, Kelly dreams of VRCade beginning as a dual-use space: serious business space in the daytime; arcade-style, pay-as-you-go play center at night.

Kelly and his team are already well on their way to proving VRCade’s worth for more than just virtual play spaces. For one thing, they've loaded Google SketchUp 3D modeling files into the VRCade, then invited architects and clients to put the headset on and walk through their potential new homes. The full-walking experience means builders and designers can enjoy an immediate sense of scale while considering, say, the size and placement of windows. After those demos, Kelly is always keen to ask the architects, "How much is this worth to you?" The answers have been pretty high.

Other recent VRCade converts include current and former US military personnel, who have remarked on how the new system compares to the most common training simulators. "A few people who've been in the military used this and said, 'Your system right here kicks the pants off of ours.' What if you could have [VRCade] in a truck that folds out and then drive up and have people train?" VRCade could do that, the team insists, and they could simulate everything from wind to weapon jams—and provide instant replays and performance analysis to boot.

VRCade’s presence at Seattle's Reactor Accelerator means the team is also meeting a lot of small-fry game makers who are working with engines like Unity. "It can take us as little as 10 minutes to get Unity apps running in VRCade," Kelly brags. A game made with VRCade in mind doesn't need animations coded for things like head bob, reloads, and the like; the motion tracking handles all of that. Smaller-scale proofs of concept from these developers aren't far off, and the team already has a "VRDK" (virtual reality developer's kit) in the works to smooth the transition process for working first-person games.
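The quick turnaround makes sense when you consider what a port removes rather than adds: the code that fakes camera motion (joystick integration, head bob, lean animations) gets replaced by a single per-frame copy of the tracked headset pose. A hypothetical sketch of that idea in Python (class and method names are invented for illustration, not VRCade's actual API):

```python
class TrackedCamera:
    """Minimal sketch: in a tracked volume, the camera is not animated or
    driven by input; it simply mirrors the headset's mocap pose each frame."""

    def __init__(self, tracker):
        self.tracker = tracker  # any object yielding (position, rotation)
        self.position = (0.0, 0.0, 0.0)
        self.rotation = (0.0, 0.0, 0.0)

    def update(self):
        # No velocity integration, no head-bob or reload animation:
        # the player's real movement IS the camera movement.
        self.position, self.rotation = self.tracker.latest_pose()
```

Nothing here interpolates, predicts, or plays an animation, which is also why the usual joystick-driven mismatch between eyes and body (and the motion sickness that comes with it) drops out.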

Looking to the future

All of this talk is getting a bit ahead of the current state of things, though. The VRCade-specific apps—and the heavy-duty investors who could pump in the money needed to build VRCade's first dream center—are still in the future. A good VRCade game is going to need enough space to allow a lot of running around, after all. That's a serious chicken-or-egg hiccup; who's going to make a game that is logistically untestable? And who’s going to fund such a huge space for games that don’t exist yet?

In the meantime, the team members at least have nearly a full year of free office space and support at Reactor Accelerator, where they will get to sit elbow-to-elbow with other tech-obsessed dreamers. After successful demos at small Seattle events, they're gearing up for a public blitz at shows like GDC, perhaps even E3 2014.

That is long enough, Kelly believes, for his next-level VR system to hit that sweet spot between high concept and sellable potential. He's happy to point any doubters toward his VR peers at Oculus Rift.

"Head-mounted displays have been around for 30 years, but they've either really sucked or were outlandishly expensive," Kelly says. "There was no real reason for that industry to push forward. When [Oculus Rift creator] Palmer Luckey said, 'I can build one that's better and cheaper than anything out there,' the entire industry showed up. That's the same thing we're doing here.

"Right now, tracking yourself in VR space is a very elite, proprietary system," he continues. "There is nothing for the consumer market at all. Because it's not for consumers, it suffers from issues like sitting in a chair, using a mouse and keyboard to walk, dealing with a wire hanging from a boom on the ceiling, not tracking props... [They're] very finite. Our system is extremely open because it's designed from a gamer's perspective to take on all sorts of flexibility. Tons of people, tons of props, tons of movement, 60 frames-per-second, surround-sound, constant signals—standards that most industries don't need."

Promoted Comments

You see them in real time. If there are other players in the space, their position is tracked perfectly along with yours. You can high five them if you'd like. This means you are in the same situation as if you were playing laser tag and you bump into someone. There will be no surprises.

Three players have to find their way out of a maze inhabited by Weeping Angels. For non-Who-fans, the Angels can only move when nobody is looking at them. If you blink, or if the lights flicker, the Angels move.

The players would need to use teamwork to ensure there is always somebody looking at all visible Angels and all possible places where Angels could approach.

Add a Sonic Screwdriver as an extra puzzle-solving mechanic and you've got a great gaming experience. The headsets would need modifying, though, to be able to see when somebody blinks when they should be staring at an Angel.

His eureka moment came in late 2012 when he started looking into something no other commercial-grade option had considered: motion-capture cameras.

Commercial outside-in optical tracking is nothing new, Vicon being the ur-example known by everyone and having been around for donkey's years. $20,000 sounds about right for a smaller lower-end setup. Consumer outside-in optical tracking is the hard bit. The cameras themselves are approaching suitable (the PS4eye would be really nice if it had a USB compatible interface), and tuning them for latency over processing for a 'nice' image is fairly simple, but you're left with the issue of precisely and accurately locating the cameras with respect to each other without much user interaction.

The rift only contains a single 1280x800 panel, split down the middle to provide one view per eye.

Quote:

adding that the Rift's original dream video panels aren't even produced anymore.

The original panel, while being a near-perfect size for efficient usage at a 90° FoV, was a 6-bit non-FRC panel with exceptionally poor contrast and a very slow pixel transition time (i.e. laggy and smeary). Not so much a 'dream' panel as a 'what was available at the time for cheap' panel. I assume VRcade are using a panel of similar size but a different model (if it is indeed 720p rather than 1280x800), which hopefully does not have some of these issues. MIPI adapter chips and boards appear to be coming close to market, so access to modern cellphone and tablet displays is on the horizon for a significant jump in resolution and display quality (IPS rather than TN) for DIY HMDs.

1:1 VR workspaces are the cream-of-the-crop for VR environments. But above the costs for the HMDs, wireless video transceivers, processing hardware, tactile props, cameras, etc. is the cost of having a big empty room that is dedicated to the space. If VRcade can make their system portable and quick enough (and moreover EASY enough) to set up, they've got something really special.

For the real world (with people running about), you would likely have to make it work only while the camouflaged player was still or moving very slowly; if another player moved directly toward them and got close enough to collide within the next second or so, they would become visible again.

It would make for an interesting system. Especially if the player was heavily incentivised to remain invisible, so deliberately avoided placing themselves in situations where such fail-safes had to kick in.

Imagine the scene in the slaughter house from Predator 2, done as a game with one team and one (or a few) predators.

I'm not sure if we are understanding each other. In the VRcade, you have the ability to use your eyeballs to look anywhere your head is facing, just like the real world, and you have the ability to move your head as well, just like the real world. The only difference is the inherent current limitation of FOV on HMDs, which simply simulates the effect of wearing a helmet which blocks some of your FOV.

Users know that our system is accurate after we perform the grab test mentioned in the article. Holding out the gun in front of the player, they are able to grab it even without seeing their real hands OR having an in-game representation of their hands. Using just depth perception and muscle memory, users grab the gun out of the air every time. This same level of accuracy applies to high fives.


Sounds cool, but how do you stop people from slamming into one another as they're running around that giant space?

I can actually contradict this based on experience. If I told you where I got the experience to do so, you'd have roughly a 33% chance to ID me, so I won't; knowing that the NSA is stalking me is bad enough. However, there is one thing people don't seem to understand about 3D space like this, and that is how you actually "see" the world: peripheral vision and eye movement. This is why you will most likely still end up running into people in the same space even with full body tracking.

Think of it like this. Sit still and look around without moving your head. Easy, right? Now, fire up a first-person game and put your face close enough to the screen so it fills your view. Look around the virtual world without using a computer input to move. You can't. This instinct to look with your eyes without moving your head makes you feel you are clear to move. In reality, collisions happen between people almost every time in that type of environment. Mind you, the technology is still a freaking blast, and I wish I'd had a game to test the whole thing out with.

EDIT : Watching an attempted high five is hilarious. It is even funnier if a marker is knocked out of place due to the high five.

Right, I understand. Helmet with high-resolution (or equivalent) LCD displays with reflective targets in a pattern for tracking. Several clusters of reflective targets for hands, feet, arms, legs, chest, and back. A pattern of targets on your prop; a gun in your case. And a series of pilots and mechanical engineers bumping into each other and missing high fives.

Was just giving my observations from having worked on a similar type project for two years. As with anything experiences may differ.

Damn...this was always the idea I had when people talked about VR games and the issue of how to simulate running/walking around. There were all sorts of fancy treadmills and other methods of making it feel like you're running but I always felt that the simplest way to do VR/AR gaming was a basic "laser tag" style setup. Your goggles either overlaid the "game" stuff on the real world or just showed you walls and stuff wherever there were actual walls. With the tech getting small enough and wireless latency low enough, you could do a lot of the processing on wearable processors as well as send data back to the "server" to keep track of it all.

The simplest implementation would be to build a sort of maze like in laser tag so players could just run around and shoot each other. The only difference is that you don't see a brightly lit warehouse through the goggles...you see a derelict space freighter or a haunted house or whatever setting they want to program.

It seems highly unlikely that there are no downsides. Even a wired system like the Oculus has lag problems, for example. When the article puts 'lag-free' in there, I become suspicious, since that seems technically impossible.

There is an impact vest made by TN Games that we are experimenting with. Getting power and signal to the thing wirelessly is the big challenge. Not to mention size and fit across every body type! The Germans play with pain inducing gaming tech all the time, so I'm thinking we may learn a bit from them. Seriously, though, paintball fans love the rush and risk of causing and receiving pain, so this isn't a bad idea for everyone.




Range, latency, and fidelity are all MASSIVE shortcomings on the Kinect. They are not usable. The new Kinect will be better, but because there are no markers to track, you leave a lot of guesswork up to the camera, and unless you want people throwing up, you cannot leave head tracking to guess work. Nevermind prop tracking, skeleton solving, occlusion issues, etc. Kinects are not an option.

Therein lies the issue; a team of people who set dress every game and every level in every game. It would be a massive undertaking and would cause camera occlusion issues. The benefit is that you can lean on objects. The downside is that the tracking bandwidth gets clogged because now you are tracking crates and walls, which means fewer players and props. If you left the set up, that entire capture volume, all that hardware, and the real estate would only be used for one game and one level, which is very limiting.

The lack of real walls actually isn't a big deal at all. Once you are in the virtual world, the real world vanishes. You respect walls and, because we control every aspect of this reality, we can prevent your character from walking through walls even if you do.


I like this idea. Maybe have the view 'black out' or similar when people walk through obstacles. Make cheating the system a useless endeavor.

Imagine if I could make a 300 mph U-turn in an Indy stock car, or speed up to 1,000 mph through a busy financial district? Zip through traffic in a stolen car with a cop on my tail? Something I wouldn't dare do in reality? And how about that Avatar scenery? It was awesome. Riding on the back of that red dragon is worth $20K. When can I get one? Any discount for Ars users? So many questions.


This is a big issue, but there are ways around it. Originally we wanted to put players in a suit, but the hygiene issues, and suit up time, were unacceptable. Straps which go over clothing to hold markers is more hygienic, but there will always be some kind of skin contact and it is up to us to address that.

The idea of a VR arcade makes me giddy with excitement, but at the same time I'm finding myself preemptively repulsed at the idea of putting equipment on my head that's so recently been on another person. (PTSD from that head-lice scare in first grade? Possibly.)

Can confirm, the full body suits for any Mocap systems turn into sweat sponges. The only way around it would be a hybrid Marker/Vision based tracking. Thus players wouldn't need to wear a body suit, and at most a few markers for position, and player ID.

8+ player Mocap solutions also aren't going to be cheap unfortunately. I wish VRCade luck in getting the funding needed for such a setup.

Yes, you absolutely may come by to try it. Anyone can. Just email me at jamie@vrcade.com and we will set it up.

True to their word, Jamie and the guys let me and a group of friends come check out the system tonight. It was amazing! I've demoed the Oculus Rift before, but when you can actually move around in the 3D space and interact with objects... believe me when I say you haven't experienced anything like it before.

Thank you so much, Jamie and Mark. Any video and/or write-up simply cannot explain how awesome this is. Moving around and interacting with objects was incredibly natural; poking and manipulating objects in space, crouching, and leaning around objects to fire from cover all worked how you would expect, without any training or learning curve. It just works.

I can't express how lame watching a video, or watching somebody else use it in front of you, is compared to experiencing it.

I really don't understand the hygiene issues, you've never played laser tag/laser dome or any kind of vest-based gun-game? You have to touch the gear a lot and also wear it.

Also, never done go-kart? Those full-on helmets were on some other person just before your turn. Don't see the problem ;p

Talk to the bowling alley guys (and go-kart guys, and paintball guys, etc). They've been dealing with this "issue" for years. There are all kinds of sprays and powders and whatnot that will sterilise equipment between uses. It's a solved problem in any number of ways. No need to over-think it.