I feel like I'm the only one who has zero interest in this. Of course, I also dislike 3D movies quite a bit, and I can never see those 3D images on paper.

Have you tried 3D gaming? I agree that 3D movies aren't very interesting, but the 3D effect in gaming is much more pronounced and immersive, especially at high graphics settings. People call it a gimmick, but anyone who has run through Metro 2033 on Ultra knows it is anything but. Movies have a sort of pop-up-book effect, and the border of the screen often ruins the illusion. A 3D game offers far more depth to individual objects and the environment, since everything is being rendered on a pixel-by-pixel basis. Fog literally hangs in the air, and textures have a tactile quality, as if you could reach out and touch them. The screen-border problem is still present in most cases, though most people sit closer to a PC monitor than to a TV or movie screen, so it covers a greater portion of your field of view.

The way the Rift works, your entire field of view is the game; you don't see anything else. And since it isn't rendering two images in sequence for the 3D effect, the power required to run high-quality visuals at 1080p and 60 fps will be much lower. Most people can't use 3D Vision at high settings right now; it is too demanding at 120 fps.

Quote:

Originally Posted by Jay G.

Yes, the view changes when you turn your head.

Yes, and they are working on positional tracking (i.e., tracking your head's position along the x, y, and z axes), so you can lean forward or tilt your head around corners. This is one of the bigger weaknesses of the dev kit: if you lean forward while looking straight ahead, for example, the in-game camera doesn't move with you.

Quote:

Originally Posted by edstein

I just can't see playing a game for extended periods with this thing on my head. Seems cool as a tech demo but that's about it.

The unit is supposed to be insanely light, no heavier than a pair of ski goggles. That will help a lot.

I also think the lack of active shutter will make it much more tolerable for long periods of time. But yeah, like 3D, I don't see this as a total replacement for 2D-screen gaming. Certain game types will be amazing, like the examples below.

Traditional gameplay styles will be completely upended; you obviously won't want to play precision-platforming games or third-person titles. Games for the Rift, I imagine, will be far less about the physical controls and more about what you are actually seeing and doing in the environment. Imagine a Myst-style puzzle game; you will be able to do so much more when it feels like you are in the world. Super-fast twitch shooters like COD? Not so much. Slow simulators like Arma? Amazing.

Quote:

Originally Posted by Drexl

Exactly, because in that game you could look around the cockpit, but the ship would continue to move in whatever direction you're pointing it towards with the thumbstick or joystick (well, actually if you're not turning it would just keep moving forward without any stick input, since you only move the stick when you turn).

Also the new Star Citizen game from the creator of Wing Commander will have Rift support.

Quote:

Originally Posted by spainlinx0

I'm trying to see how an FPS would work on this? Maybe when you tuck your gun and run, you can move in one direction while looking around, but not changing the direction of your run. Then when you bring your gun up, or are moving more slowly, your view actually changes where you're aiming? Or you aim within your view?

See the above example of Arma, a series which has had an independent-look feature for a while and works with head-tracking software. You will probably also see the strafing control eliminated in favor of using the A and D keys for turning. I don't think traditional game controls are going to work well with the Rift.

Here's a recent article from Wired about the Rift's development and how they are addressing the latency issues that cause nausea. I can't get enough of this thing; it has been a long time since I have been this excited about new technology. And while Samsung has never let anyone use their OLED technology, it seems like that might change with Oculus.

In May 2012, a programmer named John Carmack—who, as a cofounder of id Software and the man behind games like Doom and Wolfenstein 3D, is widely seen as the father of 3D gaming—tweeted a picture of what looked like steampunk bifocals made out of a black shoebox. “This is a lot cooler than it looks,” read the caption.

He was right.

Since then, that awkward contraption—now more streamlined, and known as the Oculus Rift—has become the most anticipated new product in gaming since the Nintendo Wii got people off the couch. It’s a head-mounted display that promises to be a gigantic step toward what many had dismissed as an unrealizable dream: virtual reality.

The Rift is the brainchild of a 19-year-old tinkerer and VR enthusiast named Palmer Luckey. A collector of old VR headsets, Luckey was all too familiar with the shortcomings every system had faced—small fields of vision, unwieldy form factors, horrific resolution. He was also uniquely suited to do something about it: Years of modding videogame consoles and refurbishing iPhones for fun and profit had given him enough fine-soldering skills to start Frankensteining pieces from his existing headset collection. Eventually, chronicling his efforts on a message board devoted to 3D gaming, he figured out a way to cobble together a headset with a field of vision that dwarfed anything else on the market and allowed people to become completely immersed in a 360-degree playspace.

Luckey originally envisioned his creation as a DIY kit for hobbyists; after joining up with two partners and officially incorporating, though, they realized they could have a game-changing consumer peripheral on their hands. They began pre-selling $300 prototype headsets to software developers on Kickstarter in August 2012, just weeks after Carmack had taken his early version (on loan from Luckey) to the E3 gaming trade show. They pulled in nearly $2.5 million—and in spring of 2013, when those units began shipping to developers, virtual reality started to seem a lot less virtual.

Since then, the hype has only gotten louder. Oculus brought an improved version of the Rift to E3 in June and showed off its 1080p resolution as well as demos that placed wearers inside a virtual movie theater watching a trailer for Man of Steel. For the first time, applications beyond gaming began to suggest themselves. People, to use the clinical term, freaked out. Hell, I freaked out. Media coverage was rapturous; devoted Oculus forums and subreddits proliferated. Oculus launched a project depository for game demos, and developers responded, creating completely new experiences for a completely new medium.

Now they just need to build a consumer version that can pay off the promise of Luckey’s early experiments. And now that the company has received $75 million in Series B funding from the likes of Andreessen Horowitz—which sources say value the company at more than $250 million—the stakes are even higher. Before the official consumer version of the Rift (known internally as V1) becomes available in 2014, they’ll have to iron out seemingly innumerable kinks, from finalizing the display tech to deciding exactly what features will be included. But first and foremost, they’ll have to solve a problem that has plagued VR since the days of Tron: how not to make people sick.

There are a number of hurdles to creating a seamless virtual experience. Tracking, resolution, motion blur; sometimes it seems like the list never ends. But underneath them all is the most visceral obstacle of all. It’s the one that makes people feel dizzy, or hot, or nauseated. It’s latency.

Latency is lag. It’s the delay between inputting a command and seeing the effects of that command. You see it today in online gaming when your broadband connection gets unreliable: suddenly, you’re a half-step behind the action. Your button pushes and thumbstick moves don’t kick in immediately, and by the time you’ve seen another player he’s already killed you. In that kind of situation, the only price of latency is frustration. Wearing a VR headset, though, the price is something akin to motion sickness. Oculus CEO Brendan Iribe calls it “the Uncomfortable Valley”—that queasy feeling players get when they move their head and experience a barely perceptible delay while the world catches up. People are willing to put up with many things in the name of novelty, but nausea isn’t one of them.

Of course, you can’t fully remove latency—there always will be some delay. The question is how low latency needs to be. As processor power has progressed, various head-mounted displays and VR sets have claimed to have solved the latency problem at various thresholds: 100 milliseconds! 40 milliseconds! Those thresholds might do away with the most frustrating delays, but they can’t guarantee comfort. “It’s easier to get sick from latency than it is to perceive it,” Luckey says. “People in the VR industry have been disagreeing on what humans can perceive—and that number always seems to match up to what their system is just barely able to do.”

For Oculus, that magic number is somewhere under 20 milliseconds. “When you cross that, you get to something magical. It really enhances the sense of presence,” says Nate Mitchell, the company’s vice president of product. “I’m very confident we’ll be sub-15.” That’s about half the latency the devkit version can attain, and it will require a bunch of innovations—some that cut the actual latency, and some that just convince you that they have.

Motion to Photons

“Motion-to-photons latency,” as the Oculus team calls it, is the aggregate latency that arises from the instant a player moves his or her head to when the player sees a new image on the screen. That pipeline consists of six stages:

1. User input
2. A USB cable delivering that command from the Rift to a computer
3. The game engine translating that command for the graphics processor (GPU)
4. The processor issuing a “write” command for a new image
5. The Rift’s display switching pixels to display the new image
6. The new image appearing in full

Oculus’ task is shaving as many milliseconds of latency from as many of these stages as possible.

The first step is minimizing input latency—speeding the process of translating action into digital commands. That’s a matter of hardware. While Luckey’s original prototypes used an off-the-shelf sensor, current Rift headsets utilize a proprietary inertial measurement unit (IMU) tracker that uses a gyroscope, accelerometer, and magnetometer and then fuses those readings to evaluate head motion in real time. That “sensor fusion,” as Oculus calls it, drops latency for this first stage—which was often around 15 milliseconds for the original tracker—to below a millisecond. Getting that input command from the Rift to the computer—where all the actual processing takes place—adds another 1-2 milliseconds, a number that can’t be reduced without reinventing the USB cable entirely.
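Oculus hasn't published its fusion algorithm, but the classic way to blend a drifting-but-smooth gyroscope with a noisy-but-absolute accelerometer is a complementary filter. A minimal sketch in Python; the 0.98 blend weight and the 1000 Hz update rate here are illustrative, not Oculus's actual values:

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse one gyro and one accelerometer sample into a pitch estimate (radians).

    The gyro integrates smoothly but drifts over time; the accelerometer
    gives an absolute, gravity-referenced angle but is noisy. Weighting the
    gyro heavily keeps its fast response while the accelerometer term
    slowly pulls the drift back toward zero.
    """
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular velocity
    accel_pitch = math.atan2(accel_x, accel_z)   # angle from the gravity vector
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# One simulated second at 1000 samples/s: head still and level, but the
# gyro drifting at 0.01 rad/s. Raw integration would accumulate 0.01 rad
# of error; the fused estimate stays bounded near zero.
pitch = 0.0
for _ in range(1000):
    pitch = complementary_filter(pitch, gyro_rate=0.01, accel_x=0.0, accel_z=1.0, dt=0.001)
print(pitch)  # small residual, well under the 0.01 rad of raw drift
```

That bounded-drift behavior is the whole point of fusing three sensors instead of trusting any one of them.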

At that point, the burden of latency shifts from the Rift itself to the game developer—in particular, how high a frame rate the game is able to deliver. Most modern-day games play at 60 frames per second; at that speed, a single game image takes 16.67 milliseconds to render and get pushed to the computer’s graphics processor. If a developer is able to double that speed, of course, they can halve the latency introduced by that render time (i.e., a game that plays at 120fps takes 8 milliseconds to render a single image). That’s not unheard of for PC games, but for now Oculus has to assume the lowest common denominator of 16.67 milliseconds.
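The render-time figures above are just frame duration, which you can check in a line of arithmetic:

```python
# Time one frame occupies at a given frame rate, in milliseconds.
# At 60 fps each image takes 1000/60 ms; doubling the frame rate halves it.
def frame_time_ms(fps):
    return 1000.0 / fps

print(round(frame_time_ms(60), 2))   # 16.67
print(round(frame_time_ms(120), 2))  # 8.33
```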

If you’re keeping track, we’re already close to 20 milliseconds of latency—and the image in your headset still hasn’t changed. For that to happen, the computer’s GPU needs to send a command back up the USB cable to each pixel in the Rift’s display. Some pixel switches happen very quickly—a black pixel can turn white in less than 10 milliseconds—but for one gray pixel to become slightly grayer can take close to 30 milliseconds. To save time, each individual pixel starts switching as soon as it receives its “write” command from the GPU; the Rift writes from bottom to top, so by the time the GPU is sending a command to the top lines of the display, the bottom lines have already switched.

All told, that process “smears out,” in Mitchell’s words, to between 20 and 30 milliseconds overall; since the processing and writing happen somewhat concurrently, that adds up to a total lag of up to 40 milliseconds for a 60fps game. (For reference, Luckey’s original duct-tape prototype was in the 50s—and Sony’s $1000 HMZ T3W has 52 milliseconds of latency in the device itself, not counting the latency introduced by a game.) That’s still too much.
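As a sanity check, here are the article's per-stage figures added up (a naive sum; since the render and pixel-switch stages overlap, the real-world total the article quotes lands closer to 40 ms):

```python
# Rough motion-to-photons budget for a 60 fps game, using the article's
# per-stage figures in milliseconds. The pixel-switch value is the
# midpoint of the quoted 20-30 ms "smear."
stages = {
    "IMU sensor fusion":     1.0,    # "below a millisecond"
    "USB to computer":       2.0,    # 1-2 ms
    "render one frame":      16.67,  # 1000/60
    "pixel switching smear": 25.0,   # midpoint of 20-30 ms
}
naive_total = sum(stages.values())
print(f"naive sum: {naive_total:.1f} ms")  # ~44.7 ms before accounting for overlap
```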

Fortunately, Oculus has a few tricks left to employ.

Nips and Tucks

So that time-consuming pixel-switching process that hogs so much of the latency pipeline? Turns out it’s only time-consuming on LCD screens. OLED technology, such as what’s found on a Samsung Galaxy phone, can switch pixels in a matter of microseconds. So will the consumer version of the Rift utilize OLED panels? “We haven’t decided anything, but it would make a lot of sense to consider,” says Mitchell (with a smile, it should be noted). “The real trick to OLED is that there’s only one manufacturer in the world making OLED technology in the form factor we want.” That would be Samsung—and they’ve never sold their tech to any third party.

That doesn’t mean Oculus isn’t considering other display technologies that outperform LCD—and it doesn’t mean they’re not tinkering. As part of their ongoing collaborative work with Valve Software, the two companies worked up a prototype that, along with a number of other improvements that WIRED saw but can’t discuss, utilized an OLED panel. That was the prototype that got them below 20 milliseconds for the first time. It was also the first time that Iribe was able to use the Rift without feeling sick. “I’m one of the most sensitive in the office,” Iribe says. “People always joke that ‘it’s not ready to show Brendan.’ And for the first time, I felt incredibly good.” So where did the OLED panel come from? “We gutted a phone,” says Luckey. They won’t say what phone they gutted, but it’s clear that it’s a Galaxy S4.

By all accounts, that was a turning point. Andreessen Horowitz founder Marc Andreessen had been asking how long it would take for Oculus to conquer the Uncomfortable Valley. “We’ve had to be honest and say it’s soon, but not yet,” Iribe says. After he saw the prototype, though, Iribe emailed Andreessen: “I said, ‘Okay, we’re ready—you need to come down here on the next flight.’”

Chris Dixon was among four people from Andreessen Horowitz who saw the newest prototype. “I’d tried the devkit and thought it was really impressive, but the latency wasn’t quite there in my opinion,” Dixon says. “Going and seeing the new prototype gave me confidence that they were going to solve all of those problems. I think I’ve seen five or six computer demos in my life that made me think the world was about to change: Apple II, Netscape, Google, iPhone…then Oculus. It was that kind of amazing.” (The prototype and functionality that Andreessen Horowitz saw will not be shown at CES, as has been speculated elsewhere.)

It’s not all thanks to the OLED display, though. The Rift’s three-sensor head-tracking unit also samples motion data at 1000 times a second—four times faster than the original tracker from Luckey’s prototype. That high-speed data collection has not just allowed them to shave even more latency, but to actually predict where a player will move his head before he moves it. “There’s research that says you can’t predict where the head’s going well enough to render ahead of it, but nobody’s ever had a thousand samples a second to be able to try,” says Luckey.

He goes on: “If a head’s moving very quickly, you know it cannot instantly stop, and you know that it can only start slowing down at a certain speed, so you can say ‘well, if they’re turning their head, I know that they can’t slow down their head beyond a certain amount, so I’m going to render a little bit ahead of where it’s reporting they are.’ Then, as your head starts to slow, it lowers the prediction back down to zero, and during a very high-speed turn it can ramp it back up, which is when you need it as well.” This doesn’t change the processing latency in any real way, but it lets the user see the new image sooner, reducing perceived latency by nearly 10 milliseconds.
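Luckey doesn't give a formula, but the scheme he describes (render ahead along the current angular velocity, with the lookahead ramping up during fast turns and falling back to zero as the head slows) can be sketched like this. The 10 ms horizon and the 3 rad/s full-speed threshold are made-up illustrative numbers:

```python
def predicted_yaw(yaw, yaw_rate, max_lookahead_s=0.010, full_speed=3.0):
    """Predict head yaw a few milliseconds ahead of the last measurement.

    yaw             current measured yaw (radians)
    yaw_rate        measured angular velocity (rad/s)
    max_lookahead_s lookahead at full speed (illustrative value)
    full_speed      rad/s at which prediction reaches its maximum (illustrative)

    The lookahead scales with head speed: zero when the head is still
    (no prediction needed, and none applied), maximal during a fast turn,
    which is exactly when perceived latency hurts most.
    """
    ramp = min(abs(yaw_rate) / full_speed, 1.0)
    return yaw + yaw_rate * max_lookahead_s * ramp

print(predicted_yaw(0.0, 0.0))  # still head: no prediction, returns 0.0
print(predicted_yaw(0.0, 6.0))  # fast turn: renders ~0.06 rad ahead
```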

And then there’s Carmack, the graphics wizard who officially joined Oculus in August as CTO. Before he joined the team, Carmack wrote on his own blog about mitigating latency by changing the way the GPU writes displays. (If you’re not a programmer, don’t even bother trying to parse it: just know that it mentions things like “late frame scheduling” and “asynchronous time warp,” and be thankful there’s not a quiz.) Since then, Oculus has met with GPU manufacturers NVIDIA, Qualcomm, and AMD in an effort to ensure off-the-shelf PCs (and next-generation Android phones) will be able to pump the proper experience to the headset. “Making people buy a particular graphics card just to use Rift is totally off the table,” Mitchell says.

There’s still one more piece of the puzzle: ensuring the games and applications people are developing for the Rift are as streamlined as possible. In October, Oculus began shipping the Latency Tester, a tiny tool that allows developers to measure latency of their own motion-to-photons pipeline. By placing it in one of the headset’s two eyecups and pressing the test button, developers send a test signal that tells their game engine to draw a colored square on the screen; the tester reports how many milliseconds it takes for the square to appear.
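Conceptually, that measurement is just a timestamp difference: note when the test signal fires, poll until the colored square is detected, and report the gap. A toy version in Python; the function names here are stand-ins (the real tester detects the square with a photosensor in the eyecup, not a callback):

```python
import time

def measure_latency_ms(fire_test_signal, square_visible, timeout_s=1.0):
    """Toy version of the Latency Tester's measurement loop.

    fire_test_signal: callable that tells the game engine to draw the square
    square_visible:   callable returning True once the square is detected
    Returns elapsed milliseconds between the signal and the detection.
    """
    start = time.perf_counter()
    fire_test_signal()
    while not square_visible():
        if time.perf_counter() - start > timeout_s:
            raise TimeoutError("square never appeared")
    return (time.perf_counter() - start) * 1000.0
```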

Granted, it’s all still a work in progress, but after attacking it from so many angles, Oculus doesn’t consider latency the bugaboo it once was. “We’ve crossed the Uncomfortable Valley,” Iribe concludes. So it’s on to other things. Mitchell points out that the Oculus team still needs to make significant progress on a number of challenges—tracking precision, image quality, resolution—before the consumer unit finally ships. But with an extra $75 million, and the guidance from one of the most hands-on VC firms around, they’re confident that they’ll get there.

In the meantime, there are always more milliseconds to lose. “We’re shooting for zero,” Mitchell says. “We’re super-close.”

Used one of these the other day, thought it was fantastic with ridiculous potential.

This aired on Friday:

*Shark Tank video*

I think Mark Cuban knows something more about competitors to the Rift.

This was a nice surprise. I knew he wouldn't get an investment, but I was surprised by how receptive the sharks were to the overall concept. It means something that he wasn't laughed out of the room.

Quote:

Originally Posted by spainlinx0

I had no idea that John Carmack had joined them as their chief technology officer. That's a nice development.

They also received that $75M investment from Andreessen Horowitz. Apparently Marc Andreessen got some hands on time with their latest prototype and was blown away. If I had to guess, they have pretty much resolved the latency and head tracking issues causing nausea in most people.

I'm with you. This is the most exciting tech product in development right now, by a long shot.

Quote:

Originally Posted by Dan

In a way, this is kind of a cool implementation of the Rift... in another way, it's kind of dumb.
*3D Cinema video*

I think it's really cool. The sense of presence you get with something like the Rift makes these virtual meeting places much more viable and interesting than something like Second Life. Imagine walking up to your friend's avatar and feeling like a real person is standing in front of you.

Quote:

Originally Posted by RichC2

The Cinema 3D stuff works surprisingly well, it gives you the sense that you're watching something on a massive screen.

I'm envious. Did you get to use the dev kit or one of the newer 1080p models?

Quote:

I think it's really cool. The sense of presence you get with something like the Rift makes these virtual meeting places much more viable and interesting than something like Second Life. Imagine walking up to your friend's avatar and feeling like a real person is standing in front of you.

I can agree with that. I just think the whole movie theatre thing is kind of a bad idea. If the Rift is using 1080p screens and you're watching something in a "virtual" theatre, then the resolution and picture quality just can't be as good as on your TV or monitor. I imagine the "projected" image on the Rift will lose something in the translation from the 1080p (or worse) source file to the virtual theatre experience.
Like I said, I like the idea of it, but I just can't imagine actually using it to watch a movie from start to finish.

Yeah, definitely. What really gets me excited is thinking five years down the line, when 2K/4K resolution becomes possible on these things and systems can drive them. Imagine a VR environment where you can't even see the pixels.

Quote:

I'm envious. Did you get to use the dev kit or one of the newer 1080p models?

Just the lower res Dev Kit.

Quote:

Originally Posted by spainlinx0

I don't understand why more people aren't excited by this. My friends barely have a reaction when I talk about it. It just seems like a huge game changer.

It is. Unfortunately, unless you've done your research, it sounds no different from the slew of older VR headsets released over the last 20 years.

Once people use it for the first time, they're pretty much sold. I was stoked to have one in my house and called a friend up to come check it out; he didn't share my excitement. Then he tried it, and I had to pry it away from him after 20 minutes.

So the Crystal Cove unit was demoed at CES this year and was basically the highlight of the show.

This new model integrates a camera into the setup, which tracks IR markers on the face of the Rift to detect head position. In addition, it utilizes a 1080p OLED screen and a new low-persistence rendering method that cuts the smearing that contributes to nausea. I don't really understand the technical side of that last part.

In any case, impressions are that motion sickness is basically eliminated, and the positional and head tracking have been refined to near perfection. People report playing for long periods of time with no desire to take it off.

I'm a little disappointed they needed a camera to solve the positional tracking problem, but it's a small price to pay.

The one thing that annoys me about any Rift information that comes out is the comments. Those people are the most negative people ever. I'm so glad I at least have this forum to discuss gaming with. Every other comment is "vaporware" or "they don't meet deadlines" or some other negative remark. What deadlines has this company missed? At this point I don't think my excitement can be dampened unless they reveal a price tag much, much higher than the anticipated $300. I plan on being a day-one buyer.

Quote:

The one thing that annoys me about any Rift information that comes out is the comments. Those people are the most negative people ever. Every other comment is "vaporware" or "they don't meet deadlines" or some other negative remark.

The best ones to me are "They got $75M and this is all they have to show?", keeping in mind there were all of three weeks between the $75M investment and Crystal Cove at CES. Because three weeks is a ton of time.