Posted
by
Unknown Lamer
on Wednesday January 08, 2014 @09:32AM
from the metaverse-not-included dept.

crabel writes "The Oculus Rift prototype Crystal Cove shown at CES uses a camera to track over two dozen infrared dots placed all over the headset. With the new tracking system, you can lean and crouch because the system knows where your head is in 3D space, which can also help reduce motion sickness by accurately reflecting motions that previously weren't detected. On top of that, the new 'low persistence' display practically removes motion blur."
The new low-persistence AMOLEDs also achieve 1920x1080 across the field of vision. Reports are that immersion was greatly enhanced with head tracking.

Thinking way out there... but if the Rift catches on, will significantly more brains be trained to cope with motion sickness? Will we be better equipped for space travel? I wonder if it will reduce motion sickness medication sales.

Think about the people you know who can't read in the car. Reading in the car doesn't make them handle it better next time; they just vomit twice.

That's a lot like saying that because you can't do a pull-up now, you'll never be able to do a pull-up. It presumes that a process of adaptation through incremental improvements is impossible.

It isn't like reading in the car makes a person vomit immediately; perhaps if they read for one minute more each day, they would get to a point where it wouldn't make them sick. Or maybe it is a matter of the speed of the car: if they were able to increase speed by 1 mph each day, they would acclimate.

There are genetic, environmental, congenital, cultural, and mental factors with different levels of relevance for every single human characteristic. Trying to reduce even the most genetic of traits (like colorblindness) to just one factor is going to produce some false positives and false negatives.

Sure. But there isn't a single human genotype, phenotype, or culture that will let its members thrive at zero partial pressure of oxygen. No matter who you are, you can't hold your breath indefinitely. (At least not without external support.)

Now, I'm pretty sure that susceptibility to VR sickness isn't as predetermined and immutable as oxygen metabolism, or even color-vision defects. I have no idea where it falls on the spectrum, but I'm skeptical of anyone who says "you just need to practice and get over it."

I think the reality is that we'll fall into three groups: those with no VR sickness, those who can practice and get over it, and those who will always have VR sickness unless the tracking latency gets down to a few milliseconds.

There is no training to "overcome" motion sickness. The arrogance and lack of understanding of the problem are going to make the vomit helmets so much fun to watch. There is a genetic predisposition to get past motion sickness, which is why the British tend to handle it better and Asians tend not to. Ballerinas technically "adapt" to the spin that normally causes motion sickness, but it's just a trick: they learn to lock their eyes on a distant object through a portion of the spin. You screw up the trick and you get sick.

It's the Navy that's doing the current experiment, and NASA that didn't get very far. Your article doesn't even cover any positive results worth reporting. They expected a 15 to 20 percent success rate, but do you notice the lack of a follow-up report? That was back in July of 2012. How much of it was actually successful? How many careers did they actually salvage? And if this was all so successful, why have they moved on to an experimental gel and mist? I know why they keep trying: the Navy and NASA have jobs that depend on it.

It isn't like reading in the car makes a person vomit immediately; perhaps if they read for one minute more each day, they would get to a point where it wouldn't make them sick.

There are very, very few people who would go to that effort in this case. 99.999% of people would get motion sick once and then never use the device again. Since it is primarily for entertainment purposes, what would be the point?

I don't know about that. While I realise this is an anecdote, and I can't rule out physiological changes, I recall suffering a lot from motion sickness as a child (travelling in the back seat and messing with my brothers limited my view of the outside of the vehicle, causing the motion sickness). But after a lifetime of playing games and reading books (with breaks whenever I felt sickness coming on), I've found that the length of time I can go before any motion sickness kicks in gets longer and longer. I can now go far longer than I ever could as a child.

I couldn't read in the car for years; I'd just get a big headache and feel sick. Then I started having to take the bus to university, 30 minutes each way, 3-5 days a week. It took me a few months, but I adapted, and now I can read anywhere just fine. So yes, you can most certainly adapt.

I really hope this doesn't turn out to be what the 3D trend became for movies. Unlike past attempts at VR headsets, there's now both the hardware and the knowledge available to build something revolutionary that actually *works*. Plus JC is on board, so expectations are very high.

I've watched 10+ movies in the cinema in 3D, including Avatar, The Hobbit, Star Trek Into Darkness and Gravity in IMAX, and a range of others in regular 3D. As many other people will tell you, Gravity and Avatar are a different class of 3D movie from everything else. As for the rest, I can easily tell they used 3D as a gimmick: you got the odd spear/bee/shrapnel flying out at you from the screen to remind you that the movie was 3D, because frankly, for everything else, you could easily forget it or not notice it.

However, I have also played computer games in 3D. The difference between a game and a movie is that the movie chooses specific things to show you in 3D. In a game, they simply render EVERYTHING from two viewpoints and transmit one view to each eye. I played Crysis 2 on the Xbox 360 and was blown away by how it (I really dread to say) added a new dimension to the game. The HUD was rendered right up in your face, and everything behind it was at not just varying, but the RIGHT depth. Far-away monsters were far away, close-up ones were close up, and everything in between had its own natural place. If water splashed at you, it felt real. It didn't feel like your vision had simply been blurred; it felt like something had actually blocked you. It was there, real.
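To illustrate the two-viewpoints idea, here's a toy sketch of how stereo rendering is commonly done (my own illustration, not how Crysis 2 actually works): offset the camera half the interpupillary distance per eye and render the whole scene twice.

```python
def eye_view_matrices(view, ipd=0.064):
    """Derive left/right eye view matrices from one camera view matrix by
    shifting the viewpoint half the interpupillary distance (IPD) along the
    camera's local x axis. Illustrative only: a real stereo renderer also
    needs per-eye projection matrices with asymmetric frustums."""
    left = [row[:] for row in view]
    right = [row[:] for row in view]
    # In a view matrix, moving the camera left shifts the world right (+x).
    left[0][3] += ipd / 2.0
    right[0][3] -= ipd / 2.0
    return left, right

# Camera at the origin: a 4x4 identity view matrix.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
left_view, right_view = eye_view_matrices(identity)
print(left_view[0][3], right_view[0][3])  # 0.032 -0.032
```

The point of the sketch is that nothing per-object is needed: every draw call is simply issued twice with a slightly different view transform, which is why everything in the scene ends up at a consistent depth.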

I also have an account from a guildmate who played WoW in 3D and wonders how he ever managed to play it flat before: all the players now seemed like they were actually standing in places in relation to each other. He wonders what would happen if WoW had the player collision seen in other games, because when viewed in 3D it looked so horrendously wrong for one player to be standing inside the sprite of another, shattering the complex illusion of realness created by the 3D effect.

There is so much more than just games that the Rift could be used for. To paraphrase Palmer Luckey: "The reason I chose to make the Rift the way I have is to make a device that doesn't strive for perfection in one area and fall down in others. I wanted to make something that was good enough in as many areas as possible, and affordable, so that we can get it out to people. It is not until people have it, and start using it, that we'll know what it can be used for." He may have mentioned the Kinect as an example of something made for one use being put to many unforeseen other uses.

You could use an HD version of Google Street View to record famous places and locations; then people could explore them without having to make the trip there. You could use it for 3D conference calls (imagine using a future version of FaceRig to make the Rift headset disappear). The problem is that there aren't enough of these devices out there for inventors to invent with just now.

What people are thinking this could be used for is only the tip of the iceberg. The reality might turn out to be so much more than first thought.

My favorite use of the Rift, and what really showed me the gameplay possibilities VR offers, is the game "Lunar Flight".

You're inside the cockpit of a lunar lander. What really makes it cool is that you have to look around at your controls. If you want to turn the lander on, for example, you have to turn all the way to your right, where the power button is, to be able to activate it. If you want to access your map, you have to turn to the left, where the monitor is. Looking around at your different monitors makes the cockpit feel like a real place.

However, I have also played computer games in 3D. The difference between a game and a movie is that the movie chooses specific things to show you in 3D. In a game, they simply render EVERYTHING from two viewpoints and transmit one view to each eye.

That's because Hollywood does 3D movies in post-production. If a movie were shot with a 3D camera (one with two viewpoints), everything would be in 3D.

I think the bigger issue is that in a 3D movie the depth of field is not always infinite. There are many shots where the person talking is in focus while things in the foreground and background are blurry. Even if you try to focus on the background, which appears to be a different focal plane, the background will remain blurry and it breaks the illusion.

Yes, that too is something that doesn't happen in a computer game, and something that only a very brave person might try in a movie. Moviemakers use focus as a way of keeping your attention on what they want you to watch. Perhaps it's a limitation of using a camera to record movies. Perhaps they could make an animated movie (Pixar or similar) where everything is always in focus, and you choose for yourself where to look.

That seems a little cynical, don't you think? More and more movies are coming out with a 3D showing; if you look, I'd guess you'd notice it trending up for blockbusters and kids' movies. I will grant that it doesn't work well at home, because passive 3D TVs have, until recently, been rare. Passive 3D TVs and projectors are also quite expensive right now, but watch: I bet you'll see a slow trend toward more people buying them as the technology becomes more reasonably priced.

3D is what it always has been: height, width, and depth. With movies, people complain that you can't refocus, and that this means it's not 3D. Well, you also can't look at anything not in frame, even if it just was in frame; you see and focus on what the director wanted. Such people generally say it has to be a hologram to qualify for the 3D label. But what you pointed out is that we will have a new intermediate level: better than 3D movies, but with no eye tracking for refocusing. Immersive 3D. Maybe eye tracking will come later, for refocusing and other reasons.

Agreed; I will certainly buy one if the price is at all affordable. I've been waiting for a good motion-tracking headset since the old iGlasses display that came out and worked with MechWarrior 2 in DOS. Resolution has been my main stopper since then, but this has not only the resolution, but a giant leap forward in tracking.

I use mine with glasses (since it's too blurry without them). My normal ones don't fit in the Rift, but my previous pair (with their slightly different prescription) does. It's a very uncomfortable experience that becomes physically painful after a while. The lenses of the glasses press against the lenses of the Rift even with the Rift cranked all the way out, which means the glasses are being pushed against your face (bridge of your nose most specifically) with a great deal of force. Ouch.

I would buy one right now, but it always seems like the retail version is just around the corner. I would rather wait and get a model with tracking and a better screen. I can hardly stop myself from getting the dev model, but I know that if I do, I won't be able to justify getting the retail one when it comes out.

As I understand it, one of the big problems with VR sickness is latency. If the display refresh and the tracking-camera frame rate are both 60 Hz, there's no way to get less than 33ms of lag as the display tracks your movement -- and that's assuming zero time to process tracking info and render the scene.

I'd hope that they're using at least 120 Hz refresh on the display, and something much faster for the tracking camera, but I don't know what the state of the art is like on the tracking end.
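As a sanity check on the arithmetic above (my own illustrative model, not Oculus specs): the worst case is roughly one camera frame to capture the motion plus one display frame to show the result, before any processing or rendering time.

```python
# Back-of-the-envelope worst-case motion-to-photon latency (illustrative only).
def worst_case_latency_ms(camera_hz, display_hz, process_ms=0.0, render_ms=0.0):
    camera_frame_ms = 1000.0 / camera_hz     # time to capture one tracking frame
    display_frame_ms = 1000.0 / display_hz   # wait for the next refresh to show it
    return camera_frame_ms + display_frame_ms + process_ms + render_ms

print(round(worst_case_latency_ms(60, 60), 1))    # 33.3 -- even with zero processing
print(round(worst_case_latency_ms(1000, 60), 1))  # 17.7 -- with a 1000 Hz sensor
```

This also shows why a faster sensor helps more than a faster display here: at 60 Hz the display alone eats 16.7 ms of any latency budget.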

I seem to remember many years ago some research with non-progressive field rendering -- I don't remember if it dropped to low-res/faster-updates during fast motion, whether it blurred everything but central vision, or something else. In any event, I think it required highly non-standard display hardware. This was probably in the CRT days. I'd think it would work well to drop back to (say) 480p resolution during fast slews, increasing the frame rate 4x, but I don't know how accessible the necessary hardware/software would be.

The model demoed is said to have 30ms latency, total, from user input to screen. They've mentioned that their end goal is sub-20ms. Current thinking is that 7-15ms is the range where we aren't able to perceive any lag.

The problem with motion sensors is that there is always unavoidable lag. If you use accelerometers, you need to smooth the input a little to avoid it jerking about. If you use cameras, you need to wait for a full video frame, although you can of course raise the frame rate.

Modern controllers are horrible too. Wireless introduces lag. Not much, but it all adds up.

If the display refresh and the tracking-camera frame rate are both 60 Hz, there's no way to get less than 33ms of lag as the display tracks your movement

Not sure how important the tracking camera is versus the rotation sensors (the ones in the previous dev kit). Orientation is sampled at 1000Hz. The camera is probably slower, but they have motion prediction (you cannot change positional velocity as fast as rotation speed), so it might be a non-issue.

Regarding display refresh: there is 60Hz and there is 60Hz. With the new display, they are not lighting the panel for the full frame interval; they are 'blinking' it very briefly and leaving it black for most of the time. This means the image doesn't smear across your retina as your head moves.

Ah, yes, now I remember reading about the short-display approach. That still doesn't help with latency, though, if you have to render the entire frame before you can "blink" it, and then wait a full frame interval before "blinking" the next frame.

The flow I assumed is something like this: acquire the tracking image (takes one tracking-camera-frame-duration), then read it out (can be arbitrarily fast), then process it for localization (can be arbitrarily fast), then render the next frame of your scene (can be arbitrarily fast, given enough GPU), then display it at the next refresh.

Their solution here is that the frame is shown on-screen for significantly less than the full frame interval, then the screen blanks, so you're not getting out-of-date visual information. The flicker rate is high enough that you don't notice the gaps.
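A rough back-of-the-envelope for why that blanking helps (the 2ms persistence and the pixels-per-degree figure below are my own assumed numbers, not Oculus specs): the smear on your retina scales with head speed multiplied by how long each frame stays lit.

```python
# Approximate retinal smear, in pixels, on a head-tracked display:
# smear ~ head angular velocity x time the frame stays lit x pixel density.
def smear_pixels(head_deg_per_s, persistence_ms, px_per_degree):
    return head_deg_per_s * (persistence_ms / 1000.0) * px_per_degree

# Assume a brisk 100 deg/s head turn and ~10 px/deg (~960 px across ~90 deg FOV).
full = smear_pixels(100, 16.7, 10)  # frame lit for the whole 60 Hz interval
low = smear_pixels(100, 2.0, 10)    # assumed ~2 ms low-persistence 'blink'
print(round(full, 1), round(low, 1))  # 16.7 2.0
```

Under those assumptions, full persistence smears each point across roughly 17 pixels during a head turn, while a short blink keeps it to about 2, which is why "low persistence" reads as dramatically sharper motion.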

The camera framerate isn't relevant, because it's not the primary source of motion data. Their motion sensor (which updates at 1000 Hz) is used for positioning, while the camera is used to provide a reference to prevent drift. The camera could probably work at even just 30 Hz and still be fine, because the accelerometers in the Rift aren't going to drift that far off course in 1/30th of a second.
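That IMU-plus-camera scheme is commonly implemented as something like a complementary filter: integrate the fast sensor every millisecond, and let each slow camera fix pull the estimate back toward the absolute measurement. A one-dimensional toy sketch of the idea (my own illustration, not Oculus code):

```python
# Toy 1-D complementary filter: dead-reckon the fast IMU, let slow camera
# fixes pull the position estimate back toward an absolute reference.
class PositionTracker:
    def __init__(self, alpha=0.1):
        self.alpha = alpha          # strength of each camera correction
        self.velocity = 0.0
        self.position = 0.0

    def imu_update(self, accel, dt):
        # High-rate (e.g. 1000 Hz) integration; any sensor bias makes this drift.
        self.velocity += accel * dt
        self.position += self.velocity * dt

    def camera_update(self, measured_position):
        # Low-rate (e.g. ~30 Hz) absolute fix; nudge the estimate toward it.
        self.position += self.alpha * (measured_position - self.position)

tracker = PositionTracker()
for step in range(3000):                      # 3 s of 1000 Hz IMU samples
    tracker.imu_update(accel=0.01, dt=0.001)  # small constant bias -> drift
    if step % 33 == 0:                        # ~30 Hz camera says "still at 0"
        tracker.camera_update(0.0)

# Uncorrected, this bias alone would put the estimate ~4.5 cm off after 3 s;
# with camera fixes it stays within about a centimetre.
print(round(tracker.position, 3))
```

A real tracker would also correct velocity and orientation (typically with a Kalman-style filter); this sketch only shows why even a slow camera is enough to keep a fast, drifting sensor honest.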

Not if you have to capture an entire tracking image before you process it, and render an entire frame before you display it. That may be an invalid assumption, though -- if they can break these down into line-by-line processing (or some other smaller-than-a-screen increment), then yes, improvement is possible.

I have been tracking this for a long ass time and with both google glass and oculus I keep asking where the fuck is Microvision? Their tech deals with all of the FOV, depth of field, focus, focal length, and resolution issues in spades so... WTF?

Are we talking about the same Microvision whose projectors are 720p, need a 'screen' at least 6 inches away, and cost over $300 apiece? Imagine an Oculus Rift with the front part 20cm long...

I'm a bit worried that if I'm in a complete and total 3D immersive space that I won't be able to use it indoors for fear of bumping into invisible furniture.

I'm in a modest house and I have a tiny postage stamp yard. My Wifi signal is pretty good out on the street, all things considered, but I'm also afraid that if I revert to a five year old and play in the street that I'll be hit by an invisible car.

Have they considered making safe places to use this as part of their marketing strategy? Sort of a big open VR gym?
And in that case, let's make multiplayer games where I can shoot my friends who are being presented to me as orcs. It'll make laser tag look like kinder blocks.

That's where this [kickstarter.com] comes in. Call it a trackball for your feet (although it's actually concave) with an added thigh-strap. Then there are also projects working on representation of your body in the 3D world, including the relative position of your various body parts, like Stem [kickstarter.com]. Combine these three, and everything first-person should be quite immersive without you falling through a window, tripping over garden tiles, or being run over by the school bus. As long as you're okay with poor feedback when you bump into something.

>Have they considered making safe places to use this as part of their marketing strategy? Sort of a big open VR gym?
>And in that case, let's make multiplayer games where I can shoot my friends who are being presented to me as orcs. It'll make laser tag look like kinder blocks.

The Rift wouldn't be right for that since it completely blocks your vision, but something like Google Glass with an AR overlay on the whole lens will inevitably be used to create something like Dream Park. http://en.wikipedi [wikipedia.org]

They'd better not lose track of time. Honestly, after seeing the new prototype yesterday, I'm starting to think the final product won't be available until 2015. If Sony announces and releases a true VR headset for the PS4 this year (not the new HMZ whatever), they'll lose their biggest advantage: being first to market. And it's not only Sony; Valve is reportedly working on a VR headset of their own, and there are also castAR, the Glyph, and InfinityEye as minor competitors.

The resolution could use some improvement (and has been improved for the real release), but the tracking is AMAZING. It really feels like you are completely immersed in a 3D world. It works best in environments where your movement is decoupled completely from vision (driving and flying simulators). I've never experienced motion sickness in my entire life, but 20 minutes in Half-Life got me feeling quite queasy.

And I believe this is why the consumer version has been delayed. They've identified possible sources for the VR nausea (lag, lack of head *position* tracking) and are working to resolve them.

I'm OK with the delays while they iron out these issues, as I'd prefer a VR headset that has a lasting market presence to one that is introduced and in bargain bins in 3 months due to widespread reports of users getting sick with minimal use.
That said... I am seriously giddy about this thing.

If I recall the earlier specs, it had a gyro and accelerometer (like a modern smartphone), so it could track your head *movements*, but it had no reliable way to position your head in 3D space. Any effort to do so would require initial calibration (telling the software "my head is right now 5 ft from the floor"), going from there, and hoping the errors don't creep up over time.
The external camera they added (which gets pointed at the user) seems to be a more robust way of determining the exact location of your head and tracking it over time.
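To see why the camera matters, here's a toy double-integration showing how quickly even a tiny accelerometer bias blows up a position estimate (the bias value is an illustrative assumption of mine):

```python
# Toy dead reckoning: double-integrate a constant accelerometer bias.
# Position error grows with the square of time, which is why IMU-only
# positioning needs an external absolute reference like a tracking camera.
def dead_reckon_drift(bias_m_s2, seconds, hz=1000):
    dt = 1.0 / hz
    velocity = 0.0
    position = 0.0
    for _ in range(int(seconds * hz)):
        velocity += bias_m_s2 * dt
        position += velocity * dt
    return position

print(round(dead_reckon_drift(0.01, 1), 3))   # ~0.005 m off after 1 second
print(round(dead_reckon_drift(0.01, 60), 1))  # ~18.0 m off after one minute
```

Half a centimetre per second sounds harmless, but the quadratic growth means errors "creep up over time" exactly as described above, which is what the camera reference corrects.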

What impy is referring to is, for example, being in a vehicle. The vehicle (and you) are moving in the game, but your head (and body) is not, so you won't be experiencing the actual acceleration.
Or for a FPS you might jump down a ledge (in game) but again, the whole time your feet are firmly planted on the ground in RL.
You would need a system with actuators to jostle and tilt you in the right directions to simulate that. Something like this [costco.com].

They aren't going for perfect, but there are several things it absolutely needs before it is ready. Head tracking easily tops that list. I have one of the earlier prototypes, and the lack of it is painfully obvious. Plus, core features like that have to be integrated into games; if you are missing one, you will have a compatibility break between versions 1 and 2, which would be very harmful at this stage of affairs. The new display is less necessary, but it is something worth improving while they are working on the rest.

Carmack will keep dinking around with this and never ship a product. They should just get version 1.0 out the door and start working on 2.0.

You have to be careful with that when it comes to technology. You might be quite right, but in many cases, if the technology isn't sufficiently well developed, then potential customers get pissed off and never come back. This can be true even if you eventually work out the problems. See Apple's Newton for an example. A lot depends on the sort of customers you have and how forgiving they are of rough edges on technology. Consumer electronics customers tend to be not too forgiving, in my experience.

Given that we've had 'imperfect' (read 'downright sucky') VR available to the public essentially without success for over a decade now, I'd say that they have reason to keep polishing.

Whether or not Oculus Rift will be the eventual winner, or whether somebody who polishes faster will get to it first, I have no idea; but shoddy VR implementations are pretty uncompelling except for 5 minutes of novelty use.

Given that we've had 'imperfect' (read 'downright sucky') VR available to the public essentially without success for over a decade now, I'd say that they have reason to keep polishing.

I used to work with technology like this about 10 years ago. Even if it delivered spectacularly good imaging, that isn't really the problem with it. The biggest problem is that most people do not like wearing headsets. Oh, you can get someone to try it out for fun once or twice, but after that the novelty wears off quickly for most. You might get some hardcore gamers and technophiles to buy it, but I really cannot see this being a mass-market item. It's fairly expensive to make, and the market size is relatively small.

I can't help seeing a huge similarity between what you are saying and what people said about smartphones, back when I was such a geek for carrying around a HP28s / Psion IIIa / Palm Pilot / PocketPC. After decades of slow improvement, something can reach a threshold and suddenly take off.

I can't help seeing a huge similarity between what you are saying and what people said about smartphones, when I was such a geek for carrying around a HP28s / Psion IIIa / Palm Pilot / PocketPC.

I don't remember anyone thinking the potential market for smartphones was a niche market. The problem was that the state of technology didn't allow for a form factor and features with mass appeal. Smartphones were a convergence of several technologies for which there was already proven demand (phones, PDAs, cameras, personal computers). VR headsets do not enjoy the same situation. Virtually nobody uses them or needs them for any practical purpose today. There are a few incredibly small niche applications.

It boils down to whether you think Virtual Reality is a viable concept - whether it will displace the ways we currently accomplish things that we deem to be worthwhile.

The immediate appeal of Oculus Rift to me is that I like racing and flight simulators, and I think it will be perfect for that. But the immediate barrier to my buying one is that I have a 15 year old son who already disappears into the Minecraft world for as many hours per day as he is allowed to do so. "Gaming" doesn't even quite cover it.

It boils down to whether you think Virtual Reality is a viable concept - whether it will displace the ways we currently accomplish things that we deem to be worthwhile.

I've actually spent about 5 years of my professional life working on simulation and VR technologies, including a bit of immersive VR. It does have some uses, but as a mass-market technology I just don't see the so-called killer app. The hardware is expensive and that is unlikely to change, especially given the lack of economies of scale. Yeah, a few people will use it for gaming (not many), and there are some industrial and military uses. The military probably has the most use for this sort of thing of anyone.

The Rift is already extremely cheap; devkits sell for just $300, which is about as much as a good monitor will cost you. Given that the thing is little more than a mobile-phone screen plus motion tracking, that price can easily go down to $200, $100 or even less in the coming years when they ramp up the volume.

The reason VR failed in the past is that it was too expensive and just not good enough. Tracking was slow, resolution was low, the headsets were heavy, the FOV was tiny, and game support was extremely limited.

The Rift is already extremely cheap; devkits sell for just $300, which is about as much as a good monitor will cost you. Given that the thing is little more than a mobile-phone screen plus motion tracking, that price can easily go down to $200, $100 or even less in the coming years when they ramp up the volume.

I genuinely hope I'm wrong and that they sell a ton of these, I'm just not optimistic. I'm not against the Rift in any way but I'm pretty dubious they are going to get enough sales to get big volume discounts. I run a company that makes wire harnesses and does contract assembly so I'm more than passingly familiar with the economics involved here. Companies like this contract with companies like mine to build their products for them. A few thousand units is not nearly enough to move the needle on price.

That's part of the equation but it really is not the primary reason it has continued to fail. The primary reason is that this technology always has been a solution looking for a problem. It's neat but it doesn't really scratch an itch.

I remember similar things being said about tablet computing. :) It doesn't particularly matter if it gets "mass market" adoption in the long run anyway. As long as it becomes available to people like me who have been waiting for something with "good enough" resolution and tracking at a decent price, it will have been worth it. I do think there are a lot of gamers who would love this. Just look at how much the simple motion tracking on the first Wii changed the direction of console gaming.

To put it in perspective, game consoles are sustained entirely by the limited market of gamers. "Casual" gaming was not even seen as a market segment until fairly recently with the rise of mobile gaming and the Wii. Until the PS2, game consoles had no purpose other than playing games either.

I agree there are going to be significant hurdles to overcome in achieving the economies of scale needed to push this on a mass-market basis. But they're on the appropriate path by drumming up core development support first.

A few thousand units is not nearly enough to move the needle on price. Setup costs diminish greatly at around 10,000 units (usually), but that isn't where the big money is here.

They have shipped well over 20,000 developer kits so far. That's a device only sold through their website, known to be low-resolution, lacking position tracking, and certain to become obsolete within a year when the much-improved consumer version is released. I have little doubt that they will sell a lot more units once the consumer version hits the retail shelves. They also have something like $90 million in venture capital, so they certainly can do some volume ordering.

The primary reason is that this technology always has been a solution looking for a problem.

The eMagin HMD from about a decade ago wasn't "downright sucky." It worked great for stereoscopic vision with head tracking, though the FOV and resolution were nowhere near those of the Rift. Playing F.E.A.R. on it was immersive and terrifying. The only problem was that Nvidia dropped support for it shortly after release. It would have easily been worth the $1,000 it cost if it had allowed upgrading the video driver beyond the version that was current when the HMD was released. Instead, Nvidia taught its customers not to invest in niche hardware that depends on continued driver support.

I disagree; this is a paradigm shift for consumer devices. If you get to market with something that causes vertigo/nausea in 50% of your users (due to high latency; some people can adapt, some can't), you will get a LOT of bad word of mouth and significantly cut your sales. When it comes to VR, either you do it very, very well, or it's better not to do it at all. I am really glad to see that they are taking their time with this and are going for the lowest amount of latency before shipping.

In this case, though, I think they may be right: VR has a bad name that will work against it because of the crappy hardware released in the 80s and 90s. Getting it right (enough) this time could well be the difference between taking the world by storm and being just another historical curiosity. And the single biggest weakness of the devkit would seem to be nausea; pretty much everyone agrees it starts fairly quickly, especially in first-person games, and takes a month or two of acclimation to overcome.

You know that every tech company has version n+1 deep in development and version n+2 at the experimental stage in the window before version n ships, right? It's a tendency which is remarkably insensitive to development timescales, be it a yearly phone or a half-decadal console.

What kind of propaganda are you pushing, who are you working for, or how ignorant are you to not have read seemingly ANYTHING about the Oculus Rift before posting this? The entire idea has been to ship some time in 2014.

More importantly, Carmack just came on and it's not HIS project. If I'm not mistaken he was tapped after the car crash that killed the fellow who was heading this part of the project.

I have to disagree there. I've seen a few interviews with Carmack on this technology and it doesn't seem that he's fighting for "perfection". In the interviews he cites specific numbers he believes are necessary to achieve immersive VR. He's not aiming at an abstract concept of making it better and better, but rather minimum requirements (for example, 20ms input lag).

There have been plenty of VR devices in the past, and they have been huge letdowns, because people hear "VR" and imagine that it's like seeing another world with their own eyes.

Latency. The screen isn't the only latency component, and if you're trying to get under 20ms (considered to be the point below which your brain won't notice the latency) and your 60Hz display is adding 17ms of latency, that's a problem.

Besides, your eye doesn't work like a camera with a shutter. A human can see a 1ms-long flash of light, for example, but can't process more than 10-12 distinct images per second. And the framerate required to produce natural motion blur is way in excess of 60Hz.