OpenVR: Building an Oculus Rift for only $150

The Oculus Rift is a really cool piece of kit, but with its future held in the grasp of Facebook, who knows what it’ll become now. So why not just build your own? When the Oculus first came out [Ahmet] was instantly intrigued — he began researching virtual reality and the experience offered by the Oculus — but curiosity alone couldn’t justify the $300 price tag. He held off until he had a useful purpose for it, and as it turns out he did — he builds and flies multicopters, for which an FPV setup would be super handy!

Other FPV setups cost close to $300 as well, so getting a device with more features just makes sense. Promptly after realizing this, he faced the Maker’s Dilemma: Buy it, or build it? To test the waters, he decided to order some aspheric lenses to do some quick tests with a smartphone and a ghetto cardboard box setup — the results were surprisingly good. No turning back now!
The hardware consists of:

The majority of the cost here is in the LCD, with everything else being pretty inexpensive. Once it’s all built (details on his blog), it is time to get the software, downloaded right off of GitHub. The rest is pretty self-explanatory — just take a look at the results! We’re even tempted to build one now. Videos below.

There are a few problems with this… well, not so much problems as issues:

1) Most video downlinks are low resolution, providing less information for your visual cortex to draw depth data from. (There is the DJI 2.4 GHz HD system, which seems promising, but keep in mind that two cameras are potentially required.)

2) Depth data through stereoscopic imaging is moot when converging at infinity.

3) It is very hard for one’s visual cortex to make use of “unnatural” image data. One’s brain has spent years calibrating to one’s eyes. To suddenly be presented with image data through cameras that are poor quality in comparison to the eye makes it hard for your brain to fuse the image data. This problem is endemic with VR systems, so much so that many users of VR don’t experience image fusion at all without some amount of training.

4) This leads me to a thought: an ideal FPV setup would use one high-quality camera and a laser rangefinder (or similar) and downlink both streams to a base station, which would process the data and present the user with a simulated stereoscopic image. This would be useful for negotiating situations in which the depth sensitivity needed to be adjusted. For example, flying a multi-rotor through a building would require a much different depth sensitivity than, say, soaring a fixed-wing at 5,000 ft.
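Points 2 and 4 above can be sketched in a few lines: disparity falls off as 1/depth, which is why stereo stops carrying depth information toward infinity, and the simulated-stereo idea amounts to warping one image by a depth-derived disparity with an adjustable gain. A rough Python sketch, where the function names, focal length, and baseline are my own illustrative assumptions, not from any real FPV stack:

```python
import numpy as np

def disparity_px(depth_m, focal_px=800.0, baseline_m=0.065, gain=1.0):
    """Horizontal pixel offset between eye views; gain = depth sensitivity."""
    return gain * focal_px * baseline_m / np.maximum(depth_m, 0.1)

# Point 2: disparity shrinks as ~1/depth, so it vanishes toward infinity.
for d in (1.0, 5.0, 50.0, 500.0):
    print(f"{d:6.0f} m -> {disparity_px(d):5.2f} px")

# Point 4: synthesize a second eye view from one image plus a per-pixel
# depth map (e.g. derived from a rangefinder) by shifting each pixel
# horizontally by its disparity.
def synth_right_eye(img, depth_m, gain=1.0):
    h, w = img.shape[:2]
    out = np.zeros_like(img)          # unfilled holes are left black
    disp = disparity_px(depth_m, gain=gain).astype(int)
    cols = np.arange(w)
    for y in range(h):
        out[y, np.clip(cols - disp[y], 0, w - 1)] = img[y, cols]
    return out
```

Raising `gain` exaggerates depth for threading a multi-rotor through a building; dropping it toward zero flattens the scene for high-altitude fixed-wing cruising.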

Regarding “unnatural” image data, I thought the opposite was true? For example, there’s the experiment where someone wears glasses that flip everything upside down, and it takes on the order of days for the visual cortex to adjust. Another example is artificial sight systems such as implanted electrode arrays or those tongue-pad things, where the arrays are very low resolution.

But 1280×800 is the original resolution of the Oculus Rift – which made me sick even before moving around. Staring at blocks might be enjoyable for Minecraft players, but for anything serious you need considerably higher resolution.

It’s not the resolution (or lack of) that makes you sick, it’s the sensor lag.

I’ve built two DIY head mounted displays with motion sensing and I also own the first Oculus Rift dev kit.

The second of my two HMDs was built as an attempt to improve the first. I used an iPad Mini screen (not the Retina display) with a simple Chinese 9DOF board (a combo of gyro, accelerometer, and magnetometer).

Like you, I thought the increased resolution would reduce the motion sickness (which I’ve never noticed, but others who’ve tried my rigs have). Apparently the lag was still enough to bother people.

So I upgraded the motion sensor. I swapped the 9DOF board for a 3D Robotics APM 2.6. The APM 2.6 is a very capable multi-rotor controller. Totally overkill for what I needed, but the sensors are top notch.

Apparently the updated sensors have drastically reduced or eliminated the lag for those who had it the worst before.

Out of curiosity, I hooked the APM 2.6 up to my original lower res screen, and the lag was gone as well.
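The gyro/accelerometer fusion a board like that runs is why the lag and the drift both go away: the gyro is integrated for fast, low-latency updates, while the accelerometer’s gravity reference slowly pulls out the drift. A minimal complementary-filter sketch in Python; the blend coefficient, drift rate, and update rate are illustrative assumptions, not APM firmware values:

```python
def update_pitch(pitch_deg, gyro_dps, accel_pitch_deg, dt_s, alpha=0.98):
    """One filter step: integrate the gyro, nudge toward the accel reference."""
    return alpha * (pitch_deg + gyro_dps * dt_s) + (1 - alpha) * accel_pitch_deg

# Head held still at 10 degrees, but the gyro drifts at 0.5 deg/s:
pitch = 0.0
for _ in range(500):                 # 5 s at a 100 Hz update rate
    pitch = update_pitch(pitch, gyro_dps=0.5, accel_pitch_deg=10.0, dt_s=0.01)
print(pitch)  # settles near 10 degrees despite the drifting gyro
```

The fast gyro term is what keeps perceived latency low; the slow accelerometer term is what keeps the estimate honest over minutes.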

There are indeed issues with resolution, as well as screen technology. Higher resolution reduces the “pixel grid” effect. Screens with tighter spacing between pixels reduce the pixel grid effect as well, even with identical resolutions and similarly sized displays. It’s my opinion that the resolution has very little impact on the feeling of “realness” – it’s the motion sensing input.

If it’s wrong, the contract our brain keeps with all its inputs is breached and the brain lets it be known.

Lag is what the Oculus Rift team have been putting a lot of effort into eliminating. If you’ve seen their latest prototypes they’re using motion tracking systems that use cameras and IR reflector dots in addition to sensors.

Problem is that even if motion tracking gets to be 100% accurate with zero lag, it will always be a certain number of milliseconds behind the rendering/physics engine of whatever simulation these headsets will be used for.
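One way to see why is to budget the whole motion-to-photon path explicitly: even a perfect tracker only removes the first couple of stages. The stage names and numbers below are illustrative guesses, not measurements of any particular headset:

```python
# Illustrative motion-to-photon latency budget for an HMD pipeline.
stages_ms = {
    "sensor sampling":                      1.0,
    "sensor fusion":                        1.0,
    "game/physics tick (60 Hz worst case)": 16.7,
    "render + scanout":                     16.7,
    "display response":                     12.0,
}
total = sum(stages_ms.values())
print(f"motion-to-photon: {total:.1f} ms")  # the last three terms dominate
```

Even with the first two stages at zero, the engine tick, scanout, and panel response keep the total well above what tracking alone can fix.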

I’m sorry, I was a little unclear. The motion did make me sick after a short while, but that wouldn’t have stopped me from using it. But looking at that pixel grid close up is a long way from fun for me.
I still had high hopes for the consumer version, until Facebook bought them.

If hardware were the only problem then yes, you could maybe build an Oculus “clone” for $150. This is not the case, however. It’s probably a very nice piece of kit, no doubt! But at this point, I don’t really think anything can compare to what they’ve got in store. Eagerly awaiting my DK2 (which I got sponsored through work) :)

Yup, I really liked your project. I tried your LVDS extension board; unfortunately, my power plug had an extra pin for brightness, and while trying to drive it to the highest value from the board’s VCC, I burnt the screen :D

OK, so here is my crazy idea for a better 3D experience: using a wide-format curved “3D”-capable screen, with the correct polarization on the left/right lenses, would give a much wider view than current screens, almost to the point of having real peripheral vision, as you could see across the nose line where they currently have a divider.

So does the monitor go to blue screen if you pick up interference or start dropping signal? If so it will make a poor FPV setup. Hobbyking put out a DIY kit with a monitor that doesn’t go blue screen and it sold out in a day for that reason. They had to have the monitor made special just for that purpose.

I believe it’s more like an analog screen. Instead of dropping the image altogether and going blue, the picture quality degrades. So if you’re flying with it as an FPV rig and you see the picture degrading, you can still see well enough to turn around and start flying back until the signal gets stronger.

I made a DIY Rift using a lower-resolution 7″ 1024×600 LCD monitor. It seems to work well on older games like Quake and Unreal, but it’s way too low resolution for some of the newer games; have a look over at

What you get is a cheap DIY Rift that looks to the software and demos like the real thing, and works well, but the resolution is way too low. I am planning to upgrade to a 7″ 1280×800 LCD; I just wish there were more 3D-printable designs for the 7″ LCDs. The only 3D-printable design that I have seen for 7″ LCDs is from http://www.mtbs3d.com/phpbb/viewtopic.php?f=26&t=18035

It’s not an Oculus Rift clone. The development kit uses a 7″ panel and 7X aspheric lenses, not a 5.6″ panel (which was used only in the prototype) and 5X aspheric lenses (not enough magnification). It won’t be compatible with games and demos developed for the Oculus Rift, and without content it’s pretty much worthless even if it’s cheaper.

The drivers you use for the Oculus are general 3D thingies with various options to tweak depending on what you use as a display. The Perception driver allows you to make game profiles with settings for FoV and yaw/roll/pitch multipliers and all such things, for instance.

Another option is to use an Android cell phone. They have extremely high-resolution screens, just the right size, good quality IMUs, and enough processing power to use them with low lag. http://projects.ict.usc.edu/mxr/diy/fov2go/

I’ve seen these DIY projects before. Your project’s biggest weakness is that you’re using a USB uplink to your VR headset; no way will that possibly get you enough bandwidth to display a rendered game at your LCD’s 720p at 60 fps, even with USB 3.0. Wouldn’t it be better to go with HDMI, forget the Arduino, and get some sort of software to intercept the image and turn it into stereo before it’s sent out over HDMI?

Next: where’s your head tracking solution? That’ll run you another $150-200 if you go with TrackIR. You could go with FaceTrackNoIR, but that’s hit or miss depending on your setup.

Also, that 720p screen is getting split into two, so it’s as much resolution per eye as the original DK1, which will make your games look like ass. The DK2 has 1080p. And if you get a $150 LCD, its response rate is probably going to be around 12 ms, while the DK2 is already at 2 ms.

So I ask the question: why would you do all this and get a much, much worse product for more or about the same money (when you account for head tracking) as a DK2, which is available at $350?
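For what it’s worth, the side-by-side split the comment describes is easy to quantify, using the DK1 and DK2 panel figures already mentioned above:

```python
# Per-eye pixels when one panel is split side-by-side between the eyes.
def per_eye(width, height):
    return width // 2, height

print(per_eye(1280, 800))    # DK1-class panel: 640 x 800 per eye
print(per_eye(1920, 1080))   # DK2-class (1080p) panel: 960 x 1080 per eye
```

Each eye only ever sees half the panel’s horizontal pixels, which is why a 720p-class screen ends up looking like DK1-era resolution.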