Put on any virtual reality headset and within a few minutes the sense of wonder may wear off, leaving you with a headache or a topsy-turvy stomach.

Computational imaging experts say that’s because current virtual reality headsets don’t simulate natural 3D images. Now, researchers in the Stanford Computational Imaging Group have created a prototype for a next-generation virtual reality headset that uses light-field technology to create a natural, comfortable 3D viewing experience. With help from NVIDIA Corp., their findings will be presented and demonstrated Aug. 9–13 in Los Angeles at SIGGRAPH 2015, a conference that focuses on computer graphics and interactive techniques.

In current “flat” stereoscopic virtual reality headsets, each eye sees only one image. Depth of field is also limited, as the eye is forced to focus on a single plane. In the real world, we see slightly different perspectives of the same 3D scene through different parts of each eye’s pupil, said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford. We also constantly refocus at different depths.

When you look through a low-cost cardboard virtual reality headset, or even a more expensive one, there is a conflict between two visual cues: the distance at which your eyes focus, and the way your two eyes rotate to aim at a point in the scene, called “vergence.”

This mismatch is similar to what causes the motion sickness symptoms some people experience. If you read a book in a car, your eyes stay fixed on the text even as the car bounces along a bumpy road. But because your sense of balance feels that bumpiness while you read, there is a mismatch between what you see and what you feel, creating the queasiness of motion sickness.

The new light-field stereoscope technology – developed by Wetzstein along with researchers Fu-Chung Huang and Kevin Chen – solves that disconnect by creating a sort of hologram for each eye to make the experience more natural. A light field creates multiple, slightly different perspectives over different parts of the same pupil. The result: you can freely move your focus and experience depth in the virtual scene, just as in real life.
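As a back-of-the-envelope illustration (not taken from the researchers' paper), the depth cue a light field restores can be quantified as the angular parallax between viewpoints spread across a pupil-sized aperture. The sketch below, with assumed example depths and millimeter-scale viewpoint offsets, shows that this parallax varies with depth, while a flat stereo display gives every part of the pupil the same image:

```python
import numpy as np

def view_parallax(depths_m, view_offsets_m):
    """Angular parallax (radians, small-angle approximation) of scene
    points at the given depths, seen from viewpoints offset across the
    pupil.  Rows index viewpoint offsets, columns index depths.

    A flat stereo display shows an identical image to every part of the
    pupil (all-zero parallax); a light-field display reproduces these
    depth-dependent differences, which is what lets the eye refocus.
    """
    d = np.asarray(view_offsets_m, dtype=float)[:, None]  # pupil offsets
    z = np.asarray(depths_m, dtype=float)[None, :]        # scene depths
    return d / z  # angle to a point at depth z shifts by ~d/z
```

For a viewpoint 2 mm off-center, a point 0.5 m away shifts by about 0.004 radians, while a very distant point barely shifts at all; it is that difference across the pupil that a single flat image cannot convey.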

“You have a virtual window which ideally looks the same as the real world, whereas today you basically have a 2D screen in front of your eye,” Wetzstein said.

The headset design incorporates two stacked, transparent LCD displays with a spacer. The researchers’ prototype was made with off-the-shelf parts and is the first step toward a viable solution.
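The published system's optimization is more sophisticated, but the core idea of a two-layer multiplicative display can be sketched simply: each ray passes through one pixel on each panel, so its intensity is the product of the two transmittances, and the display patterns come from a nonnegative factorization of the target light field. Below is a minimal illustration under my own simplifications (a two-plane ray parameterization, a rank-1 alternating-least-squares solve, and a hypothetical `factor_light_field` helper):

```python
import numpy as np

def factor_light_field(target, iters=50, eps=1e-9):
    """Factor a two-plane light field into two LCD layer patterns.

    target[i, j] is the desired intensity of the ray passing through
    rear-panel pixel i and front-panel pixel j; the stacked transparent
    LCDs reproduce it (approximately) as the product of the panels'
    transmittances, rear[i] * front[j].  That makes this a rank-1
    nonnegative factorization, solved here by alternating least squares.
    """
    n, m = target.shape
    rear = np.full(n, 0.5)   # rear-panel transmittances, init mid-gray
    front = np.full(m, 0.5)  # front-panel transmittances
    for _ in range(iters):
        # Each step is the exact least-squares solution for one layer
        # with the other held fixed; nonnegativity is preserved because
        # the target and both layers stay nonnegative throughout.
        rear = target @ front / (front @ front + eps)
        front = target.T @ rear / (rear @ rear + eps)
    # Rebalance so both patterns fit a physical panel's [0, 1] range
    # without changing their product.
    scale = rear.max() + eps
    rear, front = rear / scale, front * scale
    return np.clip(rear, 0.0, 1.0), np.clip(front, 0.0, 1.0)
```

Real scenes are rarely exactly rank-1, so the reconstruction is approximate; the researchers' actual solver, ray model and display calibration go well beyond this sketch.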

Not everyone experiences the negative side effects of current headsets after using them for a few minutes. But solving the problems for longer-term exposures could prove consequential for numerous applications, including robotic surgery, phobia treatment, education and entertainment.

“If you have a five-hour (robotic) surgery, you really want to try to minimize the eye strain that you put on the surgeon and create as natural and comfortable a viewing experience as possible,” Wetzstein said.

This new Stanford research comes at a time when virtual reality is seeing explosive growth and interest in Silicon Valley, Hollywood and beyond.

“Virtual reality gives us a new way of communicating among people, of telling stories, of experiencing all kinds of things remotely or closely,” Wetzstein said. “It’s going to change communication between people on a fundamental level.”

Wetzstein’s computational imaging work is going beyond the lab and into the classroom. In the fall, he will team with Tanja Aitamurto, deputy director of the Brown Institute for Media Innovation at Stanford, to teach an interdisciplinary course at Stanford’s d.school focused on the social impacts of virtual reality. The class, EE392D, Designing Civic Technologies with Virtual Reality, will be open to all Stanford students from any major. Wetzstein is also developing a class focused on virtual reality technology for the spring quarter.