The second half of Facebook’s keynote took an odd turn today at F8, spending a surprising amount of time looking at the unknowns of consciousness and visual illusions. But toward the end, Oculus Chief Scientist Michael Abrash spent a few minutes talking about the biggest opportunities for improvement in the virtual-reality experience: haptics, visuals, audio and tracking.

While previous versions of the Oculus Rift have required plugging in a separate set of headphones to get the “I’m encapsulated in a virtual space” feeling, the Crescent Bay prototype has its own pair built in, and the company has said the consumer model will, as well. The Crescent Bay model I played with at CES earlier this year didn’t include features like noise cancellation, so there are still obvious ways to improve that aspect of the integrated experience over time.

On the tracking front, the Crescent Bay demo proved that Oculus can provide nigh-perfect camera-based tracking when movement is limited to a four-foot by four-foot square. If you look outside the Oculus ecosystem to the HTC/Valve Vive headset, the technology already exists for a full-on Holodeck-style “I can walk around in this space” experience covering as much as a living room’s worth of floor space.

The visuals issue is a little trickier. There are a few main ways Oculus can improve the graphics in the Oculus Rift headset as it exists today. The “easiest” method is to simply improve the hardware on the PC side of the equation: If you throw a better graphics card in the desktop powering your headset, you can get more dynamic shadows, more polygons and prettier textures in your games at the same screen resolution without a drop in performance.

That’s basically what Oculus has been doing with each of its public demos. Every time I’ve gone in for one, I’ve been able to get the engineer running the session to brag about the GPU they’re using (at CES it was a ~$600 GeForce GTX 980). But in the long run Oculus is going to have to improve the optics and display in the headset itself.

Even in the impressive Crescent Bay prototype, you’ve only got a 90-degree field of view, while your eyes normally capture around 280 degrees. To remove this “binocular” effect (and any visible pixels in your vision), you’d need better lenses and a display with a resolution higher than even Apple’s new Retina 5K iMac, which starts at $2,499. It’s not hard to imagine Oculus getting closer to that ideal spec over the next several years, but it’ll happen with gradual iteration and improvement (and will require early adopters to keep buying high-end rigs for development and gaming).
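To see why even a 5K panel falls short, here’s a rough back-of-envelope sketch. The 60 pixels-per-degree figure is a common estimate of what 20/20 vision can resolve (about one arcminute per pixel), not an Oculus spec, and the simple linear math below ignores lens distortion:

```python
# Assumption: 20/20 visual acuity resolves roughly 60 pixels per degree.
ACUITY_PPD = 60

def pixels_needed(fov_degrees, ppd=ACUITY_PPD):
    """Horizontal pixels per eye needed before individual pixels vanish."""
    return fov_degrees * ppd

# Crescent Bay's roughly 90-degree field of view:
per_eye = pixels_needed(90)   # 5,400 px per eye
both_eyes = per_eye * 2       # 10,800 px across two side-by-side views

# For comparison, the Retina 5K iMac panel is 5120 x 2880 --
# only about half the horizontal pixels this estimate calls for.
print(per_eye, both_eyes)
```

Push the field of view toward the full ~280 degrees the article mentions and the required pixel count climbs even further, which is why this is an iteration-over-years problem rather than a next-revision fix.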

The haptic side of things is perhaps the most difficult to address. Oculus has yet to demonstrate an “official” control mechanism for the Rift, instead letting each developer use the peripherals they find most suitable. During his keynote, however, Abrash noted that “[he expects] hands to be as capable in VR as they are in the real world” in just a few years. That could be a hint at something along the lines of Leap Motion’s hand-tracking solution for OSVR, which embeds a camera at the front of your head-mounted display.

However, the slide Abrash showed during the keynote specifically noted haptics, which means Oculus is thinking not only about tracking hands to interact with virtual worlds, but about having the world interact back — letting you feel the things you reach out to touch. It’s one thing to expect people to strap on a headset; gloves and other hardware solutions to VR haptics add extra bulk and possibly yet another set of peripherals to keep charged. But if it works, haptics in your VR controllers could make experiences far more immersive than even the best projects we’ve seen to date.

It’s a big challenge that’s easy to mess up, so it’ll be interesting to see whether Oculus can figure out something consumers will buy into later this year, or whether it’s a longer-term issue to be addressed in future iterations.