Sony’s Richard Marks Expects Natural Voice Input to Play Major Role in VR’s Future

In a recent interview with Glixel, Dr. Richard Marks, head of Sony’s Magic Lab R&D team, talked about PSVR’s development history, social VR, and a possible holodeck-style future. He thinks voice input has unrealised potential, and could become the way users launch into different VR experiences in the future, customising them in real-time thanks to procedural generation.

Marks’ Magic Lab played a pivotal role in developing ‘Project Morpheus’, the prototype VR headset that would eventually become PlayStation VR.

Project Morpheus prototype | Photo courtesy Sony

Following a recent Christmas break where he says he studied a robot vacuum cleaner, tested all available voice-input devices for the home (such as the Amazon Echo and Google Home smart speakers), and watched every Black Mirror episode, it was voice control that excited Sony’s head of Magic Lab the most. Marks thinks that a voice-enabled VR environment, perhaps in the form of a procedurally-generated sandbox, where practically any element could be changed at the user’s command, “doesn’t seem very far away.”

Marks imagines a future where voice input technology is set free in VR, limited only by the user’s imagination. He describes a possible virtual environment that is partly procedural, but contains finely-crafted areas created by development teams where users would spend most of their time.

“That’s the kind of thing that will involve probably multiple groups and multiple companies even to get all the content that you would want to have happen, but that’s what I think the vision of VR is in the future. That’s why I see it as the holodeck. I just put it on and I can make my world anything I want right now”, he says.

With apps like Virtual Desktop and even Oculus Home, it is already possible to launch VR software by voice from within a PC headset, and there are several interpretations of holodeck-like launch environments available or in development. But Marks is imagining a time when machine learning has taken significant steps beyond where it is today, allowing users to spawn anything from a vast library, or seamlessly interact with virtual characters, using nothing more than a voice command.

Google, which recently claimed to have the most accurate speech recognition, announced at last week’s I/O 2017 conference, itself heavily focused on machine learning, that its collective AI efforts now sit under Google.ai. Its natural language processing is at the cutting edge of voice technology, but developers are only beginning to explore the complexities and nuances of voice user interface design, as described in James Giangola’s presentation. There are many hurdles to overcome before we can have meaningful and frictionless conversations with our virtual assistants that go beyond a limited set of commands.

Asked why there isn’t a VR version of hugely popular games like League of Legends or Overwatch, Marks offers a few reasons, suggesting that the size of the available audience and the budget determine the type of game that can be made, and that sometimes a VR version simply doesn’t make sense without effectively building two different games. He points to Resident Evil 7 (2017), whose VR mode is currently exclusive to PSVR, as a good example of a game that works on both screen and headset.

“When the game can do it I think it’s a great thing for them to do, because they can take advantage of the huge installed base of non-VR players too”, he says. “But I think once the installed base of VR gets big enough then obviously we won’t have that issue. You can just make an amazingly deep long game that’s super high production value… It just won’t be exactly the same game.”

Referring to Star Trek: Bridge Crew, which launches at the end of the month, Marks talks about the importance of social interaction in VR and in particular, the feeling of ‘co-presence’, and how it will improve in the future as the number of VR users increases, bringing greater incentive to share a virtual space with others. But artificial characters will always have a role to play, and there is a higher expectation for believable interaction with NPCs in VR games. To highlight co-presence using AI, Magic Lab has a ‘believable characters’ demo, where you interact with robots in a playroom using natural gestures and body language.