VRMMORPG Research Paper

The one thing everyday people think is impossible to recreate in VR is the senses: taste, smell, touch, and hearing. But it can be done, and is very much possible. It would require lots of very advanced technology, but if you were truly passionate about making it happen, it could be done. Here's a little info that just barely scratches the surface of what I know about making it possible.

Things like taste are simply brain signals being sent from neuron to neuron. By recording what brainwaves look like when tasting a certain taste, like sour, you could then, with the right equipment, mimic that brainwave, thereby tricking the brain into thinking you're tasting that taste. If you wanted to use many different tastes for many different things, it would require A LOT of brain recording, so I would advise anyone to stick to the basic sweet, salty, sour, and bitter. It would save lots of time and money.

It's almost exactly the same for touch: you could make that happen by mapping brainwaves, because brainwaves "look" completely different when you're feeling different things like hard, soft, or squishy. You would map what the brain looks like when it's feeling a certain texture, like squishy, and then mimic those brainwaves when the player's avatar is "feeling" something that's supposed to be that texture in the game.

Sound is very simple, but when it comes to hearing other players and making your own avatar speak, you would have to redirect the brainwaves that tell you to speak into the game's data, where they could then be translated so the sound appears to come from the player's avatar. To make it sound like that player's voice, have them record a few words before playing the game; that recording could then be transferred into the game so the sound is very similar to the player's own.
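The record-and-mimic idea described above can be pictured as a toy classifier: record a feature vector per basic taste, then match a new recording to the closest template. Everything here is a hypothetical sketch, not real EEG processing; the feature vectors and numbers are invented for illustration.

```python
import numpy as np

# Hypothetical averaged EEG feature vectors, one recorded per basic taste.
# Real EEG features would be far higher-dimensional and noisier.
taste_templates = {
    "sweet":  np.array([0.9, 0.1, 0.3]),
    "salty":  np.array([0.2, 0.8, 0.4]),
    "sour":   np.array([0.1, 0.3, 0.9]),
    "bitter": np.array([0.7, 0.6, 0.1]),
}

def classify_taste(features: np.ndarray) -> str:
    """Return the taste whose recorded template is closest (Euclidean distance)."""
    return min(taste_templates,
               key=lambda t: np.linalg.norm(features - taste_templates[t]))

print(classify_taste(np.array([0.15, 0.25, 0.85])))  # closest to "sour"
```

The same nearest-template idea would apply to textures (hard, soft, squishy): one recorded template per texture, matched against live readings.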
The only truly hard part is making good, realistic animation. Honestly, I don't know how anyone could make truly realistic animation, but here are a few ideas. As you can see by simply looking at the back of your hand, there are simply too many details in this world to animate. One way to even start capturing the details would be to somehow use a photo, make it 3D, and make it so you can interact with it: for example, you would be able to go up and feel the roughness of the bark and touch the leaves in a sort of "picture" of a tree. Honestly, I don't know how to make that possible; it was simply an idea I had.

We already have the ability to make extremely realistic animation, for example a creature in a movie; we can make it very realistic and detailed. The only problem would be making the realistic object "feel-able," or made so you can interact with that person or object. A little of that could be achieved if you already have a complex system that can monitor and send out / mimic brainwaves. If you had a system complex enough to mimic brainwaves, you could have it send a message back to the player's brain mimicking the signal to feel, taste, smell, or hear, thereby making the brain think it's smelling, tasting, hearing, or touching something.

Honestly, I don't know how complex the technology they are using for this is; I only watched a short 20-minute video on it. But I've heard it's amazing, and it's my dream to see it myself one day. I don't know if their system focuses on actual movement, using brainwaves that were already sent, or on recording and mimicking brainwaves. If you wanted the game to be even more like S.A.O., a system like option two (they do exist) could make it so the player doesn't have to move their actual body, only their avatar. I'm not sure if the S.A.O.-style technology is more like an "Oculus Rift," which mainly focuses on actual movement and actions, not brainwaves, or if it's more like "Emotiv's Neuro-Headset," which focuses only on brainwaves and controlling things mentally. I've just scratched the surface of what I know. I'm young and haven't been studying neurology for long, but I'm very devoted and know a lot for my age. It's my biggest dream to be able to go to a virtual world and get away from all the stress and pain of this world (at least temporarily). I hope one day to be a neurologist and get a degree in computer science. -A 13-year-old girl with big dreams

Immersive virtual reality has barely cleared the launch pad, but a new technology promises to redefine the VR experience. It's called full dive virtual reality (FDVR) and it is straight out of science fiction — literally. Full dive virtual reality technology pushes VR beyond immersive headsets and makes the user one with the machine through the use of a brain-computer interface, or BCI. You read that right. The age of the cyborg is upon us… almost.

Yes, advances are being made across a wide range of technologies that will allow users to control computers with mere thought, and to receive data from the computer directly into the brain. A true full immersion virtual reality experience is the new holy grail. But is full dive virtual reality possible now, or shall we check back in a few decades? More to the point for gamers, if full dive virtual reality is soon to transition from science fiction to the inferior world known as reality, when will full dive virtual reality gaming become more than a distant dream?

Before we explore those questions, let’s step back into sci-fi and remind ourselves exactly what we hope FDVR will do for us.

Origins of Full Dive Virtual Reality

The term originated with a Japanese light novel series (later adapted into an anime), Sword Art Online, written by Reki Kawahara and first published in 2009. In the series, a Virtual Reality Massively Multiplayer Online Role-Playing Game (VRMMORPG) called Sword Art Online, or SAO, is released in the year 2022. 10,000 players don their "NerveGear" BMI helmets and begin to play.

Players soon realize that the game developer has locked them into the game and that any attempt to logout or end the game will be fatal to their real-life bodies. The only way to exit the game, and to survive, is to successfully reach the 100th floor of the castle and defeat the final foe.

Not exactly the pitch you want to give investors, to be sure, but there are some good things there we can take away. First, the story popularized the concept of VRMMORPG. Gamers who weren’t thinking of engaging untold minions in gameplay suddenly started dreaming. Second, the concept of a helmet creating a non-invasive neural interface between man and machine whetted the appetite for those wanting a more seamless way to interact with computers. Not that the idea of BMIs was new, but seeing the concept demonstrated in such a rich storyline made the idea all the more tempting.

Examples of Full Dive Virtual Reality in Use

Back in the real world, real and exciting things are happening. Full dive virtual reality technology is rapidly stepping off the pages of science fiction and into the lab.

Humble Beginnings

Recent advances in technology can sometimes be appreciated more when contrasted with earlier successes and failures.

In 2013, Harvard University broke ground for FDVR advances by conducting a human-brain-to-rat-brain experiment. In this experiment, researchers used an electroencephalogram (EEG) brain-to-brain interface (BBI) to detect the thought patterns of a researcher. By focusing his thoughts, the researcher was able to control the movement of a rat's tail. Not exactly the kind of action most serious gamers are looking for, but it marked an important turning point in BBI technology. For the first time, two brains communicated directly through a hardware interface.

There were two reasons why this experiment was noteworthy. Not only did it demonstrate that human thought could be correctly interpreted by computer and used to control a rat’s brain, but the experiment was non-invasive for both the researcher and the rat. While researchers around the world have been able to control rats’ brains through the use of implanted electrodes, this rat suffered no such indignity. Non-surgical focused ultrasound (FUS) was used to impart the control signals to the rat’s brain, a technology we will elaborate on shortly.

The Paralyzed Walk

Fast-forward to 2015. Researchers at the University of California, Irvine also used an electroencephalogram machine to detect human thought. This time, however, those thoughts were used not to move a rat's tail, but to help a paraplegic man walk for the first time in five years. The man, who had suffered a spinal cord injury, walked nearly four meters with the assistance of the EEG device and advanced software.

The study involved placing electrodes on the patient’s head and on his legs. By detecting and interpreting signals from the man’s brain and sending those signals to his legs, the system bypassed the damaged spinal cord and allowed him to once again control the movement of his legs.

The experiment required the patient to undergo 19 weeks of training prior to taking his first step, but the encouraging results proved the concept was valid.

These two experiments may not be encouraging to those who had hoped for more, but consider this: within two years, the technology advanced from wiggling a rat's tail to allowing a paralyzed person to actually walk across the floor. Such progress should be exciting, indeed, for anyone interested in full dive virtual reality development.
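The detect-interpret-reroute loop behind the Irvine result can be sketched in a few lines. This is a toy stand-in, not the team's actual software; the decoder, threshold, and function names are all invented for illustration.

```python
WALK_THRESHOLD = 0.6  # hypothetical confidence cutoff for a "walk" intent

def decode_intent(eeg_sample: list[float]) -> float:
    """Toy decoder: treat mean signal power as 'walk' confidence (placeholder)."""
    return sum(eeg_sample) / len(eeg_sample)

def stimulate_legs(on: bool) -> str:
    """Stand-in for the electrical-stimulation output stage driving the legs."""
    return "stepping" if on else "idle"

def control_step(eeg_sample: list[float]) -> str:
    # The key idea: bypass the damaged spinal cord entirely.
    # Brain signal in -> software decode -> leg stimulation out.
    return stimulate_legs(decode_intent(eeg_sample) > WALK_THRESHOLD)

print(control_step([0.7, 0.8, 0.9]))  # "stepping"
print(control_step([0.1, 0.2, 0.3]))  # "idle"
```

The real system, of course, required weeks of training so that the decoder could reliably separate "walk" intent from everything else.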

Technologies involved in Full Dive Virtual Reality

Full dive is a term that has been used by some game developers to imply that their games provide a more immersive VR experience. By using advanced full-body motion sensors and high-definition spatial audio along with the VR headset, players do enjoy more immersive gameplay than ever before. But this is not true full dive.

Full dive, in the truest sense, goes beyond external viewers and sensors for user interaction and provides direct interaction between the user’s brain and the computer. Full dive is not so much full body immersion as it is full mind immersion.

Physical connections to the brain are not necessarily required, but there must be interpretation of the user’s thoughts by the computer, and — more importantly — the computer must be able to send sensory data directly into the user’s central nervous system. Full dive is, essentially, the ultimate goal of brain-to-computer interface research.

BCIs have been around, in various experimental stages, for years. What makes FDVR a new concept is the added component of virtual reality. Regardless of the advancements made in BCIs until now, VR is a game changer (no pun intended).

Within the scope of what full dive is, or can become, is the question of just how much interaction between user and machine is possible. Two levels of interaction are theoretically achievable: Mobile and Immobile.

Mobile and Immobile Full Dive

Mobile full dive, as the name suggests, involves the user physically moving their body, with those movements detected by external sensors, not unlike current VR technology. For lack of refined methods of detecting and interpreting thoughts, physical action provides the most reliable means of providing data to the computer. For the experience to be FDVR, the sensory data returned by the computer must be fed directly into the user's brain or central nervous system; otherwise, we just have a conventional VR experience.

Immobile full dive, on the other hand, means just what it says: the user is immobile and the FD experience is entirely cerebral. No physical movement is necessary, as the computer can translate the user's thoughts with adequate precision to control the program. While conventional VR gear could still be used (and probably will be for the foreseeable future), full dive will ultimately involve bypassing the user's five senses by sending sights, sounds, smells, tastes, and tactile sensations directly into the brain.
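One way to picture the mobile/immobile distinction in software: both modes ultimately deliver the same avatar commands, only the input source differs. The class names and command strings below are invented for this sketch.

```python
from abc import ABC, abstractmethod

class FullDiveInput(ABC):
    """Common interface: both modes yield avatar commands; only the source differs."""
    @abstractmethod
    def read_command(self) -> str: ...

class MobileInput(FullDiveInput):
    """Mobile full dive: commands come from external body-motion sensors."""
    def __init__(self, tracked_motion: str):
        self.tracked_motion = tracked_motion
    def read_command(self) -> str:
        return self.tracked_motion  # e.g. "raise_arm" detected by sensors

class ImmobileInput(FullDiveInput):
    """Immobile full dive: commands are decoded directly from thought."""
    def __init__(self, decoded_intent: str):
        self.decoded_intent = decoded_intent
    def read_command(self) -> str:
        return self.decoded_intent  # e.g. "raise_arm" decoded from brainwaves

# The game loop is identical either way; it never needs to know the source.
for source in (MobileInput("raise_arm"), ImmobileInput("raise_arm")):
    print(type(source).__name__, "->", source.read_command())
```

In both cases the return path is what makes it full dive: sensory data goes back into the brain, not onto a screen.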

In theory, a true full-dive experience would disable the user's external sensory awareness, replacing it with virtual awareness for the duration of the FDVR session. Likewise, the user would have no need to move their body, as their thoughts would control their virtual bodies within VR space. There is, admittedly, a creep factor to being so totally immersed that one's own physical body becomes obsolete. Some might even question whether we really want to go there. But, of course, we will.

Government Help

While studies such as the Harvard experiment show exciting progress, even greater advances must be made for FD to become a reality. The U.S. Defense Advanced Research Projects Agency (DARPA) may provide a key component.

DARPA plans to spend $60 million (USD) over the next four years to develop a high-resolution, wide-bandwidth intracranial electrode array for recording and stimulating brain activity. The DARPA-funded Stentrode (stent plus electrode) has been successfully tested in sheep; injected into a blood vessel, it records and stimulates neural activity. Experiments with humans began in 2017.

The minimally invasive device may be the closest thing we have, yet, to a brain modem. And while the ultimate goal is to enjoy a full dive experience without the need for implants of any sort, the lessons learned by the Stentrode will help us to get there.

Electroencephalogram (EEG)

One secret to advancing any technology is to find what works and keep doing it. As we discussed, EEG systems were used successfully in the 2013 rat-tail experiment, and again in the 2015 University of California, Irvine experiment in which a paraplegic walked. As promising as the DARPA device may be, the non-invasive EEG holds, perhaps, greater promise for developers wishing to advance full dive technology.

EEG has already moved from the lab to commercial applications. One company, Emotiv, has penetrated the market with 5-channel and 14-channel "neuroheadsets" and advanced EEG software. Emotiv is developing applications for its products in the fields of "academic research, advertising and media, education and training, mobility, defense, communication, automotive and IoT (Internet of Things) development."

Clearly, EEG is an excellent foundation upon which to advance full dive technology.

Focused Ultrasound (FUS)

In terms of brain interfaces, focused ultrasound is a means of stimulating targeted regions of the brain by bombarding them with ultrasonic energy of a particular frequency and pressure level. In the rat-tail experiment, the motor cortex of the rat's brain was stimulated at 350 kHz by a transducer placed on the rat's head.
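A quick back-of-the-envelope check shows what 350 kHz means spatially. Assuming sound travels at roughly 1,540 m/s in soft tissue (a standard textbook figure), the wavelength, which roughly bounds how tightly the energy can be focused, comes out at a few millimeters:

```python
SPEED_IN_TISSUE = 1540.0  # m/s, typical speed of sound in soft tissue
FREQUENCY = 350_000.0     # Hz, the frequency used in the rat-tail experiment

# wavelength = speed / frequency, converted to millimeters
wavelength_mm = SPEED_IN_TISSUE / FREQUENCY * 1000
print(f"{wavelength_mm:.1f} mm")  # 4.4 mm
```

A focal spot on that scale is coarse by single-neuron standards but fine enough to target a functional region like the motor cortex.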

While FUS is primarily a therapeutic tool used by healthcare professionals for treating certain health conditions, Harvard’s out-of-the-box thinking adapted it for their brain interface. This is exactly the kind of creativity that will lead innovators to push full dive closer to reality.

Full Dive Virtual Reality in Gaming

In the 2016 edition of this article, we stated, "To date, there are no TRUE full dive games…" And to be honest, no commercial-grade FD game was even on the horizon, nor was the headset needed to control one.

That has all changed.

At SIGGRAPH 2017 in Los Angeles, Neurable debuted the VR game Awakening. The game's storyline goes like this: you are a child with telekinetic powers being held prisoner in a government lab. Robot prison guards are present to prevent you from escaping your cell. The object of the game is to use your telekinetic abilities to select objects in the lab and hurl them at the robot guards, knocking them out of commission. Defeat enough guards and you are free to escape.

Oh, yeah. And you control the game with your thoughts.

By simply focusing on whatever object you wish to weaponize, you cause that object to be thrown at the guards. The debut version used a modified HTC Vive headset with an array of sensors that detects brainwave activity on the surface of the player's scalp.

If battling foes using only the power of your mind is on your bucket list, you might not have long to wait. Neurable is aggressively working with content developers and VR partners to bring their technology to market.

But wait… it gets even better.

Neurable is working on an SDK for Unity, so developers can build their own mind-controlled games. How cool is that?

AppReal-VR: Your Full Dive Virtual Reality Development Partner

Full dive virtual reality development isn’t all fun and games. Success requires a solid foundation in the very latest virtual reality technology, and the ability to transform complex theories into workable solutions. Pioneers in full dive development must go beyond adopting the latest technology; they must be able to do real science.

AppReal is a virtual reality development company specializing in programming, VR production, mobile app development, and custom development of virtual reality solutions. We have the experience and technology needed to bring clarity and direction to your FDVR vision. Together, we can transform your ideas into reality. Why not call us today to arrange a free consultation?