Miriam has run a multitude of experiments assessing how current immersive technologies can tap into the neural systems most beneficial for learning. In particular, she talked about the Mirror Neuron System which, when activated, significantly improves learning performance.

My interpretation is that this area of the brain activates when we copy other human beings. The evolutionary benefit gained by apes copying apes as they learn to use tools and develop language has led this part of the brain to be instrumental in the quality of our learning.

In short, if we can utilise human interaction within our training applications, we will improve the user's ability to learn.

But here’s the rub:

The human has to be real, or highly realistic, for this part of the brain to activate. Put a cartoon representation of a human, something we would still recognise as human, in front of us and nothing happens! Put a medium-fidelity CGI (Computer-Generated Imagery) character in our learning app and, I'm afraid, once again we fail in this respect.

This is great news for 360° content, where we can have a real human committed to video. However, 360° as a format limits the level of interaction that can be included, which is highly problematic if we want to include muscle-memory and learning-by-doing elements in our applications.

Taking all of the above into account, we must focus on improving our ability to utilise highly realistic human avatars, CGI animated characters and motion capture. These areas have traditionally required highly trained niche professionals and specialist hardware, making them very expensive and putting them out of reach of typical Learning & Development budgets.

The good news is that, as with all things tech, the kit required is constantly falling in price and the tools available to us continue to improve. At the same conference I saw highly realistic avatars created using the Unreal Engine (a commonly used tool for authoring xR content). At Make Real, we have also been developing new workflows that use the depth-sensing front camera of an iPhone X to capture facial information and map it onto CGI characters.

To be clear, we are not there yet in terms of triggering the Mirror Neuron System, but we are getting closer all the time.