It may well be that current experiments with binaural sound for VR, which recreate sounds in 3D based on the relative positions of the listener's ears and the shape of their head, are not quite capturing the full picture of sound reception.
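One of the head-geometry cues such binaural systems rely on is the interaural time difference: sound reaches the nearer ear slightly before the farther one. As a hedged illustration (not taken from the study itself), the classic Woodworth spherical-head approximation estimates this delay from the source azimuth and an assumed head radius:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (ITD) in seconds for a
    spherical head, via the Woodworth formula: ITD = (a / c) * (theta + sin(theta)).
    The 0.0875 m head radius is a common textbook assumption, not a measured value.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly ahead (0 degrees) gives no delay; one fully to the
# side (90 degrees) arrives at the far ear roughly two-thirds of a
# millisecond late.
print(woodworth_itd(0))
print(round(woodworth_itd(90) * 1000, 2))
```

Cues like this depend only on geometry, which is precisely why skin-based contributions to perception, as reported below, would fall outside what such models capture.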

A McGill-led study has found that the perception of speech sounds is modified by stretching facial skin in different directions. Different patterns of skin stretch affect how subjects perceive different words.

In other words, the skin itself plays a measurable part in deciphering incoming sound waves. The researchers used a robotic device to manipulate facial skin in ways that mimic what normally happens when we speak. They found that stretching the skin while subjects simply listened to words altered the sounds they heard.

So, when you talk, physically stretching the skin around your jowls affects the way sound inflections reach the brain. It's only a minor effect, and the smoke-and-mirrors approach VR takes may be able to sweep this one under the carpet, as the effects are so tiny.

McGill neuroscientist David Ostry of the Department of Psychology and his colleagues at the Haskins Laboratories and the Research Laboratories of Electronics at the Massachusetts Institute of Technology performed the original study. Their results are published in the January issue of the Proceedings of the National Academy of Sciences (PNAS).