Cognitive Sciences Stack Exchange is a question and answer site for practitioners, researchers, and students in cognitive science, psychology, neuroscience, and psychiatry.

I'm interested in how the brain processes and recognizes the image of one's own face.

A bit of background:
A while ago I've developed an overlay-camera like app for iPhone that allows me to combine two pictures. One picture, is a static image taken from the internet. It can be any person, young or old, male or female. On top of this picture is overlaid a semi-transparent live camera feed from an iPhone. The camera feed can be resized and repositioned to align the facial features of two images.

Once the person using the app aligns the two images, the live camera image (like a reflection) and the static image, they are looking at a combined reflection.

Because the camera is live, and the person can blink, smile, and otherwise contort their facial muscles, it appears that the person is looking at their own reflection, while the brain actually sees a combined picture. This produces an interesting experience, where the brain is tricked into accepting the combined image as one's own. The degree of "difference" between the two images can be adjusted with the transparency slider, and the magic number is somewhere around 45-55% transparency, where the brain accepts the combined image. Lower values are seen as the web image, while higher values are seen as the camera image.
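The transparency slider described above is ordinary per-pixel alpha compositing: each displayed pixel is a linear interpolation between the static web image and the live camera frame. Here is a minimal sketch in Python, not the app's actual iOS code; the function name and pixel format are illustrative:

```python
def blend(web_pixel, camera_pixel, alpha):
    """Linearly interpolate one RGB pixel between the two sources.

    alpha = 0.0 shows only the static web image, alpha = 1.0 shows
    only the live camera frame; around 0.5 is the blend the question
    describes the brain accepting as "self".
    """
    return tuple(
        round((1 - alpha) * w + alpha * c)
        for w, c in zip(web_pixel, camera_pixel)
    )

# At 50% transparency each source contributes equally:
print(blend((200, 100, 0), (100, 200, 50), 0.5))  # -> (150, 150, 25)
```

In the real app this interpolation runs over every pixel of each camera frame (typically done by the GPU via a view's opacity rather than in a per-pixel loop).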

Consider this image of Hugh Laurie

Here's an image of my own camera reflection superimposed on the image; there's an almost perfect match in facial features due to the transparency setting. Notice how the headphones and the sweater can be seen, but there are no major differences in facial features (because the beard hides my chin).

By adjusting the transparency of the two images, I was able to get close to a point where the image is no longer recognized as self, and is instead seen as a new image. Additionally, my brain sharply recognized the emotion written on the other person's face. Once the two images are blended, the wrinkles around the eyebrows and mouth, and the shape of the mouth, become very obvious, and appear to give an emotional "gloss" to the image. This gloss cannot be as easily recognized in the static image alone, and is not present in the live camera feed.

Random aside, but I have edited "wandering" into "wondering" about 15-20 times in your questions on here and on Bio, I'm curious as to whether your choice of the word is deliberate. :)
– Chuck Sherrington, Nov 5 '12 at 0:19

haha, and here I was "wondering" why my questions are edited so frequently. Thank you for the heads up!
– Alex Stone, Nov 5 '12 at 18:45

1 Answer

The brain does map different parts of the visual image to different neurons. This is called topographic mapping (retinotopy, in the visual system) and is a fundamental feature of sensory systems.

The next part of your bolded question jumps upstream. Single-neuron recordings in humans found "Jennifer Aniston" cells: cells that respond selectively to images of Jennifer Aniston. Italian researchers found neurons in awake primates that increased their firing in response to another monkey performing an action. The researchers called these mirror neurons.

So, there may be single neurons in humans that bind visual perception and recognition. However, it is more likely that interacting groups of neurons change their activity relative to each other or to the omnipresent local field oscillations. This mechanism appears elsewhere in the brain and, theoretically, it is also more robust to injury.

Very interesting! I did notice that, with the exception of several people, I knew what most of them looked like before the experiment. So it may very well be a case of different groups of neurons firing.
– Alex Stone, Nov 8 '12 at 2:22