Familiar Faces

Priyanka Hardikar

Kelcie Wallace

Every time someone looks at a face, neurons are at work. They store the face of the friend made at school, the cashier at Macy’s glimpsed briefly and even the model on the front cover of Vogue.

Neurons are the reason you are able to recognize faces every day.

“The idea is you code what makes a face different from every other in the world,” said Alice O’Toole, a researcher specializing in visual perception and quantitative models in the School of Behavioral and Brain Sciences.

That coding can’t happen until the brain establishes a person’s average expectation of a face and registers how an individual face deviates from that expectation, O’Toole said.

“If I send you off to Japan for three or four days, your expectation of what faces look like would drift toward the average of what you’re seeing,” O’Toole said. “So then when you come back here, faces might look like they’re expanded outward or like they’re contrasting a bit from what your expectation is.”

This is called face adaptation: looking at something that differs from the usual shifts your central expectation.

In the same way, O’Toole’s theory of face adaptation applies to someone who looks at several abnormally expanded faces. If that person were then to see a normal face, it would appear squished.

It seems straightforward at first. A person’s retinas receive two flat images, one through each eye, yet the viewer perceives a full three-dimensional structure that appears different depending on the angle, or central prototype.

For 15 years now, O’Toole has researched what makes a face memorable. She is interested in understanding what people are able to remember about faces: how they build that representation, how they apply it and, when it fails, what went wrong.

“We know somewhere in your brain you have neurons that are storing faces, but what are the neurons storing and what is the nature of the information they have?” O’Toole said.

She said the only way to approach this is by understanding at least computationally how neurons store information. For that reason, computer scientists are working on finding an answer now more than ever.

Faces communicate a lot of different information — a person’s gender, age and even whether he or she is happy today.

“If at some point you could wake up and understand precisely what it is that makes humans so good at face recognition, that would be a good long-term goal,” O’Toole said.

As of now, O’Toole is focused solely on facial recognition. The field lets her take many approaches, from psychology experiments and functional neuroimaging to observing how computer scientists get machines to recognize faces.

“You see the whole thing from a different view and that gives you ideas for your experiments,” O’Toole said.

Although computational models and face recognition have always fascinated her, her previous area of study was human stereopsis, the binocular depth perception that fuses the two eyes’ views.

When O’Toole was a graduate student at Brown University, her psychology adviser passed away at the end of her first year. He had been studying faces.

After he died, O’Toole spent three years working on human stereopsis until she came to UTD to teach.

Another aspect in her research involves comparing computers and people in their ability to recognize faces.

At the international face recognition competitions O’Toole attends, computer systems run algorithms tasked with estimating the likelihood that two different images are, in reality, the same person.
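The verification task these competitions pose can be thought of as a similarity comparison between feature vectors extracted from the two images. A minimal sketch of the idea follows; the embeddings here are random stand-ins (real systems derive them from the images with a trained model), and the function names and threshold are illustrative assumptions, not the competitions’ actual method.

```python
import numpy as np

def similarity_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings: higher means the
    two images are more likely to show the same person."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(a @ b)

def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                threshold: float = 0.8) -> bool:
    # The threshold would be tuned on labeled pairs to trade off
    # false matches against false non-matches.
    return similarity_score(emb_a, emb_b) >= threshold

# Toy embeddings standing in for a real face-descriptor network.
rng = np.random.default_rng(0)
face1 = rng.normal(size=128)
face2 = face1 + rng.normal(scale=0.1, size=128)  # same person, new photo
face3 = rng.normal(size=128)                     # a different person

print(same_person(face1, face2), same_person(face1, face3))
```

The comparison itself is simple; the hard part, and the focus of the competitions, is producing embeddings robust enough that the same person lands close together across lighting, pose and image quality.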

Jonathon Phillips from the National Institute of Standards and Technology runs the competitions every two to three years, and has worked with O’Toole for seven years.

He remembers the 2010 competition as the one with the most surprising results. On a data set of 1.6 million mug shots, the best algorithm recognized the correct face 93 percent of the time.

“The improvement was astonishing,” Phillips said. “My background is developing algorithms so that computers can understand images, but a natural problem in this area is developing algorithms so that computers can recognize faces in general.”

In addition to comparisons between computers and humans, O’Toole has also tested forensic experts against everyday people, including UTD students. Forensic experts from all over the globe including Interpol agents, FBI agents, Australian passport officers and Dutch police officers were tested for their accuracy in recognizing faces.

It was a high-risk experiment, O’Toole said, and she was relieved that the experts did turn out to be more accurate at recognizing faces than the UTD students.

“It almost comes down to the experts operating more like machines than like normal people,” she said.

Although machines have proven more accurate than humans in recognizing faces on high quality frontal images, they stand no chance against humans when it comes to identifying faces they know well. These are the individuals people can identify quickly from a glance as far as 100 yards away.

“The next stage in the algorithms is getting computational models to a level that’s as good as you are with people you know,” O’Toole said.

One experiment presented pairs of images to people and computers. Half the pairs showed different but highly similar people, while the rest showed the same person looking highly dissimilar. The machines identified the pairs incorrectly, but the people identified them accurately.

O’Toole said she recognizes this is a breakthrough, because it sheds light on why humans are able to achieve better facial recognition than computers.

This is because the individuals were unconsciously using the body to help match the faces, O’Toole said.

After rerunning the task while measuring participants’ eye movements, O’Toole and the other researchers realized that when the identifying information was in the body, participants spent more time looking at the body.

“If they had only the face, they didn’t know better than the machines, but if they had the body, they did as well as they did on the first one,” O’Toole said.

Body recognition is an offshoot of facial recognition, said Matthew Hill, a former student of O’Toole’s who now researches alongside the psychology professor.

Hill said there is a list of 27 words that describe a body, such as “heavyset” and “petite.” A statistical method called correspondence analysis lets Hill quantify the similarity between two described bodies and decide whether they belong to one person.
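Correspondence analysis takes a table of counts, here, how often raters applied each descriptor word to each body, and projects rows and columns into a shared low-dimensional space where similarly described bodies land near each other. The sketch below shows the standard computation on made-up data; the four trait words and the counts are illustrative assumptions, not Hill’s actual 27-word list or pipeline.

```python
import numpy as np

# Hypothetical descriptor words; the real study uses a 27-word list.
traits = ["heavyset", "petite", "broad-shouldered", "lanky"]

# Contingency table: rows = body sightings, columns = trait words,
# entries = how often raters applied each word to each body (made up).
counts = np.array([
    [8, 0, 6, 1],   # body A, first sighting
    [7, 1, 5, 2],   # body A, second sighting
    [0, 9, 1, 7],   # body B
], dtype=float)

# Correspondence analysis: SVD of the standardized residuals of the table.
P = counts / counts.sum()
r = P.sum(axis=1)            # row masses
c = P.sum(axis=0)            # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Row coordinates in the shared low-dimensional space.
rows = (U * sv) / np.sqrt(r)[:, None]

# Bodies described with similar words land near each other; a small
# distance suggests two sightings are the same person.
d_same = np.linalg.norm(rows[0] - rows[1])
d_diff = np.linalg.norm(rows[0] - rows[2])
print(d_same < d_diff)  # True: the two sightings of body A sit closer
```

The appeal of the method is exactly what Hill describes: verbal descriptions become numbers, and distances between those numbers track whether two bodies were described the same way.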

“What keeps me interested is the way the numbers and the mind can match up,” Hill said.

For O’Toole, it’s how face recognition applies to everyday life. Autism, for example, is characterized by changes in the way people look at other people’s faces and in how much they can process from expressions and understand about intentions, she said.

“Face recognition is so important for human communication,” O’Toole said. “We all know we’re being photographed everywhere we go. It would certainly be valuable to know how much better people are at identifying faces than machines and when it’s best to combine the machines with the humans.”

The Mercury

The Mercury publishes news, opinion and feature articles of interest and importance to the UT Dallas campus and community, with primary emphasis on news that most directly and immediately concerns students.