Summary: Looking for a tool to analyze participant's webcam feed to produce a report of facial expressions - it does not have to be real-time.

I am trying to pitch remote usability testing (as opposed to lab testing) to my boss, and am looking for a tool to overcome our inability to observe nonverbal communication.

What I am looking for is a tool that will analyze the participant's webcam feed to produce a report of facial expressions - it does not have to be real-time.

I am thinking that we could use this report as "bookmarks": distinct expressions would signal a positive or negative reaction to the interface, producing a list of interactions that are working well for us or are in need of change.
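
To make that concrete, here is a minimal sketch (in Python, not tied to any particular product) of how such a report could be turned into bookmarks. It assumes the tool can export a per-frame CSV with a timestamp and a 0-to-1 score per emotion; the column names, the emotion groupings and the 0.7 threshold are my own assumptions, not something any of these tools is known to provide:

```python
import csv

# Assumed export format: timestamp_s, joy, anger, sadness, surprise, fear, disgust
POSITIVE = {"joy"}
NEGATIVE = {"anger", "sadness", "fear", "disgust"}
THRESHOLD = 0.7  # arbitrary cut-off; would need tuning against real footage

def bookmarks(report_path):
    """Yield (timestamp, label) pairs for frames where one emotion clearly dominates."""
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["timestamp_s"])
            scores = {k: float(v) for k, v in row.items() if k != "timestamp_s"}
            emotion, score = max(scores.items(), key=lambda kv: kv[1])
            if score < THRESHOLD:
                continue  # no distinct expression in this frame
            if emotion in POSITIVE:
                yield t, "positive"
            elif emotion in NEGATIVE:
                yield t, "negative"

for timestamp, label in bookmarks("session01_emotions.csv"):
    print(f"{timestamp:8.1f}s  {label}")
```

The resulting list of timestamps could then be reviewed against the screen recording to see which interactions triggered the reactions.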

I found this list of APIs, but none of the websites provided all the information I required (I did contact several of them, and am waiting for a reply).

Does anyone have any experience with such a tool and could recommend it?

FaceReader constructs a model of the face from the video and
automatically evaluates several elementary facial movements (action
units). Based on these movements it calculates the likelihood that
each of six basic emotions (joy, anger, sadness, surprise, fear and
disgust) is felt at any given time. Tests from the provider of the
software indicate a success rate of up to 90% with frontal face
images.

Weaknesses: data is limited to the six basic emotions.

The video has to be captured during a test with a real product,
limiting the usefulness of FaceReader in the early stages of the
design process.

That said, facial expression analysis is not an exact science, and your experience may be hit or miss. To quote this article:

The EmoVision dashboard is one way of using facial recognition for
analysis. People can be sad and puzzled, neutral and happy, even
surprised. This kind of software recognizes the subject's emotions and
lets us analyze them further. It will also generate a suitable graph.

The FaceReader dashboard is useful for measuring overall emotional
intensity on a scale of 0 to 1, but often several emotions are
going on at the same time and the software can only detect the
strongest of them. The problem with these kinds of applications is
that they are not mature yet.

During measurements, the subjects need to be completely still and must
not talk or move their heads. Facial touching is generally not allowed
and this method does not work well with glasses. It requires additional
lighting and only recognizes the simplest and strongest emotions.
Subtle emotions are ignored.
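
Given that caveat, if you do go down this route it may help to trust only frames where the strongest emotion clearly beats the runner-up. A rough sketch follows; the 0-to-1 scores follow the scale the article mentions, while the function, its 0.2 margin and the example values are my own illustration rather than anything from these tools:

```python
def dominant_emotion(scores, margin=0.2):
    """Return the strongest emotion only if it clearly beats the runner-up.

    `scores` maps emotion names to intensities on a 0-to-1 scale; the 0.2
    margin is an arbitrary illustration, not a value from any of the tools
    discussed above.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (top, top_score), (_, second_score) = ranked[0], ranked[1]
    if top_score - second_score < margin:
        return None  # several emotions at once: treat the frame as ambiguous
    return top

frame = {"joy": 0.55, "surprise": 0.50, "anger": 0.05,
         "sadness": 0.02, "fear": 0.01, "disgust": 0.01}
print(dominant_emotion(frame))  # -> None, joy and surprise are too close to call
```

Discarding ambiguous frames like this trades coverage for fewer false bookmarks, which seems preferable when the report is only used to decide which moments of the recording to review.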

That's definitely an interesting point to consider when conducting international studies. From the little I know, there is a cross-cultural baseline for the most common expressions; how distinct the expression is, however, will vary between cultures. Interestingly enough, a blind person uses the same facial expressions as a sighted one! Thank you for the link, I'll look into it.
– Nir Bentia Feb 10 '14 at 13:28

A while ago I worked with a company that made something similar and it worked great; maybe you can contact them. I can't remember the link to the study, but you can look around their website: http://preparatumente.com