Facial Emotion Recognition

fer.predict(data, [params])
Determine the emotions expressed in an image of a human face.

Current Version: 2

Arguments

- `data` – refer to img format guide – required
- `api_key` – String – optional – your indico API key
- `cloud` – String – optional – your private cloud subdomain
- `v` or `version` – Integer – optional (defaults to 2) – specify the model version
- `detect` – Boolean – optional (defaults to False) – when True, FER detects all faces in the image and returns each face's location along with a FER dictionary for that face (see Output for more information). Set this to True for best results when the image has a background or contains multiple faces.
- `sensitivity` – Float – optional (defaults to 0.8) – the certainty threshold (between 0 and 1) the model uses to decide what to return as a face. Only used when `detect` is True.
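As a sketch of how these arguments might be assembled into a request, the helper below builds a URL, headers, and JSON body. The endpoint path, subdomain layout, and header name are assumptions for illustration, not confirmed by this document; consult the official API reference for the real values.

```python
import json

def build_fer_request(data, api_key=None, cloud=None, version=2,
                      detect=False, sensitivity=0.8):
    """Assemble the URL, headers, and JSON body for an FER call.

    The endpoint and header names below are hypothetical.
    """
    subdomain = cloud if cloud else "apiv2"  # private-cloud subdomain, if any
    url = f"https://{subdomain}.indico.io/fer"  # hypothetical endpoint
    body = {"data": data, "version": version}
    if detect:
        body["detect"] = True
        body["sensitivity"] = sensitivity  # only meaningful with detect
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["X-ApiKey"] = api_key  # header name is an assumption
    return url, headers, json.dumps(body)

url, headers, payload = build_fer_request("<base64 image data>",
                                          api_key="YOUR_API_KEY",
                                          detect=True)
```

Note that `sensitivity` is only attached when `detect` is set, mirroring the argument description above.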

Output

This function returns a dictionary with six key-value pairs, one for each detectable emotion. The keys are strings naming the emotions (Angry, Sad, Neutral, Surprise, Fear, Happy) and the values are the probabilities that the face in the analyzed image is expressing each emotion.

Values less than 0.05 indicate that it is very unlikely the face is expressing the corresponding emotion.
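For example, the snippet below picks the most likely emotion from a single-face result and filters out emotions below the 0.05 threshold. The probability values are made-up illustrative numbers, not real model output:

```python
# Example FER result for a single face (illustrative values only).
result = {
    "Angry": 0.02, "Sad": 0.01, "Neutral": 0.15,
    "Surprise": 0.04, "Fear": 0.03, "Happy": 0.75,
}

# The six probabilities describe one face, so the most likely
# emotion is simply the key with the highest value.
top_emotion = max(result, key=result.get)
print(top_emotion)  # -> Happy

# Values under 0.05 can be treated as "not expressed".
unlikely = sorted(e for e, p in result.items() if p < 0.05)
print(unlikely)  # -> ['Angry', 'Fear', 'Sad', 'Surprise']
```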

Using the detect flag changes the output format: the result is a list of dictionaries, one per detected face. Each dictionary has two key-value pairs, with the keys "emotions" and "location". The value of "emotions" is the six-entry emotion dictionary described above. The value of "location" is the bounding box of the detected face, in the form [x position of the top-left corner, y position of the top-left corner, width of the bounding box, height of the bounding box].
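A short sketch of walking the detect-style output, pairing each face's bounding box with its top emotion. The response below is a fabricated example matching the shape described above, and the conversion from [x, y, width, height] to a corner-to-corner box is just one common convention:

```python
# Fabricated example of a detect=True response with two faces.
response = [
    {"emotions": {"Angry": 0.05, "Sad": 0.05, "Neutral": 0.60,
                  "Surprise": 0.10, "Fear": 0.05, "Happy": 0.15},
     "location": [12, 30, 64, 64]},
    {"emotions": {"Angry": 0.02, "Sad": 0.02, "Neutral": 0.10,
                  "Surprise": 0.06, "Fear": 0.05, "Happy": 0.75},
     "location": [110, 28, 70, 70]},
]

faces = []
for face in response:
    # "location" is [x of top-left, y of top-left, width, height].
    x, y, w, h = face["location"]
    top = max(face["emotions"], key=face["emotions"].get)
    faces.append((top, (x, y, x + w, y + h)))  # corner-to-corner box

print(faces)  # -> [('Neutral', (12, 30, 76, 94)), ('Happy', (110, 28, 180, 98))]
```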