Judgmental A.I. mirror rates how trustworthy you are based on your looks

As the success of the iPhone X’s Face ID confirms, lots of us are thrilled to bits at the idea of a machine that can identify us based on our facial features. But how happy would you be if a computer used your facial features to start making judgments about your age, your gender, your race, your attractiveness, your trustworthiness, or even how kind you are?

Chances are that, somewhere down the line, you’d start to get a bit freaked out. Especially if the A.I. in question was using this information in a way that controlled the opportunities or options that are made available to you.

Exploring this tricky (and somewhat unsettling) side of artificial intelligence is a new project from researchers at the University of Melbourne in Australia. Taking the form of a smart biometric mirror, their device uses facial-recognition technology to analyze users’ faces, and then presents an assessment in the form of 14 different characteristics it has “learned” from what it’s seen.

“Initially, the system is quite secretive about what to expect,” Dr. Niels Wouters, one of the researchers who worked on the project, told Digital Trends. “Nothing more than, ‘hey, do you want to see what computers know about you?’ is what lures people in. But as they give consent to proceed and their photo is taken, it gradually shows how personal the feedback can get.”

As Wouters points out, problematic elements are present from the beginning, although not all users may immediately realize it. For example, the system only allows binary genders, and can recognize just five ethnicities — meaning that an Asian student might be recognized as Hispanic, or an Indigenous Australian as African. Later assessments, such as a person’s level of responsibility or emotional stability, will likely prompt a response from everyone who uses the device.

“[At present, the discussion surrounding these kinds of issues in A.I.] is mostly led by ethicists, academics, and technologists,” Wouters continued. “But with an increasing number of A.I. deployments in society, people need to be made more aware of what A.I. is, what it can do, how it can go wrong, and whether it’s even the next logical step in evolution we want to embrace.”