New AI can guess whether you're gay or straight from a photograph

An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
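A caveat worth making concrete: a headline accuracy like 91% says little about how such a tool would perform on the general population, where gay men are a small minority. As a hedged back-of-the-envelope sketch — assuming (the article does not state this) that 91% is both the true-positive and true-negative rate, and taking a rough 7% base rate — most people the system flags would be flagged incorrectly:

```python
# Hypothetical base-rate check. The 91% sensitivity/specificity and
# the 7% prevalence are ASSUMPTIONS for illustration, not figures
# from the study itself.
sensitivity = 0.91   # P(flagged gay | gay)            -- assumed
specificity = 0.91   # P(flagged straight | straight)  -- assumed
prevalence = 0.07    # rough population base rate      -- assumed

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
precision = true_pos / (true_pos + false_pos)  # Bayes' rule

print(f"P(actually gay | flagged): {precision:.2f}")  # ~0.43
```

Under these assumptions, fewer than half of the people the system flags would actually be gay — which matters for the ethical questions the article raises.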

Psychographic software can predict far more than this from your photos, and it is in use right now on Facebook.

'I was shocked it was so easy': meet the professor who says facial recognition can tell if you're gay

As well as sexuality, he believes this technology could be used to detect emotions, IQ and even a predisposition to commit certain crimes. Kosinski has also used algorithms to distinguish between the faces of Republicans and Democrats, in an unpublished experiment he says was successful – although he admits the results can change “depending on whether I include beards or not”.

His findings are consistent with the prenatal hormone theory of sexual orientation, he says, which argues that the levels of androgens foetuses are exposed to in the womb help determine whether people are straight or gay. The same androgens, Kosinski argues, could also result in “gender-atypical facial morphology”. “Thus,” he writes in his paper, “gay men are predicted to have smaller jaws and chins, slimmer eyebrows, longer noses and larger foreheads... The opposite should be true for lesbians.”

Kosinski has a different take. “The fact that the results were completely invalid and unfounded, doesn’t mean that what they propose is also wrong,” he says. “I can’t see why you would not be able to predict the propensity to commit a crime from someone’s face. We know, for instance, that testosterone levels are linked to the propensity to commit crime, and they’re also linked with facial features – and this is just one link. There are thousands or millions of others that we are unaware of, that computers could very easily detect.”

Cambridge Analytica always denied using Facebook-based psychographic targeting during the Trump campaign, but the scandal over its data harvesting forced the company to close.

So this experiment was conducted by someone who has a vested interest in proving a genetic/hormonal theory of sexuality. Can't see any possible experimenter bias here.

There is no detection here, only good guesses, most likely with biased samples. A learning algorithm is only as good as the material you feed it with. Try feeding it with pictures of homosexual people that contradict "typical" facial features of homosexuals and see the accuracy plummet. Since there are absolutely no hard physical clues for someone's sexuality this algorithm's gaydar is just as good as anyone else's.

Your sentence 1 contradicts your sentence 2.

In what way?

Feeding it more varied pictures would bring the success rate down to the level of a normal human, about 60% I'd say. And when I say a hard clue, I mean a clue that lets you determine a person's sexuality with 100% certainty. Since nothing like that exists, we can only go by soft clues, which means the decision basis and the guesses of an algorithm will be exactly the same as a human's if it is fed the same varied material. The machine in this example was obviously fed extremely biased samples. Deliberately or not, this experiment is very much flawed.
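The point about biased samples can be sketched in a few lines. This is a toy illustration (the dataset, the 90% cue correlation, and the one-feature "classifier" are all invented for the example, not taken from the study): a model that learns a spurious cue which happens to track the label in a biased sample collapses to chance when that correlation disappears in a more varied sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, cue_correlation):
    """Labels are random; a single 'cue' feature matches the label
    with probability cue_correlation (a stand-in for something like
    grooming style in dating-site photos, not a causal facial trait)."""
    labels = rng.integers(0, 2, n)
    cue_matches = rng.random(n) < cue_correlation
    cue = np.where(cue_matches, labels, 1 - labels)
    return cue, labels

# Biased sample: the cue tracks the label 90% of the time.
train_cue, train_labels = make_dataset(10_000, 0.90)
# Varied sample: the same cue is uninformative (50/50).
test_cue, test_labels = make_dataset(10_000, 0.50)

# "Classifier": predict the label directly from the cue.
train_acc = np.mean(train_cue == train_labels)
test_acc = np.mean(test_cue == test_labels)

print(f"accuracy on biased sample: {train_acc:.2f}")  # ~0.90
print(f"accuracy on varied sample: {test_acc:.2f}")   # ~0.50
```

The model never learned anything about the label itself — only about how the biased sample was assembled, which is exactly the worry about self-selected dating-site photos.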