Update 02.03.2017 12:19: Since this article was published, the material on the Diversity.AI website has been taken down and replaced with a single message: "The initiative has not yet been launched." To see what it used to be like take a look at the cached version.

Original story


It was meant to be the beauty contest to end all beauty contests. But when the results from Beauty.AI – the photographic pageant judged by supposedly objective artificial intelligence algorithms – came in, it was clear there was a problem.

Holding AI to account: will algorithms ever be free from bias if they're created by humans?


Launched in 2016 by “deep learning” group Youth Laboratories, Beauty.AI used age and facial recognition algorithms to choose what its creators declared would be “the First Beauty Queen or King Judged by Robots.” But the 44 winners – selected from among more than 7,000 entrants who had submitted selfies through the app – included only one with dark skin, although numerous people of colour had sent in photographs. The resulting media coverage saw the contest labelled as racist.

Now WIRED can exclusively reveal that Youth Laboratories is back, with a new think tank, Diversity.AI, devoted to “inclusion, balance and neutrality in artificial intelligence.” Oh yes: and another beauty contest, Beauty.AI 3.0, which co-founder Anastasia Georgievskaya says will far surpass its predecessors in the accuracy of its aesthetic judgements.

“We are now going to include a step that would test algorithms for biases, especially racial, and all other kinds of biases like gender and age,” says Georgievskaya. “The journalists directed our attention towards a very important issue: [the algorithm] was created in a wrong way so it's not inclusive.”


Georgievskaya says the beauty contest will also highlight the wider issue of algorithmic discrimination, in which automated and semi-automated systems produce unfair or skewed outcomes. “Discrimination is important because it can be found in other algorithms too,” she says, giving search engines and credit scores as examples. “We want to draw attention to this problem.”

To that end, Youth Laboratories is also launching a new artificial intelligence think tank: Diversity AI. The think tank, which launches next month, will discuss the question of algorithmic bias – and perhaps also enable applied research to overcome it.



The contradiction between Diversity AI’s aim of reducing discrimination and its method – a contest based, by definition, on appearance-related discrimination – does not seem to trouble Georgievskaya, who first became interested in measuring attractiveness after completing her bioengineering degree at Lomonosov Moscow State University. Looking at the way people judge each other’s appearance, she concludes that the problem is prejudiced human perception. “Many beauty contests are really biased and racist and sexist,” she says. “We want to provide an alternative for everyone.”


Diversity AI’s exact purpose is yet to be determined – there is talk of conferences and research papers – but as a first step, Youth Laboratories has assembled an advisory board. “We just asked people who are really experts in the area,” says Georgievskaya. “We have people from NVIDIA and Open AI and Oxford University.”

But when WIRED contacted some of the experts listed on Diversity AI’s advisory board, they appeared unaware that Youth Laboratories was planning on holding another AI beauty contest. “I was not aware of that,” says Elisa Celis, Senior Research Scientist at École Polytechnique Fédérale de Lausanne (EPFL), one of Europe’s leading centres for machine learning. “I have nothing to do with beauty pageants. I don’t endorse beauty pageants no matter who’s judging them, whether they’re robots or not.”

Celis, who works on creating algorithmic tools to overcome biased data, says Youth Laboratories has offered to share its data and algorithms – an extremely useful resource for researchers in the area. “I think this work is really important,” says Celis. “But my goal and the goal of everyone affiliated with Diversity AI is not to support Beauty.AI.”


The Diversity AI website also lists Jack Clark, Strategy & Communications Director at AI ethics research institute OpenAI, as an advisor. But when WIRED contacted Clark he said he had not had any substantive discussions about the organisation, and asked for his name to be removed. At the time of publication it had not been, but a short text had been added to the website saying “The initiative has not yet been launched”.


Georgievskaya speaks passionately about eradicating algorithmic discrimination. She sincerely believes that objective measurements of beauty can help towards that goal. But there is another reason why Youth Laboratories might want to return with Beauty.AI 3.0, which is set to launch in September this year: to market its suite of health algorithms, such as wrinkle analysis app RYNKL (“artificial intelligence which cares about your looks and helps you adjust your lifestyle to look younger”).

Youth Laboratories, which has 10 people working in Moscow and Oxford, also claims commercial clients in other fields. “We do emotional recognition for one of the largest banks in Europe,” says Georgievskaya. “To provide some more targets for them. So for advertising.”