Instagram Algorithm to Recognize Cruelty and Kindness

September 14, 2017

Instagram is using machine learning to make its platform a kinder place, we learn from the CBS News article, “How Instagram is Filtering Out Hate.” Contributor (and Wired Editor-in-Chief) Nick Thompson interviewed Instagram CEO Kevin Systrom and learned the company is using about 20 human raters to teach its algorithm to distinguish naughty from nice. The article relates:

Systrom has made it his mission to make kindness itself the theme of Instagram through two new phases: first, eliminating toxic comments, a feature that launched this summer; and second, elevating nice comments, which will roll out later this year. ‘Our unique situation in the world is that we have this giant community that wants to express themselves,’ Systrom said. ‘Can we have an environment where they feel comfortable to do that?’ Thompson told ‘CBS This Morning’ that the process of ‘machine learning’ involves teaching the program how to decide what comments are mean or ‘toxic’ by feeding in thousands of comments and then rating them.
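The process Thompson describes is standard supervised text classification: humans label example comments, and a model learns which words are associated with toxicity. As a rough illustration only (a toy naive Bayes sketch with invented example comments, not Instagram's actual system or data), the idea looks something like this:

```python
from collections import Counter
import math

# Hypothetical labeled comments (label 1 = toxic, 0 = benign).
# In practice, human raters would label thousands of real comments.
TRAINING = [
    ("you are awful and everyone hates you", 1),
    ("nobody likes your stupid photos", 1),
    ("go away you are the worst", 1),
    ("what a beautiful shot love it", 0),
    ("great photo thanks for sharing", 0),
    ("this made my day so kind", 0),
]

def train(examples):
    """Count word frequencies per class for a naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def toxicity_score(text, counts, totals):
    """Return the log-odds that `text` is toxic; positive means toxic-leaning."""
    vocab = set(counts[0]) | set(counts[1])
    score = 0.0
    for word in text.split():
        # Laplace smoothing so unseen words don't zero out a class.
        p_toxic = (counts[1][word] + 1) / (totals[1] + len(vocab))
        p_benign = (counts[0][word] + 1) / (totals[0] + len(vocab))
        score += math.log(p_toxic / p_benign)
    return score

counts, totals = train(TRAINING)
print(toxicity_score("you are stupid", counts, totals) > 0)            # True
print(toxicity_score("beautiful photo thanks", counts, totals) > 0)    # False
```

A production system would use far more sophisticated models and vastly more labeled data, but the core loop is the same: rated examples in, a scoring function out, and a threshold that decides which comments get filtered.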

It is smarter censorship, if you will. Systrom seems comfortable embracing a little censorship in favor of kindness, and we sympathize; “trolls” are a real problem, after all. Still, the technology could, theoretically, be used to delete or elevate certain ideological or political content. The line between censoring and not censoring is a fine and important one, and those who manage social media sites are the ones who must walk it. No pressure.


Stephen E. Arnold monitors search, content processing, text mining and related topics from his high-tech nerve center in rural Kentucky. He tries to winnow the goose feathers from the giblets. He works with colleagues worldwide to make this Web log useful to those who want to go "beyond search". Contact him at sa [at] arnoldit.com. His Web site with additional information about search is arnoldit.com.