Google Creates Tool to Detect Toxic Comments in the Media

Google recently announced the creation of Perspective, an artificial intelligence tool for the media to automatically detect toxic and disrespectful comments.

“Perspective is a technology that uses machine learning to identify toxic comments. And by toxic we mean comments that are likely to make someone leave a conversation,” said Jared Cohen, the person in charge of the initiative, in a conference call.

Google’s new artificial intelligence project aims to be a useful tool in the fight against “online harassment and the growing toxicity of online conversations”.

“A number of individuals are hampering civilized and dynamic discussion on platforms and in media comment sections, leading many people to abandon the conversation. Many outlets have ended up closing those sections,” said Cohen.

And how does the algorithm know which comments are toxic?

The algorithm compares each analyzed comment with others that were previously labeled as toxic. It also incorporates a feedback mechanism to correct erroneous evaluations. The more data it processes, the more accurate its judgments become.
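To make the idea concrete, here is a deliberately simplified sketch, not Google's actual implementation: it scores a new comment by how closely its words match comments previously labeled toxic versus non-toxic, the same learn-from-labeled-examples principle described above. All function names and the tiny training set are illustrative assumptions.

```python
# Toy illustration of learning toxicity from labeled examples.
# This is NOT Perspective's real model, just the general principle.
from collections import Counter

def word_counts(comments):
    """Count word occurrences across a list of labeled comments."""
    counts = Counter()
    for comment in comments:
        counts.update(comment.lower().split())
    return counts

def toxicity_score(comment, toxic_counts, clean_counts):
    """Return a 0..1 score: how much each word's smoothed frequency
    mass comes from the toxic examples rather than the clean ones."""
    toxic_total = sum(toxic_counts.values())
    clean_total = sum(clean_counts.values())
    words = comment.lower().split()
    score = 0.0
    for w in words:
        p_toxic = (toxic_counts[w] + 1) / (toxic_total + 1)  # add-one smoothing
        p_clean = (clean_counts[w] + 1) / (clean_total + 1)
        score += p_toxic / (p_toxic + p_clean)
    return score / max(len(words), 1)

# Tiny hypothetical training sets of pre-labeled comments.
toxic = word_counts(["you are an idiot", "shut up you fool"])
clean = word_counts(["thanks for sharing", "interesting point well argued"])

print(toxicity_score("you idiot", toxic, clean))          # scores above 0.5
print(toxicity_score("interesting point", toxic, clean))  # scores below 0.5
```

A real system would use far larger labeled corpora and a neural model rather than word counts, but the correction loop works the same way: mislabeled comments are fed back in as new training examples.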

Cohen stressed that Perspective is in “its early days” and “far from perfect”, but that it will keep improving as it gets more data to train on.
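Perspective is offered to publishers as a web API. The sketch below shows how a media site might request a toxicity score for a comment; the endpoint, field names, and `TOXICITY` attribute follow Google's public documentation at the time of writing, so treat them as assumptions and check the current docs before relying on them.

```python
# Hedged sketch of calling the Perspective comment-analysis API.
# Endpoint and JSON shape assumed from Google's public documentation.
import json
import urllib.request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key={key}")

def build_request(comment_text):
    """Build the JSON body asking for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_score(response):
    """Pull the overall 0..1 toxicity probability out of a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def analyze(comment_text, api_key):
    """POST the comment to the API and return its toxicity score."""
    req = urllib.request.Request(
        API_URL.format(key=api_key),
        data=json.dumps(build_request(comment_text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_score(json.load(resp))
```

A comment section could then hide or flag submissions whose score exceeds a threshold the publisher chooses, rather than closing comments entirely.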