Facebook will help prevent suicide with an AI upgrade

Facebook is finally expanding its tests of AI reporting tools to prevent suicide and self-harm at a larger scale. To detect at-risk users more precisely, the social network will begin applying pattern recognition to posts and Live videos to detect when someone is expressing suicidal thoughts.

Facebook’s AI-based suicide prevention efforts were previously publicized in a month-long awareness campaign back in September, which focused on spreading public awareness through ads displayed in the News Feed. But Facebook’s new “proactive detection” AI tool goes well beyond that. It will scan all posts for patterns of suicidal thoughts and, when necessary, send mental health resources to the at-risk user or their friends, or contact local first responders.
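Facebook has not published its detection model, but the general idea of scanning post text for risk patterns can be sketched in a deliberately simplified form. The snippet below is a toy keyword matcher, not the trained classifier Facebook actually uses; the phrase list and function names are invented for illustration:

```python
import re

# Toy illustration only: Facebook's production system is a proprietary
# trained classifier that also weighs comments and engagement signals.
# This sketch flags posts whose text matches a few hard-coded risk
# phrases, the crudest possible form of "pattern recognition".
RISK_PATTERNS = [
    re.compile(r"\b(want|wants|wanted) to die\b", re.IGNORECASE),
    re.compile(r"\bend it all\b", re.IGNORECASE),
    re.compile(r"\b(kill|hurt) myself\b", re.IGNORECASE),
]

def flag_post(text: str) -> bool:
    """Return True if the post matches any risk pattern and should be
    routed to a human moderator for review."""
    return any(p.search(text) for p in RISK_PATTERNS)
```

In practice a system like this would only surface candidates for human review; the matcher's output is a signal, not a diagnosis.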

Mark Zuckerberg, founder and chief executive of Facebook, announced yesterday that the social media platform has upgraded its artificial intelligence. He wrote on his Facebook timeline:

“Starting today we’re upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly.”

Facebook will also use AI to prioritize particularly risky or urgent user reports so moderators can address them more quickly, and to instantly surface local-language resources and first-responder contact information. Facebook is also dedicating more moderators to suicide prevention. The tech giant has partnered with 80 local organizations, such as Save.org, the National Suicide Prevention Lifeline, and Forefront, to provide resources to at-risk users. During the past month of testing, Facebook has initiated around 100 “wellness checks” in which first responders visited affected users.
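The report-prioritization step can be illustrated with a standard priority queue: the riskiest reports are popped first for moderator review. This is a hypothetical sketch; the `Report` and `TriageQueue` names and the risk scores are assumptions for illustration, not Facebook's actual internals:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    # Negated risk score so that heapq (a min-heap) pops the
    # highest-risk report first.
    priority: float
    post_id: str = field(compare=False)

class TriageQueue:
    """Surfaces the highest-risk user reports to moderators first."""

    def __init__(self) -> None:
        self._heap: list[Report] = []

    def add(self, post_id: str, risk_score: float) -> None:
        heapq.heappush(self._heap, Report(-risk_score, post_id))

    def next_report(self) -> str:
        return heapq.heappop(self._heap).post_id
```

With this ordering, a report scored 0.9 is reviewed before ones scored 0.5 or 0.2, regardless of arrival order.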