Facebook upgrades its AI to better prevent suicide

The social network said in a blog post that it’s expanding its use of AI to identify when someone is expressing thoughts of suicide. Using pattern recognition, the new AI can flag posts that suggest someone may be considering self-harm. It can then surface mental health resources to the user or their friends and, in urgent cases, immediately contact the appropriate authorities.

The AI is trained on posts about suicide that were previously reported manually, and it looks for images or words those posts had in common. It also looks for comments such as “Are you okay?” or “I need help.”
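The core idea of phrase-based pattern matching can be sketched as a toy filter. To be clear, this is not Facebook’s system: the phrase list, scoring, and function names below are invented purely for illustration, and a production model would use learned classifiers rather than a hand-written list.

```python
# Toy illustration of phrase-based pattern matching for flagging posts.
# The phrase list here is invented for demonstration; a real system would
# learn signals from previously reported posts rather than hard-code them.

RISK_PHRASES = {
    "i need help",
    "are you okay",
}

def flag_post(text: str) -> bool:
    """Return True if the text contains a known risk phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

posts = [
    "Had a great day at the beach!",
    "I need help, everything feels hopeless.",
]

# Keep only the posts that match a risk phrase, for human review.
flagged = [post for post in posts if flag_post(post)]
```

In practice, flagged posts like these would be escalated to trained human reviewers rather than acted on automatically.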

Facebook began testing the software in the U.S. in March, and it already appears to be working. The company claims pattern recognition has helped escalate its “most concerning reports,” so that authorities are contacted in half the time. In the past month, Facebook has worked with first responders on 100 wellness checks prompted by posts its detection tools surfaced.

“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” Guy Rosen, vice president of product management, wrote in the blog post. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

The new suicide-prevention AI techniques will go live worldwide starting today, except in the European Union, where privacy laws have placed restrictions on gathering and analyzing sensitive information about users.