After 10 years of suicide prevention work, Facebook has announced that its algorithms will detect suicidal patterns even before posts are reported by other people.

On Monday, Facebook CEO Mark Zuckerberg announced that the company will now use artificial intelligence to detect signs of suicidal intent in posts across the social media platform in order to aid suicide prevention.

The initiative is said to be part of the company’s efforts to address the growing number of suicide incidents worldwide and get people the help they need. The company said it would be upgrading its AI tools to determine whether posts, videos, or Facebook Live streams contain expressions of suicidal thoughts that might lead to self-harm.

“If we can use AI to help people be there for their family and friends, that’s an important and positive step forward.”

-Mark Zuckerberg

“Starting today we’re upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly,” Zuckerberg shared on his personal FB page.

Despite some people’s distrust of artificial intelligence and fears about how it might shape the future, Zuckerberg remains positive about the technology’s useful applications, noting that “it’s good to remind ourselves how AI is actually helping save people’s lives today.”

In the United States alone, suicide is the 10th leading cause of death, according to data released by the American Foundation for Suicide Prevention (AFSP). More than 44,000 Americans die by suicide each year, and for each death, there is an average of 25 attempts. Overall, suicide costs the U.S. $51 billion annually.

Source: Suicide Rates in the United States, American Foundation for Suicide Prevention (https://afsp.org)

Using AI Tech to Detect Early Signs of Suicidal Tendencies

Facebook’s AI-based suicide prevention initiative was first tested on text-only posts in the United States in March of this year. With the latest upgrade, the company’s efforts will be rolled out to the rest of the world, with the exception of the European Union, where data privacy laws are said to restrict such scanning.

In a blog post shared on the company’s newsroom page, Guy Rosen, Facebook’s Vice President of Product Management, explained how the company will detect suicidal thoughts posted on the platform and help people with suicidal tendencies. The effort includes:

- Using pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster

- Improving how appropriate first responders are identified

- Dedicating more reviewers from the Community Operations team to review reports of suicide or self-harm

Notably, the company is working on getting immediate help to people who post suicide-related content, even before close friends or family members report the posts. According to Rosen, Facebook has dedicated teams working around the clock to review incoming reports and prioritize the most serious ones. The platform also offers a number of support options, such as reaching out to a friend or contacting a helpline.

Rosen further explained that their new AI-based suicide detection effort was developed in collaboration with different mental health organizations.

“Facebook has been working on suicide prevention tools for more than 10 years. Our approach was developed in collaboration with mental health organizations such as Save.org, National Suicide Prevention Lifeline, and Forefront Suicide Prevention and with input from people who have had personal experience thinking about or attempting suicide,” he said.

While the intention behind Facebook’s latest AI effort is undoubtedly good, many have expressed concerns about the company’s proactive scanning of people’s content, fearing how else the technology could be used. Facebook’s Chief Security Officer, Alex Stamos, addressed the issue in a tweet:

“The creepy/scary/malicious use of AI will be a risk forever, which is why it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in. Also, Guy Rosen and team are amazing, great opportunity for ML engs to have impact.”


Do you support Facebook’s latest initiative to use artificial intelligence in proactively scanning suicidal thoughts posted on the social media platform?


Rechelle Ann Fuertes

Rechelle is the current Managing Editor of Edgy. She's an experienced SEO content writer, researcher, social media manager, and visual artist. She enjoys traveling and spending time anywhere near the sea with her family and friends.