
MENLO PARK, Calif. —

Facebook hopes to prevent suicide among its users by rolling out an artificial intelligence tool that searches for gestures of concern displayed by friends, "like comments asking if someone is okay."

Once content is flagged and reviewed, a team reaches out to the at-risk user, along with his or her friends, through Facebook Messenger. The social media site sends helpful information, including a link to the National Suicide Prevention Lifeline, to the user's inbox.



"With all the fear about how AI may be harmful in the future, it's good to remind ourselves how AI is actually helping save people's lives today," Facebook CEO Mark Zuckerberg wrote in a post on Monday. "In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate."

But the feature raises concerns, especially coming from a corporation with a less-than-stellar record on digital privacy.

Earlier this year, the company was accused of targeting teens who felt "worthless" and "insecure" through customized advertising campaigns, according to The Australian. The teens were reportedly as young as 14, and the social media network identified their emotional states by monitoring photos, posts and online activity.

In a 23-page report marked "Confidential: Internal Only," Facebook assessed when young people felt "overwhelmed," "nervous," "stupid," "useless," "defeated" and more. It then passed the data on to third-party marketing companies.

The social media network also reportedly tracked users' locations in 2016 to make friend suggestions.

In 2014, Facebook admitted it had altered the news feeds of more than half a million randomly chosen users in an experiment that analyzed how emotions spread online. After receiving backlash, the company claimed its actions were covered by its terms of service.

Facebook currently has about 2.07 billion users, roughly a quarter of the world's population.