Facebook, Twitter, YouTube pressed by US over terror content

Internet giants face grilling by Senate panel over efforts to prevent propaganda being spread on social media

(L-R) Monika Bickert, Facebook's Head of Global Policy Management; Juniper Downs, YouTube's Global Head of Public Policy and Government Relations; and Carlos Monje, Twitter's Director of North America Public Policy and Philanthropy, at a hearing of the Senate Commerce, Science, and Transportation Committee on Capitol Hill, January 17, 2018 in Washington, DC. (AFP Photo/Brendan Smialowski)

WASHINGTON, United States — Terrorists and hate groups continue to get their propaganda onto social media platforms despite efforts by Facebook, Twitter and YouTube to shut them down, a US Senate panel heard Wednesday.

Islamic State, al-Qaeda, and others have stepped up their use of bots and other methods to fight the artificial intelligence and algorithms the social media giants deploy to screen them out.

In addition, they are now turning to smaller platforms and messaging apps with encryption and less ability to police users, such as Telegram, Reddit and WhatsApp, though none yet offers the broad reach that Facebook and YouTube have had.


Nevertheless, the largest social media firms were pressed in a Senate Commerce Committee hearing Wednesday over their reliance on artificial intelligence and algorithms to keep their powerful platforms clear of violent extremist posts.

A key concern is the continued availability of anonymous accounts, which, while benefiting pro-democracy activists battling repressive governments, also continue to empower extremists.

“These platforms have created a new and stunningly effective way for nefarious actors to attack and to harm,” said Senator Bill Nelson, who added that the social media giants' crackdown efforts so far are “not enough.”

‘Cat-and-mouse game’

Facebook’s head of Product Policy and Counterterrorism, Monika Bickert, said that 99 percent of Islamic State and al-Qaeda-related terror content “is detected and removed before anyone in our community reports it, and in some cases, before it goes live on the site.”

Monika Bickert, Facebook’s Head of Global Policy Management, speaks during a hearing of the Senate Commerce, Science, and Transportation committee on Capitol Hill, January 17, 2018 in Washington, DC. (AFP Photo/Brendan Smialowski)

Senator John Thune, chairman of the Commerce Committee, countered by asking why a video showing how to build a bomb, which was used by the man who attacked the Manchester Arena in May 2017, has been repeatedly re-uploaded to YouTube each time the company deletes it, as recently as this month.

“We are catching re-uploads of this video quickly and removing it as soon as those uploads are detected,” said Juniper Downs, YouTube’s global head of public policy.

Carlos Monje, director of Public Policy and Philanthropy for Twitter, said that even with all their efforts to fight terror- and hate-related content, “It is a cat-and-mouse game and we are constantly evolving to face the challenge.”

Clint Watts, an expert at the Foreign Policy Research Institute on the use of the internet by terror groups, testified that the social media companies' efforts have been quite successful, but are still missing significant amounts of unwanted postings.

“Social media companies continue to get beat in part because they rely too heavily on technologists and technical detection to catch bad actors,” said Watts.

“Artificial intelligence and machine learning will greatly assist in cleaning up nefarious activity, but will for the near future fail to detect that which hasn’t been seen before.”

Executives from Twitter, Facebook and YouTube speak at a hearing of the Senate Commerce, Science, and Transportation committee on Capitol Hill on January 17, 2018 in Washington, DC. (AFP Photo/Brendan Smialowski)

Last year Google, Facebook, Twitter and Microsoft banded together to share information on groups and posts related to violent extremism, and to share techniques on keeping it off their sites.

All are struggling with the problem of anonymous accounts or accounts with fake owners. Watts called that the “most pressing challenge.”

“Anonymity of social media accounts has in many cases allowed the oppressed and the downtrodden to speak out about injustice,” Watts said.

“But over time, anonymity has empowered hackers, extremists and authoritarians to inflict harm on the public.”

Four million malicious accounts

Monje illustrated the problem: Twitter, he said, believes that less than five percent of its 300 million accounts are fake.

But, he said, “They keep coming back… We are now challenging four million malicious automated accounts a week.”

Watts said extremist groups have nevertheless been frustrated by the effort to censor them on social media and are actively searching for new outlets.

“They are looking for a place where they can communicate and organize. They have to be able to push their propaganda globally in order to recruit,” he told the Senate panel.
