
Russian Social Media Trolling Hits High Gear After Parkland Shootings

NEWS ANALYSIS: The Russian Internet Research Agency is never really quiet, but with the Florida high school mass murder, the disinformation trolls may have given us a look at their plans for the 2018 mid-term elections.

Within minutes of the first reports that a shooter had killed 17 students and staff at a Florida high school, social media came alive with a tsunami of activity. Most of the messages, of course, were expressions of shock, grief and outrage spreading through online communities in the U.S. and abroad. But much of the traffic also consisted of carefully worded postings designed to inflame opinions on every side of the debate over the tragedy.

Hashtags covering all sides of the gun control debate were echoed thousands of times by bot networks, while known Russian propaganda outlets, including Russia Today and Sputnik, provided additional fodder that was widely repeated on social media.

Then the Russians, sometimes operating in the open, set about creating material for conspiracy theorists, providing fake news stories and videos alleging that the shootings never took place or, in some cases, that they did take place but were carried out by the government. The related hashtag #falseflag trended for several days.

What was happening was that the Russian Internet Research Agency, along with similar organizations in other countries, was seizing on bizarre theories already circulating on social media and amplifying them through fake accounts, bots and automated postings that could make hundreds of accounts say the same thing at the same time. Meanwhile, video of participants in the events was coupled with misleading headlines and presented as “evidence.”

But while the Russian trolls and their collaborators were hard at work after the shooting, as they are after many significant events (including the indictment of Russian trolls handed down last week), this time the social media activity seemed to die out quickly, and many of the discussions quietly vanished.

It turns out that the social media companies were having a practice run of their own. Last year, Twitter announced its approach to bots and misinformation, saying that it would begin building new tools to detect bot activity and to remove misinformation. A few days after the shootings, Twitter carried out a bot purge, removing roughly 50,000 accounts that exhibited bot-like behavior.

Then today, Twitter announced its policy on automation and the use of multiple accounts, which forbids simultaneous Tweeting of identical content from multiple accounts, along with a list of other related activities that were used by the Russian trolls and others during the 2016 election, as indicated in the Department of Justice Special Counsel’s commentary explaining the Russian indictments.
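To illustrate why simultaneous identical posting is such a telling signal, here is a minimal sketch of how coordinated amplification can be flagged: group posts by identical text within a short time window and flag clusters shared by many distinct accounts. This is a toy example, not Twitter’s actual detection system; the function name, parameters and thresholds are all illustrative assumptions.

```python
from collections import defaultdict

def flag_coordinated_posts(posts, window_seconds=60, min_accounts=5):
    """Toy coordinated-amplification detector (illustrative only).

    posts: iterable of (account, text, unix_timestamp) tuples.
    Returns a list of (text, accounts) clusters where at least
    `min_accounts` distinct accounts posted identical text within
    the same `window_seconds` time bucket.
    """
    buckets = defaultdict(set)  # (text, time bucket) -> set of accounts
    for account, text, timestamp in posts:
        bucket = timestamp // window_seconds
        buckets[(text, bucket)].add(account)
    # Keep only clusters large enough to look like a bot network
    return [(text, sorted(accounts))
            for (text, bucket), accounts in buckets.items()
            if len(accounts) >= min_accounts]

# Example: six accounts posting the same hashtag in the same minute
# get flagged; a lone organic post does not.
posts = [(f"bot{i}", "#falseflag share this", 100) for i in range(6)]
posts.append(("user1", "thoughts with the victims", 100))
flagged = flag_coordinated_posts(posts)
```

A real system would also weigh account age, posting cadence and network structure, but the core idea of clustering identical content in time is the same.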

Facebook, meanwhile, was taking actions of its own. “Images that attack the victims of last week's tragedy in Florida are abhorrent,” said Mary deBree, Facebook’s head of content policy in an email. “We are removing this content from Facebook.”

Indeed, the fake news and conspiracy theory content that I’d spotted a few days ago on Facebook was gone.

According to a Facebook spokesperson, the social network is in the process of rolling out a feature that will let users report fake news. While the feature is still being implemented, you can find out how to use it by searching for “reporting fake news” in the help files.

But whether this goes far enough is open to question. The Washington Post is reporting that YouTube’s top trending story is a conspiracy-theory video charging one of the Parkland survivors with being a “crisis actor,” which supposedly proves that the shooting never happened. The student has since tried to refute that allegation.

Events such as this cause lawmakers to question whether the social media companies are doing as much as they should. “Social media disinformation remains a grave threat, and with reports that conspiracy theories and disinformation related to the Parkland shooting continue to proliferate widely on Twitter, Facebook, and YouTube, I remain concerned that the platforms continue to fall short of taking this threat seriously,” said U.S. Senator Mark Warner (D-VA), vice chairman of the Senate Intelligence Committee, in an email to eWEEK.

Twitter, for its part, says it’s trying to solve this problem. “We are actively working on reports of targeted abuse and harassment of a number of survivors of the tragic mass shooting in Parkland, Florida,” a Twitter spokesperson told eWEEK in an email. “Such behavior goes against everything we stand for at Twitter, and we are taking action on any content that violates our terms of service.”

“We are also using our anti-spam and anti-abuse tools to weed out malicious automation around these individuals and the topics they are raising,” the spokesperson explained. “We have also verified a number of survivors’ Twitter accounts.”

Unfortunately, the battle over social media is poised to escalate with the ability to produce fake news videos. A team of scientists and engineers at the University of Washington has demonstrated that, given a sufficient supply of video of a person, it can create photo-realistic footage of that person saying anything the creators want. For their tests they used former President Barack Obama, but anyone’s image can be manipulated with artificial intelligence to create a fake video.

A spokesperson said that an AI application was able to analyze Obama’s speech patterns from 14 hours of video. The spokesperson said that the same AI technology can also detect fake video. “But even when fake news gets debunked down the road, it can still have profound and damaging effects; what will happen if—but more probably, when—someone makes a fake video of President Donald Trump declaring war on North Korea?” the spokesperson wondered in an email.

If there’s anything positive about the social media manipulation and related fake news that’s surfaced following the tragedy in Parkland, it’s that we had some idea of what to expect.

While there are still many who will gladly consume fake postings and misinformation because it agrees with their views, most people won’t. They now know what fake news and misinformation look like, and they can do something about it. The challenge now is to be ready for the next round of fake news and social media manipulation, which will be even more convincing.