Demonstrators and counter-protestors gather at the so-called "Stop Islamization of Texas" rally, an event organized on Facebook by a Russian-linked page. Image source: CNN.com.

The role of social media has become central to the U.S. Congress's investigation into whether (and to what extent) a foreign entity interfered in the 2016 presidential election. On October 31 and November 1, 2017, representatives from Facebook, Twitter, and Google testified before the Senate Intelligence Committee as members of Congress attempted to understand the role of these social media networks in shaping political discourse in the country.

Wired gained access to some of the data Twitter disclosed to the Senate Intelligence Committee and subsequently published a report offering a small snapshot of the role of bots (programs built to perform automated tasks) in information sharing on online platforms. Most notably, the report revealed that the account behind a viral tweet picturing a Muslim woman in the UK following the Westminster attack in March 2017 was a Russian-based Twitter account. The Wired report reveals that “a network of accounts” posted “anti-immigration and racist tweets in an attempt to disrupt politics in the UK and Europe.” Within this small data snapshot, Twitter confirmed 29 accounts as Russian-backed; collectively they were followed by 268,643 people, and some of their tweets were retweeted hundreds of times. The content of the tweets touched on topics related to the U.S. presidential election and on wider issues around Brexit and European politics, Wired reports. As part of the ongoing congressional investigation, Facebook also revealed “that 126 million of its users may have seen content produced and circulated” by “Russian-controlled accounts and pages,” many of which posted explicitly anti-immigrant and anti-Muslim content.

Islam and Muslims were central topics of discussion during the 2016 U.S. presidential election. As The Bridge Initiative has previously reported, the 2016 presidential campaign season involved candidates playing into conspiracy theories about the influence of Shari’a, calling for bans to prevent Muslims from entering the country, and endorsing religious profiling of Muslims. Such political rhetoric played on already existing and growing anti-Muslim attitudes in the country and contributed to the manufacturing of hysteria around Islam and Muslims as an existential threat to America.

Discriminatory rhetoric and policies against Muslims coincided with the largest number of anti-Muslim assaults ever reported to the FBI, as attacks in 2016 surpassed even the number recorded in the aftermath of 9/11. According to the FBI, there were 127 anti-Muslim assaults reported in the U.S. last year. The actual numbers are likely much higher, given that hate crimes are widely underreported and that no legal provisions require law enforcement agencies to report hate crimes to the FBI.

Source: Pew Research Center; Federal Bureau of Investigation (FBI)

Congress continues to investigate the role played by outside foreign actors in potentially influencing voter opinion. A recent analysis by the UK-based anti-racist organization Hope Not Hate found that anti-Muslim voices online are “using twitter bots, fake news, and the manipulation of images to influence political discourse.” In one case, the organization found that tweets from anti-Muslim activist Pamela Geller were amplified by 102 bots that automatically tweeted or retweeted her content. Researchers also found that “terror attacks in the U.K. have been exploited by anti-Muslim activists over social media,” with a number of anti-Muslim voices acquiring a “significant number of followers in their aftermath.” Such voices include the former leader of the English Defence League (EDL), Tommy Robinson, who increased his Twitter following by 17% in the hours and days following the Manchester attack. Further, the Oxford Internet Institute found that bots “flourished during the 2016 presidential election,” and that they “significantly impact [on] public life during important policy debates, elections, and political crises.”

Below, we highlight some of the revelations from the ongoing investigations into information sharing on social media platforms. These incidents are believed to be part of a wider effort coordinated by “Russian-based” networks to “spread racial hatred in an attempt to disrupt politics” in the U.S., U.K., and Europe. However, it is not only “Russian-based” networks that spread falsified information; anti-Muslim voices have existed independently online for decades.

@SouthLoneStar

Source: metro.co.uk

In the aftermath of the deadly Westminster attack on March 22, 2017, the Twitter account @SouthLoneStar tweeted an image of a Muslim woman wearing hijab walking across the bridge as bystanders surrounded an injured person. The account claimed that the woman “casually walks” by. Upon closer inspection of the image, however, the woman is visibly distraught. The tweet nonetheless went viral, retweeted hundreds of times in right-wing sectors of the internet as anti-Muslim blogs circulated the image to support their claims that Muslims support terrorism. The young woman in the picture responded a few days later, speaking of her “horror and distress at the incident and the abuse she suffered afterwards,” as her image was “plastered all over” by individuals “who draw conclusions based on hate and xenophobia.”

@TEN_GOP

Source: BBC

Another incident of misinformation following the London Westminster attack involved the tweeting of a screen grab from a live broadcast on Al-Jazeera’s Facebook page. The image was tweeted by the @TEN_GOP account, which had over 136,000 followers, with a caption suggesting that “moderate Muslims” were laughing at the attack. The BBC quickly debunked the fake image, but Twitter did not suspend the account until August. The congressional investigation revealed that @TEN_GOP was a “Kremlin-linked bot.” The impact of such accounts cannot be overstated: @TEN_GOP was even retweeted by the former digital director of the Trump campaign, Brad Parscale.

Heart of Texas

Source: CNN.com

As part of the congressional investigation, Facebook handed over 470 accounts and pages to the Senate Committee. This documentation revealed at least one coordinated effort to turn anti-Muslim sentiment into action: a rally on May 21, 2016, entitled “Stop Islamization of Texas.” CNN reported that “a handful of people turned out to protest the opening of a library at an Islamic center in Houston, Texas.” The rally was organized by Heart of Texas, whose Facebook page had over 225,000 likes before Facebook shut it down in September 2017 as part of the platform’s “takedown of accounts and pages” that were “likely operated out of Russia.” While a group of individuals did show up for the protest, including two who held up a banner proclaiming #WhiteLivesMatter, no one from Heart of Texas appeared. The event page for the rally featured anti-Muslim rhetoric, including one comment that read: “Need to blow this place up. We don’t need this shit in Texas.” This led the local Council on American-Islamic Relations chapter to call on the FBI to investigate the threat. Another event organized by the page called for the secession of Texas and claimed that a “Killary Rotten Clinton” victory would lead to an influx of “refugees, mosques, and terrorist attacks.” The origins of the Heart of Texas Facebook page were revealed following an investigation into ads generated by the Internet Research Agency, a suspected “troll farm” located in St. Petersburg, Russia. Adrian Chen of The New Yorker describes troll farms as “outfits that operate armies of sock-puppet social-media accounts.”

Secured Borders

Source: New York Times

The Daily Beast reported that a similar anti-Muslim and anti-immigrant protest was organized in Idaho in August 2016. The event page, entitled “Citizens before refugees,” was hosted by “Secured Borders” and stated, “Due to the town of Twin Falls, Idaho, becoming a center of refugee resettlement, which led to the huge upsurge of violence towards American citizens, it is crucial to draw society’s attention to this problem.” The announcement was filled with xenophobic sentiment, as the organizers called for the banning of “Muslim refugees” and “demand[ed] open and thorough investigation of all the cases regarding Muslim refugees!” Secured Borders had over 130,000 likes on Facebook before the platform shut it down in August 2017, when Facebook discovered that the page was also the work of the Russia-based Internet Research Agency.

This manufactured hysteria is part of a growing global trend facilitated by the rise of social media. It is important to note that anti-Muslim sentiment was not created by foreign-controlled online bots. Such discriminatory and prejudicial attitudes existed long before the advent of social media, reflected structurally in our immigration and criminal justice policies. The othering of Muslims has become a key tactic in our politics, as national, state, and local election campaigns have consistently employed anti-Muslim rhetoric and policies in hopes of attracting more votes.

This was the socio-political climate in the United States in 2016: anti-Muslim rhetoric was prevalent in our politics and mainstream media, and assaults against Muslims reached new levels. Studies showed that attacks carried out by Muslims received far greater attention than those committed by non-Muslims, reinforcing the false perception that Muslims are exclusively responsible for terrorism. Political rhetoric and disproportionate media framing played a role in fomenting division and hatred against Muslims among the American public. Automated bots and “trolls,” then, are not necessarily creating new messaging; rather, they are mimicking what is already being shared, “stacking more wood atop an existing bonfire of partisanship and social division,” and, as BuzzFeed News notes, this is not limited to pro-Trump content. In short, what we are witnessing from “troll” accounts is the amplification of content representative of existing attitudes and sentiments.

Social media has broken the monopoly that conventional media held on news and information sharing. Any individual with access to the internet and social media can publish stories, whether factual or not, with the potential for them to go viral. With sufficient financial resources, such information can be weaponized to influence individuals’ perceptions and deepen divisions. For instance, the data-mining company Cambridge Analytica, principally owned by billionaire Trump supporter Robert Mercer, was hired by the Trump campaign during the 2016 presidential race. The company, which has been likened to a “propaganda machine” by a communications professor at Elon University in North Carolina, ran coordinated micro-targeting campaigns directing its messaging to specific demographics.

As Congress continues to investigate the role of foreign entities in influencing the 2016 U.S. presidential election, it is important to call attention to growing anti-Muslim rhetoric in our day-to-day lives, not just on social media platforms. Discriminatory, abusive, and violent rhetoric targeting Muslims, immigrants, and People of Color not only has a large presence online but is also thriving in our politics and mainstream media. On social media, those with great influence are able to spread misinformation at alarming rates. Anti-Muslim activists online have profited from the attention gained as posts go viral: their follower counts drastically increase, and TV networks respond by giving these voices yet another platform. Evidence shows that bot accounts amplify divisive and offensive content that reflects existing sentiment, demonstrating that Islamophobia is not an imaginary issue but a real problem in our societies. It continues to manifest online largely unchecked, as social media platforms have defended their decisions not to remove such content because they “believe there is a legitimate public interest in its availability.”

In addition to thriving online, such discriminatory and dangerous views are present in our political discourse, are manifest in policies such as travel bans targeting Muslim-majority countries, and are translated into physical violence as anti-Muslim assaults continue to reach record numbers. As terrorism expert Dr. Marc Sageman has stated, rhetoric is not “cost-free”; violent speech has real consequences. As societies become more digitized, violent and discriminatory speech is amplified online through bots and “troll farms.” Anti-Muslim sentiment existed in America long before the rise of social media, so any effort to combat such attitudes online must also address the very real feelings that exist at a societal level. What we see and experience online is a reflection of the present state of our society.