Social Media Promotes Tribalism in Pakistan

Social media newsfeeds are customized for each user, driven by profiles that reinforce the user's preferences and prejudices; posts that don't fit these profiles are simply not displayed. The result is increasing tribalism around the world. American and British intelligence agencies claim that Russian intelligence has used social media to promote divisions and manipulate public opinion in the West. Like the US and the UK, Pakistan has ethnic, sectarian and regional fault-lines that make it vulnerable to similar social media manipulation. It is very likely that intelligence agencies of countries hostile to Pakistan are exploiting these divisions for their own ends. Various pronouncements by India's current and former intelligence and security officials reinforce this suspicion.
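The filtering dynamic described above can be sketched in a few lines of Python. This is a purely illustrative toy (the post topics, interest profile, and scoring rule are all invented for the example; real platforms use far more elaborate engagement models), but it shows how scoring posts by overlap with a user's profile and hiding everything else mechanically narrows what the user sees:

```python
# Toy sketch of a profile-driven newsfeed: posts are scored by overlap
# with the user's interest profile, and low-scoring (unfamiliar or
# opposing) posts are simply never shown.

def rank_feed(posts, profile, threshold=1):
    """Order posts by overlap with the user's interests,
    dropping anything below the threshold entirely."""
    def score(post):
        return len(set(post["topics"]) & set(profile["interests"]))
    visible = [p for p in posts if score(p) >= threshold]
    return sorted(visible, key=score, reverse=True)

posts = [
    {"id": 1, "topics": ["cricket", "politics"]},
    {"id": 2, "topics": ["sectarian", "outrage"]},
    {"id": 3, "topics": ["science"]},
]
profile = {"interests": ["politics", "outrage", "cricket"]}

feed = rank_feed(posts, profile)
# Post 3 ("science") never reaches the user: the bubble only tightens.
```

Because the profile itself is updated from what the user engages with, each pass through a filter like this narrows the next one, which is the feedback loop behind the "tribalism" the article describes.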

Tribalism:

All humans are born with tribal instincts. People embrace group identities based on birthplace, language, region, sect, religion, nation, school, sports team, etc. to define themselves.

Such group affiliations can give people a sense of belonging, but they are sometimes also used to exclude others with the purpose of promoting hostility and violence. Social media platforms are being used both ways: to unite and to divide people.

Social media platforms like Facebook and Twitter are powerful magnets for marketers, extremist groups and intelligence agencies, which spend a lot of time and money on these platforms to reach and manipulate their targets.

Trolls and bots proliferate and societies become more deeply divided along political, ethnic, racial, religious, ideological and regional lines. It is a problem that all nations in the world have to respond to.

Developed nations in Europe and North America with stronger institutions are generally more capable of dealing with the consequences of such divisions. But increasing social media penetration in developing nations with weak institutions sometimes causes them to descend into violent riots. In a recent piece titled "Where Countries Are Tinderboxes and Facebook Is a Match", The New York Times mentioned recent examples of riots and lynchings caused by social media posts in India, Indonesia, Mexico, Myanmar and Sri Lanka.

Brexit and Trump:

The unexpected result of Brexit, the British vote to leave the European Union, shocked many in the UK and Europe. It was soon followed by an even bigger shock: the unexpected election of Donald J. Trump as President of the United States. Western intelligence agencies have now concluded that trolls sponsored by Russian intelligence played a major role in manipulating public opinion in the United Kingdom and the United States.

In February 2018, the US justice department indicted 13 Russians and three Russian entities in an alleged conspiracy to defraud the United States, including by tampering with the 2016 presidential election on behalf of Donald Trump and against Hillary Clinton, according to media reports.

The US DOJ indictment identified the Internet Research Agency, a St Petersburg-based group to which millions of impostor social media accounts have been traced, as a primary offender. The indictment also charged Russian individuals who funded the alleged election tampering conspiracy or who otherwise participated in it.

Some of the Russian social media posts were used to organize protests and counter protests in the United States on issues relating to race and religion.

US Senator Richard Burr confirmed that two groups converged outside the Islamic Da'wah Center of Houston in 2016, the Texas Tribune reported. One had gathered at the behest of the "Heart of Texas" Facebook group for a "Stop Islamification of Texas" rally, while the other, spurred on by the "United Muslims of America" Facebook page, had organized a counter-protest to "Save Islamic Knowledge."

A Russian-sponsored Facebook ad appeared in late 2015 or early 2016, sources told CNN, and though it was meant to appear supportive of the Black Lives Matter movement, it may also have portrayed the group as threatening to some white residents of those cities.

Indian Trolls:

It can be safely assumed that the Russians are not alone in using social media against nations they see as hostile. It is also a safe bet that Indian intelligence agencies are deploying their troll farms and bots to divide Pakistanis.

India's ruling BJP party has extensively used social media apps to spread rumors, innuendo, fake news, outright lies and various forms of disinformation against anyone seen to be even mildly critical of their leader Narendra Modi. Their harshest abuse has been targeted at the Opposition Congress party leaders, various liberal individuals and groups, Muslims and Pakistanis.

Swati Chaturvedi, author of I Am a Troll, has cited many instances of hateful tweets from Modi-loving Hindu trolls, including singer Abhijeet's lies to generate hatred against Muslims and Pakistan and BJP MP Hukum Singh's false claim of a "Hindu exodus" from Kairana in western Uttar Pradesh, blaming it on Muslims.

Vikram Sood, a former top spy in India, has elaborated on India's covert warfare options to target Pakistan in the following words: "The media is a favorite instrument, provided it is not left to the bureaucrats because then we will end up with some clumsy and implausible propaganda effort. More than the electronic and print media, it is now the internet and YouTube that can be the next-generation weapons of psychological war. Terrorists use these liberally and so should those required to counter terrorism."

In a 2013 speech at Sastra University, Indian Prime Minister Modi's National Security Advisor Ajit Doval revealed his covert war strategy against Pakistan as follows: "How do you tackle Pakistan?.....We start working on Pakistan's vulnerabilities-- economic, internal security, political, isolating them internationally, it can be anything..... it can be defeating Pakistan's policies in Afghanistan...... You stop the terrorists by denying them weapons, funds and manpower. Deny them funds by countering with one-and-a-half times more funding. If they have 1200 crores give them 1800 crores and they are on our side...who are the Taliban fighting for? It's because they haven't got jobs or someone has misled them. The Taliban are mercenaries. So go for more of the covert thing (against Pakistan)..."

Summary:

Social media newsfeeds are customized for each user, driven by profiles that reinforce the user's preferences and prejudices; posts that don't fit these profiles are simply not displayed. The result is increasing tribalism around the world. American and British intelligence agencies claim that Russian intelligence has used social media to manipulate public opinion in the West. Like the US and the UK, Pakistan has ethnic, sectarian and regional fault-lines that make it vulnerable to similar social media manipulation. It is very likely that intelligence agencies of countries hostile to Pakistan are exploiting these divisions for their own ends. Various pronouncements by India's current and former intelligence and security officials reinforce this suspicion.

While testifying before a joint hearing of the U.S. Senate's Commerce and Judiciary committees, Facebook CEO Mark Zuckerberg said his company was introducing new artificial intelligence tools to target fake accounts.

However, digital analysts and rights activists warn that while these actions would help protect data going forward, Facebook can't do much to undo the damage that may already have been done by past data leaks.

“There is no way of undoing a particular case of data theft. Short of deleting or destroying the database, no other action would be useful, and it’s nearly impossible since as they say ‘the data has left the building’,” says Asad Baig, the founder and executive director of Media Matters for Democracy, while speaking with The Diplomat.

“The fact of the matter is, [Cambridge Analytica] has Facebook user data, including the users from Pakistan and if someone wants to exploit it for profiling, and use it for political gains to fine-tune their messages for a local public nothing much can be done about it, and the parties who exploit this data will have an undue advantage in their political campaigns.”

CEO and founder of Digital Rights Foundation, Nighat Dad, agrees that previous damage can’t be undone, but adds that Facebook needs to completely rethink its model to serve users.

“What Facebook can certainly do is to ensure that it takes strict measures to protect the data of its users in the future. This can only be done by strong privacy policies and their implementation that serve the users instead of the corporation itself,” she told The Diplomat.

While fake news has impacted voting patterns the world over, it has become especially problematic in Pakistan with all leading political parties asking their social media teams to create fake profiles as part of their social media strategy.

Talking to The Diplomat off the record, social media managers from the ruling Pakistan Muslim League-Nawaz (PML-N), and the two main opposition parties Pakistan Tehrik-e-Insaf (PTI) and the Pakistan People’s Party (PPP), confirmed that creation of fake Facebook and Twitter accounts to propagate their narratives was the official policy of each party.

“Everyone’s running fake Facebook accounts and Twitter bots, so we’re just keeping pace with what others are doing,” a social media executive of the PML-N who requested anonymity told The Diplomat. “It was the PTI that started this trend. So we’re just countering propaganda with propaganda,” they added, citing the fact that one of the rumours that the PML-N social media team has had to counter in recent weeks was the false report that the party has hired Cambridge Analytica’s services for the upcoming elections.

Kaleem Hafeez, a member of the PTI social media team, told The Diplomat that his party isn’t ruling out the possibility of the PML-N purchasing data to manipulate elections, considering the party’s control over the IT ministry.

“Our data analysts are monitoring what other parties are doing, and the undemocratic tools and methods being used to rig elections digitally,” Hafeez said. “Considering that the PML-N was involved in heavy on-field rigging in the 2013 balloting, it won’t be a surprise if they do the same digitally as well.”

Digital analysts are also critical of what they dub the IT ministry’s failure to protect users’ data in Pakistan.

“The IT ministry should have… as promised, enacted the data protection law alongside the Prevention of Electronic Crimes Act in 2016. A law which was much more predatory in nature [PECA] was given priority whereas a law that stands to provide protection to citizens’ data was delayed,” says Asad Baig.

“Now it’s too late. Only in the next term can we see something happening about. Meanwhile, if someone for instance now, chooses to exploit local data sources, they can do so with impunity.”

Critics say the greatest failure of the IT ministry is that there is no data protection law in Pakistan.

“The first and foremost thing that the ministry of IT [should] be doing at the moment is to introduce the data protection legislation that has been long overdue,” says Nighat Dad. “If the said law turned out to be a good one, it will ensure that people’s data is not being misused on or by the social media platforms.”

However, Ali Warsi, associate digital editor at the Daily Times, believes that the sheer volume of fake accounts being run from the country makes it impossible for anyone to control the spread of fake news.

“How exactly would [Mark Zuckerberg] track the fake accounts? This is too difficult to actually implement,” he said while talking to The Diplomat.

“What I have observed interacting with [social media executives] over the years is that [the parties] are good at making fake accounts. But that’s all they can do actually as far as the skill set is concerned. Disciplined use of data is something far beyond their capacity,” Warsi adds.

Even so, Warsi doesn’t believe any online data theft would have a huge impact on the Pakistani elections.

“[Pakistan has just] 17 percent internet penetration. The political parties wouldn’t really invest in online ads or Google adwords, and that too with such precision that they’d use Cambridge Analytica’s data,” he maintains.

Meanwhile, activists underscore that in Pakistan the spread of fake news and misinformation is a much graver problem than the impact it might have on polling.

In a piece titled "Russian news may be biased – but so is much western media", Piers Robinson argues that manipulation of the news for propaganda purposes is not the prerogative of the west's enemies, and that it is vital to look at all media, including the UK's, with a critical eye.

Whatever the accuracy, or lack thereof, of RT and whatever its actual impact on western audiences, one of the problems with these kinds of arguments is that they fall straight into the trap of presenting media that are aligned with official adversaries as inherently propagandistic and deceitful, while the output of “our” media is presumed to be objective and truthful. Moreover, the impression given is that our governments engage in truthful “public relations”, “strategic communication” and “public diplomacy” while the Russians lie through “propaganda”.

Neither of these claims has significant academic support. A substantial body of research conducted over many decades highlights the proximity between western news media and their respective governments, especially in the realm of foreign affairs. For reasons that include overreliance on government officials as news sources, economic constraints, the imperatives of big business and good old-fashioned patriotism, mainstream western media frequently fail to meet democratic expectations regarding independence. A University of Manchester study of UK media coverage of the 2003 Iraq invasion found that most UK mainstream media reinforced official views rather than challenged them.

As for the supposedly benign communication activities of our own governments – again, there are ample grounds to challenge the understanding that the “strategic communication” activities of our governments can be understood as free from the kind of manipulative “propaganda” of which the Russian government is accused. Indeed western governments frequently engage in strategies of manipulation through deception involving exaggeration, omission and misdirection. This was recently observed quite clearly during the run-up to the Iraq war when intelligence was manipulated in order to mobilise public support for the Iraq invasion.

Moreover, the recent Chilcot report describes how, in the early days after 9/11 “regime-change hawks” in Washington argued that “a coalition put together for one purpose (against international terrorism) could be used to clear up other problems in the region”. Tony Blair had discussed how phases 1 and 2 of the “war on terror” would require a “dedicated tightly knit propaganda unit”.

One might reasonably conclude from all this evidence that the western public fell foul of a major deceptive propaganda campaign which involved exploiting terrorism threats in order to "clear up other problems", and which was instigated by our own governments and communicated through "our" media. Propaganda and deception are not, it would appear, the sole preserve of non-western states; they are alive and well in western democracies.

These are confusing times for consumers of the news, and the issue of which media outlets should be trusted is as demanding and critical as ever. Given the level of conflict and potential conflict in the world today, plus pressing global issues regarding environmental crisis, poverty and resources, it is essential that people learn to navigate the media and defend themselves against manipulation. The first step towards becoming more informed is to avoid seeing our governments and media as free from manipulation while demonising “foreign” governments and media as full of propagandistic lies.

Speaking of the Pashtun Tahaffuz Movement (PTM) for the first time, the DG ISPR referred to several questions pertaining to the sudden emergence of the movement.

"How was Manzoor Ahmed Masood renamed Manzoor Pashteen? How did this campaign start on social media? How were 5,000 social media accounts made in Afghanistan in a single day? How did a 'topi' (cap) start being manufactured outside the country and brought into Pakistan? How did a small group of individuals start staging anti-Pakistan protests outside the country?" he asked.

In this regard, the DG ISPR also noted the publication of articles by foreign newspapers and the live telecast of Pashteen by foreign media outlets on Facebook and Twitter.

Major General Ghafoor told the media that he met with Manzoor Pashteen and Mohsin Dawar, who shared their concerns. "They came to our office, we had a discussion for an hour or two about Naqeeb Mehsud, missing persons, unexploded ordnance [in tribal areas] and check-posts issues."

He said that he took Mohsin Dawar and Manzoor Pashteen aside from the other people to his office, adding, "Then I got them to speak to all the GOCs and the IG FC, got them time, [told them] all your issues should be resolved, go meet the GOCs.

"They returned and also held a meeting, and I received a text from Mohsin Dawar thanking me for facilitating and getting their issues resolved," the DG ISPR said.

He, however, said that "those who are enemies of Pakistan and still want to see the country unstable, if they join you and start praising you, then one needs to look inward and ask why this is happening."

Major General Ghafoor further said that Chief of Army Staff (COAS) General Qamar Javed Bajwa gave strict instructions not to deal with PTM gatherings through force anywhere.

No action has been taken against them so far, the army spokesperson pointed out, adding that "we have many proofs of how they are being used".

On the incident in Wana, South Waziristan, the DG ISPR said the Mehsud tribe has fought against terrorism for years. The tribe then fought among themselves, and the casualties were evacuated by Army helicopters.

Propaganda was then spread that a girl had been killed by Army firing, he said.

"Pakistan has achieved peace by rendering sacrifices in the past 20 years. What we achieved, nobody was able to achieve. Now, it's time to be united and progress."

"We are not [affected] by false slogans on social media. The nation's love for the army has only increased in the [past] 10 years."

“We cannot respond to [everyone]. We are focused on our work,” he added.

The army spokesperson further said that many accusations had been made against the army, but time proved them all false.

“No army [in the world] has been as successful as Pakistan army in the war against terrorism,” he said.

New York, June 5, 2018--The Committee to Protect Journalists today condemned comments from Major General Asif Ghafoor, spokesperson for Pakistan's military and intelligence agencies, who accused journalists of sharing anti-state remarks on social media.

At a press conference yesterday, Ghafoor derided the rise of social media troll accounts, which he said spread propaganda against the army and state, and said that Pakistan's spy agency, the Inter-Services Intelligence (ISI), was monitoring such accounts and those that engage with them, including journalists.

During his presentation, Ghafoor showed a graphic featuring an alleged troll account's Twitter activity and the journalists and other individuals allegedly connected to the account, who, Ghafoor said, redistributed anti-state and anti-army propaganda from the troll's account.

The journalists featured on the graphic include Ammar Masood and Fakhar Durrani, both with the Jang Media Group, Umar Cheema from the Jang-owned daily The News, Azaz Syed from the Jang-owned broadcaster Geo TV, and Matiullah Jan with the broadcaster Waqt News. Cheema received CPJ's International Press Freedom Award in 2011.

"Displaying photos of journalists alleged to help push anti-state propaganda in Pakistan is tantamount to putting a giant target on their backs," said Steven Butler, CPJ's Asia program coordinator in Washington, D.C. "General Ghafoor should apologize for his comments and explain how security forces might help promote journalist safety in Pakistan, where reporters and editors are routinely threatened, attacked, and killed for their work."

Pakistani authorities have cracked down on press freedom ahead of national parliamentary elections scheduled for July 25. Recently, CPJ documented disruptions to the distribution of Dawn newspaper and access to television channel Geo TV.

Facebook announced Tuesday afternoon that it has removed 32 Facebook and Instagram accounts or pages involved in a political influence campaign with links to the Russian government.

The company says the campaign included efforts to organize counterprotests on Aug. 10 to 12 for the white nationalist Unite The Right 2 rally planned in Washington that weekend.

Counterfeit administrators from a fake page called "Resisters" connected with five legitimate Facebook pages to build interest and share logistical information for counterprotests, Facebook said. The imminence of that event was what prompted Facebook to go public with this information.

In a blog post from the head of Facebook's cybersecurity policy, the company says that those accounts were "involved in coordinated inauthentic behavior" but that their investigation had not yielded definitive information about who was behind the effort.

However, Facebook's top security officials said the campaign involved similar "tools, techniques and procedures" employed by the Russian Internet Research Agency during the 2016 campaign.

There are not many details presented about the origin of these pages, but there is a link established between a page involved in organizing Unite The Right counterprotests and an IRA account.

Facebook noticed that a known Internet Research Agency account had been made a co-administrator on a fake page for a period of seven minutes — something a top Facebook official called "interesting but not determinative."

The actors behind the accounts were more careful to conceal their true identities than the Internet Research Agency had been in the past, Facebook said.

While Internet Research Agency accounts had occasionally used Russian IP addresses in the past, the actors behind this effort never did.

"These bad actors have been more careful to cover their tracks, in part due to the actions we've taken to prevent abuse over the past year," wrote Nathaniel Gleicher, head of cybersecurity policy at Facebook. "For example, they used VPNs and internet phone services, and paid third parties to run ads on their behalf."

Both the Republican and Democratic leaders of the Senate intelligence committee were less reserved about placing the blame for this campaign on the Russian government.

"The goal of these operations is to sow discord, distrust, and division in an attempt to undermine public faith in our institutions and our political system. The Russians want a weak America," said Sen. Richard Burr, the Republican chairman of that committee.

Added Sen. Mark Warner, the top Democrat on the panel, "Today's disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation."

This most recent political influence campaign consisted of pages with names like "Aztlan Warriors," "Black Elevation," "Mindful Being" and "Resisters."

The pages were created between March 2017 and May 2018 and had a total of 290,000 followers. Over this time period, they generated 9,500 posts and ran 150 ads for about $11,000. They also organized about 30 events, only two of which were slated for the future.

Governments and militaries around the world now openly admit to employing armies of keyboard warriors to spread propaganda and disrupt their online opposition. Their goal? To shape public discourse around global events in a way favourable to their military and geopolitical objectives. Their method? The weaponization of social media. This is The Corbett Report.

Analyzing, let alone countering, this type of provocative behavior can be difficult. Russia isn’t alone, either: The U.S. tries to influence foreign audiences and global opinions, including through Voice of America online and radio services and intelligence services’ activities. And it’s not just governments that get involved. Companies, advocacy groups and others also can conduct disinformation campaigns.

Unfortunately, laws and regulations are ineffective remedies. Further, social media companies have been fairly slow to respond to this phenomenon. Twitter reportedly suspended more than 70 million fake accounts earlier this summer. That included nearly 50 social media accounts like the fake Chicago Daily News one.

Facebook, too, says it is working to reduce the spread of “fake news” on its platform. Yet both companies make their money from users’ activity on their sites – so they are conflicted, trying to stifle misleading content while also boosting users’ involvement.

Real defense happens in the brain

The best protection against threats to the cognitive dimension of cyberspace depends on users’ own actions and knowledge. Objectively educated, rational citizens should serve as the foundation of a strong democratic society. But that defense fails if people don’t have the skills – or worse, don’t use them – to think critically about what they’re seeing and examine claims of fact before accepting them as true.

American voters expect ongoing Russian interference in U.S. elections. In fact, it appears to have already begun. To help combat that influence, the U.S. Justice Department plans to alert the public when its investigations discover foreign espionage, hacking and disinformation relating to the upcoming 2018 midterm elections. And the National Security Agency has created a task force to counter Russian hacking of election systems and major political parties' computer networks.

Let’s get this out of the way first: There is no basis for the charge that President Trump leveled against Google this week — that the search engine, for political reasons, favored anti-Trump news outlets in its results. None.

Mr. Trump also claimed that Google advertised President Barack Obama’s State of the Union addresses on its home page but did not highlight his own. That, too, was false, as screenshots show that Google did link to Mr. Trump’s address this year.

But that concludes the “defense of Google” portion of this column. Because whether he knew it or not, Mr. Trump’s false charges crashed into a longstanding set of worries about Google, its biases and its power. When you get beyond the president’s claims, you come upon a set of uncomfortable facts — uncomfortable for Google and for society, because they highlight how in thrall we are to this single company, and how few checks we have against the many unseen ways it is influencing global discourse.

In particular, a raft of research suggests there is another kind of bias to worry about at Google. The naked partisan bias that Mr. Trump alleges is unlikely to occur, but there is real potential for hidden, pervasive and often unintended bias — the sort that led Google to once return links to many pornographic pages for searches for "black girls," that offered "angry" and "loud" as autocomplete suggestions for the phrase "why are black women so," or that returned pictures of black people for searches of "gorilla."

I culled these examples — which Google has apologized for and fixed, but variants of which keep popping up — from “Algorithms of Oppression: How Search Engines Reinforce Racism,” a book by Safiya U. Noble, a professor at the University of Southern California’s Annenberg School of Communication.

Dr. Noble argues that many people have the wrong idea about Google. We think of the search engine as a neutral oracle, as if the company somehow marshals computers and math to objectively sift truth from trash.

But Google is made by humans who have preferences, opinions and blind spots and who work within a corporate structure that has clear financial and political goals. What’s more, because Google’s systems are increasingly created by artificial intelligence tools that learn from real-world data, there’s a growing possibility that it will amplify the many biases found in society, even unbeknown to its creators.

Google says it is aware of the potential for certain kinds of bias in its search results, and that it has instituted efforts to prevent them. “What you have from us is an absolute commitment that we want to continually improve results and continually address these problems in an effective, scalable way,” said Pandu Nayak, who heads Google’s search ranking team. “We have not sat around ignoring these problems.”

For years, Dr. Noble and others who have researched hidden biases — as well as the many corporate critics of Google’s power, like the frequent antagonist Yelp — have tried to start a public discussion about how the search company influences speech and commerce online.

There’s a worry now that Mr. Trump’s incorrect charges could undermine such work. “I think Trump’s complaint undid a lot of good and sophisticated thought that was starting to work its way into public consciousness about these issues,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who has studied Google and Facebook’s influence on society.

Dr. Noble suggested a more constructive conversation was the one “about one monopolistic platform controlling the information landscape.”

In the United States, about eight out of 10 web searches are conducted through Google; across Europe, South America and India, Google’s share is even higher. Google also owns other major communications platforms, among them YouTube and Gmail, and it makes the Android operating system and its app store.

Google’s influence on public discourse happens primarily through algorithms, chief among them the system that determines which results you see in its search engine. These algorithms are secret, which Google says is necessary because search is its golden goose (it does not want Microsoft’s Bing to know what makes Google so great) and because explaining the precise ways the algorithms work would leave them open to being manipulated.

But this initial secrecy creates a troubling opacity. Because search engines take into account the time, place and some personalized factors when you search, the results you get today will not necessarily match the results I get tomorrow. This makes it difficult for outsiders to investigate bias across Google’s results.

A lot of people made fun this week of the paucity of evidence that Mr. Trump put forward to support his claim. But researchers point out that if Google somehow went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. If the public did spot evidence of such an event, it would look thin and inconclusive, too.

“We really have to have a much more sophisticated sense of how to investigate and identify these claims,” said Frank Pasquale, a professor at the University of Maryland’s law school who has studied the role that algorithms play in society.

In a law review article published in 2010, Mr. Pasquale outlined a way for regulatory agencies like the Federal Trade Commission and the Federal Communications Commission to gain access to search data to monitor and investigate claims of bias. No one has taken up that idea. Facebook, which also shapes global discourse through secret algorithms, recently sketched out a plan to give academic researchers access to its data to investigate bias, among other issues.

Google has no similar program, but Dr. Nayak said the company often shares data with outside researchers. He also argued that Google’s results are less “personalized” than people think, suggesting that search biases, when they come up, will be easy to spot.

“All our work is out there in the open — anyone can evaluate it, including our critics,” he said.

Search biases mirror real-world ones

The kind of blanket, intentional bias Mr. Trump is claiming would necessarily involve many workers at Google. And Google is leaky; on hot-button issues — debates over diversity or whether to work with the military — politically minded employees have provided important information to the media. If there were even a rumor that Google’s search team was skewing search for political ends, we would likely see some evidence of such a conspiracy in the media.

That’s why, in the view of researchers who study the issue of algorithmic bias, the more pressing concern is not about Google’s deliberate bias against one or another major political party, but about the potential for bias against those who do not already hold power in society. These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.

It’s in these blind spots that we find the most problematic biases with Google, like in the way it once suggested a spelling correction for the search “English major who taught herself calculus” — the correct spelling, Google offered, was “English major who taught himself calculus.”

Singer argues that brands, ISIS recruiters, reality stars and Russian bots are all playing in the same arena online.

When P.W. Singer set out to write a book about military use of social media in 2013, he couldn’t have known exactly what kind of rabbit hole he was entering.

---

In their conception of weaponized social media, everyone from brand marketers and reality stars to terrorist recruiters and military personnel is now competing in a viral attention battleground where troll armies, misinformation and bot networks are the weapons of choice.

Singer spoke with Adweek about how this social media atmosphere evolved, why brands need to pay attention to Russian bots and how artificial intelligence personas could be the future of online propaganda.

The following has been edited for length and clarity.

How did this book originally take shape?

Initially it was looking at social media use in war, but very quickly, war becomes melded with terrorism—so you think about the rise of ISIS in 2014. And then we start to see its use by criminal groups—cartels, gangs—and then it morphs into politics, where all the things that we were seeing in, for example, Russia and Ukraine start moving over into Brexit and American politics and the like. It was a pretty extensive journey, and along the way, the project got bigger and bigger.

The challenge of this topic and I think why we all weren’t handling it well is how big it is. So people were looking at just one slice of it, one geographic region or just one target and missing out on the larger trend.

For example, the people interested in terrorism were looking at ISIS’ use of social media but they weren’t aware of, say, how Russia was using it.

The people who were in these political worlds didn’t understand digital marketing or pop culture so they were missing things that anyone with an ad background or who knew what Taylor Swift does would go, ‘Of course.’ The approach was to bring all this together—to bring together all the classic research in history and psychology studies and sociology and digital marketing.

But the second thing about this space is that you can both research it and jump into it. So we joined online armies, from actual ones—you can download apps to join Israel Defense Forces operations—to competing online tribes.

We set traps, we trolled Russian trolls.

And then the third thing that hadn’t been done that was really striking was talking to key players. So we went out and interviewed an incredibly diverse set of people to learn from them—everyone from the recruiters for extremist groups to tech company executives to the pioneers of online marketing to reality stars to generals.

How do the concepts and tactics you talk about in your book go beyond ordinary digital marketing?

“LikeWar” is a concept that brings together all these different worlds. If cyberwar is the hacking of the networks, LikeWar is the hacking of the people on the networks by driving ideas viral through a mix of likes, lies and the network’s own algorithms. It’s this strange space that brings together Russian military units using digital marketing to influence the outcome of elections to teenagers taking selfies and live-feeding but influencing the outcome of actual battles. You’re seeing some of these techniques from ad companies, marketing and the like. They’re being used for different purposes, but they’re playing out all in the exact same space.

How Israel’s Army Revolutionized Wartime Social Media

Facebook CEO and founder Mark Zuckerberg once said, “The question isn’t, ‘What do we want to know about people?’ It’s, ‘What do people want to tell about themselves?’” When it comes to the state of Israel and its representatives, there is a lot they want to tell about themselves. Through PR crises like multiple wars with Gaza, waves of stabbing attacks and never-ending global political tensions, I’ve seen firsthand over the last seven years how Israel utilizes social media as a tool for combating international media bias. Yet in Israel and in the global Jewish community, there is fierce criticism of Israel’s so-called “PR failures.” I believe this criticism is unwarranted. Having been a critical observer of both the Israeli government and the Israel Defense Forces from the perspective of an outsider, I believe the state of Israel, and more specifically the IDF, have completely changed the game when it comes to nation branding and public relations.

Despite unrelenting bias in the international media, the Israel Defense Forces has revolutionized modern warfare in the public sphere. In 2012, Israel was the first state to declare the beginning of an operation on Twitter, before even holding a press conference, explaining why, how, when and where in under 140 characters. In 2014, the IDF provided data and evidence (photos and videos) within minutes of carrying out activities in order to stop the rocket fire from Gaza. What other army in the world does this? Last I checked, the United States was not sending out photographic evidence of the terrorist weapons facilities it hit in real time on social media or on any other platform.

Since then, the IDF has only become more effective at distributing facts in real time. Whereas Palestinian terrorist organizations like Hamas have repeatedly distributed false and misleading information and even falsified photos and videos, the IDF is prompt, professional and highly effective at distributing photographic and video evidence, including detailed explanations. In the recent riots at the border of Gaza, the IDF released video footage within minutes of terrorists firing guns across the border in a riot which the Palestinians had claimed to the international press was “non-violent” (an incident that unfortunately occurred repeatedly).

That is not to say every action of the IDF is justifiable and morally correct — that would not be true of any army in the world. But there is no question that the IDF goes above and beyond to distribute factual information in a way that no other army in the world would do, could do or has to do to comply with international law. Indeed, even when mistakes are made, the IDF is clear and professional about the aftermath —investigatingincidents of civilian casualties quickly and releasing the findings of the investigations to the public.

Few in the Western world are aware, but the IDF has a massive following in Arabic — and their Arabic spokesman, Avichay Adraee, is an extremely well-known public figure in the Arab world. Whether they love him or hate him, Adraee is getting the message out loudly and clearly on networks like Al Jazeera Arabic, and through his massive social media following of over one million on Facebook.

As the digital director of one of the few pro-Israel organizations that operates on social media in Arabic, I can testify to the fact that the impact of having an Israeli voice from the IDF, who is “reachable” on Twitter and Facebook, is tremendous. What other army in the world has that advantage? The average citizens that Adraee is reaching on a day-to-day basis are being faced with the reality that what they have been taught about the alleged inhumanity of the IDF is simply not the case.
