Such manipulation may be conducted for purposes of propaganda, discrediting, harming corporate or political competitors, improving personal or brand reputation, or plain trolling, among other things. To accomplish these objectives, online influencers, hired professionals and/or software − typically Internet bots such as social bots, votebots and clickbots − may be used.

Cognitive hacking refers to a cyberattack that aims to change users' perceptions and corresponding behaviors.[1][2][3]

High-arousal emotion virality: It has been found that content evoking high-arousal emotions (e.g. awe, anger or anxiety) is more viral, and that this holds even when surprisingness, interestingness, or usefulness is taken into consideration.[7]

Simplicity over complexity: Providing and perpetuating simple explanations for complex circumstances may be used for online manipulation. Such explanations are often easier to believe, arrive in advance of any adequate investigation, and have higher virality than complex, nuanced explanations and information.[8] (See also: Low-information rationality)

Peer-influence: Prior collective ratings of web content influence one's own perception of it. In 2015 it was shown that the perceived beauty of a piece of artwork in an online context varies with external influence: confederate ratings, manipulated in opinion and credibility, shifted the evaluations of experiment participants who were asked to rate the artwork.[9] Furthermore, on Reddit it has been found that content that initially receives a few downvotes often continues going negative, and vice versa for early upvotes. This is referred to as "bandwagon/snowball voting" by Reddit users and administrators.[10]
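The snowball dynamic described above can be illustrated with a minimal, hypothetical simulation (the parameters and model are illustrative assumptions, not Reddit's actual mechanics): each arriving voter's probability of upvoting is nudged slightly in the direction of the current score, so a few early votes can tilt the whole thread.

```python
import random

def simulate_thread(initial_score, n_voters=1000, base_p=0.5, herd=0.02, seed=0):
    """Toy model of 'bandwagon/snowball voting': each arriving voter's
    probability of upvoting is nudged toward the sign of the current score."""
    rng = random.Random(seed)
    score = initial_score
    for _ in range(n_voters):
        bias = herd if score > 0 else -herd if score < 0 else 0.0
        score += 1 if rng.random() < base_p + bias else -1
    return score

# Identical voter populations, opposite early nudges: threads seeded with a
# few early upvotes tend to finish higher than those seeded with downvotes.
pos = sum(simulate_thread(+3, seed=s) for s in range(50)) / 50
neg = sum(simulate_thread(-3, seed=s) for s in range(50)) / 50
```

Averaged over many simulated threads, the early-upvoted threads end with a higher score than the early-downvoted ones, even though every later voter behaves identically.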

Information timeliness and uncorrectability: Clarifications, conspiracy debunking and exposure of fake news often come late, when the damage is already done, and/or do not reach the bulk of the audience of the associated misinformation.[12][better source needed]

The proliferation of online sources represents a vector leading to an increase in media pluralism, but the algorithms used by social networking platforms and search engines to provide users with a personalized experience based on their individual preferences represent a challenge to pluralism, restricting exposure to differing viewpoints and news sources. This is commonly referred to as "echo chambers" and "filter bubbles".

With the help of algorithms, filter bubbles influence users' choices and perception of reality by giving the impression that a particular point of view or representation is widely shared. Following the United Kingdom's 2016 referendum on membership of the European Union and the United States presidential election, this gained attention as many individuals confessed surprise at results that seemed very distant from their expectations. The range of pluralism is influenced by the personalization of services and the way it diminishes choice.[16]
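The narrowing mechanism can be sketched in a few lines (a hypothetical toy ranker, not any platform's real algorithm): if stories are ranked by how often the user has engaged with their topic before, familiar viewpoints crowd out unfamiliar ones.

```python
def personalize(feed, click_history):
    """Toy personalization: rank stories by how often the user previously
    engaged with their topic, so familiar viewpoints rise to the top."""
    counts = {}
    for topic in click_history:
        counts[topic] = counts.get(topic, 0) + 1
    return sorted(feed, key=lambda s: counts.get(s["topic"], 0), reverse=True)

feed = [
    {"id": 1, "topic": "viewpoint_a"},
    {"id": 2, "topic": "viewpoint_b"},
    {"id": 3, "topic": "viewpoint_a"},
]
# A user who has mostly clicked viewpoint_a sees viewpoint_b pushed down.
ranked = personalize(feed, ["viewpoint_a", "viewpoint_a", "viewpoint_b"])
```

Each click on a dominant topic further reinforces its ranking, which is the feedback loop behind the "filter bubble" label.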

Research on echo chambers by Flaxman, Goel, and Rao,[17] Pariser,[18] and Grömping[19] suggests that use of social media and search engines tends to increase ideological distance among individuals.

Comparisons between online and offline segregation have indicated that segregation tends to be higher in face-to-face interactions with neighbors, co-workers, or family members,[20] and reviews of existing research have indicated that the available empirical evidence does not support the most pessimistic views about polarization.[21] A study conducted by researchers from Facebook and the University of Michigan, for example, has suggested that individuals' own choices, more than algorithmic filtering, limit exposure to a range of content.[22] While algorithms may not be causing polarization, they could amplify it, representing a significant component of the new information landscape.[23]

Known as "Effects" operations, the work of JTRIG had become a "major part" of GCHQ's operations by 2010.[25] The unit's online propaganda efforts (named "Online Covert Action"[citation needed]) utilize "mass messaging" and the "pushing [of] stories" via the medium of Twitter, Flickr, Facebook and YouTube.[25] Online "false flag" operations are also used by JTRIG against targets.[25] JTRIG has also changed photographs on social media sites, as well as emailing and texting colleagues and neighbours with "unsavory information" about the targeted individual.[25] In June 2015, NSA files published by Glenn Greenwald revealed new details about JTRIG's work in covertly manipulating online communities.[28] The disclosures also revealed the technique of "credential harvesting", in which journalists could be used to disseminate information and identify non-British journalists who, once manipulated, could give information to the intended target of a secret campaign, perhaps providing access during an interview.[25] It is unknown whether the journalists would be aware that they were being manipulated.[25]

Furthermore, Russia is frequently accused of financing "trolls" to post pro-Russian opinions across the Internet.[29] The Internet Research Agency has become known for employing hundreds of Russians to post propaganda online under fake identities in order to create the illusion of massive support.[30] In 2016 Russia was accused of sophisticated propaganda campaigns to spread fake news with the goal of punishing Democrat Hillary Clinton and helping Republican Donald Trump during the 2016 presidential election as well as undermining faith in American democracy.[31][32][33]

In a 2017 report,[34] Facebook publicly stated that its site had been exploited by governments for the manipulation of public opinion in other countries – including during the presidential elections in the US and France.[11][35][36] It identified three main components involved in an information operations campaign: targeted data collection, content creation, and false amplification. These include stealing and exposing information that is not public; spreading stories, false or real, to third parties through fake accounts; and coordinating fake accounts to manipulate political discussion, such as amplifying some voices while repressing others.[37][38]

In 2016 Andrés Sepúlveda disclosed that he had manipulated public opinion to rig elections in Latin America. According to him, with a budget of $600,000 he led a team of hackers that stole campaign strategies, manipulated social media to create false waves of enthusiasm and derision, and installed spyware in opposition offices to help Enrique Peña Nieto, a right-of-center candidate, win the election.[39][40]

In the run-up to India's 2014 elections, both the Bharatiya Janata Party (BJP) and the Congress party were accused of hiring "political trolls" to talk favourably about them on blogs and social media.[29]

In December 2014 the Ukrainian information ministry was launched to counter Russian propaganda; one of its first tasks was to create social media accounts (also known as the i-Army) posing as residents of eastern Ukraine and to amass friends for them.[42][29]

In Wired it was noted that nation-state rules such as compulsory registration and threats of punishment are not adequate measures to combat the problem of online bots.[50]

To guard against the issue of prior ratings influencing perception, several websites, such as Reddit, have taken steps such as hiding the vote count for a specified time.[10]
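The countermeasure amounts to withholding the score during a post's early life. Reddit's actual implementation is not public; the following is a minimal sketch of the idea, with an assumed post structure and a hypothetical one-hour hiding window.

```python
import time

def visible_score(post, hide_seconds=3600, now=None):
    """Return the score to display, or None while the score is still hidden.

    For the first hide_seconds after posting, the score is withheld so that
    early votes cannot anchor later voters' perceptions."""
    now = time.time() if now is None else now
    if now - post["created_at"] < hide_seconds:
        return None  # the UI would render this as "score hidden"
    return post["ups"] - post["downs"]

post = {"created_at": 1000.0, "ups": 10, "downs": 3}
early = visible_score(post, hide_seconds=3600, now=1500.0)   # still hidden
late = visible_score(post, hide_seconds=3600, now=10000.0)   # score revealed
```

Votes are still recorded during the window; only their display is delayed, which is what blunts the bandwagon effect without changing the ranking data.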

Some other potential measures under discussion are flagging posts as likely satire or false.[51] For instance, in December 2016 Facebook announced that disputed articles would be marked with the help of users and outside fact checkers.[52] The company seeks ways to identify 'information operations' and fake accounts, and suspended 30,000 accounts before the presidential election in France in a strike against information operations.[11]

Inventor of the World Wide Web Tim Berners-Lee considers putting a few companies in charge of deciding what is or isn't true a risky proposition and states that openness can make the web more truthful. As an example he points to Wikipedia which, while not perfect, allows anyone to edit, with the key to its success being not just the technology but the governance of the site − its coordination of countless volunteers and its ways of determining what is or isn't true.[53]

Furthermore, various kinds of software may be used to combat this problem, such as fact-checking software, or voluntary browser extensions that store every website one reads or that use the browsing history to deliver corrections to those who read a false story after some kind of consensus has been reached on its falsehood.[original research?]

This page is based on a Wikipedia article written by contributors. Text is available under the CC BY-SA 4.0 license; additional terms may apply. Images, videos and audio are available under their respective licenses.