Tag Archives: social marketing

The United Nations has embraced the use of behavioral science to help it craft effective development activities and interventions. As it notes in this November 2016 blog post:

Across the globe, all people – poor or rich – sometimes make choices that are not conducive to their own well-being. Saving enough for retirement, eating healthy, investing in education – all too often we humans postpone intended actions to ‘tomorrow’, succumb to inertia or get stuck in habits.

In light of the extensive research on the cognitive biases that influence human decision-making, there is a broad consensus that traditional economic models are insufficient for effective policy-making. Behind every policy lie assumptions about how humans will behave in light of new regulations and why we act the way we do.

UNDP has embraced the idea of network nudges, where people are influenced by the behavior of friends and members of their extended social network, and that people observe other people’s behavior as guidelines for what’s acceptable and desirable. UNDP has been cooperating with the UK Behavioural Insights Team since 2013, and UNDP’s report, Behavioural Insights at the United Nations – Achieving the 2030 Agenda, advocates this approach for inclusion in every policy maker’s toolbox and presents 10 valuable case studies. This is from the page at the aforementioned link:

In 2016, the UNDP Innovation Facility collaborated with the newly engaged UN Behavioural Science Advisor to work on behaviorally-informed design with 8 UNDP Country Offices in all 5 regions: Bangladesh, Cameroon, China, Ecuador, Jordan, Moldova, Montenegro and Papua New Guinea. This Progress Report highlights the potential of behavioural insights to help achieve the Sustainable Development Goals and provides an overview of the 8 initiatives.

Behavioural insights draw from research findings from psychology, economics and neuroscience. These insights about how people make decisions matter for development. They matter for policy-formulation and addressing last mile problems.

UN Secretary General Ban Ki-moon noted that, “In order to succeed, Agenda 2030 must account for behavioural insights research… Our organization, our global agenda – and most importantly the people worldwide they are intended to serve – deserve nothing less than the best science available. A human-centered agenda requires a rigorous, research-based understanding of people.”

The report shows that approaching development challenges with behavioural insights leads to better diagnoses of problems and to better designed solutions. Public policy and programme officials around the world can achieve better outcomes — often at low or no cost — simply by leveraging our current understanding of human psychology and behaviour.

Apps, social media, text messaging/SMS and other information and communication technologies (ICTs) are already playing a crucial role in educating people about public health issues, reaching marginalized communities and helping those who may be targets of harassment and discrimination. But in all of these tech4good initiatives, the safety and security of those doing the outreach and of those in the target audience are critical. People trying to promote a tech4good initiative do not want the technology to be used by hostile parties to identify, track and target people based on their health, lifestyle or beliefs.

For those interested in using ICTs to reach marginalized communities, or in how to communicate vital information about topics that are frowned upon in religiously conservative communities, the new publication Pioneering HIV services for and with men having sex with men in MENA: A case study about empowering and increasing access to quality HIV prevention, care and support to MSM in a hostile environment is well worth your time to read. The United States Agency for International Development (USAID) funded this project, and the 48-page publication was produced by the International HIV/AIDS Alliance and co-authored by Tania Kisserli, Nathalie Likhite and Manuel Couffignal. The publication includes two pages on how ICTs help to reach hidden communities threatened by police raids and rising homophobia in the MENA (Middle East and North Africa) region – for instance, how applications such as Grindr, which are frequently accessed by men who have sex with men (MSM) in the MENA region, provide virtual venues for disseminating information on HIV prevention, treatment and support services.

This is from the report (note that this is with British spellings):

In 2015, the partners of the MENA programme implemented a pilot online peer outreach project to reach more MSM, in partnership with the South East Asian Foundation B-Change Technology.

In order to improve the understanding of the online habits and behaviours of MSM, two anonymous web surveys were launched online to collect information among MSM (living in Algeria, Lebanon, Morocco and Tunisia), recruited via Facebook and instant messaging channels. The first survey assessed technology use and included questions about mobile devices and tech-based sexual networking. The second survey collected further data on social media behaviours, with questions about using social networks, interpersonal communications, and negative experiences online. The results confirmed the penetration of internet and mobile technologies in urban centres, and highlighted the widespread use by MSM of mainstream social networks (predominantly Facebook) and global gay dating apps, especially in the evening. The predominant website for sexual networking was reported to be Planet Romeo; the predominant smartphone app for sexual networking was Grindr. The results also revealed that while MSM use smartphone instant messaging (SMS and Whatsapp mainly) to communicate and chat with friends, they tend to use the telephone when communicating with health providers. Sexual networking among this cohort demonstrated a preference for web-based methods versus offline (public space) networking. A significant proportion of negative experiences using social media or apps was also reported, in particular cases of breach of confidentiality online.

Based on these findings, the partners designed a pilot information and communications technology (ICT)-based intervention. Experienced peer educators created avatars representing different profiles of beneficiaries, collectively designed an online peer outreach intervention and developed the corresponding standard operating procedures and M&E framework. This was identified as the most feasible output based on existing resources and ICT experience. Building the capacity of community groups for this intervention would result in more effective use of popular social media platforms for MSM-peer outreach activities. Local trainings of ‘online peer educators’ were organised to strengthen digital security, content creation systems, online outreach procedures, conduct of peer educators online, and M&E framework to measure the outcomes towards the HIV continuum of care.

The trained ‘online peer educators’ created ‘virtual peer educator’ accounts/profiles and contacted MSM through the internet and social media in their respective countries, mainly on Facebook, Whatsapp, Grindr, Hornet, Planet Romeo, Badoo, Tango and Babel, and mostly during evening and night shifts. The objective was to contact MSM not reached by the usual outreach in public spaces, and hence continue expanding the package of prevention services available to MSM. The educators provided interpersonal communications on HIV and STIs, disseminated IEC materials online, encouraged MSM to take an HIV test and referred them to prevention services provided by the partner organisations, as well as public health services in their country.

This test phase lasted from July to September 2015 in Agadir, Beirut, Tunis and Sousse. The results were promising; during the month of September 2015, the six online peer educators of ASCS in Agadir, for instance, reached 546 MSM via chat rooms, websites, apps and instant messaging. They referred 148 MSM for an HIV test and 86 MSM for an STI consultation. During this period ASCS noticed an increase in the number of MSM visiting the association to collect condoms and lubricant; ASCS peer educators appreciated this new type of outreach work compared to street outreach, the latter having become difficult due to growing police harassment. Some challenges that peer educators faced online were similar to ‘traditional’ or face-to-face outreach work: high interest in sexual health, but initial reluctance to visit the association, take up services, or change risk behaviour.

“The virtual prevention pilot project has allowed us to reach a significant number of MSM, in particular those who remain hidden and aren’t reached through our outreach activities in the streets.” — peer educator and university student in Morocco

Some of the lessons learned from this pilot project:

Overall high acceptability: many MSM are eager to engage in an online conversation about HIV and STI prevention, rights and services; virtual spaces are perceived as safe to talk freely about sexual practices with no face-to-face bias; however, a significant proportion of MSM contacted online refused any discussion relating to sexual health and HIV.

Strong operational procedures and human resource capacity are required to maintain a high quality ICT tool that maintains privacy and confidentiality; consequently, organisational ICT capacity needs to be assessed and strengthened before initiating an online prevention project.

Monitoring and evaluation challenges: it is not easy to measure service use or user engagement online or to clearly show the link between use of ICT and uptake of services; monitoring of referral pathways between outreach CSOs and friendly providers needs to be aligned to track referral from virtual spaces to services.

Folklore, rumors and contemporary myths/legends often interfere with development aid activities and government initiatives, including public health programs – sometimes bringing them to a grinding halt. They create ongoing misunderstandings and mistrust, prevent people from seeking help, encourage people to engage in unhealthy and even dangerous practices, and have even led to mobs attacking people because of something they heard from a friend of a friend of a friend. With social media like Twitter and Facebook, as well as simple text messaging among cell phones, spreading misinformation is easier than ever.

Add to the mix fake news sites set up specifically to mislead people, as well as crowdsourced efforts by professional online provocateurs and automated troll bots pumping out thousands of comments, and countering misinformation has to be a priority for aid and development organizations, as well as government agencies.

Anyone working in development or relief efforts, or working in government organizations, needs to be aware of the power of rumor and myth-sharing, and be prepared to prevent and to counter such. This page is an effort to help those workers:

cultivate trust in the community through communications, thereby creating an environment less susceptible to rumor-baiting

quickly identify rumors and misinformation campaigns that have the potential to derail humanitarian aid and development efforts

quickly respond to rumors and misinformation campaigns that could derail or are interfering with humanitarian aid and development efforts

And, FYI: I do this entirely on my own, as a volunteer, with no funding from anyone. I update the information as my free time allows.

It wasn’t getting a journalism degree, or being a journalist, that made me a skeptic when it comes to sensational stories. It was a folklore class. Urban Folklore 371, to be exact. It was a very popular class at Western Kentucky University back in the late 1980s, both for people getting a degree in folklore studies and for people needing humanities courses for whatever their degree program was, like me. Class studies focused on contemporary, largely non-religious-based legends, customs and beliefs in the USA. One class might focus on watching a film about the games kids play on a playground and how those games explore the things they fear – marriage, childbirth, stranger danger, being ostracized by their peers, etc. Another class might review the different versions of the “vanishing hitchhiker” story, why such stories are so popular in so many different cultures, and how the story changes over time.

I heard at least one student say, “That’s not a true story?! I always thought it was!” in every class. Because of that class, I realized there were legends being told as truth all around me, by friends, by family, even by newspapers. “I heard it from my cousin” or “My friend saw it in a newspaper” or “My Mom saw it on Oprah” was usually the preface to some outlandish story told as fact. But the class taught me that, in fact, no woman was ever killed by spiders nesting in her elaborate hairdo, that there has never been a killer with a hook for a hand that attacked a couple in a parked car in a nearby town, that there is no actor who has ever had a gerbil removed from his anus, and on and on and on.

I became the “um – that’s not true” girl at various places where I worked. And then via email. And I still am, now on social media. And what I have learned from being little Ms. Debunker is that people REALLY do NOT like these stories debunked. In fact, pointing out the facts that prove these stories aren’t true, no matter how gently I try to do it, often makes people very angry.

Back in the 1990s, a friend sent me yet another forwarded email. This time, the text said the email was from Microsoft founder Bill Gates, that he’d written a program that would trace everyone to whom the email message was sent, and that he was beta testing the program. The email encouraged people to forward the message and said that if it reached 1,000 people, everyone on the list would receive $1,000. Of course, it wasn’t true – I knew it as soon as I saw it. She’d sent me several of these types of emails – one that said people who forwarded the message would get a free trip to Disney World, another said we’d all get free computers, and on and on. I had been deleting them, but I was tired of it. So I looked online, found a site that debunked the myth, and sent her the link. I didn’t make any judgmental statements; I just said, “This is a myth. Here’s more info. You might want to let everyone you sent it to know, as well as the person you got it from,” or something similar.

She was not happy with me. In fact, it almost ended our friendship. She told me that the Internet was “a place for having fun” and “you can’t win if you don’t play” and what did she have to lose by forwarding the message even if it sounded fishy?

And that kind of reaction kept happening. Three new friends I made back in 2010, after I’d moved back to the USA, all unfriended me on Facebook the same day, outraged that I pointed out that several things they were posting as their status updates – about how Facebook was going to start charging users, about how putting up a disclaimer on your Facebook page would stop the company from being able to sell your information, and on and on – were all urban legends, all untrue. Their reaction was almost verbatim what that friend had said via email: Facebook is “a place for having fun” and “it’s better to be safe and share it” and what did they have to lose by sharing the message even if it sounded fishy? Also, they said they did not have time to “check every single thing online.”

Now, in 2016, I have friends who are furious with me for posting science-based web sites that debunk their posts from quack sites like the “Food Babe” claiming that GMOs cause cancer or that vaccines cause autism (to be clear, these are MYTHS). Two journalists – JOURNALISTS – were mad at me when I pointed out that a status update one had shared – it urged users to use the Facebook check-in function to say they were at Standing Rock in North Dakota, claiming this would somehow prevent the Morton County Sheriff’s Department there from geotargeting DAPL protesters – was promoting false information. I wasn’t just annoyed by the message – I found it imprudent, and yet another example of slackervism or slacktivism: people truly wishing to assist the protesters were checking in on Facebook rather than doing something that would REALLY make a difference, like sending funds to support the protest efforts or writing their Congressional representatives in support of the protesters. It also misdirects people from the nefarious ways law enforcement really does surveil people on social media. I would have thought journalists would know better than to engage in such behavior.

Contemporary legends online cause harm, and this bothered me long before the Standing Rock/Facebook check-in myth. Since 2004, I have been gathering and sharing examples of how rumors and urban/contemporary myths often interfere with relief and development activities and government initiatives, including public health initiatives – sometimes bringing them to a grinding halt. These myths create ongoing misunderstandings among communities and cultures, prevent people from seeking help, encourage people to engage in unhealthy and even dangerous practices, cultivate mistrust of people and institutions, and have even led to mobs attacking people for no reason other than something they heard from a friend of a friend of a friend. With the advent of social media like Twitter and Facebook, as well as simple text messaging among cell phones, spreading misinformation is easier than ever.

Based on my experience as a researcher and a communications practitioner, and everything I’ve read – and I read a LOT on this subject – rumors that interfere with development and aid/relief efforts and government health initiatives come from:

misinterpretations of what a person or community is seeing, hearing or experiencing,

previous community experiences or cultural beliefs,

willful misrepresentation by people who, for whatever reason, want to derail a development or relief activity,

unintentional but inappropriate or hard-to-understand words or actions by a communicator, or

the desire of an individual or community to believe an alternative narrative, a desire that is stronger than the facts

But are these recommendations enough anymore? I’m not sure, because BuzzFeed reported that fake news stories about the USA Presidential election this year generated more engagement on Facebook than the top election stories from 19 major news outlets COMBINED – outlets such as The New York Times, The Washington Post, CNN, and NBC News. And a new study from Stanford researchers evaluated students’ ability to assess information sources, and described the results as “dismaying,” “bleak” and a “threat to democracy,” as reported by NPR News. Researchers said students displayed a “stunning and dismaying consistency” in their responses, getting duped again and again. The researchers weren’t looking for high-level analysis of data but just a “reasonable bar” of, for instance, telling fake accounts from real ones, activist groups from neutral sources and ads from articles. And the students failed. Miserably. And then there’s my own experience seeing the reaction a lot of people have to references to sites like snopes.com or truthorfiction.com or hoax-slayer.com or the Pulitzer Prize-winning site PolitiFact that debunk myths; those people claim that “These sites aren’t true. They’re biased.” And that’s that – just a simple dismissal, so they can continue to cling to falsehoods.

National Public Radio did a story a few days ago about a man in Los Angeles who decided to build fake news sites that publish outrageous, blatantly false stories that extreme far-right groups in the USA (also known as the “alt-right”) would love to believe. He thought that when these stories were picked up by white supremacist web sites and promoted as true, he and others, particularly major media outlets, would be able to point out that the stories were entirely fiction, created only as bait, and that the white supremacists were promoting them as fact. But instead, thousands of people with no formal association with white supremacist groups shared these stories as fact – reaching millions more people. He wrote one fake story for one of his fake sites on how customers in Colorado marijuana shops were using food stamps to buy pot. Again, this story is NOT TRUE. But it led to a state representative in Colorado proposing actual legislation to prevent people from using their food stamps to buy marijuana; a state legislator was creating legislation and outrage based on something that had never happened.

BTW, to see these fake news sites for yourself, just go to Google and search for snopes is biased, and you will get a long list of links to fake news sites, most right-wing, all attacking fact-based debunking sites like Snopes. I refuse to name those fake news sites because I don’t want them to get any more traffic than they already do.

Competent decision-making depends on people – the decision-makers – having reliable, accurate facts put in a meaningful and appropriate context. Reason – the power of the mind to think, understand and form judgments by a process of logic – relies on being able to evaluate information for credibility and truth. But fact-based decision-making – the idea of being logical, of using reason and intellect – has become something to eschew. The modus operandi for many is to go with your gut, not with the facts. Go not for truth, but truthiness.

I always thought that last bullet in my list of why people believe myths, “the desire of an individual or community to believe an alternative narrative, a desire that is stronger than the facts,” was easy to address. Now, given all the aforementioned, I’m not at all sure.

I’m going to keep calling out myths whenever I see them, and if it costs me Facebook friends, so be it. I prefer the truth, even when the truth hurts, even when the truth causes me to have to reconsider an opinion. There is a growing lack of media literacy and science literacy in the USA – and, indeed, the world. And the consequences of this could be catastrophic – if they haven’t been already. People need to be able to not just access information, but also to analyze it and evaluate the source. That’s just not happening. And I’ve no idea how to change things.

8:10 am Nov. 28, 2016 Update: Filippo Menczer, Professor of Computer Science and Informatics and Director of the Center for Complex Networks and Systems Research at Indiana University, Bloomington, authored the article Why Fake News Is So Incredibly Effective, published in Time and The Conversation. Excerpts: “Our lab got a personal lesson in this when our own research project became the subject of a vicious misinformation campaign in the run-up to the 2014 U.S. midterm elections. When we investigated what was happening, we found fake news stories about our research being predominantly shared by Twitter users within one partisan echo chamber, a large and homogeneous community of politically active users. These people were quick to retweet and impervious to debunking information.” Also of note: “We developed the BotOrNot tool to detect social bots. It’s not perfect, but accurate enough to uncover persuasion campaigns in the Brexit and antivax movements… our lab is building a platform called Hoaxy to track and visualize the spread of unverified claims and corresponding fact-checking on social media. That will give us real-world data, with which we can inform our simulated social networks. Then we can test possible approaches to fighting fake news.”
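To give a rough sense of how tools like BotOrNot flag suspicious accounts, here is a deliberately simplified sketch of feature-based scoring. The features, thresholds and weights below are invented for illustration only; the real tool is a machine-learning classifier trained over a very large set of account features, not a handful of hand-set rules:

```python
# Toy illustration of feature-based bot scoring. All features and
# weights here are invented for illustration; real tools such as
# BotOrNot/Botometer use trained classifiers, not fixed thresholds.

def bot_likelihood(tweets_per_day, followers, following, profile_has_photo):
    """Return a crude bot-likelihood score between 0.0 and 1.0."""
    score = 0.0
    if tweets_per_day > 100:  # superhuman posting rate
        score += 0.4
    if following > 0 and followers / following < 0.1:
        # follows far more accounts than follow it back
        score += 0.3
    if not profile_has_photo:  # default avatar, no personalization
        score += 0.3
    return min(score, 1.0)

# A hyperactive, photo-less account following thousands scores high:
print(bot_likelihood(tweets_per_day=250, followers=12, following=4000,
                     profile_has_photo=False))
# A typical human-looking account scores low:
print(bot_likelihood(tweets_per_day=5, followers=500, following=300,
                     profile_has_photo=True))
```

The point of the sketch is the general approach: no single signal proves an account is automated, but several weak behavioral signals combined can separate likely bots from likely humans.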

A North Carolina man read online that a pizza restaurant in northwest Washington, DC, was harboring young children as sex slaves as part of a child-abuse ring, so he drove six hours from his home to the restaurant, and not long after arriving, he fired shots from an AR-15-style rifle. No one was injured, and he’s been arrested, but, as The New York Times notes, “the shooting underscores the stubborn lasting power of fake news and how hard it is to stamp out. Debunking false news articles can sometimes stoke the outrage of the believers, leading fake news purveyors to feed that appetite with more misinformation. Efforts by social media companies to control the spread of these stories are limited, and shutting one online discussion thread down simply pushes the fake news creators to move to another space online. The articles were exposed as false by publications including The New York Times, The Washington Post and the fact-checking website Snopes. But the debunking did not squash the conspiracy theories about the pizzeria — instead, it led to the opposite. ‘The reason why it’s so hard to stop fake news is that the facts don’t change people’s minds,’ said Leslie Harris, a former president of the Center for Democracy & Technology, a nonprofit that promotes free speech and open internet policies.”

Dec. 9, 2016 update

“Fakes, News and the Election: A New Taxonomy for the Study of Misleading Information within the Hybrid Media System”

Abstract:
The widely unexpected outcome of the 2016 US Presidential election prompted a broad debate on the role played by “fake-news” circulating on social media during political campaigns. Despite a relatively vast amount of existing literature on the topic, a general lack of conceptual coherence and a rapidly changing news eco-system hinder the development of effective strategies to tackle the issue. Leveraging on four strands of research in the existing scholarship, the paper introduces a radically new model aimed at describing the process through which misleading information spreads within the hybrid media system in the post-truth era. The application of the model results in four different typologies of propagations. These typologies are used to describe real cases of misleading information from the 2016 US Presidential election. The paper discusses the contribution and implication of the model in tackling the issue of misleading information on a theoretical, empirical, and practical level.

On December 2, 2015, the United Nations Alliance of Civilizations (UNAOC) held a Symposium on Hate Speech in the Media, with senior officials calling for a global mobilization of citizens to help counter messages that promote xenophobia, violent extremism and prejudice. The symposium was the first of a series that UNAOC will host, called Tracking Hatred. The next symposium will be held in Baku, Azerbaijan, in April.

Cristina Gallach, UN Under-Secretary-General for Communications & Public Information, said during the UNAOC event, “Hate speech has been with us for a long time. We will never forget the slaughter of over 800,000 Tutsis and moderate Hutus during a brief three month period in Rwanda in 1994. We will never forget either the six million Jews plus five million others who perished because of one hateful vision… Today, however, more than ever, individuals are using hate speech to foment clashes between civilizations in the name of religion. Their goal is to radicalize young boys and girls, to get them to see the world in black and white, good versus evil, and get them to embrace a path of violence as the only way forward.” She wasn’t just referring to Daesh (also known as ISIL or ISIS), though they are the most high-profile right now and, therefore, they were the primary focus of this event.

From what I’ve read about the symposium, there were lots of comments by speakers about enforcing laws that prohibit incitement of hatred or violence, and about social media companies being compelled to quickly delete content. I’m wary of this kind of talk, as governments use cries of “hate speech” to arrest people who are critical of the government or a religion, such as this 14-year-old boy in Turkey, or these teens in Egypt, or Raif Badawi in Saudi Arabia. I much prefer strategies focused on communications activities that establish and promote a narrative that pushes back against hate and prejudice, and was glad to see that strategy as a focus of two of the CTED panels: one called “privacy and freedom of expression in the digital age” and another that I am very interested in, called “Use of Internet and communications technology for counter-messaging purposes” – the link goes to the webcast of the panel, moderated by Steven Siqueira, Acting Deputy Director, CTITF Office – UN Counter-Terrorism Centre (UNCCT). I so wish there were a transcription of this panel! If you want to listen to just a bit, here’s my absolute favorite: go to around the 14:00 point and listen to Humera Khan, Executive Director of Muflehun – she gives realistic, practical advice on mobilizing youth to counter online messages of hate. Then listen to Jonathan Birdwell of the Institute for Strategic Dialogue, right afterward, talking about teaching young people to critically engage with what they read online, and the importance of digital literacy. Then jump to around 36:00 and listen to Abdul-Rehman Malik, who has a provocative, assertive, right-on challenge to governments on this subject. The questions and answers after these three presenters are worth your time as well. The entire session lasts about 90 minutes and is really worth listening to in full (please, UN, release it as a podcast!).

I hope the people involved in these UN and civil society efforts know that, in the last 24 hours, Muslims on Twitter have hilariously trolled a Daesh leader’s call to violence – humor is a powerful tool in fighting against prejudice, and these tech-savvy Muslims are doing it brilliantly. I hope they know about online groups like Quranalyzeit and Sisters in Islam, tiny organizations doing a brilliant job online of countering extremist messages regarding Islam, and doing it as Muslims and from an Islamic perspective. Or about Mohamed Ahmed, a middle-aged father and gas station manager, and one of many Muslims in Minneapolis, Minnesota frustrated by Daesh’s stealthy social media campaigns, and countering it with a social media campaign of his own, AverageMohamed.com.

AND I HOPE EVERYONE KEEPS TALKING. Because I think they are talking about activities and messages that will really work in stopping the violence, and will make all aid and development efforts – about water, about reproductive health, about agriculture, WHATEVER – actually work, actually be sustainable. I so wish all of these efforts were getting more attention online, in traditional media, among all United Nations agencies, among NGOs, and among politicians.

Back in my university days, for my major in journalism, I had to take a class about ethics in journalism. And it changed my life. My “aha” moment was when the professor handed out two different news articles about the same event – a funeral. Both articles were factually accurate. In our class discussion, we noted that one story seemed rather cut-and-dry: the hearse was “gray”, the women “wore black” and they “cried openly”, etc. After reading the other article, the class said their impression was that the family of the deceased was very well off and very high class – the hearse was “silver”, the women wore “black haute couture” and they “wept”, etc. There were lots more examples we came up with that I cannot remember now, but I do remember that I started paying a lot more attention to words that were used in the media to describe events. And I still do.

I use the following as an example not to invite debate about the ethics of abortion or access to abortion, but, rather, for you to think about words, to think about those debates and the words different people use when talking about the same things. One side talks about reproductive health, personal choice, reproductive choice, health clinics, pregnancy termination, freedom from government interference and abortion access. The other side brands itself as pro-life and uses words like murder and killing and baby parts and abortion industry. You can know which side a politician is on based on which words that person uses. Both sides regularly petition the media to use particular phrasing when talking about any news related to abortion services.

Another example: the estate tax versus the death tax. In the USA, when someone dies, the money and goods passed along to heirs, usually children, are taxed by the federal government. When this tax is described as an estate tax, polls show that around 75% of voters support it. But when the exact same proposed policy is described as a death tax, only around 15% of voters support it.

Sometimes, our choice of words to describe something is unconscious, a matter of our education or background. Other times, the choices are quite deliberate and meant to add subtext to our message, to sway the reader or listener in a particular direction. As I said on Facebook in response to someone who implied she doesn’t think word choice matters when describing groups, people or events:

If there is anything I’ve learned in my almost 50 years – WORDS MATTER. The words we use influence, inspire, change minds, create understanding or misunderstanding, fuel flames or put them out. Using a double standard to talk about people who murder others brands one killing “just really sad” and another “terrorism.” It demonizes entire groups while letting off other groups that are the same. It’s hypocrisy.

Here is a wonderful web site from a student project at the University of Michigan on the power of word choice in news articles (it includes excellent lesson plans you can use in your own trainings!). It hasn’t been updated in a long while, but the examples it uses – many from the second Iraqi war with the USA – are excellent for showing the power of word choices to influence opinion – and even create misunderstanding. Also see this lesson plan for students in grades 9–12 regarding word choice and bias from Media Smarts, “Canada’s centre for digital and media literacy,” for more examples of how word choices can influence our opinions.

I am fascinated with propaganda – information meant, specifically, to encourage a particular way of thinking – and with social engineering, the social science regarding efforts to influence attitudes and social behaviors on a large scale – call it propaganda for good.

Propaganda is communications not just to create awareness, but to persuade, to change minds, and to create advocates. It’s communications for persuasion. These are communications activities undertaken by governments, media, corporations, nonprofits, public health advocates, politicians, religious leaders/associations, terrorist groups, and on and on, and they aren’t automatically bad activities: such messaging has inspired people to wear seat belts even before there were laws requiring them, to not drink and then drive, to practice safer sex that prevents HIV transmission, to read to their children, to spay and neuter their pets, to become less intolerant of different groups, and on and on.

I use these techniques myself, to a degree, in trying to get nonprofits and government agencies to embrace virtual volunteering and in recruiting for diversity and in creating welcoming environments for everyone at nonprofit organizations and within government initiatives. I’m not just trying to create awareness about those concepts and practices; I’m trying to create buy-in for them, to break down resistance to them, to get initiatives to embrace them. I’m evangelizing for those concepts.

My fascination with communications for persuasion, not just awareness, is also why I’m fascinated with the rhetoric in the USA about how Daesh – what most Americans, unfortunately, call ISIS, ISIL or the Islamic State – uses social media to persuade. There are few details in the mainstream media and in politicians’ rhetoric on how this is really done – just comments like “He was radicalized by ISIS on Twitter,” which makes it sound like the app is somehow causing people to become terrorists. That’s why I was so happy to find this blog by J.M. Berger, a nonresident fellow in the Project on U.S. Relations with the Islamic World at Brookings and the author of “Jihad Joe: Americans Who Go to War in the Name of Islam”. The blog, “How terrorists recruit online (and how to stop it),” provides concrete information on how Daesh uses social media to recruit members – and it sounds a lot like the same techniques various cults used to recruit members before social media. The blog also provides concrete ways to counter the message, and how reporters can avoid robotically amplifying the Daesh message.

December 28, 2015 addition: in an analysis paper released in early 2015, J.M. Berger and Jonathon Morgan, as part of the Brookings Project on U.S. Relations with the Islamic World, answer fundamental questions about how many Twitter users support ISIS, who and where they are, and how they participate in its highly organized online activities. It notes that, in its 2014 tracking of Twitter accounts that support ISIS, 1,575 of them tweeted more than 50 times per day on average, with 545 tweeting more than 150 times per day. “These prolific users—referred to in ISIS social media strategy documents as the mujtahidun (industrious ones)—form the highly engaged core of ISIS’s social media machine. These users may not tweet every day, but when they do, they tweet a lot of content in a very short amount of time. This activity, more than any other, drives the success of ISIS’s efforts to promulgate its message on social media. Short, prolonged bursts of activity cause hashtags to trend, resulting in third-party aggregation and insertion of tweeted content into search results. Prior to the start of Twitter’s aggressive account suspensions, highly organized activity among the mujtahidun—who at one point may have numbered as many as 3,000, including bots—allowed ISIS to dominate certain hashtags and project its material outside of its own social network to harass and intimidate outsiders, as well as to attract potential recruits.”

And here’s another article I was pleased to find, Fighting ISIS online, talking about the tiny and not-so-effective effort to counter Daesh online, and which notes:

Humera Khan, executive director of Muflehun (Arabic for “those who will be successful”), a Washington, D.C., think tank devoted to fighting Islamic extremism, says people like her and (Paul) Dietrich who try such online interventions face daunting math. “The ones who are doing these engagements number only in the tens. That is not sufficient. Just looking at ISIS-supporting social-media accounts—those numbers are several orders of magnitude larger,” says Khan. “In terms of recruiting, ISIS is one of the loudest voices. Their message is sexy, and there is very little effective response out there. Most of the government response isn’t interactive. It’s a one-way broadcast, not a dialogue.”…

Social-media research has shown that messages from friends and peers are more persuasive than general advertising. Other bodies of research show that youth at risk of falling into many kinds of trouble, from drugs to gangs, often benefit from even small interventions by parents, mentors, or peers. But so far, major anti-ISIS programs don’t involve that kind of outreach.

That emphasis is mine. I find these articles fascinating – and woefully ignored by governments and by moderate Muslims fighting Daesh online and via traditional media.

This article from The Atlantic explores the strategy further: “ISIS is not succeeding because of the strength of its ideas. Instead, it exploits an increasingly networked world to sell its violent and apocalyptic ideology to a microscopic minority—people who are able to discover each other from a distance and organize collective action in ways that were virtually impossible before the rise of the Internet.”

I would love to see moderate, peace-focused Islamic social groups with a good understanding of online communications, like Muflehun, Quranalyzeit and Sisters in Islam, receive grants to hire more staff, train other organizations, and create a MUCH larger, more robust movement on social media with their loving, pro-women, Islamic-based messages. Such tiny organizations are doing a brilliant job of countering extremist messages regarding Islam, and doing it as Muslims and from an Islamic perspective. But they are drowned out by Daesh. Governments need to take care not to drown them out as well.

December 11, 2015 addition: Mohamed Ahmed, a middle-aged father and gas station manager, is one of many Muslims in Minneapolis doing whatever they can to fight extremism in their state. Frustrated by the Islamic State’s stealthy social media campaigns, Mr. Ahmed decided to make a social media campaign of his own. Ahmed has used his own money to produce and develop his website, AverageMohamed.com. On his site, Ahmed creates cartoons and videos so average people can share “logical talking points countering falsehood propagated by extremists.” More about how Minnesota Muslims work to counter extremist propaganda.

The reality is that the “Hulk, smash!” strategy will not work to fight terrorist ideology and the violence it produces. Nazism survived the bombing and defeat of Nazi Germany. Bombing cities is not what marginalized the Ku Klux Klan, and bombing cities does not stop people like Timothy McVeigh or Eric Rudolph or Jim Jones – or those who have supported their ideas. We know what works. Let’s fund it and do it.