Mike Caulfield's latest web incarnation. Networked Learning, Open Education, and the Wiki Way


Author: mikecaulfield

A couple of people asked me to expand on comments made in my recent Familiarity = Truth post. In it I say this about the Buzzfeed finding that over 50% of Clinton supporters who remember fake anti-Clinton headlines believed them:

[A] number like “98% of Republicans who remember the headline believed it” does not mean that 98% of Republicans who saw that headline believed it, since some number of people who saw it may have immediately discounted it and forgotten all about it.

What does that mean? And how does it mitigate the effect we see here?

Well, for one, it means that despite Buzzfeed’s excellent and careful work here, they have chosen the wrong headline. The headline for the research is “Most Americans Who See Fake News Believe It, New Survey Says”. But the real headline should be the somewhat more subtle “Most Americans Who Remember Seeing Fake News Believe It, New Survey Says”.

Why is that important? Because you can see the road to belief as a series of gates. Here’s the world’s simplest model of that:

Sees it > Remembers it > Believes it

So, for example, if we start out with a thousand people, maybe only 20% see something. This is the filter bubble gate.

But then, a certain number of those people who see it process it enough to remember it. And this is not a value-neutral thing: many decades of psychological research tell us we notice things that confirm our beliefs more than things that don’t. Noticing is the first part of remembering. So we should guess that people who remember a news story are more likely to believe it than those who don’t. Hence, when we read something like “Over 50% of people who remembered a fake headline believed it,” this does not mean that 50% of people who read it believed it, because remembering something predicts (to some extent) belief.

Let’s call this the “schema gate” since it lets through things that fit into our current schemas.
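To make the two gates concrete, here’s a toy sketch in Python. Every number in it is invented for illustration; none of these figures come from the Buzzfeed survey. It just shows how the belief rate among people who *remember* a headline can badly overstate the belief rate among people who *saw* it:

```python
# Toy model of the two gates: filter bubble, then schema.
# All figures below are made up purely for illustration.

population = 1000
saw = int(population * 0.20)   # filter bubble gate: 20% ever see the headline

# Schema gate: suppose 40% of seers are inclined to believe the headline,
# and belief makes them much more likely to remember seeing it.
believers_among_seers = int(saw * 0.40)
skeptics_among_seers = saw - believers_among_seers

remembered_believers = int(believers_among_seers * 0.90)  # belief aids recall
remembered_skeptics = int(skeptics_among_seers * 0.30)    # skeptics forget
remembered = remembered_believers + remembered_skeptics

# The survey can only measure this:
belief_rate_among_rememberers = remembered_believers / remembered
# ...but the quantity we actually care about is this:
belief_rate_among_seers = believers_among_seers / saw

print(f"believed it, among those who remember it: {belief_rate_among_rememberers:.0%}")
print(f"believed it, among those who saw it:      {belief_rate_among_seers:.0%}")
```

With these invented numbers, 67% of rememberers believe the headline even though only 40% of seers did, which is exactly the gap between “remember seeing” and “see” that the headline choice papers over.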

So how big is this effect? From the data we see, it’s smaller than I would have thought. I say this because when we look at the numbers of people who remember a headline, the Trump and Clinton numbers are not that far off. For instance, 106 Clinton supporters saw the famous murder-suicide headline, compared to 165 Trump supporters. While that is certainly quite a few more Trump supporters (and even more on a percentage basis), we have to assume a good portion of that difference is due to different networks of friends and filter bubble effects. If you assume that highly partisan Republicans are going to have 50% or 75% more exposure to anti-Clinton stories, then there isn’t much room left for a schema gate effect.

This leads to an interesting question — if we are really attached to a schema gate effect, then we have to dial down our filter bubble effect. Maybe filter bubbles impact us less than we think, if this many Democrats are seeing this sort of story in their feed?

There are a couple of other far-out ways to make the math work, but for the most part you either get evidence of a strong filter bubble gate and a weaker-than-expected schema gate, or vice versa. Or you get both somewhat weaker than expected.

In any case, it’s one of the more fascinating results from the study, and if Buzzfeed or anyone else is looking to put a shovel into a new research question, this is where I’d suggest to dig.

If you’re the sort of person who just wants to jump into what I’ve launched and started building with the help of others, you can go here now, see what we’re launching, and come back to read this later. For the rest of you, long theoretical navel-gazing it is…

A New Project

I’m working on a new initiative with AASCU’s American Democracy Project. I’ve chosen “Digital Polarization” as my focus. This phrase, which enjoyed a bit of use around the time of Benjamin Barber’s work in the 1990s but has not been used much since, was chosen partially because it remains a bit of a blank slate: we get to write what it means in terms of this project. I mean to use it as a bit of a catch-all to start an array of discussions on what I see as a set of emerging and related trends:

The impact of algorithmic filters and user behavior on what we see in platforms such as Twitter and Facebook, which tend to limit our exposure to opinions and lifestyles different than our own.

The rise and normalization of “fake news” on the Internet, which not only bolsters one’s worldview, but can provide an entirely separate factual universe to its readers.

The spread of harassment, mob behavior, and “callout culture” on platforms like Twitter, where minority voices and opinions are often bullied into silence.

State-sponsored hacking campaigns that use techniques such as weaponized transparency to try and fuel distrust in democratic institutions.

All good. So why, then, “digital polarization” as the term?

Digital Polarization

It’s probably a good time to say that on net I think the Internet and the web have been a tremendous force for good. Anyone who knows my history knows that I’ve given 20 years of my life to figuring out how to use the internet to build better learning experiences and better communities, and I didn’t dedicate my life to these things because I thought they were insignificant. I still believe that we are looking at the biggest increase in human capability since the invention of the printing press, and that with the right sort of care and feeding our digital environments can make us better, more caring, and more intelligent people.

But to do justice to the possibilities means we must take the downsides of these environments seriously and address them. The virtual community of today isn’t really virtual — it’s not an afterthought or an add-on. It’s where we live. And I think we are seeing some cracks in the community infrastructure.

And so as I’ve been thinking about these questions, I’ve been looking at some of history’s great internet curmudgeons. For example, I don’t agree with everything in Barber’s 1998 work Which Technology and Which Democracy?, but so much of it is prescient, as is this snippet:

Digitalization is, quite literally, a divisive, even polarizing, epistemological strategy. It prefers bytes to whole knowledge and opts for spread sheet presentation rather than integrated knowledge. It creates knowledge niches for niche markets and customizes data in ways that can be useful to individuals but does little for common ground. For plebiscitary democrats, it may help keep individuals apart from one another so that their commonalty can be monopolized by a populist tyrant, but for the same reasons it obstructs the quest for common ground necessary to representative democracy and indispensable to strong democracy.

Barber’s being clever here, and playing on multiple meanings of polarization. In one sense, he is predicting political polarization and fragmentation due to new digital technologies. In another, he is playing on the nature of the digital, which is quite literally polarizing: based on ones and zeros, likes and shares, rate-ups and rate-downs.

Barber goes on, pointing out that this polarized, digital world values information over knowledge, snippets over integrated works, segmentation over community. He’s overly harsh on the digital here, and not as aware, I think, of the possibilities of the web as I’d like. But he is dead on about the risks, as the last several years have shown us. At its best the net gives us voices and perspectives we would have never discovered otherwise, needed insights to pressing problems just when we need them. But at its worst, our net-mediated digital world becomes an endless stream of binary actions — like/don’t like, share/pass, agree/disagree, all in an architecture that slowly segments and slips us into our correct market position a click at a time, delivering us a personalized, segregated world. We can’t laud the successes of one half of this equation without making a serious attempt to deal with the other side of the story.

The “digital polarization” term never took off, but maybe as we watch the parade of fake news and calculated outrage streaming past us these days, it’s as good a time as any to reflect along with our students on the ways in which the current digital environment impacts democracy. And I think digital polarization is a good place to start.

This is not just about information literacy, by the way. It’s not about digital literacy either. Certainly those things are involved, but that’s the starting point.

The point is to get students to understand the mechanisms and biases of Facebook and Twitter in ways that most digital literacy programs never touch. The point is not to simply decode what’s out there, but to analyze what is missing from our current online environment, and, if possible, supply it.

And that’s important. As I’ve said before, as a web evangelist in education it’s so easy to slip into uncritical practice and try to get students to adopt an existing set of web behaviors. But the peculiar power of higher education is that we aren’t stuck with existing practice — we can imagine new practice, better practice. And, in some cases, it’s high time we did.

A Student-Powered Snopes, and More

And so we have the Digital Polarization Initiative. The idea is both to put together a curriculum that encourages critical reflection on the ways in which our current digital environment impacts civic discourse, and to provide a space for students to do real work that helps address the more corrosive effects of our current system.

Right now I am in the process of building curriculum, but we have the basics of one of the projects set up and outlined on the site. The News Analysis project asks students to apply their research skills and disciplinary knowledge to review news stories and common claims for accuracy and context. Part of the motivation here is for students to learn how to identify fake news and misinformation. Part of the motivation is for students to do real public work: their analyses become part of a publicly available wiki that others can consult. And part of it is to try to model the sort of digital practice that democracy needs right now.

In my dream world, students not only track down fake news, but investigate and provide fair presentations of expert opinion on claims like “the global warming this year was not man-made but related to El Niño” or “Cutting bacon out of your diet reduces your risk of bowel cancer by 20%.” Importantly, they will do that in the context of wiki, a forgotten technology in the past few years, but one that asks that we rise above arguing our personal case and try instead to summarize community knowledge. Wiki is also a technology that asks that we engage respectfully with others as collaborators rather than adversaries, which is probably something we could use right about now.

There will be other projects as well. Analyzing the news that comes through our different feeds is an easy first step, but I’d love to work with others on related projects that either examine the nature of present online discourse or address its deficiencies. And we’re trying to build curriculum there as well to share with others.

In any case, check it out. We’re looking to launch it in January for students, and build up a pool of faculty collaborators over the next couple weeks.

Almost a month ago, I wrote a post that would become one of my most popular on this site, a post on the They Had Their Minds Made Up Anyway Excuse. The post used some basic things we know from the design of learning environments to debunk the claim that fake headlines don’t change people’s minds because “we believe what we want to believe.” The “it didn’t matter” theory asserts that only people who really hated Clinton already would believe stories that intimated that Clinton had killed an FBI agent, so there was likely no net motion in beliefs of people exposed to fake news.

This graf from BGR most succinctly summarizes the position of the doubter of the effects of fake news:

On a related note, it stands to reason that most individuals prone to believing a hyperbolic news story that skews to an extreme partisan position likely already have their minds made up. Arguably, Facebook in this instance isn’t so much influencing the voting patterns of Americans as it is bringing a prime manifestation of confirmation bias to the surface.

In the weeks after the election I saw and heard this stated again and again, both in the stories I read and in the questions that reporters asked me. And it’s simply wrong. As I said back in November, familiarity equals truth: when we recognize something as true, we are most often judging if this is something we’ve heard more often than not from people we trust. That’s it. That’s the whole game. See enough headlines talking about Eastasian aggression from sources you trust and when someone asks you why we are going to war in Eastasia you will say “Well, I know that Eastasia has been aggressive, so maybe that’s it.” And if the other person has seen the same headlines they will nod, because yes, that sounds about right.

How do you both know it sounds right? Do you have some special area of your brain dedicated to storing truths? A specialized truth cabinet? Of course not. For 99% of the information you process in a day truth is whatever sounds most familiar. You know it’s true because you’ve seen it around a lot.

More on that in a minute, but first this update.

Buzzfeed Confirms Familiarity Equals Truth

Here’s what they did. They surveyed 3,015 adults, testing five of the top fake headlines of the last weeks of the election against six real headlines. Some sample fake headlines: “FBI Agent in Hillary Email Found Dead in Apparent Murder-Suicide” and “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement.” Some sample real ones: “I Ran the CIA. Now I’m Endorsing Clinton” and “Trump: ‘I Will Protect Our LGBTQ Citizens'”.

They then asked respondents whether they had seen that headline, and if they had, whether that headline was accurate. Perhaps unsurprisingly, Trump voters who had seen pro-Trump headlines believed them at high rates that approached or exceeded belief in true news stories. Ninety-six percent of Trump voters who had seen a headline that Trump sent his own plane to rescue 200 marines, for example, believed it. Eighty-nine percent of Trump voters who had seen headlines about Trump protesters being paid thousands of dollars to protest believed it.

This in itself should give second thoughts to the thesis that fake news only affects extreme partisans: it’s absurd to claim that the 98% of Republicans who remembered that headline and believed it represent a particularly partisan fringe.

Now, caveats apply here: surveys about political matters can get weird, with people occasionally expressing themselves in ways that they feel express their position rather than their literal belief (we had debates over this issue with the “Is Obama a Muslim?” question, for instance). Additionally, we are more prone to remember what we believe to be true than what we believe to be false — so a number like “98% of Republicans who remember the headline believed it” does not mean that 98% of Republicans who *saw* that headline believed it, since some number of people who saw it may have immediately discounted it and forgotten all about it.

Here’s the stunning part of the survey. As mentioned above, Trump voters rated pro-Trump and anti-Clinton stories true on average, and overwhelmingly so. The lowest percentage of Trump voters believing a fake headline was accurate was 76%, and the highest was 96% with an average of 86% across the five headlines. But even though the headlines were profoundly anti-Clinton, 58% of the Clinton voters who remembered seeing a headline believed the headline was accurate.

Familiarity Trumps Confirmation Bias

I want to keep calling people’s attention to the process here, because I don’t want to overstate my claim. If I read the study correctly, 1,067 Clinton voters completed it. Of those voters, 106, or 10%, remembered seeing a headline stating that an FBI agent implicated in leaks of Clinton’s emails had died in a suspicious murder-suicide. The fact that this tracks people who remembered the headline, and not people who saw it, is important to keep in mind.

Yet among those 10% of Clinton supporters who remember seeing the headline “FBI Agent Suspected in Hillary Leaks Found Dead in Apparent Murder-Suicide” over half believed it was accurate.

These 10% of Clinton voters who ended up seeing this may differ in some ways from the larger population of Clinton voters. They may have slightly more conservative friends. They may be younger and more prone to get their news from Facebook. In a perfect world you would account for these things. But it is difficult to believe that any adjustments are going to overcome a figure like this. Over fifty percent of Clinton voters remembering fake headlines that were profoundly anti-Clinton believed them, and no amount of controlling for differences is going to get that down to a non-shocking level.
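As a back-of-envelope check on those figures, here’s the arithmetic in Python. Note one approximation on my part: the 58% belief rate reported in the survey is the average for Clinton voters across the fake headlines, so applying it to this single headline is an estimate, not a survey result.

```python
# Back-of-envelope arithmetic for the Clinton-voter numbers.
# Applying the 58% average belief rate to this one headline is an
# approximation; the survey reports 58% as an average across headlines.

clinton_respondents = 1067   # Clinton voters who completed the survey
remembered = 106             # remembered the murder-suicide headline
belief_rate_among_rememberers = 0.58

share_remembering = remembered / clinton_respondents            # ~10%
est_believers = belief_rate_among_rememberers * remembered      # ~61 people
share_of_all_respondents = est_believers / clinton_respondents  # ~6%

print(f"remembered the headline: {share_remembering:.0%}")
print(f"estimated believers, as a share of all Clinton respondents: {share_of_all_respondents:.1%}")
```

So even a shocking 58% belief rate among rememberers works out to something like 6% of all Clinton respondents believing this particular headline, which is why the remember/see distinction matters for sizing the effect.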

Why would Clinton voters believe such a headline at such high rates? Again, familiarity equals truth. We choose the people we listen to and read, and then when thinking about a question like “Did Obama create more jobs than George W. Bush?” we don’t think “Oh, yes, the Wall Street Journal had an article on that on page A4.” We simply ask “Does that sound familiar?”

That Troublesome Priest

So how does this work? I’ll diverge a bit here away from what is known and try to make an informed guess.

Facebook’s quick stream of headlines is divorced from any information about their provenance that would allow you to discount them. My guess is each one of those headlines, if not immediately discarded as a known falsehood, goes into our sloppy Bayesian generator of familiarity, part of an algorithm that is even less transparent to us than Facebook’s.

Confirmation bias often comes a few seconds later, as we file the information or weight its importance. Based on our priors we’re likely to see something as true, but maybe less relevant given what we know. I’d venture to guess that the Clinton voters who believed the murder-suicide story look very much like certain Clinton voters I know: people who will “hold their nose and vote for her” even though there is something “very, very fishy about her and Bill.” The death of the FBI agent is perhaps in the unproven, but disturbing, range.

You see this in practice, too. I’ve had one Clinton voter tell me “I’m not saying she killed anyone herself, or even ordered it. But sometimes if you’re powerful and you say someone is causing you problems, then other people might do it for you. Like in Becket.”

That is a close-to-verbatim quote from a real Clinton voter I talked to this election. And for me, statements like that are signs that people really do wrestle with fake news, because no matter what your opinion of Clinton is, she most definitely has not had people killed. (And no, not even in that “Who will rid me of this troublesome priest?” Becket way.)

Given our toxic information environment and the human capacity for motivated reasoning, I’m certain that many folks were able to complete the required gymnastics around the set of “facts” Facebook provided them. I’m just as sure a bunch of people thought about that Olympic-level gymnastics routine and just decided to skip it and stay home. How many? I don’t know, but in an election won by less than 100,000 votes, almost everything matters.

In any case, I said this all better weeks ago. I encourage you to read my more comprehensive treatment on this if you get the chance. In the meantime, I’d remind everyone if you want to be well-informed it’s not enough to read the truth — you also must avoid reading lies.

When I got a Shuttleworth Flash Grant one year ago, I knew just what I wanted to do. I wanted to make Wikity.

The idea of Wikity would evolve much over the next year, but the core idea of Wikity was simple: what if we bent the world of social media a bit away from the frothy outrage factory of Twitter and Facebook towards something more iterative, exploratory, and constructive? I took as my model Ward Cunningham’s excellent work on wiki and combined it with some insights on how to make social bookmarking a more creative, generative endeavor. The shortest explanation of Wikity I can provide is this: Wikity is social bookmarks, wikified.

It took me four months just to get to that explanation.

What does “wikified social bookmarks” mean? Well, like most social bookmarking tools, we allow people to host private sites, but encourage them to share their bookmarks and short notes with the world. And while the mechanisms are federated, not centralized, we allow people to copy each other’s bookmarks and notes, just like Delicious or Pinboard.

That’s what’s the same. But we also do three things current social bookmarking sites do not.

We don’t bookmark pages. We bookmark and name ideas, data, and evidence. A single page may have multiple bookmarks for the different ideas, theories, and data referenced in the page.

We provide a simple way of linking these “idea” bookmarks, so that finding one idea leads naturally to other ideas. Over time you create an associative map of your understanding of an issue.

As we revisit pages over time, we expand and update them, building them out, adding notes, sources, counterarguments, summaries, and new connections.

And the end result, after a year and about 300(!) hours of work, is something I love and I use every day. It’s a self-hosted bookmark manager for linked ideas and data that has (for me) revolutionized my ability to think through issues and find connections between ideas I would have otherwise missed. If you want to see me construct an argument about something, you can read my blog. But if you want some insight into how I conceptualize the space, you can visit my Wikity site, and follow a card like Anger Spreads Fastest.

I use it every day, and have accumulated over 2,000 “cards” in it, varying from interesting clippings on subjects of interest to more lengthy, hyperlinked reflections. As outlined a couple years ago in my Federated Education keynote, the cards often start out as simple blockquotes or observations, but often build over time into more complex productions, with links, sources, bibliographies, videos, and additional reflections.

(I’m tempted to recount every development decision here, to explain all the expansion and tweaking the product has undergone to get to its current state. I know some people may be looking at it and thinking “300 hours? Really?” But the path to product was never a straight one.)

Wikity Today

Wikity, like all Shuttleworth projects, is open. It’s constructed as a WordPress theme, and you can download it from GitHub. It does a lot more than your average theme, but installation is as simple as uploading it to your WordPress theme directory and applying the theme. I learned PHP specifically for this project, so it’s not the most beautiful code you’ve ever seen. But it does work, and shows another way of thinking about our web interfaces and daily habits.

Now that Wikity is easily installable as a theme on self-hosted WordPress, I’ll be phasing out signups on the wikity.cc site, which I was running as a central space for new users to try out Wikity. In its place I hope to put an aggregation site that makes it easier for different people’s Wikity installs to see what other people are writing about. That will reduce the cost and effort of running and maintaining an enterprise server for other people’s content. I’ll be reaching out to the owners of the 127 sites on there.

I should also mention to some early users that the scope of what Wikity does has actually been reduced in many ways. There’s a way in which this is sad, but in other ways the biggest advance over the past year in Wikity has been realizing that the core of Wikity could be expressed as “social bookmarks, wikified.” If you’ve ever built a product, you know what I mean. If you haven’t — trust me, it’s a painful but necessary process.

You can still use Wikity for a variety of things other than wikified bookmarks and notes — I worked with a professor in the Spring, for example, to use it to build a virtual museum, and as far as I can tell, Wikity is the simplest way to run a personal wiki on top of WordPress. But the focus is a hyperlinked bookmarking and notetaking system, because after a year of use and 2,000 cards logged, I can tell you that is where the unique value is. The beautiful thing is if you think the value is somewhere else the code is up there and forkable — sculpt it to your own wishes!

Finally a promise: Wikity core is safe from the demons of decay, at least for now. It will continue to be maintained and improved, mainly because I am addicted to using it personally. On top of that, we’re currently organizing a Wikity event for Christmas break, to introduce educators to the platform as a learning and research tool for students.

Now let’s talk about some of the struggles we’ve been through here, and where we’re going in the future.

The Long and Winding Road

I have to admit, I thought early on that there would be a larger appetite for Wikity. There may still be. But it has proved harder than I thought.

Part of the reason, I think, is that the social bookmarking world that I expected Wikity to expand on is smaller than I thought, and has at least one good solid provider that people can count on (Pinboard, written and maintained by the excellent Maciej Cegłowski). More importantly, people have largely built a set of habits today that revolve around Twitter and Facebook and Slack. The habits of personal bookmarking have been eroded by these platforms which give people instant social gratification. In today’s world, bookmarking, organizing, and summarizing information feels a bit like broccoli compared to re-tweeting something with a “WTF?” tag and watching the likes roll in.

I had a bunch of people try Wikity, and even paid many people to test it. The conclusion was usually that it was easy to use, valuable, cool — and completely non-addictive. One hour into Wikity people were in love with the tool. But the next day they felt no compulsion to go back.

We could structure Wikity around social rewards in the future, and that might happen. But ultimately, for me, that struggle to understand why Wikity was not addictive in the ways that Twitter and Facebook were ended up being the most important part of the project.

I began, very early on, compiling notes in Wikity on issues surrounding the culture of Twitter, Facebook, social media, trolling, and the like. Blurbs about whether empathy was the problem or solution. Notes on issues like Abortion Geofencing, Alarm Fatigue, and the remarkable consistency of ad revenue to GDP over the last century. Was this the battle we needed to have first? Helping people understand the profound negative impact our current closed social media tools are having on our politics and culture?

I exported just my notes and clippings on these issues from Wikity the other day, as a PDF. It was over 500 pages long. I was in deep.

As the United States primary ramped up, I became more alarmed at the way platforms like Facebook and Twitter were polarizing opinions, encouraging shallow thought, and promoting the creation and dissemination of conspiracy theories and fake news. I began to understand that the goals of Wikity, and of any social software meant to promote deeper thought, began with increasing awareness of the ways in which our current closed, commercial environments are distorting our reality.

Recently, I have begun working with others on tools and projects that will help hold commercial social media accountable for their effect on civic discourse, and demonstrate and mitigate some of their more pernicious effects. Tools and curriculum that will help people to understand and advocate for the changes we need in these areas: algorithmic transparency, the right to modify our social media environments, the ability to see what the feed is hiding from us, places to collectively fact-check and review the sources of information we are fed.

Wikity will continue to be developed, but the journey that began with a tool ended at a social issue, and I think it’s that social issue — getting people to realize how these commercial systems have impacted political discourse and how open tools might solve the problem — that most demands addressing right now. I don’t think I’ve been this passionate about something in a very long time.

I’ve had some success in getting coverage of this issue in the past few weeks, from Vox, to TechCrunch, to a brief interview on the U.S.’s Today Show this morning.

I think we need broader collaborations, and I think open tools and software will be key to this effort. This is a developing story.

So it’s an interesting end to this project — starting with a tool, and getting sucked into a movement. Wikity is complete and useful, but the main story (for me) has turned out to lead beyond that, and I’m hurtling towards the next chapter.

Was this a successful grant? I don’t know what other people might think, but I think so. Freed from the constrictions of bullet pointed reports and waterfall charts, I just followed it where it led. It led somewhere important, where I’m making a positive difference. Is there more to success than that?

Thanks again to the Shuttleworth Foundation which kicked me off on this ride. I’ll let you all know where it takes me in the future.

(And to my Wikity fans and users — don’t worry: Wikity is not going away. As long as I can’t live without it, it’s going to continue to be developed, just a bit more slowly).

Here’s a fake story that was shown in a number of places on the web during the campaign, claiming that protesters of Donald Trump were being paid. This has been covered so many times by so many fake and satirical sites that it is now an article of faith among Republicans, due to exposure effects.

Here’s a major source of that hoax:

You’ll note the publish date: November 11.

That’s what the site looks like today. But we can see what it looked like previously, courtesy of archive.org’s Wayback Machine.

Here’s what it looked like in March, sporting a publish date of March 24:

Here it is in June, sporting a date of June 16:

And in September it sported a date of September 11:

So it’s safe to conclude that one of the tricks in the fake news toolbelt is creating a feeling of recency through altering dates.

Another note: given the date futzing, I’m not sure we can trust the view counter, but captures from the Wayback Machine do show it ticking up in a reasonable way. If the view counter is accurate (big if), we may also have a ratio of shares to reads.

The page was shared 423,000 times. It was viewed (by people coming through all sources, including but not limited to Facebook) 70,000 times. If (and again, a big if) we can trust the counter, the maximum click-through rate from Facebook would be 70/423, or about 17%. In reality, it would likely be lower than that, as a significant number of people would come through other sources.

To put it another way, at least 83% of people who shared this never looked at it. Note that this is actually a higher rate than the roughly 60% a recent study found for sharing without reading on Twitter.
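Here’s that arithmetic spelled out, trusting the page’s own counters (the same big “if” as above):

```python
# Share-to-read arithmetic, taking the page's own counters at face value.

shares = 423_000   # Facebook shares reported for the page
views = 70_000     # page views from ALL sources, not just Facebook

# Pretend every view arrived via a Facebook share. That overcounts
# Facebook clicks, so it gives an UPPER bound on the click-through
# rate, and therefore a LOWER bound on sharing-without-reading.
max_clickthrough = views / shares          # ~17%
min_shared_unread = 1 - max_clickthrough   # ~83%

print(f"click-through rate, at most: {max_clickthrough:.0%}")
print(f"shared without reading, at least: {min_shared_unread:.0%}")
```

The bound only tightens from here: every view that actually came from search, links, or other platforms pushes the true shared-without-reading figure above 83%.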

So it’s a big if as to whether we can trust this counter, but if we can, there are a couple of interesting possibilities:

People share without reading on Facebook more than on Twitter

More highly viral content has a worse share-to-clickthru ratio

Explicitly political content has worse share-to-clickthru ratios

None of these are firm conclusions, incidentally; just ideas I’ll be keeping in mind for the future.

I’m playing with this idea of Facebook posting as “rebuttal shopping”. The idea is that a lot of stuff that goes viral on Facebook is posted as an implicit rebuttal to arguments that the poster feels are being levied against their position. This stuff tends to go viral on Facebook because the minute the Facebook user sees the headline they know this is something they need, an answer to a question or criticism that irks them.

Maybe the idea holds together and maybe it doesn’t. But I’ll occasionally be looking at new things that are trending on Facebook and seeing how they do or don’t fit that pattern.

Today in rebuttal shopping we have this, on Trump’s IQ:

I don’t think I have to prove this one fake: there is no official record of presidential IQ scores throughout history, and of course intelligence tests didn’t become common until well into the 20th century. Beyond that, we have the card description itself, which references a “think thank” [sic] that proposed this, and the odd phrase “Intelligence professors”. And if you do click through, it’s a blog post on Prntly.com that references a “report” that turns out to be someone talking on a forum.

The number of shares on this is not groundbreaking, but it’s clearly in viral territory for something posted two days ago: 24,836 shares.

One of the comments on it, liked 59 times, seems to support the rebuttal shopping idea:

“Democrats love to talk about IQ’s. Well they won’t be talking about Trumps. They thought Obama was the smartest president ever.”

However, a lot of the other comments rant about unrelated things, so maybe comments aren’t the best result here.

This IQ issue seems to be an ongoing thing. A satirical article on Empire News last year claiming that Obama scored the lowest on an IQ test of any president in history garnered over 30,000 shares, and a chain letter hoax in the early 2000s had George W. Bush as the lowest in history.

Where does this leave “rebuttal shopping”? I’m not sure. The new Trump fake news seems to follow the rebuttal pattern, as does the Obama story. The Bush story confirms something that wasn’t much disputed at the time, however, and looks more like shopping for confirmation than rebuttal. It certainly is not resolving any cognitive dissonance.

We’ll play with the idea a few more days and see what it brings to the fore.

The Stream is a weird place. Your Facebook feed, for example, is a series of posts by various people that in some ways resembles a forum, but in other ways it’s not at all like a forum. When you post something to Facebook, there’s not an explicit prompt you are responding to, which seems non-problematic when you are posting a cool new video you like, but a bit weirder when you are posting random articles.

A recent Jay Rosen tweet thread got me thinking about this a bit more deeply. Rosen suggests that the reason that fake news spiked before the election was demand-driven: many Republican voters were feeling uneasy about voting for Trump, and articles where Hillary was knocking off FBI agents and funding ISIS helped them feel better about that:

This is interesting, because the place where I became obsessed with the fake news phenomenon was in the primary, when a lot of Sanders supporters I respected and admired for their intelligence suddenly were posting bizarre vote-rigging stories.

The “study” the page linked to was referred to as a “Stanford Study”, a claim which took 60 seconds to debunk. It was the work of a current Stanford psychology student: no background in politics or polling, no Stanford appointment. And the study itself was just a paper, written and shared via Google Drive; it hadn’t been peer-reviewed, or even designed with any more rigor than one might apply to a blog post. When you dug into the paper, there was nothing there: the computation of some effect sizes between states with paper trails and those without, and language indicating that the author might not in fact have understood how exit polls work, or have been aware of the shift in demographic support for Clinton between 2008 and 2016. (In fact, in this respect there was an error that would disqualify it from ever being taken seriously.)

I didn’t expect most of my friends to get the math, but even so, absent the appeal to authority, and considering the major barriers to rigging an election, one would assume people wouldn’t re-share it. But re-share it they did, and quite a lot.

And this is where I think Rosen’s point is interesting. If you think about the Stream, with its lack of explicit prompts, how does one know what to share? One thought is that as you go through your stream you are doing something like shopping: you’re explicitly in the market for something. And very often that something is a rebuttal to an implied argument that is giving you some cognitive dissonance.

For the Trump supporters, the dissonance was that Trump was unqualified and racist and Hillary was just (in their minds) corrupt. But was she corrupt enough? And if not, how could they vote for him? Changing Clinton to a murdering ISIS follower allowed them to follow their gut, which really wanted to vote for Trump. It gave their gut the evidence it needed to rule the day.

For the more militant Sanders conspiracists, the dissonance was between the results they felt Sanders should have and the ones that he got. People’s gut had told them Sanders would be broadly popular, but the reality was that he was not quite popular enough. On the verge of having to accept that, Facebook threw out a lifeline for the gut: the election was rigged. Stanford scientists had proved it.

If you go through a few of the public (i.e. shared-to-all) posts on this, which you can do with this search, you’ll find something really interesting: so many of the comments people write when sharing the piece are of the type “I knew it! I knew it!” It’s the sort of reaction someone has when they are struggling to maintain belief in the face of cognitive dissonance and suddenly stumble on a lifeline.

(Note: the post above is a public post, i.e. meant to be shared with the world. You can’t see private or friends-only posts with that search, and you shouldn’t share them. And even though the post is public, I’m blacking out the person’s name out of consideration.)

This isn’t to say that all of this is innocent in the least. The “Stanford Study” that was neither a study nor from Stanford was shared on Facebook at least 100,000 times via different sites, including by HNN (share count of their version: 82,000), which changed the headline to the zippier “Odds Hillary Won Without Widespread Fraud: 1 in 77 Billion Says Berkeley, Stanford Studies”. (Spoiler alert: the Berkeley study wasn’t from Berkeley either.) And it got a big assist from state-sponsored entities like the Russian-owned RT News in an episode of Redacted Tonight, which was viewed approximately 125,000 times on YouTube:

(That’s YouTube views, BTW, which are serious stuff — you have to sit down and watch a significant percentage of it before the view will register).

The RT segment ups the ante, really highlighting the Stanford name, to much laughter. It’s a study out of a little community college called Stanford, the host jokes (again, it’s not). It’s getting to the point where it’s really embarrassing, the host says, how people won’t admit it was rigged. How much evidence do you need?

There’s a similar case from the past couple of days: a widely shared Breitbart story that passed around a ludicrous map under a misleading headline about Trump winning the popular vote (in the heartland).

Now any person with half a brain can see how ridiculous this map is — if they stop to look at it. And any person that can stop to parse a sentence can see the gymnastics required here to claim this victory.

The people reposting this are not stupid. But crucially, they don’t stop to think about it. They see and they click, I think, because they know the moment they see this that this is a rebuttal they have been in the market for. They don’t need to evaluate it, because this is precisely what they have been looking for.

This is a bit rambling, but what I mean to say is that I think Rosen is onto something here about the demand side of fake news. I’m starting to think of feed skimming as a sort of shopping experience, where you know the ten sorts of things you are looking for this week. Some paper towels, a new sponge to replace the ratty old one, and a rebuttal to your snobby cousin who posted that article that made you feel for twenty seconds that you might be wrong about something. Just what I was looking for!

As I’ve said before, this doesn’t mean that the news only confirmed what you thought already. In fact, quite the opposite: this process, over time, can pull you and your friends deeper and deeper into alternate realities, based on well-known cognitive mechanisms. But thinking of this process as not so much one of discovery as rebuttal shopping — often brought on by cognitive dissonance — is useful at the moment, and I thank Jay Rosen for that.