The emergence of mainstream social networks like Twitter and Facebook makes it easier for misleading, false, and agenda-driven information to quickly and seamlessly spread online, deceiving people or influencing their opinions.

There has always been misleading information. Local TV news promos always tell you to stay tuned to hear about X at 6pm, only to find out X really meant Y.

This decentralisation process and its effects have been going on forever.

First we had the really old days, when few books were printed at all, and at great cost. There was less misleading information in print, but it wasn't very democratic.

Then came the printing press, and more people could express themselves. So misleading books and documents became more common.

Later the media grew, and with it the number of sources went up even further. Niche publications that spread 'unpopular' opinions appeared, and misleading info spread further.

And with every medium since, the effects grew. Radio, TV, the internet in general, social media sites. Really, it's just an inevitable consequence of fewer gatekeepers and more people being able to publish their thoughts.

Not so fast, Mr Bond. You mean to tell us that the information in early manuscripts such as the Bible and other religious tracts was not misleading? My point is that though they may have been culturally agreed to be true, these few authoritative books suffered from a lack of open access to information. So I believe we need to separate honest intention towards truth from actually empirically and theoretically truthful writings. Another way to put this is that it is not just the technology but the social relations that go with that technology that matter.

There has always been misleading information, but that's not a good thing. Growing up, I think the promise of the internet was that with a more decentralized source of information we'd be better equipped to see and understand what's really going on in the world. What the spread of this kind of fake news shows is that it was never really the fault of the TV news and big corporations that there was misleading information. We want to be lied to.

People don't want to be lied to. They want a narrative, they want to be told a story that integrates information into something digestible. All narratives are simplifying, some are based on emotion, some are based on facts, but all are false, to one degree or another.

What's really going on is very hard to figure out. The Arab Spring took everyone by surprise; it's not enough to look at individual leaders and what they say and do, it's the superposition of multiple trends that are the cause of things.

Different people have different appetites for trying to figure it all out, and some have a hard time with unpleasant or contradictory information. They can become numb and start becoming avoidant, clinging to simpler inflexible positions that make life easier to live.

Others develop shortcuts to understanding the world and start coming to very strange conclusions, conspiracy theories that connect not through cause and effect, but by who benefits: outcome-oriented post hoc rationalisation. Cynicism protects the ego by dismissing people who talk about the actual complexities as naive. But fundamentally they want to believe there's order to the chaos.

I personally think that the lack of a common enemy has created some kind of social autoimmune disorder, where reality isn't testing and proofing ideas, and our collective attention is free to amplify small irritants into national issues. We don't have enough real problems. Even terrorism today is a non-problem, looked at in proportion to its lethality.

The sad thing is we have plenty of big problems—overpopulation, pollution, environmental collapse—we just can't get behind them because we're wired for tribalism, and there's just not enough common understanding on a large scale to put any wood behind the arrow of sustainability.

Hopefully the fact that deception is now obvious will cause people to re-evaluate the way they use their trust networks. I hope in the future that people use trust graphs as a way to estimate the reliability of a claim they see from an unknown party.

What if instead we could provide the tools to establish a soundness graph? Evaluate arguments on their own merits rather than their source.

Even arguments based on false premises can be sound. Just as arguments can be based on true premises and still be unsound. Helping people identify which is which should at least raise the quality of disagreements.

When its premises are false, an argument is always unsound. But it can still be valid. A sound argument is one that is both valid and has true premises.
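The two properties are easy to conflate, so here's a minimal sketch of the distinction in code. The representation is entirely made up for illustration: premise truth values and the validity flag are supplied by hand, since actually checking whether a conclusion follows is the hard part.

```python
# Sketch: an argument is *valid* if the conclusion follows from the premises,
# and *sound* only if it is valid AND every premise is true.
from dataclasses import dataclass, field

@dataclass
class Argument:
    premises_true: list  # truth value of each premise, judged empirically
    valid: bool          # does the conclusion follow from the premises?

    def sound(self) -> bool:
        return self.valid and all(self.premises_true)

# "All fish fly; salmon are fish; therefore salmon fly."
# Valid form, but the first premise is false -> unsound.
valid_but_unsound = Argument(premises_true=[False, True], valid=True)

# "Paris is in France; therefore 2 + 2 = 4."
# The premise happens to be true, but the conclusion doesn't follow.
true_premise_invalid = Argument(premises_true=[True], valid=False)

print(valid_but_unsound.sound())     # False
print(true_premise_invalid.sound())  # False
```

A soundness graph would essentially be crowdsourcing those two hand-supplied inputs separately: one community signal for "is this premise true?", another for "does this inference hold?".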

About building a soundness/validity graph: I've dabbled with building a webapp for that (though the graph is more implied than visual). It's still very basic, but if someone has ideas about where exactly it should go, or how it could be engaging to a community of critical thinkers, please contact me.

My own thinking is that it has to be less of an app and more of a protocol/federated thing augmenting existing channels. Think something like a GitHub bot doing automated reviews, with an SO-like community of metadata authors annotating news articles and such.

That "soundness graph" is the gold standard of reasoning that we're consistently not achieving. Probably because it's too computationally expensive in general for our brains. So instead we use shortcuts.

As a shortcut, a trust graph is actually pretty good. Consider this example monologue:

This guy says really interesting things about cars, but my good friend Sally the Car Mechanic says he's talking nonsense; she's an expert in the domain, so I'll approach the new guy with a huge dose of scepticism.

Note how a trust graph implicitly takes care of known unknowns and unknown unknowns: I know little about cars, Sally knows a lot, so she's able to evaluate the situation better than me. Note also how it handles intent: Sally is my good friend, I trust her to have my best interest in mind, so I know what she's saying is her real opinion, and not e.g. an attempt to keep me as a customer of her workshop.

Intent is hard to judge, but is unfortunately very important when dealing with information that's not directly and independently testable (which is most of it, especially conclusions drawn from testable facts). Trust graphs, or Evidence-based Ad Hominems™, are a very powerful shortcut for evaluating information.
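The Sally monologue above can be sketched as a toy trust-graph lookup. Everything here is invented for illustration (the names, the weights, the discounting rule); the point is only that confidence in a stranger's claim gets routed through whoever in my network vouches for, or against, them.

```python
# Toy trust graph: my direct trust in people (0..1), plus their
# assessments of others (-1..1). All names and numbers are made up.
trust = {
    "sally": 0.9,                 # good friend, domain expert
}
vouches = {
    ("sally", "car_guy"): -0.8,   # Sally says he's talking nonsense
}

def claim_weight(source: str) -> float:
    """Estimate how much weight to give a claim from `source`."""
    if source in trust:
        return trust[source]
    # No direct trust: average the vouches that reach `source`,
    # each discounted by how much I trust the person vouching.
    scores = [trust[v] * w
              for (v, s), w in vouches.items()
              if s == source and v in trust]
    return sum(scores) / len(scores) if scores else 0.0  # unknown -> neutral

print(claim_weight("car_guy"))  # 0.9 * -0.8 = -0.72 -> heavy scepticism
```

A real system would need cycles, decay over path length, and a defence against Sybil accounts, but even this one-hop version captures the "I don't know cars, but I know who does" shortcut.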

> Hopefully the fact that deception is now obvious will cause people to re-evaluate the way they use their trust networks

People only trust sources they already agree with in the first place. Communities believing in a certain narrative will not re-evaluate anything, and all communities believe in specific narratives. The web just made that fact more obvious. No amount of technology or fact-checking tools will change that.

Only the people who understand that the truth is often a valuable asset will find these tools valuable as well.

And they are not even wrong to do so; their only problem is that they are more certain of themselves than they should be.

If someone says something that contradicts the information you already have, you should only change your belief proportional to how often they are right and you are wrong. And the only way to estimate the degree of relative correctness is to observe how often they agree with you ...
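That updating rule can be written down as a one-liner. The specific weighting scheme and all the numbers below are my own invention, just to make the "proportional to relative correctness" idea concrete:

```python
# Sketch: shift my belief toward a disagreeing source in proportion
# to their observed track record relative to mine. Numbers invented.
def update_belief(my_belief: float, their_belief: float,
                  my_hit_rate: float, their_hit_rate: float) -> float:
    """Move my_belief toward their_belief, weighted by relative accuracy."""
    weight = their_hit_rate / (my_hit_rate + their_hit_rate)
    return my_belief + weight * (their_belief - my_belief)

# I'm right ~60% of the time, they're right ~90%: their word moves me a lot.
print(update_belief(0.2, 0.8, my_hit_rate=0.6, their_hit_rate=0.9))  # ~0.56

# Against a coin-flipper (50%), the same disagreement moves me much less.
print(update_belief(0.2, 0.8, my_hit_rate=0.6, their_hit_rate=0.5))
```

Note the catch from the comment above is baked into the inputs: `their_hit_rate` is my *estimate* of their track record, which I can mostly only build from cases where I could check them, i.e. cases where they agreed with me.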

Now I'm just thinking aloud (awrite?), but maybe a general strategy to convince someone would be to find an authority that they trust more than themselves, or to become such an authority yourself. And the last part is hard, since you can only disagree with them if they easily change their mind, thus changing their estimate of relative correctness in your favor.

> Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter

It's also common for 4chan users to have accounts elsewhere, just to force memes or opinions. It's super easy to create multiple Twitter accounts and just start posting away.

You can't do this on 4chan with the same level of effectiveness. You can't target a user directly, and there's a lot of obscure culture that you will have to understand to properly communicate and not be outed as a shill by the community. Plus, there's no incentive to post for people who are used to Facebook, Twitter, Tumblr, etc. You're not gaining followers, you'll never get upvotes, you'll never get likes. I believe this is why the other networks will never influence 4chan the same way.

Plus, people are just plain offensive on 4chan, which can be scary if you're not used to it. I remember when there were raids on Tumblr with gore images, and Tumblr users responded by going to 4chan and posting back gore. They only ended up traumatizing themselves.

It comes from a very old image that floated around on 4chan as long as 8 years ago[0]. In a funny way, it depicted how memes and information spread between the various networks online. I think lots of people from 4chan have understood for a long time how these large digital networks interact with each other, and ultimately they realised how to exploit this over the years.