Some very smart people from the worlds of media, business, and technology made great arguments for each side. It was entertaining and informative. I’m pleased to say that the optimists won, according to the audience vote at the end.

Why would anyone think we can’t fix the fake news problem? In brief, the problem is hard to define, pervasive, and systemic, and there will always be bad actors trying to game the system. Think of it like hacking, or information warfare. Plus, Google and Facebook make money on fake news, and some say they’re just giving users what they want.

I unsuccessfully tried to get a question in at the end about the faulty premise of the debate. How can you even ponder a cure until you’ve more clearly defined the problem? As I pointed out in my last post, there are many varieties of fake news (propaganda, misinformation, counterfeit news sites, and yes, lies, damned lies). And it is almost impossible to define the concept of “news” itself, or “truth.”

Looking beyond this one debate, fake news has inflamed passions, as it may have tilted the US presidential election and encouraged a nut to shoot up a pizzeria. Any discussion about solutions inevitably gets into areas like censorship, free speech, the roles of media and the government, and the responsibility of business.

I don’t think it will be as easy as fighting spam (this CIO article implies that AI has met its match here). But I do think we can find a fix, or more likely a series of fixes, assuming we can agree on a definition and on what would count as solving this.

I attempt to do so below, and also share my thoughts on the most contentious issues.

Defining the Problem

We’ll never get rid of misinformation, bias, wacky theories, rumors or propaganda. I propose defining fake news as lies or false information packaged as news. Let’s include counterfeit news sites and any gaming of algorithms and news feeds to propagate false information.

The Social Network’s Role

Some place the problem at the doorstep of social networks and online news aggregators, such as Facebook and Google respectively. Others say that it is not the platforms’ job to be truth-tellers. Should they hire fact checkers? Who then checks the fact checkers?

Many say that Facebook and Google have no incentive to clean up the mess, as their business models are based on clicks and sharing regardless of veracity. I completely disagree. If they don’t, their brands and reputations (and hence businesses) will take a beating. No one wants to spend time in places where there is lots of junk.

They can and should take measures to combat fake news. I mean, they’re already policing their sites for bullying, obscenity, grisly pictures and other clearly unacceptable things.

It could involve a combination of crowd correction, e.g. a way for users to flag fake news items, and technology akin to spam detection. For all the grousing that it is too hard a problem to solve, there are already articles describing promising approaches.
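To make the idea concrete, here is a minimal sketch of how the two signals might combine: a crowd signal (what share of viewers flagged an item) blended with a crude spam-style text score. Every name, weight, and threshold here is hypothetical, chosen only for illustration; a real system would use far richer features and models.

```python
def flag_ratio(flags: int, views: int) -> float:
    """Crowd signal: fraction of viewers who flagged the item (0 if unseen)."""
    return flags / views if views else 0.0

def keyword_score(headline: str, suspect_terms: set) -> float:
    """Spam-detector analogue: share of headline words on a suspect-term list."""
    words = headline.lower().split()
    if not words:
        return 0.0
    return sum(w in suspect_terms for w in words) / len(words)

def fake_news_score(flags, views, headline, suspect_terms,
                    crowd_weight=0.7, text_weight=0.3):
    """Blend crowd and text signals into one 0..1 score (weights are illustrative)."""
    return (crowd_weight * flag_ratio(flags, views)
            + text_weight * keyword_score(headline, suspect_terms))

SUSPECT = {"shocking", "miracle", "exposed"}
score = fake_news_score(80, 100, "Shocking miracle cure exposed", SUSPECT)
# Items scoring above some review threshold could be queued for human fact checkers
# rather than removed automatically.
print(score)
```

The point of the sketch is the division of labor: cheap automated signals do the triage, and humans make the final call, much as spam filters route suspect mail to a folder instead of deleting it outright.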

Some argue for greater regulation and transparency. Since algorithms play a growing role in determining what news we see on the networks, shouldn’t we all better understand how they work? Why not make them public, like open source software?

Others say that doing this would make it easier for bad actors to understand and manipulate the programs.

Can’t the government come up with laws to make sure that news feeds are unbiased and don’t spread false information? Or, perhaps there should be some watchdog group or fact checking organization to keep the networks “honest.”

Again, I think it is incumbent on the tech companies to clean up the mess. But this should not go so far as making them hand over their algorithms. It’s their intellectual property. And I am leery of government oversight or any third party organization that enforces truth telling by decree.

I am in favor of setting up a group that proposes standards for fake news detection and eradication. This industry body could factor in the interests of all parties – the social networks, government, users, and the media – to issue guidelines and also audit the networks on a voluntary basis (think the MPAA movie ratings, the Parental Advisory Label for recorded music, or the Comics Code Authority).

If Facebook, Google, Reddit, Apple News and others want to earn the seal of approval, they’d need to open up their systems and algorithms to inspection to show they are not aiding the propagation of fake news.