News and blog

The good, the bad and the ugly: how the press reports on AI

Artificial intelligence is no stranger to the sensationalist headline. It’s one of the hot topics that seems to suffer dramatic and often misguided press coverage on a daily basis. Within the mess of doom and gloom and apocalyptic predictions, there is the odd glimmer of hope and opportunity, but it is infrequent at best, non-existent at worst.

The success of sci-fi epics like Terminator, Westworld and WALL-E, among so many others, shows us that the discussion on AI is finding a mainstream audience. Whether these stories approach AI as a terrifying power that can never be reined in once released, or encourage us to sympathise with the machines (anyone who didn’t develop an attachment to WALL-E doesn’t have a heart), they all anthropomorphise AI.

Many of the darker plots often turn into a race against time, where a small group of plucky humans fight off the robot uprising.

However, in the real world, we seem to have developed an issue with distinguishing between fact and fiction; much of the press surrounding AI seems to have misjudged its role to inform, instead aiming for the cheap thrills and blockbuster action that have become so popular.

Whatever opportunities or threats AI could pose, publications need to ensure they don’t go looking for drama and blow information out of proportion. Occasionally an article rises up from the negativity and gives a more realistic outlook on the future of AI, but these are easy to miss when your eye’s drawn away by ‘shock revelations’, and ‘end of humanity’ claims.

To get an idea of what AI is dealing with, here’s a taste of the headlines we’ve experienced over the years:

The Good

This article, based on the opinions of Nigel Shadbolt, AI professor at Southampton University and cofounder of the Open Data Institute, explores the potential benefits of AI and the risks we need to be wary of. If it wasn’t for AI’s generally apocalyptic coverage, this would probably be an article solely on what we need to consider and guard against when implementing AI, but of course it needed to open on the defensive, countering the unfounded negativity with credible points rather than adding to it.

The article offers food for thought on how we need to approach AI responsibly. It feels realistic; there are no predictions of an army of murderous artificially intelligent androids, but equally there’s a sense of weighing up the possible negative consequences AI could have if we don’t approach implementation with the appropriate level of caution and responsibility.

Most valuable articles on AI tend to discuss the real world issues we could face and the ways in which we can gain from advancements in this technology. Some of the fears surrounding AI are justified to varying degrees and many have expressed their concern – the most commonly quoted being Stephen Hawking and Elon Musk.

However, while we cannot necessarily always predict future outcomes, most negative consequences only seem likely under two conditions: either we fail to sufficiently plan for the limits that should be imposed on AI and for its possible social side-effects, or we design it with evil in mind.

AI will never itself be inherently good or evil, but most technology has the potential to be used as a weapon in the wrong hands. This is an unfortunate fact of progress that we need to contend with, but it should not deter us from continued research.

The Bad

Will.i.am, well known for his AI and machine learning expertise, gives us his opinion on the future of humanity alongside artificial intelligence…

Wait, what?

Basing an article on the dangers of AI on the word of a popstar is a pretty low blow. In other news, Donald Trump leads a women’s rights rally, Kim Kardashian writes a book on the challenges of self-driving vehicles and Wayne Rooney talks about the opportunities of quantum computing. By all means, we should encourage free speech, but do we really want to clog up the press with pointless opinion pieces from those with famous names but no background in the given field, rather than articles that collate the views of a balanced and informed group of experts?

Now, in defence of The Black Eyed Peas frontman, the points he raises don’t entirely fill me with despair, unlike many of those relating to AI, but I would prefer to hear them discussed by a credible source in an article that doesn’t digress from the topic at hand to talk about the interviewee’s latest music video three quarters of the way through. However, I’m sure we can all agree on one thing: if the doomsday predictions of a robot uprising become a reality, we will all take comfort in the fact that Will.i.am’s pop career will not be threatened.

This, somewhat depressingly, is probably the middle tier of AI reporting. Its sole purpose isn’t to scare readers, and it does technically illustrate more than one side of the discussion on AI. Unfortunately it used Will.i.am to do so, but we all make mistakes.

Much of the subpar AI press is simply down to bad journalism practices more than anything else. This could be misreporting, drawing bizarre conclusions from the given information, not relying on credible sources, or sometimes all three at once.

The Ugly

If the trigger words ‘shock report’ and the excessive use of caps lock weren’t enough to warn you that this article was looking to shake you to your very core, then maybe the Getty images littered throughout would give you a hint: cue a large (unexplained) explosion; a robotic arm holding a human skull (presumably as a trophy after wiping out the human race) with the threatening caption ‘Humans may be in the way of AI’s goals’; I, Robot-style androids infiltrating the office; and a bizarre image combining (another) explosion, flying saucers (!?) and androids in what looks like a low-budget action film. If a reason was needed to doubt the content, I think we have it.

This article has a definite agenda, as so many almost identical articles on AI do – spread fear and halt research. If the press showed the same passion for an undeniable threat like climate change, maybe we would be making more progress towards reaching the goals set out by the Paris Agreement for 2020.

And yet, we don’t see the same images suggesting mass extinction and destruction littered throughout articles on global warming. This is essentially a Hollywood-esque movement towards drama and discord. Mainstream media has tapped into a very popular theme: AI gone wrong. With these storylines being a popular public choice when it comes to films and TV, it’s not hard to see why the press favours this angle.

However, moulding real-world news on popular fiction is an atrocious way to go about reporting. AI will certainly bring up issues, as all change does, but we need to discuss these with a level head and rein in the exaggerations and sensationalism.

Every time we approach a revolution there are both positive and negative repercussions, but with any hope, we will learn to handle each new age better than the last. If we had always erred on the side of caution and fear when confronted with possible change, we would have remained static as a species, devoid of progression.

The digital age brought us the internet, the industrial revolution saw an increase in working standards, and the stone age saw us first utilising tools. Every change experienced by humanity, both big and small, will have its problems; we just need to make sure these are minimised as we continue to broaden the scope of human knowledge. After all, if we didn’t ask the question ‘what next?’, would we truly be human?

One thing we know for sure: fear-mongering will not help. Without balanced, well-reasoned discussions that look at all elements of the implementation of AI, we are simply mass-producing clickbait and misleading readers.