More sinister ambitions have surfaced recently, though, raising the question of just how problematic AI that is indistinguishable from human interaction could become. As AI becomes ever more ingrained in our lives, we also need to remember that even an AI program with no ulterior motive can output results that are unexpected and far from helpful.

The democratization of AI clearly has mixed results: releasing this technology to the public could help solve the shortage of qualified developers, or it could put powerful tools in the hands of irresponsible or malicious actors, leaving regulators able only to play catch-up. Although researchers are finding more and more ways to identify deepfake videos, the technology sets a disturbing precedent for other, more easily falsified materials - documents, images, or even voice recordings - that are still held up as reliable identification. With more of our lives moving online, and with cybersecurity (especially user awareness) severely lagging behind the capabilities of hackers and now AI forgers, we all need to question what we see online a lot more.

Trusting the hand that misleads you

Skepticism of AI is vitally important, says Peter Flach, Professor of Artificial Intelligence at the University of Bristol and a data mining specialist: ‘people are too willing to accept something an algorithm has found out.’ Overly trusting an algorithm is understandable, as there is an underlying assumption that the program’s creators aim to provide an unbiased and beneficial result for the user. Clearly, this is not always true. In the case of film recommendations on Netflix, for example, how do you know that the streaming giant is not disproportionately pushing its own content via its algorithm? The intentions behind an AI algorithm should always be considered, and unwavering trust in the reliability of AI to ‘do what is best for you’ is not sensible in such a ruthlessly competitive online environment.

Misplaced trust in AI - and the associated risks, whether the fault of programmers, algorithms, or our own bad habits - stems from a fundamental misunderstanding of how AI works, which can cause outrage when reality contradicts that mistaken belief. Most modern forms of AI (as opposed to symbolic AI, or ‘Good Old-Fashioned AI’) use probability as the foundation of their calculations: input data is fed into a neural network, each node of the network estimates how likely the data is to satisfy a particular function, and a ‘weight’ determines how strongly that node’s highly educated guess counts towards the final answer.

Run across many layers containing thousands of nodes, the algorithm spits out an output that may be up to 99.9% accurate, with 99.9% confidence in that accuracy - but it is never 100% correct, because statistical algorithms of this kind do not link (or ‘symbolise’) their data to anything real. This is all well and good when the results of an algorithm are expected, or when its mistakes are humorous, but this mistaken belief in the omniscience of AI can have drastic consequences for someone’s dignity, career, or life.
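To make that concrete, here is a minimal sketch in Python of what a single node in such a network does - the numbers are made up for illustration, not taken from any real system. The node combines its inputs using learned weights and squashes the result into a probability-like score that is always strictly between 0 and 1, never exactly certain:

```python
import math

def sigmoid(x: float) -> float:
    """Squash any value into the open interval (0, 1) - a probability-like score."""
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs: list[float], weights: list[float], bias: float) -> float:
    """One node: a weighted sum of its inputs, squashed into a confidence score.

    The weights (learned during training) control how strongly each input
    pushes the node towards or away from 'firing'.
    """
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(activation)

# Illustrative numbers only: a node that has learned to weight
# its second input most heavily.
score = node_output(inputs=[0.2, 0.9, 0.4], weights=[0.5, 2.0, -1.0], bias=0.1)
print(f"confidence: {score:.3f}")  # strictly between 0 and 1, never exactly 1
```

Stack enough of these nodes together and you get impressively accurate pattern-matching - but, as the sigmoid shows, the output is only ever a degree of statistical confidence, not knowledge of anything real.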

Bad data

Latanya Sweeney, Professor of Government and Technology at Harvard, was Googling herself one day (as many such stories seem to begin) when an ad appeared asking if she had ever been arrested. Her own study later found ‘statistically significant discrimination in ad delivery’ by Google’s algorithm against ‘racially associated names’, which could be attributed to the massive disparity in arrest records for African Americans in the US - itself argued to be a result of institutional racism in policing.

Bad (or biased) data is a serious problem in AI, as demonstrated by the fatal crash of a Tesla on Autopilot in which ‘neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky’ (here, the lack of contrast between trailer and sky can be considered ‘bad data’). But treating AI as an infallible answer to our problems is just as dangerous.

Spurred on by sensationalism in both directions, the public has been horrifically misled about what AI can and cannot do - and what we should use it for. Calculating the ideal alloy mixture for each component of an Antarctic deep-sea probe is perfect work for AI. Choosing books that are appropriate and entertaining for a 10-year-old is something that parents should probably double-check themselves.

Educating the public about this is crucial, especially as services like Alexa and Google Home gain ever greater access to personal information with no way of validating who is actually speaking. With the growing threat of AI-based cyber attacks, even a basic understanding of AI could help users consider its shortcomings - and think again about how much they trust an algorithm to run their daily lives.

Charles Towers-Clark is Group CEO of Pod Group, an IoT connectivity & billing software provider. His book ‘The WEIRD CEO’ covers AI & the future of work. Follow him @ctowersclark
