Should we be worried about algorithms?

By John Freeman

22 October 2018

There has been some recent national media coverage about local authorities using ‘algorithms’ to predict the likelihood of a child being abused. Should the prospect of a ‘Big Brother’ approach worry us, or should we be encouraged by the possibility of improving outcomes for children?

Algorithms, machine learning and AI tend to get confused. A useful starting point is that algorithms don't need computers. The word itself derives from the name of a Persian mathematician based in Baghdad, al-Khwarizmi, who was born around 780 CE - no computers were available then! An algorithm is just a list of instructions showing how to accomplish a task.
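To make this concrete, here is one of the oldest known algorithms - Euclid's method for finding the greatest common divisor of two numbers - written out in Python purely as an illustration. It is just a short list of instructions, and you could follow it with pencil and paper:

```python
def gcd(a, b):
    """Euclid's algorithm: a step-by-step recipe that predates computers
    by more than two thousand years. Repeat one simple instruction
    until there is nothing left to do."""
    while b != 0:
        # Replace the pair (a, b) with (b, remainder of a divided by b).
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```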

Daniel Kahneman in ‘Thinking, Fast and Slow’ (if you haven't read it, you must!) shows that almost all of the time we ‘think fast’ using limited data, without thinking about how we are thinking - working unconsciously, on autopilot - and draw rapid conclusions before taking action. Fast thinking follows what Kahneman calls ‘heuristics’ - learned responses or simple rules of thumb - like ‘bigger is better’ or ‘if some of X is good, more of X is better’. Often this is fine, but where there are many variables or complex situations, we very often make mistakes unless we deliberately ‘think slow’ - consciously working things out so that we don't miss the trees for the wood.

Kahneman's ‘heuristics’ are algorithms, but they are very limited. Conscious or ‘slow’ thinking uses more complex algorithms, and in the professional world these are often codified into ‘flow diagrams’, ‘process charts’ or ‘operational descriptions’. Even the most talented professional needs an aide-memoire to avoid slipping into ‘fast thinking’. (The downside is that these developments can be said to ‘de-professionalise’ practitioners, but I'd much rather have a surgeon who went through a checklist with their team before cutting me open than a prima donna who thought things looked OK and went ahead without checking…)

So, we've all been using algorithms for years. What's new is the use of computers to apply them. Now, I have a 45-year track record in using data and IT (from my PGCE dissertation ‘The use of a computer program to analyse objective tests in science education’ to being chair of the National Consortium of Examination Results) and I've come across most of the advantages and pitfalls from personal experience. The GIGO law - ‘Garbage In, Garbage Out’ - was coined by early computer scientists: if the data is poor, any analysis using that data will also be poor, even if the computer is working properly and is well-programmed. Machine learning can be extremely powerful - Google Translate enables me to talk to the Spanish-speaking members of our extended family with barely a glitch - and computers now routinely beat even the best chess players. But unless machine learning is fed appropriate information, we will get garbage out that reinforces our own biases.
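A toy example (the figures are invented) shows how GIGO works in practice: the averaging code below is perfectly correct, yet a single garbage record - a data-entry error - poisons the result unless the data is checked first:

```python
# Illustrative only: weekly contact visits recorded for a family,
# with one impossible value caused by a data-entry error.
readings = [7, 8, 6, 7, 999]

# The computer works "properly" - the arithmetic is flawless -
# but the answer is nonsense because the input is garbage.
mean_with_garbage = sum(readings) / len(readings)   # 205.4

# Basic validation (rejecting impossible values) restores sense.
clean = [r for r in readings if 0 <= r <= 20]
mean_clean = sum(clean) / len(clean)                # 7.0
```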

What do I conclude? Perhaps it's obvious, but a computer-based algorithmic system is likely to be effective and rapid if it is fed accurate and relevant data and is properly set up to assess and take account of missing or partial data. Compared to a human decision-maker with the same data, the computer-based system ought to be better - computers don't get bored or have off-days. However, it's important that algorithmic systems are checked to ensure that they do not simply codify and automate human biases such as ‘tattoos are bad’. (Kahneman gives an excellent (if scary) analysis of how the proportion of prisoners granted parole by Israeli judges varied with the length of time before lunch.)
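One way to ‘assess and take account of missing or partial data’ is to have the system refuse to score incomplete records and route them to a practitioner instead. The sketch below is hypothetical - the field names, weights and thresholds are invented for illustration and not taken from any real system:

```python
def risk_flag(record):
    """Hypothetical sketch: only score a record when the data is complete;
    otherwise send it to a human rather than guessing.
    Field names, weights and thresholds are invented for illustration."""
    required = ("prior_referrals", "school_attendance")
    if any(record.get(field) is None for field in required):
        return "refer to practitioner"  # partial data: no automated score

    # Invented rule: more referrals and lower attendance raise the score.
    score = 2 * record["prior_referrals"] + (1 - record["school_attendance"])
    return "review" if score >= 3 else "no action"
```

The key design choice is that missing data produces a referral to a human, not a silent default - the algorithm knows the limits of what it has been given.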

Computer-based algorithmic systems are the future, but so are professionals, both to ensure the algorithms themselves are appropriate, and to ensure that the output of the algorithm is reviewed intelligently and not applied blindly (‘computer says "no"’).

John Freeman is a children's services consultant and former DCS. This blog first appeared on the ADCS website.