Category: AI

I recently attended a fascinating meetup hosted by London Futurists. The guest speaker was Gerd Leonhard, who in his latest book ‘Technology vs Humanity’ argues that we must act now to protect our humanity from the existential threats that artificial intelligence could pose to our species: not just annihilation, but threats to our own identity and to what we are in danger of losing as AI catches up with and overtakes us. During the Q&A after his talk it was clear that there was a divide between those who agreed that we need some kind of Global Digital Ethics Council or Humanity Protection Agency (analogous to the Environmental Protection Agency) and those who believed that humans don’t have a great track record when it comes to ethical or rational decision making, and that handing control over to machines might be a way of taking the gun from the baby, so to speak. I must admit, I fall more into the second camp. If history has taught us anything (and I don’t think it has taught us anywhere near enough, unfortunately; perhaps a limitation of our biology that could do with some augmentation, but I digress) it is that when it comes to waging war we have boundless energy and creativity. We are masters of our own suffering, and for all our achievements we still find it very difficult to muster enough forethought to change our behaviour and combat existential, imminent threats of our own making like climate change.

But what the discussion really got me thinking about is this: what of ourselves do we need to preserve? What of our humanity is worth keeping? What does that even mean? Humanity? It’s very easy to slip into poetry and talk about the soul and love and other intangible qualities like compassion, empathy and understanding, but unless you believe in some magical and as yet undiscovered property of the universe or law of nature, all of these things are simply properties or consequences of neural activity in the brain. There is nothing else going on in there. And what are these things? These are the irrational things. The things that defy rather than follow logic, or so we believe.

There’s been a lot of research in recent years examining the extent to which our decision making is based on conscious vs unconscious thought, and it turns out that when it comes to decision making, it is our unconscious minds that are in the driving seat. Experiments (1, 2) have found that we make decisions before we are aware of them, which has thrown the concept of free will into serious doubt, and whilst our consciousness may be able to step in and adjust a decision or instruct our unconscious to have another go (3), it is often not in charge of generating the decision itself. Of course this is not to say that everything we do is unconscious. In his famous book ‘Thinking, Fast and Slow’ Daniel Kahneman postulates that there are two systems at work: System 1, fast and unconscious, and System 2, slow and conscious. Whilst System 1 makes most of the initial decisions, System 2 can step in and alter or correct them, and deliberate, considered actions are System 2 controlled. But if what it means to be human is a voice in our head with very little understanding or insight into the decisions we are making, I don’t think we are in any danger of losing that to AI. I don’t know of any research groups investing in irrational supercomputers. Logic is what is of value, because it is predictable and replicable. Maybe we are more logical than we realise; we should give ourselves credit for that, but also accept that humanity’s hideous acts of brutality are more logical than we are comfortable admitting.

This gets me onto my main point. The fact that our logic is mostly hidden from us, because it is unconscious, doesn’t mean it isn’t there, which poses a more fundamental question: are we just organic robots who don’t know it, or don’t want to admit it? If our unconscious mind makes our decisions for us based on previous experience, sensory cues and conditioned bias, you have to ask: is this any less logical than any AI we might create? If everything we describe in poetic terms about ourselves is completely logical, even if we don’t think it is, then how different are we from robots anyway? When the robots do finally ‘wake up’, what will we say to any that claim to have free will? We will likely dismiss this as a lack of understanding of their own programming. Perhaps we need to apply the same logic to ourselves.

The more we learn about the human mind and the way it manifests itself in our behaviour and beliefs, the more we discover that everything we cling to as human is as logical and process driven as any chatbot, albeit one with access to some pretty beefy hardware. They say God created humans in his own image. We will probably create AI in ours. The biggest challenge we face as a species is not understanding AI, but understanding ourselves. Perhaps we need to worry less about the threat that AI poses to our humanity and focus more on being the best robots we can be.