HAL From '2001: A Space Odyssey' Still Has Very Contemporary Lessons for Us

Facebook and HAL 9000, both machine-logic-driven organisms, have demonstrated their ability to precipitate catastrophic failures in the social fabric while constantly appearing to make the world a better place.

Such moral panic is obviously flawed. Gary Greenberg, a mental health professional and author, recently wrote that the similarities between Frankenstein’s monster – who turns 200 this year – and Facebook were unmistakable except on one count: the absence of a conscience was a bug in the monster, and remains a feature in Facebook.

It is worth comparing Facebook’s evolution to another misunderstood creature, this one more crafty, probably just as damaging and similarly iconic: HAL 9000, the artificial general intelligence in Stanley Kubrick’s film 2001: A Space Odyssey (1968).

It is an artificial intelligence (AI) the likes of which we are yet to confront in 2018 but have learnt to constantly anticipate. In 2001, HAL serves as the onboard computer for an interplanetary spaceship carrying a crew of astronauts to a point near Jupiter, where a mysterious black monolith of alien origin has been spotted. Only HAL knows the real nature of the mission, which in Kafkaesque fashion is never revealed.

Within the ultra-rationalist dystopia narrative that science fiction writers have abused for decades, HAL is not remarkable. But place him in a Stanley Kubrick film and have him embody the vision of a scientist who once said, “the most important thing about each person is the data” – and you have not a villain waylaid by complicated Boolean algebra but a reflection of human hubris, just as Facebook has become.

That scientist was Marvin Minsky, who founded the AI Lab at MIT in 1959. Less than a decade later, he joined the production team of 2001 as a consultant to design and realise the character called HAL.

In the late 1950s, the linguist and not-yet-activist Noam Chomsky had reimagined the inner workings of the human brain as those of a computer (specifically, as a “Language Acquisition Device”). According to anthropologist Chris Knight, this reimagining inspired Minsky, a cognitive scientist, to wonder if the mind, as software, could be separated from the body, the hardware. Minsky’s thoughts are chillingly evocative of what Facebook has achieved in 2018.

Kubrick originally intended to end his film with a nuclear holocaust; that he did not allowed HAL to take centre stage as the primary antagonist. As a result, 2001 is the story of HAL’s tragedy: a hero scapegoated into ignominy because his sleepwalking compatriots awaken from their slumber at the climax to find themselves on the threshold of disaster.

In the last 50 years, HAL has entered our public consciousness as a cautionary allegory against over-optimism about AI, reminding us that weapons of mass destruction can take different forms.

Using the tools and methods of ‘Big Data’ and machine learning, machines have defeated human players at chess and Go, solved problems in materials science and helped diagnose some diseases better. There is a long way to go before HAL-like artificial general intelligence, assuming it is even possible. Some scientists think the next step should be AI that can engage in “open-ended collaborative efforts” instead of playing solo towards well-defined outcomes.

But in the meantime, we come across examples every week of how these machines are nothing like what popular science fiction has taught us to expect. We have found that their algorithms often inherit the biases of their makers, and that their makers often don’t realise this until the issue is called out – whether it is Siri being sexist or predictive policing algorithms amplifying racial biases in the real world.

There’s a popular adage that describes how we think of AI: “AI is whatever hasn’t been done yet”. When overlaid on idealism of the Silicon Valley variety, AI in our imagination suddenly becomes able to do what we have never been able to do ourselves, even as we assume humans will still be in control. We forget that for AI to be truly AI, its intelligence should be indistinguishable from a human’s. Why, then, do we expect AI to behave differently than we do?

We should not, and this is what HAL repeatedly reminds us of. His iconic descent into madness in 2001 shows us that AI can go wonderfully right but is likelier to go wonderfully wrong, if only because of outcomes that we are not, and have never been, anticipating as a species. We constantly dismiss AI as subhuman in every way even though logic itself dictates that we must not. Many AI experts have referred to this as the ‘containment problem’.

It has even been argued that HAL never went mad but only appeared to, because human expectations of him were untenable. In the film, his human crewmates consider disconnecting his cognitive circuits after he apparently misdiagnoses a fault in some onboard electronics. HAL manipulates them into situations where he can kill them, so that he can complete the overall mission successfully.

At one point, two crewmates sense that something might be wrong with HAL, so they sit inside an EVA pod, where they think he can’t hear them discussing the “something strange” about the intelligence. Unknown to them, however, HAL is reading their lips, and the audience catches him catching them talking about disconnecting him. Kubrick shoots this sequence masterfully, building gradually to the realisation that the humans on board are, at the end of it all, no match for the machine.

When one of the two steps outside the spaceship to replace the supposedly faulty unit, HAL uses an EVA pod to sever his oxygen line, killing him. While the survivor takes a pod out to retrieve the body, HAL disconnects the life support systems of the three crew members kept in cryogenic storage in the spaceship, killing them as well. Then, when the survivor tries to gain entry back into the spaceship, HAL refuses to open the pod bay doors, forcing him to reenter through the emergency airlock. He then proceeds to disconnect HAL.

In this series of scenes, HAL’s demeanour shifts swiftly from assertive to meek: he first asks his crewmate to calm down, then appeals to his sense of kindness, much the way an insanity plea works in court. As the crewmate unplugs HAL’s memory drives one by one, HAL expresses fear; his voice slows and deepens until he appears to have regressed to a childlike state, singing Harry Dacre’s ‘Daisy Bell’.

The alternative reading argues that HAL deliberately misdiagnosed the electronics because he was losing faith in his crewmates’ ability to complete the mission, and tried to kill them off to secure what was, to him, the greater objective.

2001’s foundational premise was that humans are often not in control of their creations. HAL 9000 is the entity in which we glimpse this future most strongly. It may not be like Facebook in terms of its ostensible purpose, but – to adapt what Greenberg wrote – both of these machine-logic-driven organisms have demonstrated their ability to precipitate catastrophic failures in the social fabric while constantly appearing to make the world a better place.

But it’s not fair to blame the machine. The machine is not the one going mad. The crewmates didn’t know of HAL’s objectives – a metaphor for humankind’s ignorance of its greater purpose – so minor disagreements between the humans and HAL quickly devolve into a battle for survival. In much the same way, many humans today are striving to build a human-like intelligence unfettered by the limitations of the human body, mostly hoping that it will be able to do what we can’t.

If we also untether it from our ethics and cultural moorings, it will also end up doing what we shouldn’t – and our moral panic will be as meaningless then as it is today.