Artificial intelligence as a subject of stories goes back a long way. The Jewish story of the golem, the Tin Man in The Wizard of Oz, Pinocchio, Pygmalion — they are all tales of artificial people looking for some element of humanity.

This is what a lot of the old stories focused on. I suppose it was after Mary Shelley's Frankenstein that storytellers began using artificial creatures as sources of horror. They started to become less tangible too, taking on roles that would previously have been taken by malicious gods or spirits. The characters of Skynet in The Terminator and HAL 9000 in 2001: A Space Odyssey are difficult to pin down: they appear to exist everywhere, controlling their environment through computer networks.

Obviously these stories have moved with the times, as public perceptions of computers have advanced. The robot woman in Metropolis and the robots of 50s pulp scifi are more closely related to the Mechanical Turk than to the Puppet Master from Ghost in the Shell or the Red Queen from Resident Evil.

The real AI research done these days is (obviously) somewhat behind this curve. There are actually two reasons for this. The first one, of course, is that AI is really goddamn hard. I mean, even turning written sentences into something a computer can understand (called natural language processing) is a huge field fraught with terrible difficulties. The language we use to speak to each other is filled with ambiguity, and relies heavily on context and on implicit knowledge in the reader. If you hadn't seen any of the films or didn't know any of the stories I've mentioned above, would you understand this post?

Even when I'm not discussing dodgy scifi, there's no reason to expect you to understand everything I say. There's a classic ambiguous sentence in linguistics and AI which is always trotted out on these occasions:

Time flies like an arrow.

There are at least five different meanings for this sentence. The fact that you'll initially think "time travels quickly and in a straight line, like an arrow flies" is only because of a huge amount of cultural knowledge that's really hard to explain to a computer. (If you're wondering what some of the other interpretations are, here's Groucho Marx with one of them: "time flies like an arrow; fruit flies like a banana".) To bring syntactic ambiguity back into the world of Hollywood, I'd now like to quote that classic exchange from Wayne's World:

“Can I be frank?”

“Can I still be Garth?”
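For the programmers in the audience, you can actually watch a computer stumble over this ambiguity. Here's a minimal sketch that brute-forces every way a tiny grammar can derive "time flies like an arrow". The grammar is a toy of my own invention for illustration, not a serious NLP system:

```python
# A toy context-free grammar (an illustrative assumption, not a real
# linguistic grammar). Keys are non-terminals; anything that isn't a
# key is treated as a literal word.
GRAMMAR = {
    "S":   [["NP", "VP"], ["VP"]],            # bare VP = imperative reading
    "NP":  [["N"], ["N", "N"], ["Det", "N"]],
    "VP":  [["V", "PP"], ["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],
    "N":   [["time"], ["flies"], ["arrow"]],
    "V":   [["time"], ["flies"], ["like"]],
    "P":   [["like"]],
    "Det": [["an"]],
}

def parses(symbol, tokens):
    """Return every parse tree that derives `tokens` from `symbol`."""
    if symbol not in GRAMMAR:                  # terminal: must match the word
        return [symbol] if list(tokens) == [symbol] else []
    trees = []
    for rhs in GRAMMAR[symbol]:
        for children in expand(rhs, tokens):
            trees.append((symbol,) + children)
    return trees

def expand(rhs, tokens):
    """Yield tuples of child trees covering `tokens` with the symbols in `rhs`."""
    if not rhs:
        if not tokens:
            yield ()
        return
    for i in range(1, len(tokens) + 1):        # try every split point
        for head in parses(rhs[0], tokens[:i]):
            for tail in expand(rhs[1:], tokens[i:]):
                yield (head,) + tail

sentence = "time flies like an arrow".split()
print(len(parses("S", sentence)))  # 3 distinct parses with this grammar
```

Even this dinky grammar finds three parses: the familiar "time moves quickly" reading, Groucho's "time-flies are fond of an arrow" reading, and an imperative "time [those] flies the way an arrow would" reading. The computer has no way to prefer one over the others; that's the cultural knowledge problem in miniature.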

The second reason why artificial intelligence is not as advanced as it could be is rather ironic: hubris. The stories of mankind's creations exceeding our control are meant to teach us humility, and it is exactly that humility that might well have helped AI research along.

In the late 60s and early 70s the great results achieved in AI research were dwarfed by the expectations. It was a classic case of the war being "over by Christmas". No matter what spectacular insights or advances into computer vision or knowledge representation were made, they were guaranteed to fall well short of the fantastical claims being made.

A few damning papers were released (most famously the 1973 Lighthill Report in the UK), rubbishing particular areas of AI research. Consequently, the whole field suffered a massive downturn in funding for nearly twenty years. Fortunately a lot of research was continued under other names.

But it seems that the new humility learned from this "AI winter" has reaped some rewards. The principal one, I think, is the understanding that artificial intelligence is not limited to "creating artificial humans". In fact, it's often said that "AI is whatever we do not understand": once we understand it, it becomes one of the ordinary tools of computing. This is certainly borne out by what AI research has given us: mundane tools like search engines, spam filters and computer chess programs.

Either way, AI will continue to provide interesting new technologies for us in the real world and, just as importantly, a means to explore our humanity in art.