Something I hadn’t expected to learn this year was that computer code spits the dummy over the slightest thing. Given the barest deviation from what a script was expecting, the whole thing shuts down.

If you’re lucky (and have prepared ahead of time) it might throw out an error message. But mostly it sits and sulks until whatever exception to the rules you’ve given it has been fixed.

Our world is too complicated, too unexpected, too bizarre for an AI to have seen it all during training. The emus will get loose, the kids will start wearing cockroach costumes, and people will ask about giraffes even when there aren’t any present. AI will misunderstand us because it lacks the context to know what we really want it to do.

I now have several scripts running every day, peppered with code asking them to pretty-please keep going if something goes wrong. It’s a tangled web of conditional logic, mostly dreamed up after something actually has gone wrong. Most days everything makes it through. But often it doesn’t.
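That pretty-please logic is mostly just exception handling and retries. A minimal sketch of the pattern, with a hypothetical flaky step standing in for whatever my real scripts do:

```python
import logging
import time

logging.basicConfig(level=logging.WARNING)

def run_with_retries(step, retries=3, delay=0.1):
    """Run a flaky step, retrying instead of letting one error kill the script."""
    for attempt in range(1, retries + 1):
        try:
            return step()
        except Exception as exc:  # the "pretty-please keep going" part
            logging.warning("Attempt %d failed: %s", attempt, exc)
            time.sleep(delay)
    logging.error("Giving up after %d attempts", retries)
    return None

# A hypothetical step that fails twice before succeeding.
calls = {"n": 0}

def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("upstream site changed its layout again")
    return "done"

result = run_with_retries(flaky_step)
```

The catch is that a bare `except Exception` only papers over failures you anticipated retrying would fix; it does nothing for the failures you never imagined.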

Of course autonomous cars aren’t as bad as my hard-coded logic. Part of the point of machine learning is precisely to avoid having to come up with all the steps and ass-covering required to make code tackle a complex, multifaceted problem.

But we’ve now seen so many cases where it just doesn’t work. Because the same problems apply when it comes to training the algorithms.

The real world is so much more wild and malleable than the relatively safe cyberspace my code calls home. The people tackling these problems are obviously far smarter and more experienced than me, but is that enough?

All sorts of things could change and mess with an AI. As I mentioned in an earlier chapter, road closures or even hazards like wildfires might not deter an AI that sees only traffic from recommending what it thinks is an attractive route. Or a new kind of scooter could become popular, throwing off the hazard-detection algorithm of a self-driving car. A changing world adds to the challenge of designing an algorithm to understand it.

I suspect this post will be outdated incredibly fast. But it’s also likely that our wildest technological dreams will be achieved less by computers being “smarter” and more through narrowing the problem. Making the world safer. Because code is fragile.

I find taking public transport or hopping a plane immensely stressful. Not because of the shoddy infrastructure, the waiting around, or the poor service. Because I’m 6′4″ with disproportionately long legs in a world built by people who aren’t.

…then Alciné scrolled over to a picture of himself and a friend, in a selfie they’d taken at an outdoor concert: She looms close in the view, while he’s peering, smiling, over her right shoulder. Alciné is African American, and so is his friend. And the label that Google Photos had generated? “Gorillas.” It wasn’t just that single photo, either. Over fifty snapshots of the two from that day had been identified as “gorillas.”

This isn’t only a Google problem. Or even a Silicon Valley problem. There are also stories of algorithms trained in China and South Korea that have trouble recognising Caucasian faces.

As a journalist with a diverse ethnic and cultural background I had trouble understanding why my editors took so much convincing to run foreign stories. With a family spread around the globe, I could see myself in the Rohingya as much as an Australian farmer.

These issues are linked: what we value, notice and think of as “normal” are all informed by our personal stories. If you grow up or work in a monoculture, that will influence the issues you see, the solutions you propose and the contingencies you plan for.

But the world isn’t a monoculture. There are 6′4″ people who would like to ride the bus. There will be people who aren’t like you but who need to cross the street safely, or be judged fairly.

People who will be deeply offended by racial epithets, epithets themselves linked to why they aren’t represented in a database.

If you’re going to try to change the world for the better, you need to be of the world. There will always be edge cases, but without diversity they will be systemic. They will be disastrous.

…why couldn’t Google’s AI recognize an African American face? Very likely because it hadn’t been trained on enough of them. Most data sets of photos that coders in the West use for training face-recognition are heavily white, so the neural nets easily learn to make nuanced recognitions of white people—but they only develop a hazy sense of what black people look like.

People who excel at programming, notes the coder and tech-culture critic Maciej Cegłowski, often “become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.”

This is from Coders, a book I only just downloaded but am absolutely tearing through.

The subtitle is “how software programmers think, and how their thinking is changing our world”, which is a clue to what Cegłowski is referring to.

When you’re writing code you’re trying to break a process down, first to first principles and then into easy steps as you go along.

You build it back up in an environment over which you have a huge amount of control, one that thrives on trial, error and iteration.

Where something usually either works or breaks obviously. Everything is very structured and built upon logic.

But by this point you’ve also abstracted so much you can trick yourself into thinking you’ve mastered all the nuances, not just how to get from A to B.

It’s also an alluring way of thinking, one you begin applying to other problems in your life, much as you can start thinking in another language once you’re sufficiently steeped in it.

But as I looked around at bots, trying to figure out what I might do as a coding challenge, I was stunned by the incredible creativity and use to which they have been put. They really show how powerful even small bits of logic can be.

Looking at the source code, I’ve yet to find one that is more than a couple of hundred lines long, and most seem a lot shorter than that. Bots are often spoken about in cataclysmic ways, but also as an abstract idea that hasn’t really arrived.

But here we have bots inserting themselves into, and augmenting, many people’s daily lives. Though simple, they provide joy, distraction, interaction and even community.

Besides fellow Catholic history nerds and scholars of the period, Queen Mary has attracted a fairly staggering audience among Scottish separatists, especially given the coming Independence Referendum in September. “Thanks to the astronomical rise of the Scottish National Party, anything against England or English policies usually garners massive support,” she says. “My Scottish Nationalist followers absolutely eat anything anti-English with a spoon. It’s a strange mixture of wonderful and frightening to see history take shape in that way.”

But easily my favourite bot is Every3Minutes. It tweets every three minutes to remind us that a person was sold every three minutes in the American South between 1820 and 1860.

Both a profound and devastating thing to be reminded of, in a way that only machines can manage: regularly and persistently.
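The core of a bot like that really is startlingly small. A rough sketch of the pattern, with a stand-in `publish` function in place of the real Twitter API call and hypothetical sample messages (the actual bot draws on the historical record of the domestic slave trade):

```python
import random
import time

# Hypothetical sample messages; the real Every3Minutes bot is built on
# historical research into the American South between 1820 and 1860.
MESSAGES = [
    "A person was just sold. #every3minutes",
    "A person was just bought. #every3minutes",
]

posted = []

def publish(message):
    """Stand-in for a real API call, e.g. posting a tweet."""
    posted.append(message)

def run_bot(interval_seconds=180, iterations=3):
    """Post a message on a fixed schedule: regularly, persistently."""
    for i in range(iterations):
        publish(random.choice(MESSAGES))
        if i < iterations - 1:
            time.sleep(interval_seconds)

run_bot(interval_seconds=0, iterations=3)
```

A loop, a list and a timer. That’s most of what it takes for a machine to keep saying something we’d rather forget.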