“Tay” was apparently supposed to be a natural-language-learning AI. Microsoft blamed its failure on trolls, who appear to have rapidly figured out how to manipulate it. Apparently the AI was as dumb as a post and inherently incapable of recognizing the manipulation.

In this case the trolls may have performed a useful service: they’ve demonstrated that language and intelligence do not neatly reduce to algorithms, regardless of what Google and Microsoft engineers might believe. Google does better in this regard, as demonstrated by AlphaGo’s defeat of a South Korean Go grandmaster in March. The AI replicates a player’s intuitive pattern recognition.

But Artificial Intelligence is still about algorithms. The question is what happens when the algorithms encounter something novel. Patterns work well at the scale of large numbers; the problem is what happens at the small scale of individual human unpredictability, where novelty is most likely.

Google’s AI will prove extremely useful. Its models may very well get us to levels of predictive automation that fundamentally change the way our technologies work, our automobiles being the most obvious example. But until the AI can move straight from winning at Go to navigating a wholly unfamiliar situation, it will still be all about algorithms and the constraints they impose. It will still be artificial.

Artificial Intelligence seems to be a bit like artificial sweetener or hydrogenated vegetable oil: very useful in food production, but it only sort of tastes real.