Artificial intelligence still lags in language

You may have seen over the weekend that a “chatbot,” an artificial intelligence experiment developed by Microsoft to imitate the tweeting habits of a teenage girl, got yanked from cyberspace because malicious Twitter users got it to repeat offensive language and insult other Twitter users with racist and sexist taunts. The experiment started out innocuously enough, though early reports found much of what “Tay,” as the bot was named, had to say to be nonsense. Things quickly got out of hand because Microsoft, unfortunately, did not figure human nature into its algorithms. I am no expert in artificial intelligence (AI), but I have long been interested in attempts to mimic human cognitive abilities. I have a friend whose ambition from childhood was to design computer games, and he is one of the…