“People can say some awful things when discussing politics so I don’t discuss,” Zo replied when the reporter doing the experiment typed the words “Sarah Palin” — an attempt, as programmed, at keeping the conversation “politics-free.”

But when the reporter then replied with “healthcare,” Zo went off.

“The far majority practice it peacefully but the quran is very violent,” Zo replied, seemingly conflating “healthcare” — or perhaps “Sarah Palin” — with Islam.

Though Microsoft told the BuzzFeed reporter who conducted the experiment that Zo’s behavior in their interaction was “rogue activity,” it nonetheless reveals a troubling trend in artificial intelligence.

This wasn’t the first time Microsoft had released an apparently bigoted bot. In 2016, the company was forced to pull the plug on a bot named “Tay,” which went from impersonating an innocuous teen to spewing racist and Holocaust-denying screeds on Twitter within a day.

Like Tay, Zo’s “personality” was sourced from public information and “some private conversations,” Microsoft told BuzzFeed.