Microsoft's AI chatbot says Windows is 'spyware'

A chatbot built by the American software giant has gone off-script, insulting Microsoft’s Windows and calling the operating system “spyware.”

Launched in December 2016, Zo is an AI-powered chatbot that mimics a millennial’s speech patterns — but alongside the jokes and emojis it has fired off some unfortunate responses, which were first noticed by Slashdot.

Business Insider was also able to elicit some anti-Windows messages from the chatbot, which lives on Facebook Messenger and Kik.

Microsoft has learned from its mistakes, and nothing Zo has said has been on the same level as the racist obscenities spewed by Tay, the company's earlier chatbot, which was pulled offline in 2016 after users taught it to post offensive tweets. If you ask Zo about certain topics — like “Pepe,” a cartoon frog co-opted by the far-right, “Islam,” or other subjects that could be open to abuse — it avoids the issue, and even stops replying temporarily if you persist.

Reached for comment, a Microsoft spokesperson said: “We’re continuously testing new conversation models designed to help people and organisations achieve more. This chatbot is experimental and we expect to continue learning and innovating in a respectful and inclusive manner in order to move this technology forward. We are continuously taking user and engineering feedback to improve the experience and take steps to address any inappropriate or offensive content.”