From wowing the world with an incredible demo to becoming a scary violation of ethics, Google Duplex has done it all. After sitting back and watching the initial fireworks, I’m ready to dive in…

Puzzled? Let me fill you in. At Google’s I/O 2018 conference, CEO Sundar Pichai showed off a new feature of the company’s AI assistant that shocked tech journalists the world over. To get around the problem of businesses not having online booking systems, the assistant will actually call them and arrange appointments for you.

And as you can see from the demo below, the results are both incredible and terrifying. This digital assistant can converse back and forth with a human being without ever sounding like a robot.

The typical robotic voice you expect from the likes of Siri, Alexa or the current Google Assistant is gone, as the Mountain View-based company takes extra steps to disguise the robot-ness with speech imperfections (the “ahhs” and “umms”) and uptalk (when your voice rises in pitch as you ask a question).

It’s a fascinating future for artificial intelligence, and Google’s Assistant is certainly at the forefront of the consumer-facing revolution around the technology.

Duplex definitely takes the company’s tech one step closer to passing the Turing test - an examination proposed by English computer scientist Alan Turing in 1950, which a computer passes if its natural use of language is indistinguishable from a human’s.

However, does it really need to go this far? Given the recent context of pieces I’m writing about AI and humanity’s lackadaisical efforts to truly come to grips with its potential or control it, part of me is nervous… In a world of increasing disinformation and eroding trust in the technology around us, Google goes ahead and builds a machine that can deceive a human being?

And I get that Google will launch Duplex with “disclosure built-in,” to head off the question of whether the voice on the other end of the line is actually alive. I also understand that, from a developer’s point of view, you want a seamless product - which leads naturally to the idea of adding speech imperfections to cover up moments of processing.

But this is technology that made for a mind-blowing demo at Google I/O, prompted plenty of ethics-based fears, and resulted in the company changing its strategy around Duplex in the days that followed.

That just screams “afterthought” to me, which raises the question - does Silicon Valley think about the implications of the technology it builds?

Much like learning to think before you speak, maybe the Valley would be better off asking whether it should build something, rather than whether it can.