Update, May 11: Google said in a statement to the Verge on Thursday evening that Duplex will identify itself: “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”

Alexa and Siri never say “like” or “mmm-hmmm” to buy themselves time. Instead, the virtual assistants from Amazon and Apple are brief and to the point when they speak. They’re machines, after all, and their stilted cadence and brevity are reassuring. They show that we can still tell the difference between humans and robots.

Except that’s not true anymore. A new version of Google Assistant, the company’s answer to Alexa and Siri, can apparently dupe unsuspecting humans into believing it’s made of flesh and blood. This new capability is called Duplex, and two recordings of it conversing with regular humans, recently played to attendees of Google’s developer conference, I/O, are now online. The people on the other end of the phone line, taking bookings for a haircut and for a table at a restaurant, seemed to have no clue that they were talking to a machine. As a technological feat, it was impressive, with Duplex throwing in that very human “uuuuh” or a sassy “mmm-hmmm” when appropriate and making the words seem to flow together like a human would.

But clever as it may seem, Google Duplex comes with a host of ethical issues that relate more to the conduct of its creators than that of the A.I. itself.

Google hasn’t stated publicly whether it obtained informed consent from people at the salon and restaurant before Duplex called. If it did, and the subjects were aware of the chances of getting Duplex on the phone, the results aren’t quite as impressive. If it didn’t, well, a social scientist at any research university would be fired for something like that.

Yet this is perhaps the least significant of the ethical issues arising from setting Duplex loose on society. At Google I/O, company executives expressed how important it is for the company to “get it right” when it comes to A.I. Instead, Duplex could very well expand divides across class and socioeconomic strata.

Google Duplex was presented as an aid to people who are “busy” (read: socioeconomically successful) and therefore don’t have time to hang on the phone just to book an appointment. Instead, they can ask Google Assistant to take over. It will activate Duplex, which then calls the business in question.

It seems pretty clear who will be at the receiving end of those phone calls. It won’t be other “busy” people. It will be the restaurant hosts, the hair stylists, the receptionists, and others whose (often low-paid) job it is to field calls from clients. It is doubtful that Google Duplex will be used to set up meetings with people who are important to you—you’d want to do that yourself, right?

Duplex shields those who reside high up on the hill from those down below. It relieves them of the awkwardness of having to talk to someone of lower socioeconomic standing. The functionality will likely be very popular for that reason, but it is important to understand the extra burden put on those at the receiving end: They will spend their day trying to figure out whether the entity at the other end of the line is human. (One possible benefit: At least Duplex is unlikely to be a jerk to a service worker.)

There is a way around it. The machine could simply introduce itself when calling: “Hi, this is Google Duplex calling on behalf of …” By stating upfront that the call is being initiated by a machine, the person at the receiving end of the call at least gets a choice of whether to engage with a robot. We’ll likely all be engaged in conversations with robots sooner or later, anyway. Google is doing nothing wrong in trying to kick-start that culture now. But shouldn’t we at least create an expectation that the voice on the other end of the line will identify itself as an A.I.?
The underlying mentality from which something like Google Duplex emerges is an engineering affliction that tech critic Evgeny Morozov has dubbed solutionism. Many small businesses that rely on customer bookings (60 percent of them, according to Google’s own research) can’t afford to implement online booking systems that work with new technologies such as virtual assistants and chatbots, and even if they could, training the employees in the new system would be a drain on precious time and money. But instead of solving that problem, the engineers at Google identified the problem as being the part of the equation that isn’t a machine: the human picking up the phone.

If Google wants the world outside the Silicon Valley bubble to embrace artificial intelligence and machine learning, turning virtual assistants into impostors isn’t exactly going to help. Rather than honest persuasion, this practice seems more like manipulative coercion, which moral philosophers like John Rawls say is a violation of sacrosanct principles of freedom and requires public justification.

If people start taking phone calls only to find out halfway through that they are talking to a digital impostor, Duplex will be another Google Glass. Bamboozling service industry workers to make life more convenient for those who are “busy” doesn’t seem quite fair. Some Google executives seem to be waking up to that fact. But if they let Duplex identify itself at the beginning of the call, or give it an inhuman voice, “People will probably hang up,” as Scott Huffman from Google said in an interview. And that’s exactly the point. It’s not right to rob people of that choice. Yet Google seems to have chosen a strategy of obfuscation rather than openness and choice.

In the end, it comes down to the same transparency challenge facing all Silicon Valley firms that rely on the algorithmic processing of consumer data and information. Since the algorithms are kept secret from us, we are asked to trust that companies like Google, Facebook, and Amazon will fulfill our information needs and treat our data with respect. Only after the fact do we find out, for instance, that Cambridge Analytica has used our Facebook data in ways we didn’t agree to, or that Google’s search results can be racist and sexist.

These companies will have to be more open about how the sausage is made if they want us to trust them. That includes not misleading us when we get a call from an A.I. Just be honest about it. Talking to friendly robots can be fun, especially if it serves a practical purpose. But nobody likes a fake person.