Update on June 6, 2017: Apple has introduced its own A.I. assistant device, the HomePod. Notably, the company says the device will only collect data after the wake command. Also, the data will be encrypted when sent to Apple’s servers. However, privacy questions remain, as with other A.I. assistants.

Artificial intelligence assistants, such as Amazon’s Echo or Google’s Home devices (or Apple’s Siri or Microsoft’s Cortana services) have been proliferating, and they can gather a lot of personal information on the individuals or families who use them. A.I. assistants are part of the “Internet of Things,” a computerized network of physical objects. In IoT, sensors and data-storage devices embedded in objects interact with Web services.

I’ve discussed the privacy issues associated with IoT generally (relatedly, the Government Accountability Office recently released a report on the privacy and security problems that can arise in IoT devices), but I want to look closer at the questions raised by A.I. assistants. The personal data retained or transmitted on these A.I. services and devices could include email, photos, sensitive medical or other information, financial data, and more.

And law enforcement officials could access this personal data. Earlier this year, there was a controversy concerning the data possibly collected by an Amazon Echo. The Washington Post explained, “The Echo is equipped with seven microphones and responds to a ‘wake word,’ most commonly ‘Alexa.’ When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website. A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later.”
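To make the mechanics concrete: capturing audio from just before the wake word implies the device keeps a short rolling buffer locally at all times, and only starts uploading once the wake word fires. Here is a minimal, purely illustrative Python sketch of that pattern; the class and frame names are hypothetical, not Amazon's actual implementation.

```python
from collections import deque

# Illustrative sketch of wake-word-gated capture. Assumes audio arrives
# in short frames; all names here are hypothetical, not Amazon's code.
PRE_ROLL_FRAMES = 5  # roughly the "fraction of a second" kept pre-wake

class WakeWordGate:
    def __init__(self, wake_word="alexa", pre_roll=PRE_ROLL_FRAMES):
        self.wake_word = wake_word
        self.pre_roll = deque(maxlen=pre_roll)  # rolling on-device buffer
        self.streaming = False
        self.uploaded = []  # stands in for audio streamed to the cloud

    def on_frame(self, frame, transcript=""):
        if self.streaming:
            self.uploaded.append(frame)
        elif self.wake_word in transcript.lower():
            # Wake word detected: flush the pre-roll, then stream live.
            self.uploaded.extend(self.pre_roll)
            self.uploaded.append(frame)
            self.pre_roll.clear()
            self.streaming = True
        else:
            # No wake word yet: audio stays local and is overwritten.
            self.pre_roll.append(frame)
```

The privacy-relevant point of the sketch is structural: before the wake word, old frames are continuously discarded on the device; after it, everything (including the small pre-roll) leaves the home.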

Arkansas police served a warrant on Amazon, seeking information recorded by a suspect’s Echo. Amazon refused to comply, arguing that the recordings were protected by the First Amendment. Later, the suspect agreed to allow law enforcement access to his Echo recordings, so Amazon dropped its objection. But the warrant, and the subsequent attention paid to it, made many people realize the extent of the personal data that could be housed in Amazon’s data warehouses. (For more on the complex legal issues surrounding always-on microphones in homes, see this post at the Center for Democracy and Technology.)

And Amazon is hoping to get even more entwined in its customers’ lives. The company recently announced the Echo Show and the Echo Look. The Show has a touchscreen and video camera, turning it into an online voice or video communications service, like Apple’s FaceTime or Microsoft’s Skype. The Look is designed to take photos or videos of your outfits to assist with fashion decisions.

Google has its Home A.I. assistant (which is also getting the ability to make phone calls), along with myriad services that allow the tech giant to collect a vast trove of data on its users: Gmail, Web search and browsing history, Google Maps, the Google Play store, YouTube, and more. Recently, Google announced that it “has begun using billions of credit-card transaction records” to try to connect individuals’ “digital trails to real-world purchase records in a far more extensive way than was possible before,” the Washington Post reported. This ability to link online activity to offline behavior, in this case for the purposes of targeted behavioral advertising, has raised substantial privacy questions.

For example, consider the fact that Google has focused on being able to track the location of its users (via its Maps service). “This location tracking ability has allowed Google to send reports to retailers telling them, for example, whether people who saw an ad for a lawn mower later visited or passed by a Home Depot. The location-tracking program has grown since it was first launched with only a handful of retailers. Home Depot, Express, Nissan, and Sephora have participated,” the Post reported.

Google certainly isn’t the first to try to track the movements of shoppers. See this 2011 post on shopping malls seeking to track people through their cellphones — and saying people could “opt out” by turning off their cellphones. But the fact that an entity with so much information on so many people could use its many services (including Home) to connect the movements of these individuals online and offline raises significant privacy issues.

(Although Microsoft and Apple have A.I. technology, they do not have standalone devices such as the Echo or Home. Cortana and Siri are embedded into computers, mobile phones, headphones, wearables, and more.)

Of course, Amazon, Google, and other companies with A.I. assistants that gather personal data say that they work to protect individual privacy. Google’s “formulas make it impossible for Google to know the identity of the real-world shoppers, and for the retailers to know the identities of Google’s users, said company executives, who called the process ‘double-blind’ encryption,” the Post reported. The companies also note that the data is strongly secured, but no security is perfect, and a breach could expose that personal data to the public.

Often, these companies ask people to simply trust that their (nonpublic) processes work to protect the identity of individuals. As A.I. assistants become more enmeshed in our daily lives, the privacy questions will remain.