"Andrew Ingram" is a digital assistant that scans your emails, gives scheduling ideas for the meetings and appointments you discuss with your coworkers, sets up tasks, and sends invites to the relevant parties with very little assistance. It uses the advanced artificial-intelligence capabilities of X.ai, a New York–based startup that specializes in developing AI assistants. The problems it solves can save a lot of time and frustration for people (like me) who have a messy schedule.

But according to a Wired story published in May, the intelligence behind Andrew Ingram is not entirely artificial. It is backed by a team of 40 workers in a highly secured building on the outskirts of Manila who monitor the AI's behavior and take over whenever the assistant runs into a case it can't handle.

While the idea of real people reading your emails might sound creepy, the practice is common among companies that provide AI services to their customers. A recent article in The Wall Street Journal exposed several firms that let their employees access and read customer emails to build new features and to train their AI on cases it hasn't seen before.

Called the "Wizard of Oz" technique or pseudo-AI, the practice of silently using humans to make up for the shortcomings of AI algorithms sheds light on some of the deepest challenges that the AI industry faces.

...

"What I expect to see is that some companies are pleasantly surprised by how quickly they can provide an AI for a previously manual and expensive service, and that other companies are going to find that it takes longer than they expected to collect enough data to become financially viable," says James Bergstra, cofounder and head of research at Kindred.ai. "If there are too many of the latter and not enough of the former, it might trigger another AI winter among investors."