Nguyen, former engineering director for Google Apps, was referring to a slice of the technology behind his startup, Adatao, which just received $13 million in funding from Andreessen Horowitz. Adatao's value proposition comes in two parts: pInsights, a document-based visualization layer that provides end-users with simple, real-time querying of vast data sets; and pAnalytics, a monster data processing engine built on Hadoop and Apache Spark. All of this, including the ANN (artificial neural network) component, is made possible by the huge memory and processing power that, today, have become commodities.

Adatao's mission is to bring big data analytics to the masses, enabling people to collaborate on Google Apps-like documents that incorporate charts derived from huge data sets. I saw a demo, and with a cluster of eight eight-core servers (each with 30GB of RAM) hosted on Amazon Web Services, queries of a multi-terabyte data set were blazingly fast. To deliver the promised ease of use, Adatao relies on the ANN to identify data objects on the fly in response to queries entered in plain English. According to Nguyen, the system can recognize as many as 20,000 objects.
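To make the idea of resolving plain-English queries against a catalog of known data objects concrete, here is a deliberately toy sketch. It uses simple bag-of-words overlap rather than a neural network, and the catalog entries ("revenue", "churn", "traffic") are invented for illustration; Adatao's actual ANN-based recognizer is far more sophisticated and is not described in detail publicly.

```python
# Toy sketch: matching a plain-English query to a catalog of data objects.
# This uses word-overlap scoring as a stand-in for Adatao's ANN; all
# object names and descriptions below are hypothetical.

from collections import Counter

# A tiny catalog of data objects the system "recognizes".
CATALOG = {
    "revenue": "monthly revenue by region",
    "churn": "customer churn rate over time",
    "traffic": "website traffic by source",
}

def tokens(text):
    """Lowercased word counts for a piece of text."""
    return Counter(text.lower().split())

def score(query, description):
    """Count words shared between the query and an object's description."""
    q, d = tokens(query), tokens(description)
    return sum((q & d).values())

def resolve(query):
    """Return the catalog object whose description best matches the query."""
    return max(CATALOG, key=lambda name: score(query, CATALOG[name]))

print(resolve("show me revenue for each region"))  # → revenue
```

A production system recognizing 20,000 objects would need something far better than word overlap (synonyms, context, learned embeddings), which is exactly where the neural network earns its keep.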

If Adatao is successful, it could well be a game-changer. But what excited me most was the artificial intelligence aspect.

Immediately after the demo, I called my friend Miko Matsumura, vice president of marketing for Hazelcast, who has a master's in computational neuroscience from Yale. I told him that, according to my highly limited understanding, artificial intelligence has largely turned out to be a hardware problem rather than a software problem, and Adatao's ANN implementation seemed to provide a fresh example.

Miko immediately referred me to the work of Paul and Patricia Churchland, who once noted that those who deny the possibility of artificial intelligence are like a man waving a magnet in a dark room and declaring that magnetism cannot create light -- when he simply wasn't waving it fast enough to induce the current to light a bulb. Today you could argue that we have the huge memory and computing capacity necessary to begin lighting up artificial intelligence all over the place.

In fact, it's already happening, and the main practical application is big data analytics. As James Kobielus noted earlier this year, "Machine learning is so pervasive that we can often assume its presence in big data applications."

"Our warm and creepy future," is how Miko refers to the first-order effect of applying machine learning to big data. In other words, through artificially intellligent analysis of whatever Internet data is available about us -- including the much more detailed, personal stuff collected by mobile devices and wearables -- websites and merchants of all kinds will become extraordinarily helpful. And it will give us the willies, because it will be the sort of personalized help that can come only from knowing us all too well.

Somehow, it's not surprising that the first objective of machine learning on top of big data is to induce customers to spend more money and stay loyal. But the potential extends across every conceivable discipline, from healthcare to climatology. Thanks to the cheap, enormous computing resources that are making new intelligent systems possible, we are now entering a qualitatively different phase of computing. To deny that is to wave a magnet in the dark.