Will The Rise In Computing Power Make Ubiquitous Artificial Intelligence A Reality?

Like the Internet of Things, which shed its drab M2M image to become the centerpiece of the digital transformation of industrial enterprises, artificial intelligence is sprouting new life from its 50-plus-year-old roots. (Yes, we’ve been doing AI, admittedly with limited success, since the late 1950s.)

Conversations about AI seem to follow a course similar to that of the IoT narrative. Initially, IoT pundits were obsessed with the ability to connect billions of “things” to the Internet. Not only did most of these predictions prove overly optimistic, but the connection between sheer connectivity and meaningful business outcomes was loose, at best.

Today’s IoT narrative has shifted to focus on business outcomes enabled by the data generated by connected devices. The industry has matured from counting conduits to measuring the value of their content.

Hyped-up, often ill-informed, and naïve predictions about AI today seem to start the same way and follow the same flawed logical progression. Many believe that the mere rise in computing power is sufficient to make ubiquitous artificial intelligence an overnight reality.

Artificial intelligence, like the Internet of Things, isn’t simply about the underlying raw technology. Technologies relevant to both AI and IoT—pervasive connectivity, cloud computing, and inexpensive on-device edge computing—are making rapid progress, faster than most enterprises are able to exploit effectively. But the need to demonstrate clear business value isn’t any less important, or any easier, than before.

The challenge facing both of these (re)emerging technologies is to establish credible business models that leverage the ability to convert data and insight into advanced, automated decision making. And in both cases, technology isn’t the highest barrier to adoption. Rather, the hurdles are, as is often the case, organizational transformation and user adoption.