Not all Machine Learning (ML) Is Artificial Intelligence (AI): Part II

April 26, 2018 (updated August 31, 2018)

This post was featured in our Cognilytica Newsletter, with additional details. Didn’t get the newsletter? Sign up here

One of the great things about being an analyst at Cognilytica is that we get to see a very wide range of technologies, adoption patterns, and implementations for technology across a very disparate set of industries and use cases. We’ve had conversations with a broad range of industries and companies from very large government agencies to tiny three-person consultancies. We’ve engaged with the largest companies in the world (the Fortune 10, let alone the Fortune 100) as well as startups who are disrupting different markets. In all these conversations and interactions, we’ve realized one thing: no two companies define Artificial Intelligence (AI) the same way, but they all insist they are doing it. Or at least some version of it.

However, if AI is to be a useful term that delineates certain technologies and approaches from others, then it has to be meaningful. A term that means everything to everyone means nothing to anyone. In one of our previous articles, “Is Machine Learning Really AI?”, we go over our opinions on what we believe AI has to mean to be useful. In summary, our view is that AI systems need to be able to sense and understand their environment, learn from past behaviors and apply that learning to future behaviors, and adapt to new circumstances by reasoning from experience and learning, generating new learning from those new circumstances and experiences in turn. Professor Alex Wissner-Gross, a guest on a recent podcast, further defines intelligence as the ability to increase your future freedom of action and to determine, on an individual basis, which future outcomes you want your actions to bring about.

As we have defined in the above article, Machine Learning is the set of technologies and approaches that provide a means by which computer systems can encode learning and then apply future information to that learning to come to conclusions. Clearly, Machine Learning is a prerequisite for AI. But as we’ve said before, ML is necessary, but not sufficient for AI. Likewise, not all ML systems are operating in the context of what we’re trying to achieve with AI.

So, Which Parts of ML are not AI?

In the above newsletter article, we talk about what parts of AI are not ML, but we didn’t dive into what parts of ML are not AI. In our conversations with customers we find two divergent perspectives on ML. Some say that even the narrowest form of AI is still AI. Since we have not yet achieved Artificial General Intelligence (AGI), despite some attempts to get us close, all practical implementations of AI in the field are narrow AI of one form or another. We find this reductio ad absurdum unhelpful. It’s not useful to place a data science effort that uses random decision forests (a form of ML) to achieve one very specific learning outcome at the same level as attempts to build systems that can learn and adapt to new situations.

On the other hand, we’re in the camp with those who say that forms of predictive analytics that use the methods of Machine Learning are indeed ML projects, but they are not AI projects in themselves. In essence, a project that uses ML techniques to learn one narrow, specific application, where the trained model cannot be applied to different situations and has no way to evolve or adapt to new ones, is not an AI-focused ML project. It’s ML without the AI. This Venn diagram may help explain which parts of ML are contributory to AI and which parts are not:

The Data Science Revolution: ML for Predictive Analytics

Part of why we’re seeing a resurgence of interest in the field of AI is not only the development of better algorithms for Machine Learning (notably Deep Learning), but also the sheer quantity of data we have and the better processing power to deal with it. However, perhaps one of the more overlooked parts of this AI renaissance is that over the past several years the entire field of Big Data emerged to deal with the voluminous amounts of data from internet, mobile, and all manner of networked systems. Not only did the Big Data revolution bring about new ways of managing and dealing with large data stores, but it helped usher in the fields of data science and data engineering, which provide insight into the hidden value in data and better methods for manipulating large data sets.

It’s no wonder that the methods and techniques of Machine Learning are appealing to data scientists, who previously had to make do with increasingly advanced SQL and other data queries. ML provides a wide array of techniques, algorithms, and approaches to gain insight, provide predictive power, and further enhance the value of data in the organization, elevating data to information, and then to knowledge.

However, what differentiates many data science-driven ML projects is that the models being built and the scope of the projects are very narrowly constrained to a single issue, for example credit card fraud. Indeed, this fascinating Quora exchange between data scientists makes it clear that the ML approaches in use are being applied to solve narrow problems of predictive analytics, not the greater challenges of AI. In this way, these ML projects are not AI projects, but rather predictive analytics projects. We can call this “ML for Predictive Analytics”.
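To make the distinction concrete, here is a minimal sketch of what such an “ML for Predictive Analytics” project looks like in practice: a random decision forest trained on one narrow task (flagging fraudulent transactions). The feature names, thresholds, and synthetic data below are purely illustrative, not drawn from any real fraud system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Illustrative transaction features: amount ($), hour of day,
# and distance from the cardholder's home (km).
X = np.column_stack([
    rng.exponential(50.0, n),    # transaction amount
    rng.integers(0, 24, n),      # hour of day
    rng.exponential(5.0, n),     # distance from home
])

# Synthetic label: large, far-from-home transactions flagged as fraud.
y = ((X[:, 0] > 150) & (X[:, 2] > 10)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The model answers exactly one question — “is this transaction likely fraud?” — and nothing it has learned transfers to any other task or adapts to circumstances outside its training distribution. That is ML, and valuable ML at that, but it is not AI in the sense we describe above.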

Likewise, there are other narrow applications of ML for specific single-task usage, such as forms of Optical Character Recognition (OCR) and even forms of Natural Language Processing (NLP) and Natural Language Generation (NLG), where ML approaches are used to extract valuable information from handwriting or speech. We’ve had OCR and NLP solutions for decades, and yet prior to this new AI summer, their vendors never called these systems AI. In this way, we can’t consider many forms of OCR and NLP, even ones that use ML approaches, to be AI. Rather, such systems have to be enabling some greater goal to be considered AI.

So, what is considered ML in the context of AI?

Clearly, in order to make AI work, we need ML, but we don’t need ML models narrowly built for something like credit card fraud detection to make intelligent systems work. Rather, what we need are ML systems that enable AI efforts to learn not only the specific models they are taught, but also a framework by which these systems can learn on their own. ML in the context of AI emphasizes not only self-learning, but also the idea that this learning can be applied to new situations and circumstances that might not have been explicitly modeled, trained, or learned before. In many ways, this sort of continuous, expanding learning is the goal of adaptive systems.

Adaptation and self-learning are keys to not only handling the explicit problems of today, but also the unknown problems of tomorrow. This sort of learning is how we humans, and many other creatures, pick up new skills, learn from peers, and apply learnings from one situation to another. ML systems that are built in this way support these goals for AI and are fundamentally more complex and sophisticated than their narrower, single-task ML brethren. The key insight is that it’s not the algorithm that determines whether ML is used in an AI context or not, but rather the way it is being applied, and the sophistication of the learning systems that surround those algorithms.

We’d Like to Know What You Think

Are we just nitpicking here? Should all forms of ML be considered AI, even if they’re super narrow, data science driven, or using decades-old OCR? Should we consider AI to be a continuum from super weak, very narrow forms to the ultimate AGI goal? Perhaps, and certainly we see no fault in holding that perspective on AI. At Cognilytica, we believe it is more helpful to split hairs here and make the definition of AI meaningful, so that we can achieve real advancement toward the goals of AI rather than treating AI as the buzzword of the day. Because if we do the latter, AI will suffer as it did in the winters of yesterday. Respond to this newsletter and let us know your thoughts on all this. And join our AI Today Facebook group to join in the discussion there too!

Get weekly research, insights, analysis, vendor reports, end-user use cases, exclusive event discounts, and other information delivered once a week to your inbox. Sign up to be one of the thousands getting Cognilytica Insights.