AI and machine learning are everywhere: a simple Google search draws 105 million entries and counting, and Google Trends shows a steadily growing interest in the term, consistent with the exponential rise of deep learning since 2013, roughly when Google’s X Lab developed a machine learning algorithm able to autonomously browse YouTube and identify the videos that contained cats.

In 1959, Arthur Samuel defined machine learning as a

> Field of study that gives computers the ability to learn without being explicitly programmed.

AI and machine learning change the software paradigm computers have been based on for many decades.

In the traditional computing domain, given an input, we feed it into an algorithm to produce the desired output. This is the rule-based framework the majority of the systems around us still work with.

We set our thermostat to a desired temperature (input), and a rule-based program (algorithm) takes care of reading a sensor and activating the heating or AC machines to get to the room temperature we want (output).
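The thermostat above can be sketched in a few lines. This is a minimal illustration of the rule-based paradigm, input through hand-written rules to output; the set point, hysteresis margin, and action names are illustrative, not from any real device:

```python
# Rule-based paradigm: input (sensor reading) -> hand-written rules -> output (action).

def thermostat(current_temp, target_temp, hysteresis=0.5):
    """Return the action the hard-coded rules prescribe."""
    if current_temp < target_temp - hysteresis:
        return "heat"       # too cold: turn the heating on
    elif current_temp > target_temp + hysteresis:
        return "cool"       # too hot: turn the AC on
    return "off"            # within the comfort band: do nothing

print(thermostat(18.0, 21.0))  # heat
print(thermostat(24.0, 21.0))  # cool
print(thermostat(21.2, 21.0))  # off
```

Every behavior this program will ever have is spelled out in advance by the programmer; teaching it anything new means editing the rules.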

The industry has been working relentlessly for many years developing better hardware, software and apps to solve a gazillion problems and use cases around us with programmable solutions. But still, every new functionality or feature, every single new ‘learning’, has to come via an update of the software (or of the firmware itself, in hardware).

> Machine learning turns the rule-based paradigm on its head.

Given a dataset (input) and a known expected set of outcomes (output), machine learning will figure out the optimal matching algorithm so that, once trained (learning), it can autonomously predict the output corresponding to new inputs.
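The inversion can be sketched with a deliberately tiny ‘learner’. Here, instead of hand-coding a temperature rule, we hand the machine example inputs with known outputs and let it derive the decision boundary itself; the data and the one-dimensional threshold learner are purely illustrative:

```python
def fit_threshold(inputs, labels):
    """Learn a decision threshold from labeled examples (labels are 0 or 1)."""
    ones = [x for x, y in zip(inputs, labels) if y == 1]
    zeros = [x for x, y in zip(inputs, labels) if y == 0]
    # Place the boundary halfway between the two class means.
    return (sum(ones) / len(ones) + sum(zeros) / len(zeros)) / 2

def predict(x, threshold):
    return 1 if x >= threshold else 0

# Inputs (X) with their known expected outputs (Y): the training data.
X = [15.0, 16.0, 17.0, 23.0, 24.0, 25.0]
Y = [0, 0, 0, 1, 1, 1]

t = fit_threshold(X, Y)   # the rule is *derived* from the data, not written by us
print(predict(26.0, t))   # 1 -- a correct prediction for a new, unseen input
```

The rule came out of the data; a different dataset would have produced a different rule with no change to the code.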

The new AI and machine learning paradigm opens up the promised land of ‘self-programming’ machines, capable of finding the right algorithm for any occasion, provided sufficient input training data is available, which is the bottleneck today.

However, and despite all the incredible progress made in this field, including breakthroughs around deep learning in recent years, machines are far from matching the human ability to learn new patterns, and worse, we don’t know how they learned what they learned, nor how they arrive at a decision or a specific (wrong) output. We just feed them big data and ‘tweak’ the machine learning process until we get them to work and deliver the desired outputs within acceptable thresholds of accuracy, but the whole thing remains a ‘black box’ (Fodor & Pylyshyn 1988, Sun 2002).

And they have become very efficient and accurate, better than humans on many fronts, no question. AI, machine learning and neural networks are now behind every major service, predicting our credit score, detecting fraud, performing face recognition, assisting us through Siri, Google, Cortana or Alexa, and soon driving our cars.

But, as in the old computing paradigm, the process of learning still requires an ‘update’: what, in machine learning and neural network jargon, is called ‘retraining the network’ with a new dataset and the new features required to incorporate a new learning or a new functionality.

Retraining any AI network takes well-experienced engineers, top-notch hardware (GPUs) and time, a lot of computing time.

That’s why we can’t teach Siri, Google, Cortana or Alexa new things on the fly. If they don’t understand what we say, they typically default to a simple search on the web; we can’t simply tell them ‘learn this new word’ or ‘remember my favorite team is the Red Sox’. The same applies to the rest of the large neural networks behind other services: they need to be retrained with the new data, and that takes days, weeks or even months depending on the size of the network and the dataset.

Now, imagine for a moment if we could teach machines ourselves and make them learn the same way we humans do, wouldn’t that be awesome? Imagine if we could teach Siri, Cortana, Google or Alexa new words or expressions, or even new action commands: ‘hey Alexa, pull my car out of the garage’.

The answer to this is in the brain.

And some researchers, devoted to reverse engineering the recognition mechanisms of the brain, have unlocked brain-like algorithms and new machine learning models that solve this problem, turning the traditional machine learning ‘black box’ into a ‘clear box’ neural network where new learning can happen on the fly, in real time and at a fraction of today’s computational cost (no retraining over the whole dataset required).

Put simply, the underlying problem is that all traditional machine learning models are primarily feedforward based. In other words, the basic calculations in the network ultimately happen in the form of a simple multiplication, where the output Y is just the input X weighted (feedforward multiplied by W, the weights): Y = W * X
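That feedforward computation is literally a matrix multiply. A minimal sketch, with illustrative shapes and numbers (real networks stack many such layers and add nonlinearities between them):

```python
import numpy as np

# The feedforward step described above: output Y is the input X weighted by W.
W = np.array([[0.2, 0.8, -0.5],
              [1.0, -0.3, 0.4]])   # 2 output nodes, 3 input features
X = np.array([1.0, 2.0, 3.0])      # one input pattern

Y = W @ X                          # Y = W * X, a single matrix multiplication
print(Y)                           # [0.3 1.6]
```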

Determining the set of weights W for a given input dataset X with a known labeled output Y is called ‘training the network’. The process is long, and can take hours, days or even months for large networks, but once all those weights W are calculated and refined (a process called weight optimization), the network is capable of amazing wonders like facial recognition or natural language understanding.
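A hedged sketch of what ‘training the network’ means: given inputs X and known outputs Y, search for the W that makes W * X match Y. Here a bare least-squares gradient descent stands in for the far more elaborate optimizers real networks use, and all the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 training inputs, 3 features each
true_W = np.array([[0.5, -1.0, 2.0]])  # the mapping we hope training recovers
Y = X @ true_W.T                       # the known, labeled outputs

W = np.zeros((1, 3))                   # start from arbitrary weights
lr = 0.1                               # learning rate
for _ in range(200):                   # the (here, tiny) training loop
    error = X @ W.T - Y                # prediction error over the whole dataset
    grad = error.T @ X / len(X)        # gradient of the mean squared error
    W -= lr * grad                     # nudge the weights downhill

print(np.round(W, 2))                  # close to true_W after training
```

Note that every pass touches the entire dataset; this is exactly why incorporating new knowledge means rerunning the whole loop.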

However, as mentioned, if you want the network to learn something new, you need to go through the whole retraining process again from the beginning, recalculating and optimizing the new set of weights W.

But ‘this is not how the brain works’, Tsvi Achler, MD/PhD in neuroscience and BSEE from Berkeley, told us at a talk in Mountain View.

‘The brain does not turn around and recalculate weights. It computes and learns differently during recognition, while the context is still available, and it does not only do feedforward: all sensory neural recognition mechanisms show some form of feedback loop.’

In all traditional machine learning methods (deep learning, convolutional networks, recurrent networks, support vector machines, perceptrons, etc.) there is a ‘disconnect’ between the learning & training phase and the recognition phase. What Tsvi Achler proposes is not to recalculate (learn) weights, but to determine the neural network activation (Y, the output) by optimizing during recognition, factoring in feedback as well as feedforward and, more importantly, focusing on the current pattern in context (vs. the whole training dataset).

With this approach and this new machine learning algorithm we can ‘see’ the weights and change them in real time while recognizing, and add new nodes (patterns) and features to the network on the fly, without the need to go through the retraining process.
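To give a flavor of ‘optimizing during recognition’, here is a deliberately simplified sketch, not Achler’s actual algorithm: the weights W stay fixed, recognition itself is an optimization over the output activations y, and the feedback is the network’s top-down reconstruction of the input compared against the actual input. All patterns and shapes are toy examples:

```python
import numpy as np

def recognize(W, x, steps=200, lr=0.1):
    """Infer activations y by minimizing ||x - W.T @ y||^2 with W held fixed."""
    y = np.zeros(W.shape[0])
    for _ in range(steps):
        feedback = W.T @ y               # top-down reconstruction of the input
        y -= lr * (W @ (feedback - x))   # gradient step on the activations
    return y

# Stored patterns, one row per known class (illustrative toy data).
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

print(np.round(recognize(W, np.array([0.0, 1.0, 0.0])), 2))  # pattern 2 wins

# 'Learning on the fly': adding a new pattern is just appending a row --
# no retraining of the existing weights.
W = np.vstack([W, [0.0, 0.0, 1.0]])
print(np.round(recognize(W, np.array([0.0, 0.0, 1.0])), 2))  # new pattern wins
```

The contrast with the training loop of traditional networks is the point: here nothing is recomputed over the old dataset when a new pattern arrives, and the weights remain directly inspectable.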

At his startup Optimizing Mind, Tsvi ran his machine learning model on a quad-core Celeron laptop (2 GHz, 4 GB of memory), roughly equivalent to a high-end smartphone. He tested it against traditional methods such as SVM or KNN, and the scalability results were astonishing, showing up to two orders of magnitude of computational cost reduction.

The ability to embed this new machine learning technology in a smartphone will enable true real-time learning from end users’ interaction while preserving data locally (no need to send it back and forth to servers).

The time when we will finally be able to teach machines ourselves, and have them learn from the environment, all in real time, is getting closer.

This may even be a very early first step towards machine-to-machine learning and, with that, who knows, maybe exponential intelligence?