Ok, let me put it out there that I hate the use of the word prediction. If I were naming the service, I would have called it "execution", or more precisely a "machine training (MT) model execution API". I know I'll never get my way, but I have to put it out there how bullshit many of the terms we use in this space are--ok, back to the API blah blah blah, as my daughter would say.

A common element of API portals for the last decade has been an application gallery, showcasing the apps that are developed on top of an API. Now, when an API offers up machine learning (ML) model execution capabilities, either generally or for very specialized model execution (i.e., genomics, financial), there should also be a gallery of available models that have been delivered as part of platform operations--just like we have done with APIs & applications.
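To make the idea concrete, here is a minimal sketch of what a model gallery might look like behind a portal's catalog endpoint--all of the field names, model entries, and the list_models helper are hypothetical, invented for illustration, not any specific provider's schema:

```python
# Hypothetical model-gallery catalog for an API portal.
# Every model, field name, and endpoint path here is made up.

MODEL_GALLERY = [
    {
        "id": "genomics-variant-classifier",
        "name": "Genomics Variant Classifier",
        "domain": "genomics",
        "execution_endpoint": "/models/genomics-variant-classifier/execute",
    },
    {
        "id": "fraud-score",
        "name": "Transaction Fraud Scoring",
        "domain": "financial",
        "execution_endpoint": "/models/fraud-score/execute",
    },
]

def list_models(domain=None):
    """Return gallery entries, optionally filtered by domain,
    much like an application gallery filters apps by category."""
    if domain is None:
        return MODEL_GALLERY
    return [m for m in MODEL_GALLERY if m["domain"] == domain]
```

The point is that models become first-class, browsable resources in the portal, right alongside the APIs and applications we already showcase.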

I see this stuff as the evolution of the algorithmic layer of API operations. Most APIs are delivering data or content, but there is a third layer that is about wrapping algorithms, and in the current machine learning and artificial intelligence craze, this approach to delivering algorithmic APIs will continue to be popular. It's not an ML revolution, it is simply an evolution in how we are delivering API resources, leveraging ML models as the core wrapper (Google was smart to open source TensorFlow).
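A minimal sketch of that algorithmic layer, assuming a toy stand-in for a real trained model--the execute handler, payload shape, and "toy-linear-v1" name are all hypothetical, but they show how an endpoint wraps model execution rather than returning stored data or content:

```python
# Sketch of the algorithmic API layer: the endpoint wraps an algorithm
# and executes it on demand, instead of serving data or content.
# The "model" below is a toy linear scorer standing in for a real ML model.

def toy_model(features):
    """Stand-in for a trained model: a fixed linear scoring function."""
    weights = {"amount": 0.002, "velocity": 0.5}
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

def execute(payload):
    """Hypothetical handler for a POST /models/{id}/execute request body,
    wrapping model execution the way a data API wraps a database query."""
    score = toy_model(payload.get("features", {}))
    return {"score": round(score, 4), "model": "toy-linear-v1"}
```

Swap the toy scorer for a TensorFlow model and the wrapper stays the same--that is what makes this an evolution of API resource delivery rather than a revolution.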

Developing ML models, and making them deployable via AWS, Google, and Azure, as well as marketplaces like Algorithmia, will become a common approach for the API providers who are further along in their API journey, and have their resources well-defined, modular, and easily deployed in a retail or wholesale manner. While 90% of the ML implementations we will see in 2017 will be smoke and mirrors, 10% will deliver some interesting value that can be used across a growing number of industries--keeping machine training (MT), and machine training execution APIs like Google Prediction, something I will be paying attention to. ;-)