Apple’s Core ML: The pros and cons

Even as the public clouds duke it out for machine learning supremacy, Apple has changed the game. With the introduction of Core ML, Apple is pushing machine learning down to the devices themselves, saving battery life and improving performance along the way. Machine learning, in other words, is no longer some light frosting on application code; it's becoming part of the device in your pocket.

Machine learning that just works on the devices we all use

Machine learning depends on large sets of training data. Once you’ve figured out the predictive model, you need to feed machines copious quantities of data that train them to “understand” the data and fine-tune the model. Because such training sets require so much data (and so much compute power to crunch it), machine learning has mostly been a cloud thing.

With its introduction of Core ML, however, Apple is pushing machine learning onto its devices (including, if the iPhone 8 rumors are true, an AI-dedicated chip for the upcoming smartphone). While Apple will still need to do the initial heavy lifting of training models in the cloud, there are significant benefits to running the trained models on its devices. As Apple says:

Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device ensures the privacy of user data and guarantees that your app remains functional and responsive when a network connection is unavailable.

Apple has made it exceptionally easy for developers to get started with machine learning. According to developer Matthijs Holleman, who dubs Core ML “machine learning for everyone,” the process for getting started couldn’t be more straightforward: “You simply drop the mlmodel file into your project, and Xcode will automatically generate a Swift or Objective-C wrapper class that makes it really easy to use the model.”

Just as important, the feedback loop is fast. How fast? As developer Said Ozcan gushes, “It was amazing to see the prediction results immediately without any time interval.”

That’s the good news.

Core ML is not quite the bee’s knees

There are no provisions within Core ML for model retraining or federated learning, where data collected from the field is used to improve the accuracy of the model. That’s something you would have to implement by hand, most likely by asking app users to opt in for data collection and using that data to retrain the model for a future edition of the app.

That lack of federated learning may prove particularly thorny for Apple, especially because Google has been advancing exactly this approach. As Google research scientists Brendan McMahan and Daniel Ramage write,

Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud.

Here’s how it works, they wrote:

Your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud.

In other words, instead of harnessing an army of servers in the cloud, you can harness an army of mobile devices in the field, which has much more potential. Equally (or more) important, this improved model is immediately available to the device, making the user experience personalized without having to wait for the tweaked model to round-trip from the cloud. As developer Matt Newton has highlighted, “It could be a killer feature to have easy APIs for doing personalization all on devices.”

Applying federated learning requires machine learning practitioners to adopt new tools and a new way of thinking: model development, training, and evaluation with no direct access to or labeling of raw data, with communication cost as a limiting factor.
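The round-trip McMahan and Ramage describe amounts to federated averaging: each device computes a small update from its own data, and the server merges only those updates into the shared model, never seeing the raw data. Here is a minimal, illustrative sketch in Python; the linear model, learning rate, and two-device setup are invented for demonstration, and real systems such as Google's also encrypt and compress the updates in transit:

```python
# Minimal sketch of federated averaging. A "model" is just a list of
# weights, and each simulated device computes a small update locally.

def local_update(weights, local_data, lr=0.1):
    """One on-device gradient step for a linear model y = w . x."""
    grad = [0.0] * len(weights)
    for x, y in local_data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += err * xi
    n = len(local_data)
    # Only this small delta leaves the device -- never the raw data.
    return [-lr * g / n for g in grad]

def federated_round(weights, device_datasets):
    """Server averages the per-device updates into the shared model."""
    updates = [local_update(weights, data) for data in device_datasets]
    k = len(updates)
    return [w + sum(u[i] for u in updates) / k
            for i, w in enumerate(weights)]

# Two simulated devices, each privately holding samples of y = 2 * x.
devices = [
    [([1.0], 2.0), ([2.0], 4.0)],
    [([3.0], 6.0), ([0.5], 1.0)],
]
w = [0.0]
for _ in range(200):
    w = federated_round(w, devices)
# After enough rounds, the shared weight converges toward 2.0.
```

The key design point is visible in `federated_round`: the server works only with averaged deltas, which is what decouples training from centralized data storage, at the cost of the communication constraints noted above.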

Even so, the upside outweighs the downside, giving researchers compelling reasons to confront the challenges.

With Core ML, has Apple underdelivered again?

You could look at this as yet another example of Apple falling behind its peers. From iCloud to Apple Maps and even Siri, Apple has either been late or underpowered relative to cloud and AI heavyweights like Google. With Core ML, I’m not so sure. The “Apple got it wrong” contention feels misplaced or, at best, premature.

For example, when Amazon Web Services released its own developer-facing machine learning services like Rekognition, Polly, and Lex, there were similar complaints that it was too basic or limited. But as Swaminathan Sivasubramanian, general manager for AWS, said of these services, the goal “is to bring machine learning to every AWS developer,” and not to overwhelm them with the inherent complexity of machine learning.

In a similar manner, Apple is paving an easy path to getting started with machine learning. It’s not perfect, and it won’t go far enough for some developers. But it’s a good way to raise a generation of developers on the potential for machine learning.

Still, there’s one thing Apple probably should have done, even though it’s still foreign to its culture: Open-source Core ML, thereby giving savvy developers the ability to mold it to their needs. As Holleman points out, “As most other machine learning toolkits are open source, why not make Core ML open source too?”

Except to Apple itself, it doesn’t really matter whether Apple gets machine learning right. “The deeper point,” venture capitalist Benedict Evans says, is that “many machine learning techniques are getting commoditized and pushed into developer APIs and onto devices and apps very fast.” Because of this, he says, “there won’t just be one Google or Facebook cloud that does all the machine learning—this is a foundational tech that will be in everything.”

Copyright 2019 IDG Communications. ABN 14 001 592 650. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of IDG Communications is prohibited.