The previous post was about training a Turi Create model on source imagery for use with the CoreML and Vision frameworks. Now that we have our trained model, let's integrate it into an Xcode project to create a sample iOS object detection app.
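To make the integration concrete, here is a minimal sketch of running an object detection model through Vision. It assumes the trained model has been dragged into the Xcode project as `MyDetector.mlmodel` (the generated class name `MyDetector` is hypothetical and will match whatever the model file is named).

```swift
import CoreML
import Vision

// A sketch, not a full app: runs a CoreML object detection model
// on a CGImage and prints each detection's top label and bounding box.
func detectObjects(in image: CGImage) {
    // Wrap the generated CoreML model class for use with Vision.
    // "MyDetector" is a hypothetical class name generated by Xcode.
    guard let model = try? VNCoreMLModel(for: MyDetector().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            // Each observation carries ranked labels with confidence scores
            // and a bounding box in normalized coordinates.
            if let top = observation.labels.first {
                print(top.identifier, top.confidence, observation.boundingBox)
            }
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

From here the bounding boxes can be converted to view coordinates and drawn as overlays on a live camera feed or a still image.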

A bit of downtime gave me time to explore the CoreML and machine learning videos Apple released at WWDC 2017. And with lucky timing, Apple released Turi Create just as I was about to start a demo project for fun.

The goal for this post is to take source images, train a model with Turi Create, and output an Xcode-compatible .mlmodel file for use in object detection with the CoreML and Vision frameworks.

The NSLayoutConstraint class has been Apple's recommendation for layout, as it creates relationships between views, parent views, and child views, so explicitly setting the frame property is not needed. This also helps with accessibility, supporting multiple device screen sizes, and handling views in different orientations. I'm a little late to the party with iOS 9's NSLayoutAnchor, which replaces the wordy NSLayoutConstraint initializer with a simple-to-use API.
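The difference is easiest to see side by side. Here is a minimal sketch expressing the same leading-edge constraint both ways (the `parent` and `child` view names are hypothetical):

```swift
import UIKit

let parent = UIView()
let child = UIView()
parent.addSubview(child)
// Required when building constraints in code, so autoresizing
// masks aren't translated into conflicting constraints.
child.translatesAutoresizingMaskIntoConstraints = false

// Before iOS 9: the wordy NSLayoutConstraint initializer.
NSLayoutConstraint(item: child, attribute: .leading, relatedBy: .equal,
                   toItem: parent, attribute: .leading,
                   multiplier: 1.0, constant: 16.0).isActive = true

// iOS 9+: the same relationship via NSLayoutAnchor.
child.leadingAnchor.constraint(equalTo: parent.leadingAnchor,
                               constant: 16.0).isActive = true
```

Beyond brevity, the anchor API is also type-checked: the compiler will reject nonsense pairings such as constraining a width anchor to a leading anchor, which the older initializer would only catch at runtime.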

Nintendo released The Legend of Zelda: Breath of the Wild earlier this year, and I have enjoyed exploring the beautiful land of Hyrule. The simple acts of sprinting, climbing, and gliding through the world have been a surprisingly pleasurable experience.

This post is about all the attention to detail that was devised and, more pertinently to this blog, developed.