MNIST

As an example, I took the “Hello World!” of the neural-network universe: classifying MNIST images. The MNIST dataset contains tens of thousands of images of handwritten digits, each 28×28 pixels. There are ten classes, and the data is neatly split into 60,000 images for training and 10,000 images for testing. Our task is to create a neural network that can classify an image, i.e. determine which of the ten classes it belongs to.
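As a minimal sketch of the task's shape (no real data is loaded here; the numbers come straight from the dataset description), the dimensions and the usual one-hot encoding of a label can be written in Swift as:

```swift
// Hedged sketch: MNIST dimensions as constants plus one-hot label encoding.
let imageSide = 28
let pixelCount = imageSide * imageSide   // 784 inputs per flattened image
let classCount = 10                      // digits 0 through 9

// Encode a digit label as the 10-element target vector a classifier trains on.
func oneHot(_ label: Int) -> [Float] {
    var v = [Float](repeating: 0, count: classCount)
    v[label] = 1
    return v
}

print(pixelCount)   // 784
print(oneHot(3))    // [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```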

I think there is no need to explain terms like machine learning and artificial intelligence in 2017; you can find plenty of op-ed articles and research papers on the topic. So I assume the reader is familiar with the field and knows the definitions of the basic terms. When talking about machine learning, data scientists and software engineers usually mean deep neural networks, which have become quite popular because of their performance. There are already many software solutions and packages for working with artificial neural networks: Caffe, TensorFlow, Torch, Theano (RIP), cuDNN, etc.

Swift

Swift is an innovative, protocol-oriented, open-source programming language created at Apple by Chris Lattner (who recently left Apple and, after a stint at Tesla, settled down at Google).
Apple's operating systems already ship several libraries for working with matrices and vector algebra, such as BLAS, BNNS, and vDSP, which were later gathered under the single Accelerate framework.
In 2015, small-scale solutions for doing math on the GPU via the Metal graphics technology appeared.
Then CoreML was introduced:

CoreML

CoreML can import a finished, trained model (Caffe v1, Keras, scikit-learn) and lets the developer bundle it into an application.
So, first you need to prepare a model on another platform, using Python or C++ and third-party frameworks. Then you need to train it, typically on third-party hardware.
Only after that can you import it and start working with it from Swift. To me, this all seems too complicated.
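To make the consuming side concrete, here is a small sketch, under the assumption that a model has already been converted and compiled elsewhere. The path-building helper and the model name are my own illustrations, not part of CoreML; only `MLModel(contentsOf:)` is the actual loading API:

```swift
import Foundation

// Hypothetical helper: where a compiled model (.mlmodelc) would live.
// The name "MNISTClassifier" is an assumption for illustration only.
func compiledModelURL(named name: String, in directory: URL) -> URL {
    directory.appendingPathComponent(name).appendingPathExtension("mlmodelc")
}

#if canImport(CoreML)
import CoreML
// On Apple platforms the compiled model can then be loaded generically:
func loadModel(at url: URL) throws -> MLModel {
    try MLModel(contentsOf: url)
}
#endif

let url = compiledModelURL(named: "MNISTClassifier",
                           in: URL(fileURLWithPath: "/tmp"))
print(url.lastPathComponent)  // MNISTClassifier.mlmodelc
```

Note that the training itself still happens outside Swift, e.g. in Keras or Caffe; CoreML only consumes the result.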

Every machine learning task involves large amounts of data, and analyzing a network is complex and confusing. To address this, Google released a set of visualization tools called TensorBoard.

It is currently the most useful open-source tool of its kind. Unfortunately, out of the box it works only with the TensorFlow library; there is no way to feed it JSON or XML logs.

When digging into a self-written neural network, you cannot avoid data-visualization tasks. For that reason it is worth being able to use TensorBoard from a C/C++/Java or Swift application.

Both forward propagation and backpropagation boil down to operations on matrices, the most common one being matrix multiplication. To perform matrix multiplication in reasonable time, you will need to optimize your algorithms.
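As a minimal, unoptimized sketch of the kind of operation involved, here is a dense layer's forward pass written as a plain matrix-vector product (the names and sizes are illustrative, not from any particular library):

```swift
// Naive forward pass of one dense layer: out = W * input + bias.
// Plain nested loops; this is exactly what BLAS routines optimize away.
func forward(weights: [[Float]], bias: [Float], input: [Float]) -> [Float] {
    var out = bias
    for j in 0..<weights.count {
        for i in 0..<input.count {
            out[j] += weights[j][i] * input[i]
        }
    }
    return out
}

// Tiny illustrative call: identity weights, so the bias is simply added.
let w: [[Float]] = [[1, 0], [0, 1]]
print(forward(weights: w, bias: [1, 1], input: [2, 3]))  // [3.0, 4.0]
```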

On macOS there is a simple way to do this: the Accelerate framework. It is actually an umbrella framework for vector-optimized operations:

cblas.h and vblas.h are the interfaces to Apple’s implementations of BLAS. Additional documentation on the BLAS standard, including reference implementations, can be found on the web starting from the BLAS FAQ page at these URLs: http://www.netlib.org/blas/faq.html and http://www.netlib.org/blas/blast-forum/blast-forum.html.

clapack.h is the interface to Apple’s implementation of LAPACK. Documentation of the LAPACK interfaces, including reference implementations, can be found on the web starting from the LAPACK FAQ page at this URL: http://netlib.org/lapack/faq.html
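As a hedged sketch of how this pays off, a row-major matrix multiplication can delegate to `cblas_sgemm` when Accelerate is available and fall back to a naive triple loop elsewhere. The wrapper function is my own, not an Apple API:

```swift
import Foundation
#if canImport(Accelerate)
import Accelerate
#endif

// Multiply row-major A (m×k) by B (k×n), producing C (m×n).
// Uses Accelerate's BLAS on Apple platforms, plain loops otherwise.
func matmul(_ a: [Float], _ b: [Float], m: Int, k: Int, n: Int) -> [Float] {
    var c = [Float](repeating: 0, count: m * n)
    #if canImport(Accelerate)
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                Int32(m), Int32(n), Int32(k),
                1.0, a, Int32(k), b, Int32(n),
                0.0, &c, Int32(n))
    #else
    for i in 0..<m {
        for p in 0..<k {
            let aip = a[i * k + p]
            for j in 0..<n {
                c[i * n + j] += aip * b[p * n + j]
            }
        }
    }
    #endif
    return c
}

// 2×2 example: [1 2; 3 4] * [5 6; 7 8]
print(matmul([1, 2, 3, 4], [5, 6, 7, 8], m: 2, k: 2, n: 2))
// [19.0, 22.0, 43.0, 50.0]
```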

This is also a good way to combine your code with a C++ library on both the Linux and macOS platforms.