Today, on the 1st of June, the Google Brain team pushed new code to the public repository.
There are some interesting points:
1) High level APIs will be presented as a separate SwiftPM package under github.com/tensorflow.

High-level APIs were added earlier purely to explore the programming model, not to be usable by anyone. Having high-level APIs be part of the stdlib module conveys the wrong message to beta testers, and it has been confusing ever since the open-source release.

2) Supporting Python code is one of the priorities:

Improved Python diagnostics related to member access.

Improved Python C API functions for binary arithmetic operations.

3) Improved cross-device sends and receives support.

4) Lots of work has been done on supporting generic @dynamicCallable methods.

You can see an online demo of the t-SNE visualization here.

Machine learning algorithms have been put to good use in various areas for several years now. Analysis of political events is one such area: for instance, machine learning can be used to predict voting results, to cluster the decisions made, and to analyze the actions of political actors. In this article, I will describe the results of a research project in this area.

Problem Definition

Modern machine learning techniques make it possible to convert and visualize huge amounts of data. This allows us to analyze political parties' activity by converting the voting instances that took place over four years into a self-organizing space of points that reflects the actions of each elected official.

Each politician is described by roughly 12 000 voting instances. Each voting instance records one of five possible actions: the person was absent, skipped the vote, voted for, voted against, or abstained.

The task is to convert each politician's complete voting record into a point in 3D Euclidean space, so that the position of the point reflects that politician's stance.
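As a rough sketch of this setup (with synthetic votes, since the real dataset is not included here), one can one-hot encode the five actions and let t-SNE project each politician's record into 3D:

```python
# Minimal sketch, assuming synthetic data: each politician is a vector of
# votes, each vote is one of five actions, and t-SNE projects the vectors
# into 3D Euclidean space. Sizes are toy-scale, not the article's ~12 000.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
ACTIONS = 5          # absent, skipped, for, against, abstained
N_POLITICIANS = 50
N_VOTES = 100

votes = rng.integers(0, ACTIONS, size=(N_POLITICIANS, N_VOTES))

# One-hot encode so the five actions are equidistant categories,
# rather than an arbitrary numeric ordering.
onehot = np.eye(ACTIONS)[votes].reshape(N_POLITICIANS, -1)

# Project every politician's full voting record onto a point in 3D.
embedding = TSNE(n_components=3, method="exact", perplexity=10,
                 init="random", random_state=0).fit_transform(onehot)
print(embedding.shape)  # (50, 3)
```

Nearby points then correspond to politicians with similar voting behavior, which is exactly the property the visualization relies on.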

When I started working in the field of machine learning, it was quite difficult to move from objects and their behavior to vectors and spaces. At first it was rather complicated to wrap my head around all that, and most processes did not seem obvious or clear at once. That is why I did my best to visualize everything in my groundwork: I created 3D models, graphs, diagrams, figures, and so on.

When speaking about the efficient development of machine learning systems, the problems usually mentioned are learning-rate control, learning-process analysis, gathering various training metrics, and so on. The major difficulty is that we (people) use 2D and 3D spaces to describe the processes that take place around us, while the processes inside neural networks live in multidimensional spaces, which makes them rather difficult to understand. Engineers all around the world recognize this problem and develop various approaches to visualizing multidimensional data or converting it into simpler, more understandable forms.
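As a toy illustration of such a conversion (synthetic data, not from the article), here is a minimal sketch that projects 64-dimensional vectors down to two dimensions with PCA computed via SVD, one of the simplest approaches before reaching for t-SNE:

```python
# Hypothetical example: reduce 64-dimensional "activations" to 2D for plotting.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))   # stand-in for high-dimensional activations

# PCA via SVD: center the data, decompose, keep the top two directions.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2d = Xc @ Vt[:2].T              # 200 points, ready for a 2D scatter plot

print(X2d.shape)  # (200, 2)
```

PCA preserves global variance and is deterministic, while t-SNE preserves local neighborhoods; in practice both are common first steps for making multidimensional data visible.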

MNIST

As an example, I took the “Hello World!” of the neural network universe: classifying MNIST images. The MNIST dataset includes thousands of images of handwritten digits, each 28×28 pixels in size. So we have ten classes, neatly divided into 60 000 images for training and 10 000 images for testing. Our task is to create a neural network that classifies an image, i.e., determines which of the 10 classes it belongs to.
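To make the shapes concrete without training anything, here is a minimal sketch of the smallest possible classifier for this task: an untrained single softmax layer applied to synthetic images standing in for MNIST (the real network and training loop are beyond this sketch):

```python
# Hypothetical, untrained softmax classifier: maps a flattened 28x28 image
# to a probability distribution over the 10 digit classes.
import numpy as np

rng = np.random.default_rng(42)
N_CLASSES, IMG = 10, 28 * 28

# Stand-in for real MNIST images, values in [0, 1]; shapes match the dataset.
images = rng.random((5, IMG))

W = rng.normal(scale=0.01, size=(IMG, N_CLASSES))
b = np.zeros(N_CLASSES)

# Forward pass: logits, then a numerically stable softmax.
logits = images @ W + b
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

predictions = probs.argmax(axis=1)   # one class index (0-9) per image
print(probs.shape)  # (5, 10)
```

Training would then adjust `W` and `b` to minimize cross-entropy over the 60 000 training images; deep networks simply stack more such layers with nonlinearities in between.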

I think it is not necessary to explain the meaning of terms such as machine learning and artificial intelligence in 2017: you can find plenty of op-ed articles and research papers on the topic. So I assume the reader is familiar with the field and knows the definitions of the basic terms. When talking about machine learning, data scientists and software engineers usually mean deep neural networks, which became popular because of their performance. There are already many software packages for working with artificial neural networks: Caffe, TensorFlow, Torch, Theano (RIP), cuDNN, etc.

Swift

Swift is an innovative, protocol-oriented, open-source programming language created at Apple by Chris Lattner (who has since left Apple and, after a stint at Tesla, settled down at Google).
Apple's operating systems already feature various libraries for working with matrices and vector algebra, such as BLAS, BNNS, and DSP routines, which were later gathered into the single Accelerate framework.
In 2015, small-scale solutions for implementing the math appeared, based on the Metal graphics technology.
In 2017, CoreML was introduced:

CoreML

CoreML can import a finished, trained model (Caffe v1, Keras, scikit-learn) and lets the developer use it inside an application.
So you first need to prepare a model on another platform, using Python or C++ and third-party frameworks, and then train it on a third-party hardware setup.
Only after that can you import it and start working with it from Swift. To me, that all seems too complicated.