It should be no surprise, then, that Julia is a natural fit for many areas of machine learning. ML, and deep learning in particular, drives some of the most demanding numerical computing applications in use today, and Julia’s strengths make it an excellent language in which to implement these algorithms.

One of the key promises of Julia is to eliminate the so-called “two-language problem”: the phenomenon of prototyping in a high-level language for productivity, but then having to rewrite performance-critical sections in C once the code has to run on real-life data in production. In Julia this is unnecessary, because high-level and abstract constructs carry no performance penalty.
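To make that concrete, here is a minimal sketch: a hand-written scalar loop in plain Julia that the compiler specializes into tight machine code for each element type. (Timing it, for example with the BenchmarkTools package, is an optional step; that package is an assumption here, not part of the base language.)

```julia
# A hand-written summation loop in plain, high-level Julia; no C required.
function mysum(xs)
    s = zero(eltype(xs))   # type-stable accumulator
    for x in xs
        s += x
    end
    return s
end

mysum(rand(10^6))   # first call compiles a specialized Float64 method
# using BenchmarkTools; @btime mysum(rand(10^6))   # optional benchmark
```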

This means that researchers and engineers can now work in the same language. Custom kernels written in Julia, for example, perform as well as kernels written in C. Further, language features such as macros and reflection can be used to build high-level APIs and DSLs that increase productivity on both sides.
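As one illustrative, hypothetical example of such metaprogramming, the macro below rewrites an expression at parse time to wrap it in timing code; this pattern of generating code rather than interpreting it is the basic building block of many Julia DSLs.

```julia
# Hypothetical sketch: a macro that rewrites user code at parse time.
macro timeit(ex)
    quote
        t0 = time()
        val = $(esc(ex))   # splice the user's expression in unchanged
        println("elapsed: ", time() - t0, " s")
        val
    end
end

@timeit sum(rand(10^6))   # the timing wrapper is generated, not interpreted
```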

GPU

Modern ML depends heavily on general-purpose GPU computing to attain acceptable performance. As a flexible, modern, high-level language, Julia is well placed to take full advantage of this hardware.

First, Julia’s exceptional FFI capabilities make it trivial to use the GPU drivers and CUDA libraries to offload computation to the GPU without any additional overhead. This allows Julia deep learning libraries to use GPU computation with very little effort.
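For instance, a device buffer can be allocated straight from the CUDA runtime with nothing more than a `ccall`. This is a sketch, assuming `libcudart` is installed and on the library search path:

```julia
# Allocate and free 1 KiB of GPU memory via the raw CUDA runtime API.
devptr = Ref{Ptr{Cvoid}}(C_NULL)
status = ccall((:cudaMalloc, "libcudart"), Cint,
               (Ptr{Ptr{Cvoid}}, Csize_t), devptr, 1024)
status == 0 || error("cudaMalloc failed with status $status")

ccall((:cudaFree, "libcudart"), Cint, (Ptr{Cvoid},), devptr[])
```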

Beyond that, libraries such as ArrayFire allow developers to write natural-looking mathematical operations while executing them on the GPU instead of the CPU. This is probably the easiest way to tap the power of the GPU from Julia. The language’s type and function abstractions make this possible with, once again, very little performance overhead.
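A sketch using the ArrayFire.jl wrapper illustrates the idea: the expressions read like ordinary Julia linear algebra, but execute on the device. (The exact API shown here is an assumption and may differ between package versions.)

```julia
using ArrayFire   # Julia wrapper for the ArrayFire GPU library

a = AFArray(rand(Float32, 1000, 1000))   # upload host data to the GPU
b = AFArray(rand(Float32, 1000, 1000))

c = a * b + a        # matrix multiply and add execute on the device
result = Array(c)    # download the result back to the host
```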

Julia has a layered code generation and compilation infrastructure that leverages LLVM. (Incidentally, it also provides some excellent introspection facilities into this process.) Building on this, Julia has recently gained the ability to compile code directly for the GPU, a capability unmatched among high-level programming languages.
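The built-in introspection macros expose each stage of that pipeline, and with the CUDAnative.jl package (plus a CUDA-capable GPU) a kernel written in plain Julia can be compiled for and launched on the device. The sketch below is modeled on CUDAnative’s vector-add example; the exact `@cuda` launch syntax is an assumption and has varied across package versions.

```julia
# Introspecting the CPU compilation pipeline:
f(x) = 2x + 1
@code_llvm f(1.0)     # the LLVM IR Julia generates for this method
@code_native f(1.0)   # the machine code LLVM produces from it

# Compiling a Julia function directly for the GPU:
using CUDAnative, CuArrays

function vadd(a, b, c)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        c[i] = a[i] + b[i]
    end
    return nothing
end

a = CuArray(rand(Float32, 1024))
b = CuArray(rand(Float32, 1024))
c = similar(a)
@cuda threads=256 blocks=4 vadd(a, b, c)   # compile for and launch on the GPU
```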

While an x86 CPU paired with a GPU is currently the most popular hardware setup for deep learning applications, other platforms have very interesting performance characteristics. Among them, Julia now fully supports the Power platform as well as Intel’s Knights Landing (KNL) architecture.

Libraries

The Julia ecosystem has matured sufficiently over the last few years to deliver these benefits in many domains of numerical computing, and a rich set of ML libraries is available in Julia right now. Deep learning frameworks with natural Julia bindings include MXNet and TensorFlow. Those wanting to dive into the internals can use the pure-Julia libraries Mocha and Knet. In addition, there are libraries for random forests, SVMs, and Bayesian learning.
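As a taste of the pure-Julia approach, here is a sketch in the style of Knet’s well-known linear-regression example; `grad` comes from Knet’s automatic-differentiation machinery, and the exact API may differ between versions.

```julia
using Knet, Statistics

predict(w, x) = w[1] * x .+ w[2]                 # a linear model
loss(w, x, y) = mean(abs2, y .- predict(w, x))   # mean squared error
lossgradient = grad(loss)                        # differentiate the loss automatically

function train!(w, data; lr = 0.1)
    for (x, y) in data
        dw = lossgradient(w, x, y)
        for i in 1:length(w)
            w[i] -= lr * dw[i]                   # plain gradient-descent update
        end
    end
    return w
end
```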

Using Julia with all these libraries is now easier than ever. Thanks to the Data Science Virtual Machine (DSVM), running Julia on Azure is just a click away. The DSVM includes a full distribution of JuliaPro, the professional Julia development environment from Julia Computing, Inc., along with many popular statistical and ML packages. It also includes the IJulia system, which brings Jupyter notebooks to the Julia language. Together, these create the perfect environment for data science, for both exploration and production.
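For readers running Julia locally instead, the same notebook environment is one package away (assuming the standard IJulia package):

```julia
using Pkg
Pkg.add("IJulia")   # install the Jupyter kernel for Julia
using IJulia
notebook()          # launch a Jupyter notebook server with a Julia kernel
```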