Kur is a system for quickly building and applying state-of-the-art deep
learning models to new and exciting problems. Kur was designed to appeal to the
entire machine learning community, from novices to veterans. It uses
specification files that are simple to read and author, meaning that you can
get started building sophisticated models without ever needing to code. Even
so, Kur exposes a friendly and extensible API to support advanced deep learning
architectures or workflows. Excited? Jump straight into the
Examples: In Depth.
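As a taste of what a specification file looks like, here is a minimal sketch of a Kurfile describing a small classifier. The section names, layer names, and values below are illustrative assumptions modeled on the style of Kur's examples, not definitive syntax from this document:

```yaml
# A hypothetical Kurfile sketch: a tiny dense classifier.
# Section and layer names here are assumptions, not authoritative syntax.
model:
  - input: images          # named input data stream
  - dense: 128             # fully connected layer with 128 units
  - activation: tanh
  - dense: 10              # one output per class
  - activation: softmax

train:
  epochs: 10               # how long to train
```

The point is not the exact keywords but the shape of the file: the whole architecture reads top to bottom as a list of layers, with no tensor math in sight.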

Kur represents a new paradigm for thinking about, building, and using
state-of-the-art deep learning models. Rather than thinking about your
architecture as a

series of tensor operations (tanh(W*x+b)) and getting lost in all the
details, you can focus on describing the architecture you want to
instantiate. Kur does the rest.
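To make the contrast concrete, here is what that single operation, tanh(W*x+b), looks like when written out directly as tensor math. This is a NumPy sketch with purely illustrative shapes; none of these variable names come from Kur itself:

```python
import numpy as np

# One dense layer expressed as raw tensor operations: y = tanh(W*x + b).
# The shapes and random initialization are purely illustrative.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)         # input vector of 4 features
W = rng.standard_normal((3, 4))    # weights mapping 4 inputs to 3 outputs
b = rng.standard_normal(3)         # per-output bias

y = np.tanh(W @ x + b)             # the layer's output, shape (3,)
print(y.shape)
```

Writing a whole network this way means tracking every shape and every intermediate by hand; describing the layer declaratively and letting the framework generate these operations is exactly the tedium Kur removes.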

The Kur philosophy is that you should describe your model once and simply.
Simple doesn’t mean brainless, nor does it imply that you are limited in what
you can do. By “simple” we mean that models should be simple to use, simple to
modify, and simple to share. A flexible, more general model is elegant, and
that elegance makes it easier to reuse in new contexts or to share with the
community. Kur’s power lies in quickly making models that are both flexible
and reusable.

Decades ago, researchers wrote low-level code using highly optimized linear
algebra libraries and ran the code on CPUs. After the rise of General Purpose
Computing on GPUs (GPGPU), researchers modified their code to use CUDA or
OpenCL. Although the code was functionally identical to its CPU counterpart,
GPU computing represented an incredible breakthrough in efficiency: the same
machine learning models could now train and predict in a fraction of the time.
Problematically, these programs were relatively hard-coded; exploring
different hyperparameters or architectures typically required detailed
knowledge of the code, and was fraught with ugly and bug-prone hacks.

Eventually, computer scientists began abstracting away the low-level, dirty
details of highly optimized CUDA code, and projects like Theano and TensorFlow
were born. These tools are incredible in that
they hide many of the implementation details of working with hardware (i.e.,
CPUs and GPUs), and instead expose higher-level tensor operations to the
developer. Even then, the developer is forced to choose between building up
higher-level abstractions of deep learning primitives, or devolving to the
rigid or hacked models of earlier years. Projects like Keras and Lasagne
emerged organically, driven by a need to more quickly and intuitively develop
deep learning models. Their primary genius is in providing an API that maps to
the way people actually think about the components of a deep learning network
(e.g., as an “LSTM layer” rather than as a series of tensor operations).

Kur is the natural progression of these tools and abstractions. It allows you,
the researcher, to get straight to the heart of deep learning: develop that
awesome model you’ve been dreaming about in a few short lines. And best of all,
you craft your model with high-level abstractions rather than having to think
about annoying questions like: