DLVM

Modern Compiler Infrastructure for Deep Learning Systems

Introduction

Deep learning software demands reliability and performance.
However, many existing deep learning frameworks are software libraries that
act as an unsafe DSL embedded in Python paired with a computation graph interpreter.

We present DLVM, a design and implementation of a compiler infrastructure
with a linear algebra intermediate representation, algorithmic differentiation
by adjoint code generation, domain-specific optimizations, and a code generator
targeting GPUs via LLVM.
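To make the idea of adjoint code generation concrete, the sketch below shows reverse-mode (adjoint) automatic differentiation on a tiny scalar expression graph. The `Value` class and its methods are illustrative inventions for this example, not DLVM's actual IR or API; DLVM performs the analogous transformation at compile time on tensor code.

```python
class Value:
    """Illustrative scalar node; DLVM's real IR operates on tensors at compile time."""

    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            # Adjoint rule for z = x * y: dx += dz * y, dy += dz * x
            self.grad += out.grad * other.data
            other.grad += out.grad * self.data
        out._backward = backward
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            # Adjoint rule for z = x + y: dx += dz, dy += dz
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def backprop(self):
        # Topologically order the graph, then run the adjoint code in reverse.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(3.0), Value(4.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1 and dz/dy = x
z.backprop()
print(x.grad, y.grad)  # 5.0 3.0
```

Generating this backward pass as explicit code, rather than interpreting a graph at runtime, is what allows the compiler to optimize the forward and adjoint computations together.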

Designed as a modern compiler infrastructure inspired by LLVM, DLVM is more modular
and more generic than existing deep learning compiler frameworks, and supports
tensor DSLs with high expressivity. With our prototype staged DSL embedded in Swift,
we argue that the DLVM system enables modular, safe, and performant frameworks
for deep learning.
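To illustrate what "staged" means here, the sketch below builds an expression graph first (stage one) and only then compiles it into an executable function (stage two), so per-call interpreter overhead disappears. The names `Expr`, `Var`, and `compile_expr` are hypothetical, and the sketch is in Python rather than Swift; DLVM's actual DSL lowers staged expressions to its IR instead of Python source.

```python
class Expr:
    # Overloaded operators build a graph instead of computing immediately.
    def __add__(self, other): return Op("+", self, other)
    def __mul__(self, other): return Op("*", self, other)

class Var(Expr):
    def __init__(self, name): self.name = name
    def emit(self): return self.name

class Op(Expr):
    def __init__(self, op, lhs, rhs):
        self.op, self.lhs, self.rhs = op, lhs, rhs
    def emit(self):
        return f"({self.lhs.emit()} {self.op} {self.rhs.emit()})"

def compile_expr(expr, *params):
    # Stage two: generate source for the whole expression once, then compile it.
    # The cost of traversal is paid at staging time, not on every call.
    src = f"lambda {', '.join(p.name for p in params)}: {expr.emit()}"
    return eval(src)

a, b = Var("a"), Var("b")
f = compile_expr(a * b + a, a, b)  # f is an ordinary compiled function
print(f(3.0, 4.0))  # 15.0
```

Separating graph construction from execution in this way is what lets a compiler, rather than an interpreter, see and optimize the whole tensor program.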

DLVM started as a research project at the University of Illinois at Urbana-Champaign.

Update: The authors of this project are no longer maintaining DLVM and are instead
developing Swift for TensorFlow, a project
providing first-class language and compiler support for machine learning in Swift.
Watch the TensorFlow Dev Summit 2018 video for more information.