We use neon internally at Intel Nervana to solve our customers’ problems
across many domains. We are
hiring across several roles. Apply
here!

See the new
features
in our latest release. We want to highlight that neon v2.0.0+ has been
optimized for much better performance on CPUs by enabling the Intel Math
Kernel Library (MKL). The DNN (Deep Neural Networks) component of MKL
that is used by neon is provided free of charge and is downloaded
automatically as part of the neon installation.

Quick Install

On a Mac OS X or Linux machine, enter the following to download and
install neon (conda users see the
guide),
and use it to train your first multi-layer perceptron. To force a
Python 2 or Python 3 install, replace make below with either
make python2 or make python3.
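The basic install sequence looks like the following (a sketch of the standard neon setup; the virtualenv path assumes a default make build):

```shell
# Download neon and build it into a virtualenv (created by make)
git clone https://github.com/NervanaSystems/neon.git
cd neon && make
# Activate the virtualenv that make created
. .venv/bin/activate
# Train your first multi-layer perceptron
python examples/mnist_mlp.py
```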

Starting after neon v2.2.0, the master branch of neon will be updated
weekly with work in progress toward the next release. Check out a
release tag (e.g., “git checkout v2.2.0”) for a stable release, or
simply check out the “latest” release tag to get the most recent stable
release (i.e., “git checkout latest”).
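Concretely, the tag checkout commands quoted above are (run from inside the cloned neon repository):

```shell
git checkout latest    # most recent stable release
# or pin a specific stable version:
git checkout v2.2.0
```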

From version 2.4.0, pip install has been re-enabled. Neon can be
installed using the package name nervananeon.

pip install nervananeon

Note that
aeon
needs to be installed separately. The latest release, v2.6.0, uses aeon
v1.3.0.

Warning

Between neon v2.1.0 and v2.2.0, the aeon manifest file format
changed. When updating from neon < v2.2.0, manifests have to be
recreated using the ingest scripts (in the examples folder) or updated
using this script.

Use a script to run an example

python examples/mnist_mlp.py

Selecting a backend engine from the command line

If a compatible GPU resource is found on the system, the gpu backend is
selected by default, so the above command is equivalent to:

python examples/mnist_mlp.py -b gpu

When no GPU is available, the optimized CPU (MKL) backend is selected
by default as of neon v2.1.0, which means the above command is
equivalent to:

python examples/mnist_mlp.py -b mkl

If you are interested in comparing the default mkl backend with the
non-optimized CPU backend, use the following command:

python examples/mnist_mlp.py -b cpu

Use a yaml file to run an example

Alternatively, a yaml file may be used to run an example.

neon examples/mnist_mlp.yaml

To select a specific backend in a yaml file, add or modify a line
containing backend: mkl to enable the MKL backend, or backend: cpu to
enable the CPU backend. The gpu backend is selected by default if a GPU
is available.
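For example, the backend line in the yaml file might look like this (a sketch; the rest of the yaml configuration is unchanged):

```yaml
# Select the MKL-optimized CPU backend; use "cpu" or "gpu" instead as needed
backend: mkl
```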

Recommended Settings for neon with MKL on Intel Architectures

The Intel Math Kernel Library takes advantage of the parallelization
and vectorization capabilities of Intel Xeon and Xeon Phi systems. When
hyperthreading is enabled on the system, we recommend the following
KMP_AFFINITY setting to make sure parallel threads are mapped 1:1 to
the available physical cores.
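A typical setting is sketched below; the thread count shown is an assumption for a 4-core machine and should be replaced with your machine's physical core count:

```shell
# Pin one OpenMP thread per physical core (avoids oversubscribing hyperthreads)
export KMP_AFFINITY=compact,1,0,granularity=fine
# Assumption: 4 physical cores; set this to your machine's core count
export OMP_NUM_THREADS=4
```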