
This tutorial is aimed at NumPy users who have no experience with Cython at
all. If you have some knowledge of Cython you may want to skip to the
"Efficient indexing" section.

The main scenario considered is NumPy end-use rather than NumPy/SciPy
development. The reason is that Cython is not (yet) able to support functions
that are generic with respect to the number of dimensions in a
high-level fashion. This restriction is much more severe for SciPy development
than for more specific "end-user" functions. See the last section for more
information on this.

The style of this tutorial will not fit everybody, so you can also consider
other introductions to Cython and the rest of the Cython documentation.

Cython is a compiler which compiles Python-like code files to C code. Still,
"Cython is not a Python to C translator". That is, it doesn't take your full
program and "turn it into C" – rather, the result makes full use of the
Python runtime environment. A way of looking at it may be that your code is
still Python in that it runs within the Python runtime environment, but rather
than compiling to interpreted Python bytecode one compiles to native machine
code (but with the addition of extra syntax for easy embedding of faster
C-like code).

This has two important consequences:

- Speed. How much depends very much on the program involved. Typical Python numerical programs tend to gain very little, as most of their time is spent in lower-level C that is used in a high-level fashion. However, for-loop-style programs can gain many orders of magnitude when typing information is added (which is what makes such loops a realistic alternative in the first place).

- Easy calling into C code. One of Cython's purposes is to allow easy wrapping
  of C libraries. When writing code in Cython you can call into C code as
  easily as into Python code; see the sketch after this list.
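
As a taste of how this looks, here is a minimal sketch (not part of the
original tutorial) wrapping the C library's sin function; on some systems you
may also need to link libm with -lm:

cdef extern from "math.h":
    double sin(double x)

def py_sin(double x):
    # Calls the C sin() directly, with no Python-level call overhead.
    return sin(x)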

Very few Python constructs are not yet supported, and making Cython compile all
Python code is a stated goal. You can see the differences with Python in
Limitations.

There are several ways to build and use Cython code:

- The SAGE mathematics software system provides excellent support for using
  Cython and NumPy from an interactive command line or through a notebook
  interface (like Maple/Mathematica). See this documentation.

- Cython can be used as an extension within a Jupyter notebook, making it easy
  to compile and use Cython code with just a %%cython at the top of a cell.
  For more information see Using the Jupyter Notebook.

- A version of pyximport is shipped with Cython, so that you can import .pyx
  files dynamically into Python and have them compiled automatically (see
  Compiling with pyximport).

- Cython supports distutils, so you can very easily create build scripts which
  automate the process; this is the preferred method for Cython-implemented
  libraries and packages. See Basic setup.py, and the sketch after this list.

- Manual compilation (see below).
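
As an illustration of the distutils route, a minimal setup.py might look like
this (a sketch assuming the module file is named compute_cy.pyx; see Basic
setup.py for the authoritative version):

from setuptools import setup
from Cython.Build import cythonize
import numpy

setup(
    # cythonize() turns the .pyx file into a C extension module;
    # annotate=True also produces the per-line HTML report.
    ext_modules=cythonize("compute_cy.pyx", annotate=True),
    # NumPy's headers are needed as soon as the code uses "cimport numpy".
    include_dirs=[numpy.get_include()],
)

You would then build in place with python setup.py build_ext --inplace.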

Note

If you use an interactive command line environment other than SAGE, such as
IPython or plain Python, it is important that you restart the process
when you recompile the module. It is not enough to issue an "import"
statement again.

As it is always important to know what is going on, I’ll describe the manual
method here. First Cython is run:

$ cython yourmod.pyx

This creates yourmod.c which is the C source for a Python extension
module. A useful additional switch is -a which will generate a document
(yourmod.html) that shows which Cython code translates to which C code
line by line.

Then we compile the C file. This may vary according to your system, but the C
file should be built like Python was built. Python documentation for writing
extensions should have some details. On Linux this often means something
like:
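
The exact command is not reproduced here and varies by installation; something
along these lines is typical (a sketch, using python3-config to supply the
Python include directories):

gcc -shared -fPIC -O2 $(python3-config --includes) -o yourmod.so yourmod.c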

gcc should have access to the NumPy C header files, so if they are not
installed under /usr/include/numpy or similar you may need to pass another
include option for them. You only need to provide the NumPy headers if you write:

cimport numpy

in your Cython code.
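
A reliable way to find the header directory is NumPy's own
numpy.get_include() helper:

python -c "import numpy; print(numpy.get_include())"

The printed path can then be passed to gcc with -I.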

This creates yourmod.so in the same directory, which is importable by
Python by using a normal import yourmod statement.

We’ll say that array_1 and array_2 are 2D NumPy arrays of integer type and
a, b and c are three Python integers.
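
The computation itself is, elementwise, np.clip(array_1, 2, 10) * a +
array_2 * b + c; this is the formula the loops shown later carry out. A plain
NumPy version, reconstructed here as a sketch, would be:

import numpy as np

def compute_np(array_1, array_2, a, b, c):
    # Everything happens in vectorized NumPy operations.
    return np.clip(array_1, 2, 10) * a + array_2 * b + c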

This function uses NumPy and is already really fast, so it might be a bit overkill
to do it again with Cython. This is for demonstration purposes. Nonetheless, we
will show that we achieve a better speed and memory efficiency than NumPy at the cost of more verbosity.

This code computes the function with explicit loops over the two dimensions.
It is both valid Python and valid Cython code. I'll refer to it as both
compute_py.py for the Python version and compute_cy.pyx for the
Cython version – Cython uses .pyx as its file suffix (but it can also compile
.py files).
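
The file itself is not reproduced in this extract; reconstructed from the
typed version shown later, it looks like this:

import numpy as np

def clip(a, min_value, max_value):
    return min(max(a, min_value), max_value)

def compute(array_1, array_2, a, b, c):
    # Explicit loops over both dimensions; valid Python and valid Cython.
    x_max = array_1.shape[0]
    y_max = array_1.shape[1]

    assert array_1.shape == array_2.shape

    result = np.zeros((x_max, y_max), dtype=array_1.dtype)

    for x in range(x_max):
        for y in range(y_max):
            tmp = clip(array_1[x, y], 2, 10)
            tmp = tmp * a + array_2[x, y] * b
            result[x, y] = tmp + c

    return result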

This should be compiled to produce compute_cy.so (on Linux systems;
on Windows systems, this will be a .pyd file). We
run a Python session to test both the Python version (imported from the
.py file) and the compiled Cython module.
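
Such a session might look like this (the array sizes and values here are
illustrative assumptions; timings are omitted since they vary by machine):

In [1]: import numpy as np
In [2]: array_1 = np.random.uniform(0, 1000, size=(3000, 2000)).astype(np.intc)
In [3]: array_2 = np.random.uniform(0, 1000, size=(3000, 2000)).astype(np.intc)
In [4]: a, b, c = 4, 3, 9
In [5]: import compute_py
In [6]: %timeit compute_py.compute(array_1, array_2, a, b, c)
In [7]: import compute_cy
In [8]: %timeit compute_cy.compute(array_1, array_2, a, b, c)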

There's not such a huge difference yet, because the C code still does exactly
what the Python interpreter does (meaning, for instance, that a new object is
allocated for each number used).

You can look at the Python interaction and the generated C
code by using -a when calling Cython from the command
line, %%cython -a when using a Jupyter Notebook, or by using
cythonize('compute_cy.pyx', annotate=True) when using a setup.py.
Look at the generated html file and see what
is needed for even the simplest statements. You get the point quickly. We need
to give Cython more information; we need to add types.

To add types we use custom Cython syntax, so we are now breaking Python source
compatibility. Here’s compute_typed.pyx. Read the comments!

import numpy as np

# We now need to fix a datatype for our arrays. I've used the variable
# DTYPE for this, which is assigned to the usual NumPy runtime
# type info object.
DTYPE = np.intc

# cdef means here that this function is a plain C function (so faster).
# To get all the benefits, we type the arguments and the return value.
cdef int clip(int a, int min_value, int max_value):
    return min(max(a, min_value), max_value)


def compute(array_1, array_2, int a, int b, int c):

    # The "cdef" keyword is also used within functions to type variables. It
    # can only be used at the top indentation level (there are non-trivial
    # problems with allowing them in other places, though we'd love to see
    # good and thought out proposals for it).
    cdef Py_ssize_t x_max = array_1.shape[0]
    cdef Py_ssize_t y_max = array_1.shape[1]

    assert array_1.shape == array_2.shape
    assert array_1.dtype == DTYPE
    assert array_2.dtype == DTYPE

    result = np.zeros((x_max, y_max), dtype=DTYPE)

    # It is very important to type ALL your variables. You do not get any
    # warnings if not, only much slower code (they are implicitly typed as
    # Python objects).
    # For the "tmp" variable, we want to use the same data type as is
    # stored in the array, so we use int because it corresponds to np.intc.
    # NB! An important side-effect of this is that if "tmp" overflows its
    # datatype size, it will simply wrap around like in C, rather than raise
    # an error like in Python.
    cdef int tmp

    # Py_ssize_t is the proper C type for Python array indices.
    cdef Py_ssize_t x, y

    for x in range(x_max):
        for y in range(y_max):
            tmp = clip(array_1[x, y], 2, 10)
            tmp = tmp * a + array_2[x, y] * b
            result[x, y] = tmp + c

    return result

At this point, have a look at the generated C code for compute_cy.pyx and
compute_typed.pyx. Click on the lines to expand them and see corresponding C.

Especially have a look at the for-loops: in compute_cy.c these take around 20
lines of C code to set up, while in compute_typed.c a normal C for loop is used.

After building this and continuing my (very informal) benchmarks, I get a
clear improvement over the pure Python version.

So adding types does make the code faster, but nowhere
near the speed of NumPy?

What happened is that most of the time spent in this code is spent in the
following lines, and those lines are slower to execute than in pure Python:

tmp = clip(array_1[x, y], 2, 10)
tmp = tmp * a + array_2[x, y] * b
result[x, y] = tmp + c

So what made those lines so much slower than in the pure Python version?

array_1 and array_2 are still NumPy arrays, so Python objects, and expect
Python integers as indexes. Here we pass C int values. So every time
Cython reaches this line, it has to convert all the C integers to Python
int objects. Since this line is called very often, it outweighs the speed
benefits of the pure C loops that were created from the range() earlier.

Furthermore, tmp * a + array_2[x, y] * b returns a Python integer
and tmp is a C integer, so Cython has to do type conversions again.
In the end those type conversions add up and make our computation really
slow. But this problem can be solved easily by using memoryviews.

There are still two bottlenecks that degrade the performance: the array
lookups and assignments, and the C/Python type conversions.
The []-operator still uses full Python operations –
what we would like to do instead is to access the data buffer directly at C
speed.

In short, memoryviews are C structures that can hold a pointer to the data
of a NumPy array and all the necessary buffer metadata to provide efficient
and safe access: dimensions, strides, item size, item type information, etc…
They also support slices, so they work even if
the NumPy array isn’t contiguous in memory.
They can be indexed by C integers, thus allowing fast access to the
NumPy array data.
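
In isolation, binding a memoryview to an existing array is a one-liner (a
minimal sketch; arr stands for any suitably typed 2D NumPy array):

cdef int[:, :] view = arr   # no copy: "view" now aliases arr's data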

No data is copied from the NumPy array to the memoryview in our example.
As the name implies, it is only a “view” of the memory. So we can use the
view result_view for efficient indexing and at the end return the real NumPy
array result that holds the data that we operated on.

Here is how to use them in our code:

compute_memview.pyx

import numpy as np

DTYPE = np.intc

cdef int clip(int a, int min_value, int max_value):
    return min(max(a, min_value), max_value)


def compute(int[:, :] array_1, int[:, :] array_2, int a, int b, int c):

    cdef Py_ssize_t x_max = array_1.shape[0]
    cdef Py_ssize_t y_max = array_1.shape[1]

    # array_1.shape is now a C array, so it's not possible
    # to compare it simply by using == without a for-loop.
    # To be able to compare it to array_2.shape easily,
    # we convert them both to Python tuples.
    assert tuple(array_1.shape) == tuple(array_2.shape)

    result = np.zeros((x_max, y_max), dtype=DTYPE)
    cdef int[:, :] result_view = result

    cdef int tmp
    cdef Py_ssize_t x, y

    for x in range(x_max):
        for y in range(y_max):
            tmp = clip(array_1[x, y], 2, 10)
            tmp = tmp * a + array_2[x, y] * b
            result_view[x, y] = tmp + c

    return result
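
On top of the memoryview version, the index safety checks can be disabled.
These are the real Cython directives for that, sketched here as decorators on
our function:

cimport cython

@cython.boundscheck(False)  # deactivate bounds checking on indexing
@cython.wraparound(False)   # deactivate negative-index wrapping
def compute(int[:, :] array_1, int[:, :] array_2, int a, int b, int c):
    ...  # body identical to compute_memview.pyx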

Now bounds checking is not performed (and, as a side-effect, if you do
happen to access out of bounds you will in the best case crash your program
and in the worst case corrupt data). It is possible to switch bounds-checking
mode in many ways; see Compiler directives for more
information.

We're faster than the NumPy version (6.2x). NumPy is really well written,
but does not perform operations lazily, resulting in a lot
of intermediate copy operations in memory. Our version is
very memory efficient and cache friendly because we
can execute the operations in a single run over the data.

Warning

Speed comes with some cost. In particular, it can be dangerous to set typed
objects (like array_1, array_2 and result_view in our sample code) to None.
Setting such objects to None is entirely legal, but all you can do with them
is check whether they are None. All other use (attribute lookup or indexing)
can potentially segfault or corrupt data (rather than raising exceptions as
they would in Python).

The actual rules are a bit more complicated but the main message is clear: Do
not use typed objects without knowing that they are not set to None.

For extra speed gains, if you know that the NumPy arrays you are
providing are contiguous in memory, you can declare the
memoryview as contiguous.

We give an example for an array that has 3 dimensions.
If you want to give Cython the information that the data is C-contiguous
you have to declare the memoryview like this:

cdef int[:, :, ::1] a

If you want to give Cython the information that the data is Fortran-contiguous
you have to declare the memoryview like this:

cdef int[::1, :, :] a

If all this makes no sense to you, you can skip this part; declaring
arrays as contiguous constrains the usage of your functions as it rejects array slices as input.
If you still want to understand what contiguous arrays are
all about, you can see this answer on StackOverflow.

For the sake of giving numbers, here are the speed gains that you should
get by declaring the memoryviews as contiguous:

Declaring types can make your code quite verbose. If you don’t mind
Cython inferring the C types of your variables, you can use
the infer_types=True compiler directive at the top of the file.
It will save you quite a bit of typing.
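
The directive goes in a special comment at the very top of the .pyx file
(this is the actual directive syntax):

# cython: infer_types=True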

Note that since type declarations must happen at the top indentation level,
Cython won't infer the types of variables declared for the first time
at other indentation levels; doing so would change the meaning of
our code too much. This is why we must still manually declare the types of the
tmp, x and y variables.

And actually, manually giving the type of the tmp variable will
be useful when using fused types.

All those speed gains are nice, but adding types constrains our code.
At the moment, it would mean that our function can only work with
NumPy arrays with the np.intc type. Is it possible to make our
code work for multiple NumPy data types?

Yes, with the help of a new feature called fused types.
You can learn more about it at this section of the documentation.
It is similar to C++ templates. It generates multiple function declarations
at compile time, and then chooses the right one at run-time based on the
types of the arguments provided. By comparing types in if-conditions, it
is also possible to execute entirely different code paths depending
on the specific data type.

In our example, since we no longer have access to the NumPy dtype
of our input arrays, we use if-else statements like the ones sketched below to
decide which NumPy data type to use for our output array.
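
A sketch of how this could look for our function (the name my_type is
illustrative; the rest of the body is unchanged from the memoryview version):

import numpy as np
cimport cython

ctypedef fused my_type:
    int
    double
    long long

def compute(my_type[:, :] array_1, my_type[:, :] array_2,
            my_type a, my_type b, my_type c):
    # Pick the NumPy dtype matching whichever C type this
    # specialization was compiled for.
    if my_type is int:
        dtype = np.intc
    elif my_type is double:
        dtype = np.double
    elif my_type is cython.longlong:
        dtype = np.longlong

    result = np.zeros((array_1.shape[0], array_1.shape[1]), dtype=dtype)
    ...  # loops as before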

Cython has support for OpenMP. It also has some nice wrappers around it,
like the function prange(). You can see more information about Cython and
parallelism in Using Parallelism. Since we do elementwise operations, we can easily
distribute the work among multiple threads. It’s important not to forget to pass the
correct arguments to the compiler to enable OpenMP. When using the Jupyter notebook,
you should use the cell magic like this:
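(The flags below assume GCC; other compilers use different OpenMP options.)

%%cython --compile-args=-fopenmp --link-args=-fopenmp

Inside the cell, the outer loop can then be parallelized with prange, which
requires releasing the GIL (a sketch; the inner loop body is as before):

from cython.parallel import prange

cdef Py_ssize_t x, y
for x in prange(x_max, nogil=True):
    for y in range(y_max):
        ...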