This page lists the Python features supported in CUDA Python. This includes
all kernel and device functions compiled with @cuda.jit and other
higher-level Numba decorators that target the CUDA GPU.

CUDA Python maps directly to the single-instruction, multiple-thread (SIMT)
execution model of CUDA. Each instruction is implicitly executed by multiple
threads in parallel. Under this execution model, array expressions are less
useful because we don't want multiple threads to perform the same task.
Instead, we want threads to perform a task in a cooperative fashion.