GPU Coder
Generate CUDA code for NVIDIA GPUs

GPU Coder™ generates optimized CUDA® code from MATLAB® code for deep learning, embedded vision, and autonomous systems. The generated code calls optimized NVIDIA® CUDA libraries, including cuDNN, cuSolver, and cuBLAS. It can be integrated into your project as source code, static libraries, or dynamic libraries, and can be used for prototyping on GPUs such as the NVIDIA Tesla® and NVIDIA Tegra®. You can use the generated CUDA code within MATLAB to accelerate computationally intensive portions of your MATLAB code. GPU Coder also lets you incorporate legacy CUDA code into your MATLAB algorithms and into the generated code.

When used with Embedded Coder®, GPU Coder lets you verify the numerical behavior of the generated code via software-in-the-loop (SIL) testing.

Generate Fast, Flexible CUDA Code

Generate optimized CUDA code. Deploy code royalty-free.

Deploy Algorithms Royalty-Free

Compile and run your generated code on popular NVIDIA GPUs, from desktop systems to data centers to embedded hardware. The generated code is royalty-free—deploy it in commercial applications to your customers at no charge.

Generate Code from Supported Toolboxes and Functions

GPU Coder generates code from a broad range of MATLAB language features that design engineers use to develop algorithms as components of larger systems. This includes over 390 operators and functions from MATLAB and companion toolboxes.
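The basic workflow is to point the `codegen` command at a MATLAB entry-point function together with a GPU configuration object. A minimal sketch, assuming a hypothetical entry-point function `saxpy.m` (the function name and example inputs are illustrative, not from the source):

```matlab
% Hypothetical entry-point function saved as saxpy.m:
%   function y = saxpy(a, x, b) %#codegen
%       y = a .* x + b;
%   end

cfg = coder.gpuConfig('lib');   % generate CUDA code as a static library
cfg.GenerateReport = true;      % produce a code generation report

% Input types are inferred from the example arguments passed to -args
codegen -config cfg saxpy -args {single(2), single(zeros(1,4096)), single(zeros(1,4096))}
```

Passing `'mex'` instead of `'lib'` to `coder.gpuConfig` builds a MEX function you can call directly from MATLAB to accelerate the original code.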

Incorporate Legacy Code

Use legacy code integration capabilities to incorporate trusted or highly optimized CUDA code into your MATLAB algorithms for testing in MATLAB, and then call the same CUDA code from the generated code.
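Legacy code is typically pulled in with `coder.ceval`, guarded so the function still runs in MATLAB for simulation. A sketch, assuming a hypothetical hand-written CUDA function `legacy_kernel` declared in `legacy_kernel.h` (both names are placeholders):

```matlab
function y = callLegacyKernel(x) %#codegen
% Wrap a hand-written CUDA routine so it is callable from generated code
y = coder.nullcopy(zeros(size(x), 'like', x));
if coder.target('MATLAB')
    y = 2 * x;   % MATLAB reference implementation used during simulation
else
    % Header and sources are attached via the build configuration
    % (e.g., cfg.CustomInclude and cfg.CustomSource)
    coder.cinclude('legacy_kernel.h');
    coder.ceval('legacy_kernel', coder.rref(x), coder.wref(y), int32(numel(x)));
end
end
```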

Generate CUDA Code from Deep Learning Networks

Deploy trained deep learning networks with Deep Learning Toolbox.

Deploy End-to-End Deep Learning Algorithms

Deploy a variety of trained deep learning networks such as ResNet-50 and SegNet from Deep Learning Toolbox™ to NVIDIA GPUs. Generate code for preprocessing and postprocessing along with your trained deep learning networks to deploy complete algorithms.
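A common pattern is a small entry-point function that loads the trained network into a persistent object and calls `predict`. A sketch using ResNet-50 (the function name `resnet_predict` and the input size are illustrative):

```matlab
function out = resnet_predict(in) %#codegen
% Load the trained network once; subsequent calls reuse the persistent object
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('resnet50');
end
% in: preprocessed 224x224x3 image expected by ResNet-50
out = net.predict(in);
end
```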

Generate Optimized Code for Inference

GPU Coder generates code with a smaller footprint compared with other deep learning solutions because it only generates the code needed to run inference with your specific algorithm. The generated code calls optimized libraries, including TensorRT™ and cuDNN.
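The target inference library is selected on the configuration object. A sketch showing cuDNN versus TensorRT, assuming a hypothetical entry-point function `resnet_predict` (TensorRT additionally supports reduced-precision inference):

```matlab
cfg = coder.gpuConfig('lib');

% Target cuDNN for inference...
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% ...or target TensorRT, optionally with reduced precision
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
cfg.DeepLearningConfig.DataType = 'int8';   % or 'fp16' / 'fp32'

codegen -config cfg resnet_predict -args {ones(224,224,3,'single')}
```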

Optimize the Generated Code

Take advantage of optimizations that are automatically applied to code generated by GPU Coder. Use design patterns to further increase performance.

Minimize CPU-GPU Memory Transfers and Optimize Memory Usage

GPU Coder automatically analyzes, identifies, and partitions segments of MATLAB code to run on either the CPU or GPU. It also minimizes the number of data copies between CPU and GPU. Use profiling tools to identify other potential bottlenecks.
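One such profiling tool is the GPU performance analyzer, which runs an instrumented build of the generated code and reports kernel times and memory transfers. A sketch, assuming a hypothetical entry-point function `myFilter` with one input (availability and options depend on your release):

```matlab
% Profile the generated code; requires a CUDA-capable GPU and a
% supported host compiler. Function name and input are illustrative.
gpuPerformanceAnalyzer('myFilter', {single(rand(1024))}, ...
    'Config', coder.gpuConfig('mex'), 'NumIterations', 5);
```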

Use Design Patterns for Further Acceleration

Design patterns such as stencil processing use shared memory to improve memory bandwidth. They are applied automatically when using certain functions such as convolution. You can also manually invoke them using specific pragmas.
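The stencil pattern can be invoked explicitly with `gpucoder.stencilKernel`, which computes each output element from a sliding window staged in GPU shared memory. A sketch of a 3-by-3 mean filter (the function names are illustrative):

```matlab
function B = meanFilter3x3(A) %#codegen
% Each output element is the mean of a 3x3 neighborhood of A;
% the window data is staged in shared memory by the generated kernel
B = gpucoder.stencilKernel(@mean3x3, A, [3 3], 'same');
end

function c = mean3x3(w)
c = sum(w(:)) / 9;
end
```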

Access Peripherals and Sensors from MATLAB and Generated Code

Remotely communicate with the NVIDIA target from MATLAB to acquire data from webcams and other supported peripherals for early prototyping. Build and deploy your algorithm along with peripheral interface code to the board for standalone execution.
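With the MATLAB Coder Support Package for NVIDIA Jetson and NVIDIA DRIVE Platforms, you can connect to the board and grab camera frames from MATLAB. A sketch, with a placeholder IP address, credentials, and device path:

```matlab
% Connect to an NVIDIA Jetson board (address and credentials are placeholders)
hwobj = jetson('192.168.1.15', 'ubuntu', 'ubuntu');

% Acquire frames from an attached camera for early prototyping in MATLAB
cam = camera(hwobj, '/dev/video0', [640 480]);
img = snapshot(cam);
```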

Move from Prototyping to Production

Use GPU Coder with Embedded Coder to interactively trace your MATLAB code side-by-side with the generated CUDA. Verify the numerical behavior of the generated code running on the hardware using software-in-the-loop (SIL) and processor-in-the-loop (PIL) testing.
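SIL verification is enabled on the configuration object; `codegen` then builds the CUDA library together with a SIL interface you call from MATLAB to compare results numerically. A sketch, assuming a hypothetical entry-point function `saxpy` and an Embedded Coder license:

```matlab
cfg = coder.gpuConfig('lib');   % SIL requires Embedded Coder
cfg.VerificationMode = 'SIL';

% Entry-point name and example inputs are illustrative
codegen -config cfg saxpy -args {single(2), single(zeros(1,4096)), single(zeros(1,4096))}

% Calling the generated SIL interface runs the generated code and
% streams results back to MATLAB for comparison against the original
y = saxpy_sil(single(2), single(rand(1,4096)), single(rand(1,4096)));
```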