CUDA

Stands for "Compute Unified Device Architecture." CUDA is a parallel computing platform developed by NVIDIA and introduced in 2006. It enables software programs to perform calculations using both the CPU and GPU. By sharing the processing load with the GPU (instead of only using the CPU), CUDA-enabled programs can achieve significant increases in performance.

CUDA is one of the most widely used GPGPU (General-Purpose computation on Graphics Processing Units) platforms. Unlike OpenCL, another popular GPGPU platform, CUDA is proprietary and only runs on NVIDIA graphics hardware. However, most CUDA-enabled video cards also support OpenCL, so programmers can choose to write code for either platform when developing applications for NVIDIA hardware.

While CUDA itself runs only on NVIDIA hardware, it can be used from several different programming languages. For example, NVIDIA provides compilers and APIs for C, C++, Fortran, and Python. The CUDA Toolkit, a development environment for C and C++ developers, is available for Windows, OS X, and Linux.
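To illustrate how a CUDA program shares work between the CPU and GPU, here is a minimal sketch in CUDA C++: the CPU (host) allocates data and launches a kernel, and each GPU thread processes one element of an array in parallel. The kernel name `vecAdd` and the array size are arbitrary choices for this example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the GPU; each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed (unified) memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish before reading c

    printf("c[10] = %.1f\n", c[10]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with NVIDIA's `nvcc` compiler from the CUDA Toolkit, this splits the addition of 1,024 element pairs across hundreds of GPU threads rather than looping over them one at a time on the CPU.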

TechTerms - The Tech Terms Computer Dictionary
