Before you build CUDA code, you’ll need to have installed the appropriate
driver for your NVIDIA GPU and the CUDA SDK. See NVIDIA’s CUDA installation
guide for details. Note that clang does not support the CUDA toolkit as
installed by many Linux package managers; you probably need to install
NVIDIA’s package.

You will need CUDA 7.0, 7.5, or 8.0 to compile with clang.

CUDA compilation is supported on Linux, on MacOS as of 2016-11-18, and on
Windows as of 2017-01-05.

Pass e.g. -L/usr/local/cuda/lib64 if compiling in 64-bit mode; otherwise,
pass e.g. -L/usr/local/cuda/lib. (In CUDA, the device code and host code
always have the same pointer widths, so if you’re compiling 64-bit code for
the host, you’re also compiling 64-bit code for the device.)

<GPU arch> – the compute capability of your GPU. For example, if you
want to run your program on a GPU with compute capability of 3.5, specify
--cuda-gpu-arch=sm_35.

Note: You cannot pass compute_XX as an argument to --cuda-gpu-arch;
only sm_XX is currently supported. However, clang always includes PTX in
its binaries, so e.g. a binary compiled with --cuda-gpu-arch=sm_30 would be
forwards-compatible with e.g. sm_35 GPUs.

You can pass --cuda-gpu-arch multiple times to compile for multiple archs.
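
For example, the following invocation (a sketch; axpy.cu and the
architectures are illustrative) compiles an object containing device code for
both sm_35 and sm_60:

clang++ -c axpy.cu --cuda-gpu-arch=sm_35 --cuda-gpu-arch=sm_60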

The -L and -l flags only need to be passed when linking. When compiling,
you may also need to pass --cuda-path=/path/to/cuda if you didn’t install
the CUDA SDK into /usr/local/cuda, /usr/local/cuda-7.0, or
/usr/local/cuda-7.5.
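
Putting these flags together, a full compile-and-link invocation looks
roughly like this (a sketch; axpy.cu is a placeholder source file, and the
library path and GPU arch should match your system):

clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
    -L/usr/local/cuda/lib64 \
    -lcudart_static -ldl -lrt -pthread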

If you’re using GPUs, you probably care about making numerical code run fast.
GPU hardware allows for more control over numerical operations than most CPUs,
but this results in more compiler options for you to juggle.

-ffp-contract={on,off,fast} (defaults to fast on host and device when
compiling CUDA) Controls whether the compiler emits fused multiply-add
operations. Fused multiply-add instructions can be much faster than the
unfused equivalents, but because the intermediate result in an fma is not
rounded, this flag can affect numerical code.

-fcuda-flush-denormals-to-zero (default: off) When this is enabled,
floating point operations may flush denormal inputs and/or outputs to 0.
Operations on denormal numbers are often much slower than the same operations
on normal numbers.

-fcuda-approx-transcendentals (default: off) When this is enabled, the
compiler may emit calls to faster, approximate versions of transcendental
functions, instead of using the slower, fully IEEE-compliant versions. For
example, this flag allows clang to emit the PTX sin.approx.f32
instruction.
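
For instance, a throughput-oriented build might opt into all of the above (a
sketch; whether these trade-offs are acceptable depends on your numerical
requirements):

clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
    -ffp-contract=fast \
    -fcuda-flush-denormals-to-zero \
    -fcuda-approx-transcendentals \
    -L/usr/local/cuda/lib64 -lcudart_static -ldl -lrt -pthread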

In clang, math.h and cmath are available and pass tests
adapted from libc++’s test suite.

In nvcc math.h and cmath are mostly available. Versions of ::foof
in namespace std (e.g. std::sinf) are not available, and where the standard
calls for overloads that take integral arguments, these are usually not
available.

#include <math.h>
#include <cmath>

// clang is OK with everything in this function.
__device__ void test() {
  std::sin(0.); // nvcc - ok
  std::sin(0);  // nvcc - error, because no std::sin(int) override is available.
  sin(0);       // nvcc - same as above.

  sinf(0.);      // nvcc - ok
  std::sinf(0.); // nvcc - no such function
}

nvcc does not officially support std::complex. It’s an error to use
std::complex in __device__ code, but it often works in __host__ __device__
code due to nvcc’s interpretation of the “wrong-side rule” (see
below). However, we have heard from implementers that it’s possible to get
into situations where nvcc will omit a call to an std::complex function,
especially when compiling without optimizations.

As of 2016-11-16, clang supports std::complex without these caveats. It is
tested with libstdc++ 4.8.5 and newer, but is known to work only with libc++
newer than 2016-11-16.

Although clang’s CUDA implementation is largely compatible with NVCC’s, you may
still want to detect when you’re compiling CUDA code specifically with clang.

This is tricky, because NVCC may invoke clang as part of its own compilation
process! For example, NVCC uses the host compiler’s preprocessor when
compiling for device code, and that host compiler may in fact be clang.

When clang is actually compiling CUDA code – rather than being used as a
subtool of NVCC’s – it defines the __CUDA__ macro. __CUDA_ARCH__ is
defined only in device mode (but will be defined if NVCC is using clang as a
preprocessor). So you can use the following incantations to detect clang CUDA
compilation, in host and device modes:
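
#if defined(__clang__) && defined(__CUDA__) && !defined(__CUDA_ARCH__)
// clang compiling CUDA code, host mode.
#endif

#if defined(__clang__) && defined(__CUDA__) && defined(__CUDA_ARCH__)
// clang compiling CUDA code, device mode.
#endif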

Invoke fatbin to combine all P_arch (PTX) and S_arch (SASS) files into a
single “fat binary” file, F.

Compile H using an external host compiler (gcc, clang, or whatever you
like). F is packaged up into a header file which is force-included into
H; nvcc generates code that calls into this header to e.g. launch
kernels.

clang uses merged parsing. This is similar to split compilation, except all
of the host and device code is present and must be semantically correct in both
compilation steps.

For each GPU architecture arch that we’re compiling for, do:

Compile the input .cu file for device, using clang. __host__ code
is parsed and must be semantically correct, even though we’re not
generating code for the host at this time. The output of this step is a
PTX file P_arch, which ptxas then assembles into a SASS file S_arch.

Invoke fatbin to combine all P_arch and S_arch files into a
single fat binary file, F.

Compile H using clang. __device__ code is parsed and must be
semantically correct, even though we’re not generating code for the device
at this time.

F is passed to this compilation, and clang includes it in a special ELF
section, where it can be found by tools like cuobjdump.
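
For example, NVIDIA’s cuobjdump tool can read that section back out (a.out is
a placeholder binary name):

cuobjdump -ptx a.out    # dump the embedded PTX
cuobjdump -sass a.out   # dump the embedded SASS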

(You may ask at this point, why does clang need to parse the input file
multiple times? Why not parse it just once, and then use the AST to generate
code for the host and each device architecture?

Unfortunately this can’t work because we have to define different macros during
host compilation and during device compilation for each GPU architecture.)
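
For example, __CUDA_ARCH__ expands to a different value in each device pass
(350 for sm_35, 300 for sm_30, and so on), so a guard like the following
sketch (the function is hypothetical) selects different code per architecture
and cannot be resolved by a single parse:

#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 350
__device__ void fast_path() { /* use an sm_35+ feature */ }
#else
__device__ void fast_path() { /* portable fallback */ }
#endif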

clang’s approach allows it to be highly robust to C++ edge cases, as it doesn’t
need to decide at an early stage which declarations to keep and which to throw
away. But it has some consequences you should be aware of.

When resolving an overloaded function, clang considers the host/device
attributes of the caller and callee. These are used as a tiebreaker during
overload resolution. See IdentifyCUDAPreference for the full set of rules,
but at a high level they are:

When compiling for device, HDs will call Ds with lower priority than HD, and
will call Hs with still lower priority. If it’s forced to call an H, the
program is malformed if we emit code for this HD function. We call this the
“wrong-side rule”; see the example below. The rules are symmetrical when
compiling for host.

__host__ void host_only();

// We don't codegen inline functions unless they're referenced by a
// non-inline function.  inline_hd1() is called only from the host side, so
// does not generate an error.  inline_hd2() is called from the device side,
// so it generates an error.
inline __host__ __device__ void inline_hd1() { host_only(); }  // no error
inline __host__ __device__ void inline_hd2() { host_only(); }  // error

__host__ void host_fn() { inline_hd1(); }
__device__ void device_fn() { inline_hd2(); }

// This function is not inline, so it's always codegen'ed on both the host
// and the device.  Therefore, it generates an error.
__host__ __device__ void not_inline_hd() { host_only(); }

For the purposes of the wrong-side rule, templated functions also behave like
inline functions: They aren’t codegen’ed unless they’re instantiated
(usually as part of the process of invoking them).

clang’s behavior with respect to the wrong-side rule matches nvcc’s, except
nvcc only emits a warning for not_inline_hd; device code is allowed to call
not_inline_hd. In its generated code, nvcc may omit not_inline_hd’s
call to host_only entirely, or it may try to generate code for
host_only on the device. What you get seems to depend on whether or not
the compiler chooses to inline host_only.

Member functions, including constructors, may be overloaded using H and D
attributes. However, destructors cannot be overloaded.
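
For instance, something like the following sketch (the class is hypothetical)
is fine, so long as only one destructor is defined:

struct Counter {
  __host__ Counter() { /* host-side setup */ }
  __device__ Counter() { /* device-side setup */ }
  // A single destructor; it may be __host__ __device__, but there cannot
  // be separate __host__ and __device__ destructor overloads.
  __host__ __device__ ~Counter() {}
};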

Occasionally you may want to have a class with different host/device versions.

If all of the class’s members are the same on the host and device, you can just
provide overloads for the class’s member functions.

However, if you want your class to have different members on host/device, you
won’t be able to provide working H and D overloads in both classes. In this
case, clang is likely to be unhappy with you.

#ifdef __CUDA_ARCH__
struct S {
  __device__ void foo() { /* use device_only */ }
  int device_only;
};
#else
struct S {
  __host__ void foo() { /* use host_only */ }
  double host_only;
};
#endif

__device__ void test() {
  S s;
  // clang generates an error here, because during host compilation, we
  // have ifdef'ed away the __device__ overload of S::foo().  The __device__
  // overload must be present *even during host compilation*.
  s.foo();
}

We posit that you don’t really want to have classes with different members on H
and D. For example, if you were to pass one of these as a parameter to a
kernel, it would have a different layout on H and D, so would not work
properly.

To make code like this compatible with clang, we recommend you separate it out
into two classes. If you need to write code that works on both host and
device, consider writing an overloaded wrapper function that returns different
types on host and device.

struct HostS { ... };
struct DeviceS { ... };

__host__ HostS MakeStruct() { return HostS(); }
__device__ DeviceS MakeStruct() { return DeviceS(); }

// Now host and device code can call MakeStruct().

Unfortunately, this idiom isn’t compatible with nvcc, because it doesn’t allow
you to overload based on the H/D attributes. Here’s an idiom that works with
both clang and nvcc:
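
A sketch of that idiom: use the preprocessor so nvcc only ever sees one
MakeStruct() definition per compilation pass, while clang sees the overloaded
pair.

struct HostS { ... };
struct DeviceS { ... };

#ifdef __NVCC__
  #ifndef __CUDA_ARCH__
    __host__ HostS MakeStruct() { return HostS(); }
  #else
    __device__ DeviceS MakeStruct() { return DeviceS(); }
  #endif
#else
  __host__ HostS MakeStruct() { return HostS(); }
  __device__ DeviceS MakeStruct() { return DeviceS(); }
#endif

// Now host and device code can call MakeStruct().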

Modern CPUs and GPUs are architecturally quite different, so code that’s fast
on a CPU isn’t necessarily fast on a GPU. We’ve made a number of changes to
LLVM to make it generate good GPU code. Among these changes are:

Memory space inference –
In PTX, we can operate on pointers that are in a particular “address space”
(global, shared, constant, or local), or we can operate on pointers in the
“generic” address space, which can point to anything. Operations in a
non-generic address space are faster, but pointers in CUDA are not explicitly
annotated with their address space, so it’s up to LLVM to infer it where
possible.
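
For instance, in the following sketch (illustrative code, not compiler
output) LLVM can prove that every access to cache is a shared-memory access,
so the backend can emit a shared-memory load rather than a slower generic
one:

__shared__ float cache[256];

__device__ float load_cached(int i) {
  // cache is declared __shared__, so LLVM infers the shared address space
  // for this load instead of falling back to the generic address space.
  return cache[i];
}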

Bypassing 64-bit divides – 64-bit integer divides are much slower than
32-bit ones on NVIDIA GPUs. Many of the 64-bit divides in our benchmarks
have a divisor and dividend which fit in 32 bits at runtime. This
optimization provides a fast path for this common case, as sketched below.
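
Conceptually, the transformation guards the expensive divide with a cheap
runtime check, along these lines (a hand-written sketch, not actual compiler
output):

#include <stdint.h>

__device__ uint64_t div64(uint64_t a, uint64_t b) {
  if (((a | b) >> 32) == 0) {
    // Both operands fit in 32 bits: take the much faster 32-bit path.
    return (uint32_t)a / (uint32_t)b;
  }
  return a / b;  // Slow full 64-bit divide.
}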

Aggressive loop unrolling and function inlining – Loop unrolling and
function inlining need to be more aggressive for GPUs than for CPUs because
control flow transfer on a GPU is more expensive. More aggressive unrolling
and inlining also promote other optimizations, such as constant propagation
and SROA, which sometimes speed up code by over 10x.

(Programmers can force unrolling and inlining using clang’s loop unrolling pragmas
and __attribute__((always_inline)).)
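
For example (a sketch; the function names are illustrative):

// Force inlining even when the inliner would otherwise decline.
inline __device__ __attribute__((always_inline))
float madd(float a, float b, float c) {
  return a * b + c;
}

__device__ void scale4(float *v, float s) {
  // The trip count is a compile-time constant, so this loop fully unrolls.
  #pragma unroll
  for (int i = 0; i < 4; ++i)
    v[i] = madd(v[i], s, 0.0f);
}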

The team at Google published a paper in CGO 2016 detailing the optimizations
they’d made to clang/LLVM. Note that “gpucc” is no longer a meaningful name:
The relevant tools are now just vanilla clang/LLVM.