GPU Computing in URCS

Introduction

GPU Computing (GPUc) refers to the use of Graphics Processing Units (GPUs) for general-purpose computation, i.e. computation not necessarily tied to some fixed graphics-programming API. Alternatively, and perhaps more often, GPU Computing is referred to as GPGPU. The main targets of GPU Computing are heavy, mostly embarrassingly data-parallel applications, which can efficiently exploit the massively parallel architecture of GPUs. An impressive number of applications fall into this category, and an impressive number of those intersect with the research interests of URCS.

node1x4x2a ( node1x4x2a.cs.rochester.edu ) is our specialized, GPU-Computing-capable server, accessible through the grad network (NFS mount), and the target of (at least) systems research on GPU Computing. This page is a temporary description of the installed hardware and software, and of the steps you need to take before you can use the system for your GPUc applications. The system is administered by JamesRoche, but because it is the focus of systems research it may be rebooted frequently or heavily in use, so consult Jim and/or KonstantinosMenychtas before you try to use it for your own research.

Hardware

node1x4x2a is a Dell T7500n Workstation, with the following characteristics:

Software

In order to use the graphics devices for GPU Computing, you need to be familiar with NVIDIA's Compute Unified Device Architecture ( CUDA ). Note that with NVIDIA CUDA 2.3 you can write code for GPUs using both CUDA and OpenCL. To use either API, you should read the respective documentation (a short list appears under Further resources).

Note that Fedora 11 is not officially supported by NVIDIA CUDA 2.3. However, this should not cause any trouble when using the devices for GPU Computing.

At any point in time on node1x4x2a:

The only software that is guaranteed to be installed is the NVIDIA GPU driver (2.3 or later).

Similarly, the only configuration that is guaranteed to have taken place is the configuration needed for the graphics cards to be usable for GPU computing.

Additional Software

In order to make actual use of the GPUs for GPU Computing, you will have to install additional software. You can do this in your home directory (NFS mount).

Install the NVIDIA CUDA Toolkit.

Grab the NVIDIA Toolkit version 2.3 for Fedora 10 x86_64 from here. This package is absolutely necessary, as it includes the compiler (nvcc) and the runtime system needed to run your CUDA programs.

Install the NVIDIA Toolkit under some directory in your home directory (you will be prompted during installation). In what follows, the toolkit is assumed to be installed under $HOME/Applications/gpu_computing/cuda_x86_64. This is the CUDA_INSTALL_PATH .

Add the following lines to your .bashrc file, or do the equivalent if you are using csh or any other shell (in what follows we assume you use bash and that .bashrc is properly loaded). These will make nvcc and the respective development files ready to use.
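The exact lines seem to have been lost from this page; a minimal sketch, assuming the CUDA_INSTALL_PATH from above and the standard toolkit layout (a bin/ directory for nvcc and a lib64/ directory for the shared libraries), would be:

```shell
# Assumed install prefix -- see CUDA_INSTALL_PATH above; adjust to your own path.
export CUDA_INSTALL_PATH=$HOME/Applications/gpu_computing/cuda_x86_64
# Make nvcc visible on the command line.
export PATH=$CUDA_INSTALL_PATH/bin:$PATH
# Make the CUDA shared libraries visible to the dynamic linker.
export LD_LIBRARY_PATH=$CUDA_INSTALL_PATH/lib64:$LD_LIBRARY_PATH
```

Verify the directory names against your actual installation before relying on them.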

The lack of support for Fedora 11 is in fact a lack of support for gcc 4.4 (the default gcc in F11), so we will use the "compatibility version" of the gcc compiler, 3.4, which is already installed on node1x4x2a. Any .c/.cpp files compiled with gcc/g++ 4.4 and linked against .cu files compiled with nvcc will not work. To make this change apply only when using nvcc, do the following:

Create a new directory - we'll call it gcc_compat in this example - inside the CUDA Toolkit directory (the aforementioned CUDA_INSTALL_PATH ). For example: mkdir $CUDA_INSTALL_PATH/gcc_compat

Create symbolic links to gcc34/g++34 in this directory. A valid set of links should look as follows:
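A sketch of how to create such links, assuming the compatibility compilers are installed as gcc34 and g++34 under /usr/bin (the usual locations for the Fedora compat-gcc packages; check with which gcc34 first):

```shell
# Fall back to the path used earlier on this page if CUDA_INSTALL_PATH is unset.
CUDA_INSTALL_PATH=${CUDA_INSTALL_PATH:-$HOME/Applications/gpu_computing/cuda_x86_64}
mkdir -p $CUDA_INSTALL_PATH/gcc_compat
# Point the plain gcc/g++ names at the 3.4 compatibility compilers
# (the /usr/bin locations are assumptions; verify them on node1x4x2a).
ln -sf /usr/bin/gcc34 $CUDA_INSTALL_PATH/gcc_compat/gcc
ln -sf /usr/bin/g++34 $CUDA_INSTALL_PATH/gcc_compat/g++
```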

Install the SDK somewhere in your home directory. In what follows, the SDK is assumed to be installed under $HOME/Documents/workspace.

Install the following packages

freeglut-2.4.0-16.fc11.x86_64.rpm

freeglut-devel-2.4.0-16.fc11.x86_64.rpm

mesa-libGL-7.6-0.1.fc11.x86_64.rpm

mesa-libGL-devel-7.6-0.1.fc11.x86_64.rpm

mesa-libGLU-7.6-0.1.fc11.x86_64.rpm

mesa-libGLU-devel-7.6-0.1.fc11.x86_64.rpm

To do this, you can just download the aforementioned RPMs to your $CUDA_INSTALL_PATH and then manually extract them in the same directory. One way to do this is as follows:
while inside $CUDA_INSTALL_PATH, where the *.rpm files also lie, do

cd $CUDA_INSTALL_PATH
for file in *.rpm
do
    rpm2cpio $file | cpio -idv
done

This will put the libraries and include files under the directories you have already set in your .bashrc, making them usable without any further configuration.

Go back to $HOME/Documents/workspace and make the following changes (below is a diff you can apply) to the file C/common/common.mk in the SDK:
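The original diff appears to be missing from this page; the sketch below shows the kind of change it likely made, assuming the stock CUDA 2.3 SDK common.mk (which sets CXX and CC to the system compilers) and nvcc's --compiler-bindir option. Verify the exact context lines against your copy of common.mk before applying.

```
--- C/common/common.mk.orig
+++ C/common/common.mk
@@
-CXX        := g++
-CC         := gcc
+CXX        := $(CUDA_INSTALL_PATH)/gcc_compat/g++
+CC         := $(CUDA_INSTALL_PATH)/gcc_compat/gcc
+# make nvcc use the gcc-3.4 compatibility symlinks as its host compiler
+NVCCFLAGS  += --compiler-bindir=$(CUDA_INSTALL_PATH)/gcc_compat
```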

The only SDK project that cannot be built under this configuration is scanLargeArray; move it outside of the SDK projects directory ( C/src/ ) temporarily.

After invoking make under the C directory in the SDK, you should be able to build all projects.

If some projects, such as those with OpenGL visualizations, don't work, don't panic; that is expected, given the lack of support for OpenGL over VNC/ssh -X (yet). The same holds for any project that uses libraries/software that is not installed. A few that we have tested and that should work include the following (under C/bin/linux/release/ ):

deviceQuery

bandwidthTest

transpose

radixSort

matrixMul

Alternatively

This is only temporary :

Log in to node1x4x2a and take a look under /localdisk/NVIDIA . You will find the CUDA Toolkit (under gpu_computing ), the SDK, and a .bashrc sample. Do not make any changes in this directory.

If you are just curious to see some GPU applications, take a look at the NVIDIA SDK, under /localdisk/NVIDIA/NVIDIA_GPU_Computing_SDK/ . Under C/src/ you will find the source code, and under C/bin/linux/release the binaries of the sample projects. To run the binaries, fix your LD_LIBRARY_PATH first, with something like this: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/localdisk/NVIDIA/gpu_computing/cuda_x86_64/lib64/ . Some projects don't work yet.

If you want to make changes, make a copy of /localdisk/NVIDIA either locally ( /localdisk ) or in your home directory, and adjust your shell init file, taking into account at least the CUDA-related paths in the sample .bashrc file under /localdisk/NVIDIA .

Further resources

For questions and help, contact KonstantinosMenychtas, provided that you have first tried the following and have not found the answer you need.

If you are just starting with CUDA and want a couple of hands-on resources, you might want to try the UIUC ECE 498 (Programming Massively Parallel Processors) class notes and the Dr. Dobb's "Supercomputing for the Masses" tutorials (7 parts, starting here). Note, though, that there is no better manual for this rapidly changing software/hardware platform than the CUDA Programming Guide itself - all other resources can and probably will be slightly outdated.