Category: CUDA

Getting information about the underlying design of a GPGPU programming environment and its hardware can be difficult. Companies will not publish design details because they do not want you or other companies to copy the technology. But sometimes you need details that are simply not published in order to use a technology effectively. If the vendor won't tell you how the technology works, the only recourse is experimentation [1, 2]. What is the performance of OpenCL, CUDA, and C++ AMP? What can we learn from this information?

Every time Microsoft releases a new version of Visual Studio C++, NVIDIA releases a new version of CUDA to work with it. Unfortunately, that took NVIDIA an incredibly long time the last time around. Visual Studio 2010 was released on April 4, 2010, but CUDA integration with Visual Studio 2010 didn't arrive until CUDA 4.0 RC1 on March 4, 2011, nearly a year later! And, to this day, the build rules have never worked cleanly for me (http://stackoverflow.com/questions/6156037/issue-with-production-release-of-cuda-toolkit-4-0-and-nsight-2-0). Because I'm developing C++ AMP and CUDA side by side, and cannot wait for NVIDIA, I decided to develop the build rules myself and work out the details of calling CUDA from Visual Studio C++ 2012.
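One way to structure such a workaround, sketched below (the file name kernels.cu and the wrapper scale_on_gpu are my own hypothetical choices, not NVIDIA's build rules): compile the .cu file separately with nvcc, link the resulting object file into the Visual Studio project, and expose the kernel launch through an extern "C" function so the two compilers agree on the symbol name.

// kernels.cu -- a minimal sketch; compile separately with nvcc, e.g.
//   nvcc -c kernels.cu -o kernels.obj
// then link kernels.obj (plus cudart.lib) into the Visual Studio project.
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// extern "C" sidesteps name-mangling differences between nvcc and cl.exe,
// so the host project can declare and call this function directly.
extern "C" void scale_on_gpu(float *host_data, float factor, int n)
{
    float *dev_data = 0;
    cudaMalloc(&dev_data, n * sizeof(float));
    cudaMemcpy(dev_data, host_data, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev_data, factor, n);
    cudaMemcpy(host_data, dev_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_data);
}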

Today's processors have undergone a huge transformation from those of just 10 years ago. CPU manufacturers Intel and AMD (and up-and-coming CPU designer ARM) have increased processor speed via greater emphasis on superscalar execution, deeper pipelining, branch prediction, out-of-order execution, and fast multi-level caches. This design philosophy has resulted in faster response time for single tasks executing on a processor, but at the expense of increased circuit complexity, high power consumption, and a small number of cores on the die. On the other hand, GPU manufacturers NVIDIA and ATI have focused their designs on processors with many simple cores that implement SIMD parallelism, which hides the latency of instruction execution [1].

While GPUs have been in existence for about 10 years, the software support for these processors has taken years to catch up. Software developers are still sifting through solutions for programming these processors. OpenCL and CUDA are frameworks for GPGPU computing. Each framework comprises a language for expressing kernel code (instructions that run on the GPU) and an API for calling kernels from the CPU. While the frameworks are similar, there are some important differences.
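In CUDA the two halves are easy to see even in a toy program: the kernel is C++ marked __global__, and the host-side API allocates device memory, copies data, and launches the kernel. Below is a minimal vector-add sketch illustrating the split. OpenCL has the same division of labor, but typically ships the kernel as a source string that the API compiles at run time (clBuildProgram), which is part of how it stays portable across vendors.

#include <cuda_runtime.h>
#include <cstdio>

// Kernel code: these instructions run on the GPU, one thread per element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    float a[n], b[n], c[n];
    for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2 * i; }

    // Host API: allocate device memory, copy inputs, launch, copy results back.
    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, a, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, n * sizeof(float), cudaMemcpyHostToDevice);
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c, dc, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(da); cudaFree(db); cudaFree(dc);

    printf("c[1] = %f\n", c[1]); // expect 3.0
    return 0;
}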

CUDA is a proprietary framework. It is not open source, and all changes to the language and API are made by NVIDIA. Still, some third-party tools have been built around the framework, and it does seem to have a large following in academia. Unfortunately, CUDA runs only on NVIDIA devices. While it should be possible to run CUDA code on other platforms using Ocelot, Ocelot only works on Linux systems.

OpenCL is a standardized framework, and is starting to gain popularity. Similar to NVIDIA's CUDA C++, OpenCL allows programmers to use the massive parallel computing power of GPUs for general-purpose computing. Unlike CUDA, OpenCL works on any supported GPU or CPU, including Intel, AMD, NVIDIA, IBM, and ARM processors.

Does OpenCL make programming multiple platforms easier? Is it as fast as CUDA, or does it sacrifice speed for diverse platform support?

CUDA C++ is a parallel programming language for NVIDIA GPUs. Using the CUDA Toolkit (http://developer.nvidia.com/object/cuda_3_2_toolkit_rc.html), developers can write programs that take advantage of the large parallel computing potential of the GPU, speeding up their programs by several orders of magnitude. CUDA programs execute on the GPU, which works as a coprocessor to the CPU, with its own memory and instruction set. But the question is: after a developer has invested the time to parallelize a program, can that CUDA program run on a PC without an NVIDIA GPU? Does the developer have to redo all the software?
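At a minimum, a CUDA program can detect at run time whether a CUDA-capable device is present and fall back to a CPU code path instead of failing outright. A minimal sketch of that check follows (the GPU-path and CPU-fallback functions named in the comments are hypothetical placeholders):

#include <cuda_runtime.h>
#include <cstdio>

// Returns true if at least one CUDA-capable device is present and usable.
static bool cuda_available()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    return err == cudaSuccess && count > 0;
}

int main()
{
    if (cuda_available())
        printf("CUDA device found: run the GPU path.\n");   // gpu_path() would go here
    else
        printf("No CUDA device: run the CPU fallback.\n");  // cpu_fallback() would go here
    return 0;
}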

Last month, I took a short course called Introduction to Multicore Programming, taught by James T. Demers of Physical Sciences Inc. This course introduced me to current hardware and software systems for parallel computing. For me, the course was timely: I had been reading about how important parallel programming is becoming, and I was starting to become interested in parallel algorithms, but my knowledge of parallel computing was embarrassingly sparse. The last time I looked at the subject was in the early 1990s, when I wrote a program that used MPI. Though I have a machine with an Intel Q6600 quad-core CPU (Figure 1, Figure 2), a byproduct of the rivalry between Intel and AMD, I never really bothered to program it for parallel computing because I did not think four cores would offer that much over one. What I learned from this course was that I had my head in the sand. In fact, I was surprised to learn that the graphics card I owned, an NVIDIA GeForce 9800 GT (Figure 3), was a parallel computing system I could program. So, I decided to apply what I learned in the class by solving two programming exercises on my system (Figure 4): matrix multiplication and graph breadth-first search. In this post, I describe the first problem, matrix multiplication.
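As a preview of the exercise, a naive CUDA matrix-multiplication kernel is only a few lines: each thread computes one element of the result. This is a minimal sketch (no shared-memory tiling or other optimizations), not the full program from the post:

// Naive matrix multiplication: C = A * B for square n x n matrices.
// Each thread computes a single element C[row][col].
__global__ void matMul(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n)
    {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

// A typical launch, with 16 x 16 threads per block:
//   dim3 block(16, 16);
//   dim3 grid((n + 15) / 16, (n + 15) / 16);
//   matMul<<<grid, block>>>(dA, dB, dC, n);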

1/2: Adding to #Antlrvsix an analysis tool for #Antlr grammars. This is how it works with cycle detection and useless lexer rules. There are some issues with the responsiveness of the MS LSP client for VS2019.

Adding to #Antlrvsix a refactoring to remove useless parentheses in an #Antlr grammar. This is how it works on the extra parentheses in the arrayAccess_lf_primary rule in Java9.g4 that nobody knew were there. Only just starting to scratch the surface of grammar optimizations.

Implementing #Antlr grammar fold refactorings in #Antlrvsix. Two types: extract a selected sequence of symbols and make a rule (shown first); replace all occurrences in the grammar with a folded rule (shown second). Spacing and comments do not matter.