We gain access to the GPGPU by calling
CudafyHost.GetDevice. There are nuances here, but for now, let's keep it simple.

We transfer data (the longitudes and latitudes) to the GPGPU using its device memory. Again, there are nuances, but nothing too weird.

We launch a function called
GpuFindPathDistance. We will be looking at that code right after we discuss the concepts of threads, blocks, the grid, and memory.

We call Synchronize, which, as you might expect, waits for the GPU operation to complete.

We then transfer some answers from the GPGPU back into main memory and look for the shortest permutation. For now, rest assured that even if we are performing millions or billions of permutations, the number of answers we need to loop through will be relatively small, like 1024 or fewer.

Threads, Blocks, the Grid, and Memory

In Cudafy, a thread is basically what you might expect it to be. What will be new to CPU developers are the concepts of blocks and grids.

Block: Think of a block as a group of threads that can allocate and share GPGPU memory. If one thread in a block allocates memory, all threads in the block have access to that memory. Threads within a block can also be synchronized with one another. There is a limit, however, of 512 or 1024 threads per block, depending on the hardware.

Grid: Think of the grid as the GPGPU itself. In other words, if we see a reference to blocks per grid, that really means the total number of blocks. There is a hardware limit of 65535 blocks per grid on older hardware, and 2^31-1 blocks per grid on newer hardware.

So in the code above, suppose we specify 128 threads per block and 128 blocks per grid. That means we will have 128 * 128 = 16384 threads working on the problem.

There are many types of GPGPU memory, but in this series of posts we will cover only two kinds:

Device Memory: This kind of memory can be accessed (read and written) by all threads in all blocks in the GPGPU and by the CPU. This is not as fast as shared memory.

Shared Memory: This kind of memory is allocated by GPGPU threads and is accessible only among the threads in the same block. In other words, shared memory allocations take place once per block.

GpuFindPathDistance

Now the fun begins with some real heavy-duty GPGPU code that implements the
GpuFindPathDistance function we launched above. The first thing you should notice is that it is C#. Also, it calls our function from part 1 of this series of posts,
FindPathDistance (which in turn calls PathFromRoutePermutation), so none of that needs to be rewritten.

There is a Cudafy-supplied parameter of type
GThread. From GThread we can determine which thread and block we are in, as well as the total number of threads and blocks. The formula that maps all of this to a permutation number is straightforward: a thread's starting permutation is its block index times the threads per block, plus its thread index. However, the number of permutations needing to be computed will almost certainly exceed the total number of threads running. Therefore, we use a for loop in which each thread strides forward by the total thread count to pick up additional permutations as needed. For example, with 128 threads per block, the fifth thread in the eleventh block begins by computing permutation 10*128+4. Once this thread computes the distance for permutation 1284, it will compute the distance for permutation 1284+128*128 (if there are 128 blocks), and so on. The loop will continue until the next permutation number reaches or exceeds the total number of permutations required. The first thread in the first block will likely loop one more time than the last thread in the last block, since the number of permutations is probably not evenly divisible by 128*128.
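To make the striding concrete, here is a small CPU-side Python sketch (not the actual Cudafy kernel; the 128/128 launch shape and the total permutation count of 40320, i.e. 8 cities, are assumptions for illustration) that lists which permutation numbers a given thread would handle:

```python
def permutations_for_thread(block_idx, thread_idx,
                            threads_per_block=128, blocks_per_grid=128,
                            total_permutations=40320):
    """Simulate the grid-stride loop: a thread starts at
    block_idx * threads_per_block + thread_idx and strides forward by
    the total thread count until it passes the permutation count."""
    start = block_idx * threads_per_block + thread_idx
    stride = threads_per_block * blocks_per_grid  # 16384 threads in total
    return list(range(start, total_permutations, stride))

# The fifth thread in the eleventh block (zero-based: block 10, thread 4)
print(permutations_for_thread(10, 4))  # [1284, 17668, 34052]
```

Note that the first thread in the first block handles one more permutation than the last thread in the last block, exactly as described above, because 40320 is not evenly divisible by 16384.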

Shared Memory

The three lines of code that allocate memory look as if each thread in each block is allocating its own memory. That is an understandable misreading, and simply a nuance of how the GPGPU works. In reality, this memory is allocated only once per block. Therefore, whatever we allocate needs to be big enough for all threads in the block, which is why the number of items allocated is usually some multiple of the number of threads. In our solution, we allocate one floating-point number per thread to track the shortest distance found by that thread. We also allocate storage for the permutation number that corresponds to each shortest distance. Finally, we need some scratch-pad memory to hold the city path sequence. Now it might make sense to you why we wrote
PathFromRoutePermutation as we did in part 1.
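A hedged Python sketch of that per-block layout (the array names, thread count, and city count here are illustrative, not the actual Cudafy code) may help make the "once per block, sliced per thread" idea concrete:

```python
THREADS_PER_BLOCK = 128
CITIES = 8  # assumed problem size for illustration

# Allocated once per block; every thread in the block sees the same arrays.
best_distance = [float("inf")] * THREADS_PER_BLOCK      # one float per thread
best_permutation = [0] * THREADS_PER_BLOCK              # one int per thread
path = [0] * (THREADS_PER_BLOCK * CITIES)               # scratch pad: CITIES slots per thread

def scratch_range(thread_idx):
    """Each thread touches only its own slice of the shared scratch pad."""
    lo = thread_idx * CITIES
    return lo, lo + CITIES
```

Each thread writes its running best into `best_distance[thread_idx]` and `best_permutation[thread_idx]`, so no two threads ever collide in the shared arrays.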

Clever Algorithm

At the end of the permutation loop (at the first
thread.SyncThreads), we are left with 128 best distances and corresponding permutation numbers per block. We need the call to
SyncThreads to ensure that all 128 threads have made it to this point before proceeding.

What we want to do now is loop through all 128 best distances to find the smallest one. One solution is to let thread 0 loop through the data while the other 127 threads do nothing; this is easily done by enclosing the loop inside an if statement. Or, we could come up with some clever algorithm that uses more than one thread to go faster. The for loop in the code above does this most elegantly. It was adapted from code I found in a wonderfully helpful book called CUDA by Example. The basic idea is for thread 0 to compare its own results with thread 64's results, placing the shorter of the two back into thread 0's slot. At the same time, thread 1 does the same thing with the results of thread 65. This keeps 64 of the 128 threads busy for a wee bit. At the end of that iteration of the for loop we are left with 64 best distances. We loop again and are left with 32 best distances. You can see how this ends.
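Here is a small Python simulation of that halving reduction (plain CPU code, not the Cudafy kernel; on the GPU the inner comparisons run in parallel on separate threads, with a SyncThreads call between halvings):

```python
def reduce_to_best(distances, permutations):
    """Simulate the in-block halving reduction: on each pass, the first
    half of the slots absorbs the better result from the second half,
    until slot 0 holds the block's best distance and permutation."""
    half = len(distances) // 2  # length must be a power of two, e.g. 128
    while half > 0:
        for t in range(half):  # on the GPU these comparisons run in parallel
            if distances[t + half] < distances[t]:
                distances[t] = distances[t + half]
                permutations[t] = permutations[t + half]
        # on the GPU, a SyncThreads call goes here before halving again
        half //= 2
    return distances[0], permutations[0]
```

With 128 entries this takes only 7 passes instead of a 128-step serial scan, which is the whole point of keeping the extra threads busy.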

Final Answer

Once we have narrowed down the results to one best distance per block, we place that result into GPGPU device memory, which is accessible by the CPU. The CPU then retrieves these results and compares the best result of each block to arrive at the final answer. We could perform this last pass on the GPGPU instead. Would that be better? You tell me.
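That final CPU pass is just a short linear scan over one winner per block. A hedged Python sketch (the parameter names are assumptions, not the actual code) of what the host does with the copied-back results:

```python
def final_answer(block_best_distances, block_best_permutations):
    """Scan the per-block winners (one entry per block, copied back
    from device memory) and return the overall shortest result."""
    best_distance = float("inf")
    best_permutation = -1
    for dist, perm in zip(block_best_distances, block_best_permutations):
        if dist < best_distance:
            best_distance = dist
            best_permutation = perm
    return best_distance, best_permutation

# e.g. three blocks reporting their winners
print(final_answer([3.0, 1.5, 2.0], [10, 20, 30]))  # (1.5, 20)
```

Since there are at most a few hundred blocks, this loop is cheap enough that moving it onto the GPGPU would save very little.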

About the Author

John Hauck has been developing software professionally since 1981, and focused on Windows-based development since 1988. For the past 17 years John has been working at LECO, a scientific laboratory instrument company, where he manages software development. John also served as the manager of software development at Zenith Data Systems, as the Vice President of software development at TechSmith, as the lead medical records developer at Instrument Makar, as the MSU student who developed the time and attendance system for Dart container, and as the high school kid who wrote the manufacturing control system at Wohlert. John loves the Lord, his wife, their three kids, and sailing on Lake Michigan.