Sangyoon Lee (sjames @ evl.uic.edu)

Electronic Visualization Laboratory

University of Illinois at Chicago

1. Concept and Brief

In project 2, we developed a galaxy collision simulation using an n-body interaction algorithm. The Milky Way/Andromeda galaxy dataset is used (http://bima.astro.umd.edu/nemo/archive/#dubinski). In general, an n-body simulation requires O(N²) computation (to compute the acceleration, velocity, and position of the bodies), or O(N²) memory space to reduce some of that computation.

To achieve a fast simulation, we utilize the GPU's highly parallel processing units with CUDA. We especially focus on optimizing this simulation for the GPU architecture and CUDA specifications.

* Requirements: to compile and execute the application, you need to install the NVIDIA CUDA driver / tools / SDK (ver. 2.0). If you need more details on this, see this link.

* Note: this application is the final version of the project. Therefore, performance will be as mentioned in the last section. By default, the application uses 8,192 bodies for the simulation, without approximation.

2. Keyboard Controls

Point Size: '+' / '-' to increase or decrease the size of a point (point or pointsprite, depending on the current drawing mode)

3. Phase I (Data loading & Rendering)

First of all, we need to load the initial dataset (Milky Way/Andromeda). The given dataset includes 81,920 particles. Among these, we only use the disk and bulge of the Milky Way and Andromeda galaxies (49,152 particles in total; the halo is excluded). Each galaxy has 16,384 disk particles and 8,192 bulge particles.

- Data Loader

When the app initializes, it decides the total number of particles the simulation uses via an app argument. We assume this number is a multiple of 4,096 to eliminate some complexity in the parallel execution design of the simulation. The data loader distributes this total number of particles evenly. For instance, if we simulate 8,192 particles, half of them will be the Milky Way (4,096 particles: 2,730 disk and 1,366 bulge) and the other half will be Andromeda (4,096 particles: 2,730 disk and 1,366 bulge). The following code snippet shows a data loading function.
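The counting logic of that split can be sketched as follows (a minimal illustration with hypothetical names; the real loader also reads positions and velocities from the Dubinski file):

```cpp
#include <cassert>

// Hypothetical helper: split the total particle budget (a multiple of
// 4,096) between the two galaxies, keeping the 2:1 disk-to-bulge ratio
// of the reduced dataset (16,384 disk / 8,192 bulge per galaxy).
struct GalaxyCounts { int disk; int bulge; };

inline GalaxyCounts splitPerGalaxy(int totalParticles)
{
    int perGalaxy = totalParticles / 2;    // e.g. 8,192 -> 4,096 each
    int disk      = (perGalaxy * 2) / 3;   // 2/3 disk  -> 2,730
    int bulge     = perGalaxy - disk;      // 1/3 bulge -> 1,366
    return { disk, bulge };
}
```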

- Particle Renderer

Once the data is loaded, the app creates a VBO and assigns the initial values to it. A modified version of the rendering routine from NVIDIA's n-body simulation is used to draw particles in three different modes: colored GL points, colored pointsprites, and pointsprites with a predetermined color. You can toggle between them via the 'd' key. The size of the point primitive can also be changed interactively (refer to the keyboard controls section).

- Initial Screenshots (two types of pointsprites)

4. Phase II (First naive approach)

Now we have all the necessary data loaded. The next stage is to add computation code to the CUDA kernel. The basic simulation code is taken from GPU Gems 3, chapter 31. As a first trial, the algorithm does not consider any performance issues. All data (current position, mass, and velocity) reside in device memory (global memory). Each thread accesses all necessary data from this memory. In a loop over all N bodies, each thread computes the gravitational acceleration against every body, accumulates it, and calculates a new velocity and position.

This is the naive O(N²) algorithm: it needs two N² read accesses to global memory (each thread reads all N bodies' position and velocity values) and two N write accesses (writing out the updated velocity and position). As expected, it is very slow. I only get 1.9 frames per second (8,192 particles). At such a low fps, it is almost impossible to check the simulation result (whether it looks OK or something is wrong with the simulation parameters). The following code illustrates the CUDA kernel of this naive approach.
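The same per-body loop can be sketched on the CPU (an illustration only, not the actual kernel; the softening constant and names are illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// CPU sketch of the naive O(N^2) step: accumulate the gravitational
// acceleration of every body against all N bodies, then integrate
// velocity and position. The eps2 softening avoids the r = 0 singularity.
struct Body { float x, y, z, m; };   // position + mass, maps to a float4

inline void naiveStep(std::vector<Body>& pos, std::vector<Body>& vel,
                      float dt, float eps2 = 0.1f)
{
    const std::size_t n = pos.size();
    std::vector<Body> acc(n, Body{0, 0, 0, 0});
    for (std::size_t i = 0; i < n; ++i)          // one GPU thread per i
        for (std::size_t j = 0; j < n; ++j) {    // N reads per thread
            float dx = pos[j].x - pos[i].x;
            float dy = pos[j].y - pos[i].y;
            float dz = pos[j].z - pos[i].z;
            float d2 = dx*dx + dy*dy + dz*dz + eps2;
            float s  = pos[j].m / std::sqrt(d2 * d2 * d2);  // m_j / r^3
            acc[i].x += dx * s; acc[i].y += dy * s; acc[i].z += dz * s;
        }
    for (std::size_t i = 0; i < n; ++i) {        // 2N writes in total
        vel[i].x += acc[i].x * dt; vel[i].y += acc[i].y * dt; vel[i].z += acc[i].z * dt;
        pos[i].x += vel[i].x * dt; pos[i].y += vel[i].y * dt; pos[i].z += vel[i].z * dt;
    }
}
```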

- Usage of float4 type

The first kernel above uses a plain float array (float*) in device memory. One thing we heard in class is that there is a possible improvement when using float4 instead of float. So I modified the code to use float4* for the device memory dataset. The application then gets 3.1 fps, almost a 50% increase. My guess is that when the kernel accesses device memory, a float4 read grabs all four float values at once, while the old code may read the floats one by one. This also reduces some of the integer instructions needed to compute array indices within the kernel code.
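The index-arithmetic difference can be pictured with a host-side illustration (hypothetical names; CUDA's float4 is mirrored here by a plain 16-byte struct):

```cpp
#include <cassert>
#include <cstddef>

// A 16-byte struct mirroring CUDA's float4: reading data4[i] is one
// aligned 16-byte access with a fixed member offset, while the flat
// float layout needs per-component index math (4*i + k) and four
// separate loads.
struct Float4 { float x, y, z, w; };
static_assert(sizeof(Float4) == 16, "expected a 16-byte struct");

inline float massFromFlat(const float* data, std::size_t i)
{
    return data[4 * i + 3];      // integer index math per component
}

inline float massFromVec(const Float4* data4, std::size_t i)
{
    return data4[i].w;           // one struct index, fixed offset
}
```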

5. Phase III (Approximation)

In class, we discussed approximation, which means we may compute only a part of the body interactions at each iteration (time step), for instance one tenth of the N-body interactions per iteration. Since the first approach gives a really bad fps, I decided to reduce part of its iteration this way.

Here, I tested one sixteenth, one eighth, and one quarter of the total N bodies. Sure enough, this gives much better performance, and the reason is obvious: we reduced both the computation and the memory access time accordingly. The code below shows the changes in the kernel code to accommodate this approximation. The offset parameter is controlled by the application in a modular-increment manner (i.e., if apprx = 16, the offset cycles from 0 to 15 as the application progresses).
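The host-side bookkeeping for that cycling offset can be sketched as (hypothetical names, for illustration):

```cpp
#include <cassert>

// With approximation factor `apprx`, each frame interacts against only
// N / apprx bodies, starting at a rotating offset (0, 1, ..., apprx-1,
// then back to 0), so every body slice is visited over apprx frames.
struct Slice { int start; int count; };

inline Slice interactionSlice(int n, int apprx, int frame)
{
    int offset = frame % apprx;    // cycles 0 .. apprx-1
    int count  = n / apprx;        // bodies handled this frame
    return { offset * count, count };
}
```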

- Change in thread organization

Until now, I used a rather extreme organization of threads/blocks/grid. After testing the approximation, I tried to change this. Previously block(256, 1, 1), grid(32, 1, 1) was used; the new one is block(16, 16, 1), grid(16, 2, 1). It did not show any big difference at this point (16 was the magic number we saw in class). My guess is that the most serious bottleneck is still the too-frequent global memory accesses.
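A quick sanity check on why no difference is expected: both configurations launch exactly the same number of threads, only the shape differs (hypothetical helper, for illustration):

```cpp
#include <cassert>

// Total threads = block volume * grid volume; both launch shapes used
// in the experiment cover the same 8,192 bodies.
struct LaunchDim { int x, y, z; };

inline int totalThreads(LaunchDim block, LaunchDim grid)
{
    return block.x * block.y * block.z * grid.x * grid.y * grid.z;
}
```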

6. Phase IV (Shared Memory)

So far, the simulation only uses global memory and registers. A great deal of the expected performance gain lies in utilizing the fast shared memory. Therefore, in this phase of optimization, the kernel includes shared memory accesses. The basic idea comes from GPU Gems 3 (the tile concept). Shared memory is only used to read in the last frame's data (position and velocity: two float4 per body). The kernel function is split in a similar way to the CUDA example.

- Shared Memory Size?

Now we need to think about how to organize shared memory. The simplest way would be to load all N bodies' positions into shared memory in each block. This is another naive idea, because the total amount of shared memory per block is limited to 16 KB (GeForce 8600M GT). Let's do a quick calculation of how much we need for each block. The most frequently used data is the last position of the bodies, needed to compute acceleration.

8,192 bodies' position (float4) data size = 8,192 * 4 * 4 = 128 KB

Oops, this is too much; it cannot fit within the limit. Also, according to the CUDA Programming Guide (chapter 5.2), if multiple blocks run on the same multiprocessor, the amount of shared memory used by a single block must take the other blocks on that processor into account (i.e., with two blocks per multiprocessor, each block can use at most half of the shared memory). Therefore, we need to cut this down somehow.
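The same arithmetic as a hypothetical helper (16 bytes per float4), comparing the all-N idea with a 256-body tile holding position and velocity:

```cpp
#include <cassert>

// Bytes of shared memory needed to stage `bodies` bodies with
// `float4sPerBody` float4 values each (a float4 is 16 bytes).
// The GeForce 8600M GT allows 16 KB (16,384 bytes) per block.
inline int stagedSharedBytes(int bodies, int float4sPerBody)
{
    return bodies * float4sPerBody * 16;
}
```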

NVIDIA's n-body example shows a good way to split the task in a divide-and-conquer manner, so I will use this concept to organize the shared memory for each block.

Since shared memory is only visible to threads in the same block, each tile computation is arranged per block. Each block has 256 threads (16x16x1). In the main kernel, each thread reads a single position and velocity from device memory into shared memory (one iteration within the block). Then the block computes the acceleration for this tile. This iteration repeats until it reaches the required number of body interactions (e.g., 8,192 bodies in the no-approximation case, which is 32 iterations). The following table shows a comparison of the memory access patterns between the old and new approaches.
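The tiling scheme can be sketched on the CPU (illustration only; on the GPU the inner loop reads from shared memory after a cooperative load by the block's 256 threads):

```cpp
#include <cmath>
#include <vector>

// CPU sketch of the tiled acceleration on one body `bi`: walk the N
// bodies in tiles of `tile` (256 on the GPU). The inner j-loop stands
// in for the per-tile pass over staged shared memory; the tile size
// changes the access pattern, not the result.
struct Vec4 { float x, y, z, w; };   // xyz = position, w = mass

inline Vec4 accelOn(const std::vector<Vec4>& pos, const Vec4& bi,
                    int tile, float eps2 = 0.1f)
{
    Vec4 a = {0, 0, 0, 0};
    const int n = static_cast<int>(pos.size());
    for (int t = 0; t < n; t += tile)            // one "tile" per pass
        for (int j = t; j < t + tile && j < n; ++j) {
            float dx = pos[j].x - bi.x, dy = pos[j].y - bi.y, dz = pos[j].z - bi.z;
            float d2 = dx*dx + dy*dy + dz*dz + eps2;
            float s  = pos[j].w / std::sqrt(d2 * d2 * d2);
            a.x += dx * s; a.y += dy * s; a.z += dz * s;
        }
    return a;
}
```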

Total bodies N = 8,192, no approximation. Data type is float4. The table does not include VBO updates (same for both cases).

                     Global mem reads    Shared mem reads    Global mem writes
Naive (Phase II)     2N^2                -                   2N
Tiled (Phase IV)     2N^2 / 256          ~2N^2               2N

As we can see in the table above, the new approach moves the major memory read accesses to the shared memory side. Another consideration is loop unrolling. As GPU Gems 3 indicates, there is a slight possible performance gain from unrolling loops. During the computation of the acceleration, each thread runs through 256 body interactions for each tile, so we can surely unroll some of it. Then, how much does performance increase?

Frames per second for N = 8,192 bodies:

Approximation     None    1 / 2    1 / 4    1 / 8    1 / 16
Phase III         3.1     6.1      12       22.7     41.6
Shared Mem        20.1    36.8     72       124      168
+ Loop Unroll     21.9    41.8     77       132.4    176

The table shows great performance improvements from utilizing shared memory. Loop unrolling (by a factor of 8) also gives us about a 5~7% speedup. For the loop unrolling, I used the nvcc compiler directive #pragma unroll (CUDA Programming Guide, p. 25). When I tried unrolling by factors larger than 8, it did not give much further improvement.
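By hand, an unroll factor of 8 amounts to the transformation below (an illustrative reduction loop, not the actual kernel; this is what #pragma unroll 8 asks nvcc to generate, trading code size for fewer branch and counter instructions):

```cpp
// Hand-unrolled inner loop (factor 8): eight interactions per loop
// iteration instead of one. Valid here because the tile length (256)
// is a multiple of 8; `n` must also be a multiple of 8.
inline float sumUnrolled8(const float* v, int n)
{
    float s = 0.0f;
    for (int j = 0; j < n; j += 8)
        s += v[j]   + v[j+1] + v[j+2] + v[j+3]
           + v[j+4] + v[j+5] + v[j+6] + v[j+7];
    return s;
}
```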

Here is the kernel code used for this testing. This is almost the final version of my n-body simulation.

In the tile_calculation function, the shared memory access by all threads uses the same memory address. This can be as fast as reading a register (refer to Dr. Dobb's tutorial, part 5.1). I think this is the broadcasting feature of shared memory (refer to the CUDA Programming Guide, p. 62). One thing I was not sure about is that the document mentions broadcasting occurs within a 32-bit word. My guess here is that a float4 read is divided into four separate reads, each of a 32-bit word (a guess based on the double-type example in the CUDA Programming Guide, p. 61). Therefore, the above code can obtain fast access to shared memory.

The other memory optimization consideration is the access pattern of global memory. NVIDIA's n-body simulation kernel enforces that the threads of each block do not read the same addresses when loading data from global memory into shared memory. I was not sure whether this would be a benefit or not, so I tried it, but did not get any big difference in performance. This was a bit strange.

7. Overall Comparison

We have seen several techniques to improve the n-body simulation so far. Above all, shared memory utilization was the best bet for a performance gain. According to NVIDIA's implementation, there are a few more possible optimizations. Without getting too complex, phase IV reached almost 86% of the NVIDIA example code's performance (the NVIDIA implementation showed 25.5 frames per second with 8,192 bodies without approximation).

Here are some comparison charts as a summary.

8. Screenshots

9. Lessons

First of all, finding the simulation parameters was not easy at all. Even the final version of my work looks OK, but not close to what I saw in class. Some of the parameters I played with are the scale factors for the initial position, velocity, and mass. In addition to these data scales, there were the simulation timestep, the softening of the squared distance, the velocity damping, and the gravity constant. So, seven parameters in total to play with... It is not easy to find the magic numbers for all of them, even though I tended to fix the total number of bodies.

Frequent global memory (device memory) access is really, really bad in general. Shared memory is so cool and fast. Anyone who is thinking of CUDA app development must consider memory utilization very carefully to get the best performance.

Good material on the memory architecture and optimization strategies can be found in chapter 5.1.2, Memory Bandwidth, of the CUDA Programming Guide 2.0.