This function emits a helper that gathers the Reduce lists from the first lane of every active warp into the corresponding lanes of the first warp.

    void inter_warp_copy_func(void* reduce_data, num_warps)
      shared smem[warp_size];
      For all data entries D in reduce_data:
        If (I am the first lane in each warp)
          Copy my local D to smem[warp_id]
        sync
        if (I am the first warp)
          Copy smem[thread_id] to my local D
        sync
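The following CUDA sketch mirrors that pseudocode for a single double-typed reduce element; the function name, the fixed element type, and the bare shared buffer are assumptions for illustration, not the generated helper:

    // Illustrative sketch, not the generated helper: gather one reduce
    // element from the first lane of each active warp into the first warp.
    // Assumes num_warps <= WARP_SIZE (i.e. a CTA of at most 1024 threads).
    #define WARP_SIZE 32

    __device__ void inter_warp_copy(double *reduce_data, int num_warps) {
      __shared__ double smem[WARP_SIZE];   // one slot per warp
      int tid     = threadIdx.x;
      int lane_id = tid % WARP_SIZE;
      int warp_id = tid / WARP_SIZE;

      // First lane of every active warp publishes its element.
      if (lane_id == 0 && warp_id < num_warps)
        smem[warp_id] = *reduce_data;
      __syncthreads();

      // Lanes of the first warp each pick up one warp's element.
      if (warp_id == 0 && tid < num_warps)
        *reduce_data = smem[tid];
      __syncthreads();
    }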

This function emits a helper that reduces data across two OpenMP threads (lanes) in the same warp.

It uses shuffle instructions to copy data from a remote lane's stack; the reduction algorithm to perform is specified by the fourth parameter.
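A minimal sketch of such a shuffle-based copy, assuming CUDA's __shfl_down_sync intrinsic and a 64-bit element moved as two 32-bit halves (the helper name is hypothetical):

    // Hypothetical sketch: pull a 64-bit reduce element from the lane
    // 'delta' positions above the caller. Types wider than 32 bits are
    // moved in 32-bit chunks; the full-warp mask assumes all 32 lanes
    // are alive when this is called.
    __device__ double shuffle_from_remote_lane(double val, unsigned delta) {
      int lo = __double2loint(val);
      int hi = __double2hiint(val);
      lo = __shfl_down_sync(0xffffffffu, lo, delta);
      hi = __shfl_down_sync(0xffffffffu, hi, delta);
      return __hiloint2double(hi, lo);
    }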

Algorithm Versions:

Full Warp Reduce (argument value 0): this algorithm assumes that all 32 lanes are active and gathers data from all 32 lanes, producing a single resultant value.

Contiguous Partial Warp Reduce (argument value 1): this algorithm assumes that only a contiguous subset of lanes is active. This happens for the last warp in a parallel region when the user-specified num_threads is not an integer multiple of the warp size (32). The contiguous subset always starts with the zeroth lane.

Partial Warp Reduce (argument value 2): this algorithm gathers data from any number of lanes at any position and stores all reduced values in the lowest possible lane.

The set of problems each algorithm addresses is a superset of those addressable by algorithms with a lower version number, and overhead increases with the version number.
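As a concrete instance of version 0, a full-warp tree reduction takes log2(32) = 5 shuffle rounds. A minimal CUDA sketch, assuming an int payload with '+' standing in for the user's reduction operator:

    // Sketch of Full Warp Reduce (argument value 0): all 32 lanes are
    // active; offsets 16, 8, 4, 2, 1 give 5 shuffle/combine rounds,
    // after which lane 0 holds the warp-wide result.
    __device__ int full_warp_reduce(int val) {
      for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffffu, val, offset);
      return val;
    }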

Terminology:

Reduce element: an individual data field of primitive type that is combined and reduced across threads.

Reduce list: a collection of local, thread-private reduce elements.

Remote Reduce list: a collection of reduce elements that are remote relative to the current thread.
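For illustration only, a Reduce list holding two reduce elements might be viewed as an array of pointers to the thread-private values; the field names and types here are invented:

    // Hypothetical Reduce list: pointers to two thread-private reduce
    // elements of primitive type.
    struct ReduceList {
      int    *elem0;  // reduce element #0
      double *elem1;  // reduce element #1
    };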

We distinguish between three states of threads that are important to the implementation of this function.

Alive threads: threads in a warp executing the SIMT instruction, as distinguished from threads that are inactive due to divergent control flow.

Active threads: the minimal set of threads that must be alive upon entry to this function; the computation is correct iff the active threads are alive. Some threads are alive but not active because they do not contribute to the computation in any useful manner, and turning them off may introduce control-flow overhead without any tangible benefit.

Effective threads: to comply with the argument requirements of the shuffle function, we must keep all lanes holding data alive, but at most half of them perform value aggregation; we refer to this half as the effective threads. The other half simply hands its data off.
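A sketch of how these roles relate within one reduction round over a contiguous set of active lanes; the helper and its contract are assumptions for illustration, not part of the runtime:

    // In a round over 'size' contiguous active lanes, lanes [0, size/2)
    // aggregate (effective), lanes [size/2, size) only serve as shuffle
    // sources, and the rest of the warp may be alive without being active.
    __device__ void classify_lane(int lane_id, int size,
                                  bool *active, bool *effective) {
      *active    = lane_id < size;       // must be alive for correctness
      *effective = lane_id < size / 2;   // at most half aggregate values
    }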

Procedure:

1. Value shuffle: active threads transfer data from higher lane positions in the warp to lower lane positions, creating the Remote Reduce list.

2. Value aggregation: effective threads combine their thread-local Reduce list with the Remote Reduce list and store the result back in the thread-local Reduce list.

3. Value copy: this step deals with the contiguity assumption made by the Contiguous Partial Warp Reduce algorithm. When an odd number of lanes, say 2k+1, is active, only k threads are effective and therefore only k new values are produced, while the Reduce list owned by the (2k+1)th thread is ignored in the value aggregation. We therefore copy the Reduce list from the (2k+1)th lane to the (k+1)th lane so that the contiguity assumption still holds.
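Putting the three steps together, a hedged sketch of the round structure for a contiguous set of 'size' active lanes; the signature is illustrative and '+' stands in for the reduction operator:

    // One round per iteration: shuffle down by k = size/2, aggregate on
    // the k effective lanes, and for odd sizes copy lane 2k's untouched
    // value to lane k so contiguity holds for the next round. The
    // full-warp mask assumes all 32 lanes stay alive.
    __device__ int shuffle_and_reduce(int val, int lane_id, int size) {
      while (size > 1) {
        int k      = size / 2;                        // effective lane count
        int remote = __shfl_down_sync(0xffffffffu, val, k);
        if (lane_id < k)
          val += remote;                              // value aggregation
        else if ((size % 2) && lane_id == k)
          val = remote;                               // value copy: lane 2k -> lane k
        size = k + (size % 2);                        // k (even) or k+1 (odd) remain
      }
      return val;                                     // lane 0 holds the result
    }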

The master thread id is the first thread (lane) of the last warp in the GPU block. The warp size is assumed to be a power of 2, and thread ids are 0-indexed. E.g., if NumThreads is 33, the master id is 32; if NumThreads is 64, the master id is 32; if NumThreads is 1024, the master id is 992.
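These examples follow from rounding NumThreads-1 down to a multiple of the warp size; a one-line sketch (the helper name is illustrative):

    // First lane of the last warp, assuming a power-of-2 warp size of 32.
    __device__ int master_tid(int num_threads) {
      return (num_threads - 1) & ~(32 - 1);  // 33 -> 32, 64 -> 32, 1024 -> 992
    }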

For the 'generic' execution mode, the runtime encodes thread_limit in the launch parameters and always starts thread_limit+warpSize threads per CTA; the threads in the last warp are reserved for master execution. For the 'spmd' execution mode, all threads in a CTA are part of the team.
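A sketch of the generic-mode arithmetic described above; the thread_limit value and variable names are invented for illustration:

    // Generic mode launches one extra warp on top of thread_limit; the
    // master is the first lane of that reserved last warp.
    #include <cstdio>

    int main() {
      const int warp_size    = 32;
      const int thread_limit = 64;                        // hypothetical team size
      const int cta_threads  = thread_limit + warp_size;  // 96 threads per CTA
      std::printf("threads per CTA: %d, master tid: %d\n",
                  cta_threads, cta_threads - warp_size);  // 96, 64
      return 0;
    }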