The number of active threads required to achieve peak application throughput on Graphics Processing Units (GPUs) depends largely on the ratio of time spent on computation to time spent accessing data in memory. While compute-intensive applications can reach peak throughput with a small number of threads, memory-intensive applications may fail to achieve good throughput even at the maximum supported thread count. In this paper, the authors study the effects of scheduling work from multiple applications on the same GPU core.
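The dependence of the required thread count on the compute-to-memory ratio can be sketched with a Little's-law style estimate: enough threads must be in flight so that useful arithmetic covers the memory latency. The function and numbers below are illustrative assumptions, not taken from the paper:

```python
def threads_to_hide_latency(mem_latency_cycles: int, compute_cycles_per_access: int) -> int:
    """Rough estimate of threads needed so that, while one thread waits
    mem_latency_cycles for a memory access, the other threads each run
    compute_cycles_per_access cycles of arithmetic (Little's-law view)."""
    # Ceiling division, at least one thread.
    return max(1, -(-mem_latency_cycles // compute_cycles_per_access))

# Compute-intensive kernel: many arithmetic cycles per memory access,
# so a small number of threads keeps the core busy.
print(threads_to_hide_latency(400, 100))  # 4

# Memory-intensive kernel: little arithmetic per access, so many more
# threads are needed to cover the same latency.
print(threads_to_hide_latency(400, 2))    # 200
```

Under this toy model, a memory-intensive kernel may need more threads than the hardware supports, which is consistent with such applications failing to reach peak throughput even at the maximum thread count.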