Practical aspects of multicore programming (part 1)

EN
/ Day 5
/ 10:00
/ Track 1

Modern servers have dozens or even hundreds of cores, which can execute many threads of computation in parallel. In such a system, the difference between the performance of a bad implementation and a good one can easily be 100x.

This lecture uses concurrent data structures as examples to explain high-performance implementation techniques and surprising performance pitfalls. Along the way, we will cover linearizability, the standard way to define what correctness means for a concurrent data structure.

Students should finish with knowledge of some simple linearizable concurrent data structures, and an understanding of how these data structures interact with various aspects of real systems, with a special focus on processor caches.

Trevor Brown

University of Waterloo

Trevor Brown is currently an assistant professor in the Cheriton School of Computer Science at the University of Waterloo. Before that, he held postdoctoral positions at the Institute of Science and Technology, Austria, and at the Technion, Israel Institute of Technology, and completed his PhD at the University of Toronto.

Although Trevor was in the theory group at the University of Toronto, he would say that his work is closer to systems work than theory. He cares greatly about rigor, whether in theoretical or experimental work. His research currently revolves around concurrent data structures and non-uniform memory architectures. He is also interested in transactional memory, non-volatile memory, and memory allocation and reclamation.