Séminaire REGAL

Practical Abstractions for Dynamic and Parallel Software

Date: 17/07/2012
Speaker: Umut Acar (MPI-SWS, Saarbrücken, Germany)

Abstract

Developing efficient and reliable software is a difficult task. Rapidly growing and dynamically changing data sets further increase this complexity by making it more challenging to achieve efficiency and performance. I present practical and powerful abstractions for taming software complexity in two large domains: 1) dynamic software that interacts with dynamically changing data, and 2) parallel software that utilizes multiple processors or cores. Together with the algorithmic models and programming languages that embody them, these abstractions enable designing and developing efficient and reliable software using high-level reasoning principles and programming techniques. As evidence of their effectiveness, I consider a broad range of benchmarks involving lists, arrays, matrices, and trees, as well as sophisticated applications in geometry, machine learning, and large-scale cloud computing. On the theoretical side, I show asymptotically significant improvements in efficiency and present solutions to several open problems. On the practical side, I present programming languages, compilers, and related software systems that deliver massive speedups with little or no programmer effort.

Bio

Umut Acar leads the programming languages and systems group at the Max Planck Institute for Software Systems. He obtained his Ph.D. at Carnegie Mellon University (2005), and his M.A. and B.S. degrees at the University of Texas at Austin and Bilkent University. Between 2005 and 2010, he worked as an Assistant Professor at the Toyota Technological Institute and at the University of Chicago. Acar's research interests span programming languages, algorithms, and software systems. He is a co-inventor of self-adjusting computation and a co-creator of the programming languages CEAL and DeltaML for self-adjusting computation. He is also a co-developer of algorithms and software systems for locality-guided parallel scheduling, dynamic trees, dynamic meshes, statistical learning, and incremental large-scale data processing.
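To give a flavor of the "dynamic software" theme above: self-adjusting computation tracks the dependencies between inputs and derived results so that, when an input changes, only the affected computations are re-run. The following Python sketch is purely illustrative (it is not CEAL, DeltaML, or any of the systems discussed in the talk; the `Cell` and `Computation` classes are invented for this example) and shows the core idea of dependency-tracked, incremental recomputation in miniature.

```python
class Cell:
    """A mutable input; computations that read it are recorded
    so they can be invalidated when the value changes."""
    def __init__(self, value):
        self.value = value
        self.readers = set()

    def read(self, computation):
        self.readers.add(computation)  # record the dependency
        return self.value

    def write(self, value):
        if value != self.value:
            self.value = value
            for c in list(self.readers):
                c.invalidate()  # only dependents become stale

class Computation:
    """A derived value that recomputes lazily, and only when
    one of the inputs it actually read has changed."""
    def __init__(self, fn):
        self.fn = fn
        self.dirty = True
        self.cached = None
        self.runs = 0  # counts recomputations, to observe incrementality

    def invalidate(self):
        self.dirty = True

    def get(self):
        if self.dirty:
            self.cached = self.fn(self)
            self.runs += 1
            self.dirty = False
        return self.cached

# Two inputs and a derived sum over them.
a, b = Cell(1), Cell(2)
total = Computation(lambda c: a.read(c) + b.read(c))

print(total.get())   # computes from scratch: 3
print(total.get())   # cached, no recomputation: 3
a.write(10)          # change one input
print(total.get())   # recomputes just this dependent: 12
```

Real self-adjusting systems go far beyond this toy: they build a dynamic dependence graph of the whole execution and use change propagation to update results in time proportional to the affected subcomputation, which is where the asymptotic improvements mentioned in the abstract come from.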