Developing efficient and reliable software is a difficult task.
Rapidly growing and dynamically changing data sets further increase
this complexity, making efficiency and performance even harder to
achieve. I present practical and powerful abstractions for taming
software complexity in two large domains: 1) dynamic software that
interacts with dynamically changing data, and 2) parallel software
that utilizes multiple processors or cores. Together with the
algorithmic models and programming languages that embody them, these
abstractions enable designing and developing efficient and reliable
software by using high-level reasoning principles and programming
techniques. As evidence of their effectiveness, I consider a broad
range of benchmarks involving lists, arrays, matrices, and trees, as well
as sophisticated applications in geometry, machine-learning, and
large-scale cloud computing. On the theoretical side, I show
asymptotically significant improvements in efficiency and present
solutions to several open problems. On the practical side, I present
programming languages, compilers, and related software systems that
deliver massive speedups with little or no programmer effort.