Abstract: For over four decades, throughput has been the target metric of choice for
Online Transaction Processing (OLTP) engines. Around the mid-2000s, however, Dennard
scaling ground to a halt, and multicore processors now provide explicit thread-level
parallelism as an alternative to frequency scaling for increasing throughput. OLTP
research has therefore focused on scalable synchronization techniques that exploit the parallelism
provided by multicore processors. In the late 2000s, the free-fall in DRAM prices made it
possible to equip a single server with terabytes of memory and, with a few rare
exceptions, to fit most operational databases entirely in memory. This led to a flurry of
research on the design of scalable main-memory OLTP engines that adopt radically
different designs compared to their disk-based counterparts. Today, state-of-the-art
main-memory OLTP engines can handle millions of transactions per second and provide
near-linear scalability under most workloads. However, three recent trends indicate an
impending change in OLTP engine design once again: 1) changes in application workloads, 2) a
shifting hardware landscape, and 3) new target metrics. In this talk, we will discuss the
implications of these trends on the design of next-generation transactional engines, and
explore new designs with the twin goals of meeting changing application demands and
optimizing for the new metrics by exploiting emerging hardware.