Ben started by defining in-memory computing as “technology that allows the processing of massive quantities of real-time data in the main memory of the server to provide immediate results from analyses and transactions”. He then asked whether the cloud enables real-time computing, since there is clear market hunger for cloud computing to solve the problems of our current enterprise systems.

Not surprisingly, he advocated in-memory computing as the solution to those problems. Like John Ousterhout and the RAMCloud team, he sees the need to scale DRAM independently of physical boxes. He proposed a model of coherent shared memory, using high-speed, low-latency networks and moving the data transport and cache layers into a separate tier below the operating system. The goal: no server-side application caches, DRAM-like latency for physically distributed databases, and in fact no separation between the application server and the database server.
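To make the idea concrete, here is a minimal toy sketch of what such a coherent-shared-memory tier might look like from the application's point of view: reads and writes look like ordinary local operations, while a lower layer decides which node's DRAM actually holds the data. All names here (`SharedMemoryTier`, `Node`) are hypothetical illustrations, not SAP's or RAMCloud's actual API.

```python
class Node:
    """One server's DRAM, modeled as a simple dict."""
    def __init__(self, name):
        self.name = name
        self.dram = {}

class SharedMemoryTier:
    """Routes each key to an owning node; the application never sees which.

    Hypothetical sketch: a real tier would handle the network transport,
    caching, and coherence protocol that this toy version omits.
    """
    def __init__(self, nodes):
        self.nodes = nodes

    def _owner(self, key):
        # Deterministic placement, standing in for the real transport layer.
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, value):
        self._owner(key).dram[key] = value

    def get(self, key):
        # From the application's view this is an ordinary memory read,
        # even when the bytes live in another server's DRAM.
        return self._owner(key).dram.get(key)

tier = SharedMemoryTier([Node("a"), Node("b"), Node("c")])
tier.put("order:42", {"total": 99.5})
print(tier.get("order:42"))
```

The point of the design is that the application code never names a cache or a remote server: placement and transport live entirely below it, which is what lets the application server and database server collapse into one tier.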

Ben argued that coherent shared memory can dramatically lower the cost of in-memory computing while minimizing the pain for application developers. He also offered some benchmarks for SAP’s BigIron system to demonstrate the performance improvements.

In short, Ben offered a vision of in-memory computing as a reincarnation of the mainframe. It was an interesting and provocative presentation, and my only regret is that we couldn’t stage a debate between him and Jeff Hammerbacher over the future of large-scale enterprise computing.