Since 2008, Harvard Research Computing has undertaken a significant scaling challenge, growing its available HPC and storage from 200 cores and 20 TB to over 70,000 cores and 35 PB. James will discuss this journey and the highlights of extending the computing platform to support world-class research and education.

During the evolution of Harvard's computing platforms, the group also helped to build and support the Massachusetts Green High Performance Computing Center (MGHPCC), a dedicated high-performance research computing facility in Holyoke, MA. The facility continues to support large-scale research computing with sustainable energy and advanced networking. Recently, the NESE (New England Storage Exchange) project was funded by the National Science Foundation: a multi-petabyte object store hosted at the existing MGHPCC facility and serving the region. The recently announced Data Science Initiative at Harvard will require even more advanced computation to support its research faculty.

Now, as the world comes to grips with "cloud," or more importantly with remotely provisioned infrastructure, hybrid models for compute and storage are required, along with the flexibility to further accelerate science. James will discuss the strategy moving forward and the current infrastructure in place to allow for seamless provisioning of research computing. Justin Riley, Team Lead at Harvard, will follow this talk with a deep technical discussion of the specific implementation of the systems that Harvard is designing in concert with the development teams and leadership at OpenNebula, making its research computing platforms more resilient and able to continue to scale.

James Cuff is the Assistant Dean for Research Computing in the Division of Science in the Faculty of Arts and Sciences. James also performs the role of Distinguished Engineer for Research Computing in HUIT at Harvard University. His group supports advanced high performance...