Managing scientific data at large scale is challenging not only for scientists but also for the host data center. The storage and file systems deployed within a data center are expected to meet users' requirements for data integrity and high performance across heterogeneous, concurrently running applications. With new storage technologies and additional layers in the memory hierarchy, the picture is becoming murkier. To effectively manage the data load within a data center, I/O experts must understand how users expect to use these new storage technologies and what services they should provide in order to enhance user productivity. We seek to ensure that a systems-level perspective is included in these discussions. In this workshop we bring together I/O experts from data centers and application workflows to share current practices for scientific workflows, issues and obstacles in both hardware and the software stack, and research and development efforts to overcome these issues. To focus the discussion on relevant aspects and streamline it, a list of relevant topics is provided as a common structure for the talks. Scientific papers related to these topics are welcome for submission.

Targeted Audience

I/O experts from data centers and industry; researchers and engineers working on high-performance I/O for data centers; and domain scientists and computer scientists interested in discussing I/O issues. Vendors are also welcome, but their presentations must align with the same topics and not focus on commercial aspects.