I/O in ExaFLOW

Exascale computing will serve very large capability jobs as well as workflows comprising many instances of large-scale simulations. Both cases imply extremely heavy I/O for reading and writing data and for storing it on a large-scale filesystem; this applies in particular to fluid-flow simulations. However, data I/O is an emerging bottleneck in high-performance computing, irrespective of application or discipline, because hardware speed-ups for computation and for I/O are diverging. This will remain true even with new I/O technologies such as burst buffers, and non-volatile memory will help only gradually. To reduce the amount of data to be stored and handled, we propose two solution paths: parallelization of I/O, and I/O data reduction and compression via application-dependent filtering. The main objective of both is to alleviate the performance bottleneck caused by data transfer from memory to disk.
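As a minimal sketch of the second path, the snippet below quantizes a smooth one-dimensional field to 16-bit fixed point (a simple, lossy, application-dependent filter) and then applies lossless compression, comparing the result against storing raw doubles. The field, its size, and the chosen precision are purely illustrative and are not taken from ExaFLOW.

```python
import math
import struct
import zlib

# Illustrative smooth "velocity" field: 4096 samples of a sine wave.
n = 4096
field = [math.sin(2 * math.pi * i / n) for i in range(n)]

# Baseline: raw storage as 64-bit doubles (8 bytes per value).
raw = struct.pack(f"{n}d", *field)

# Lossy filter: quantize values in [-1, 1] to signed 16-bit fixed point,
# a precision that may suffice e.g. for visualization output.
scale = 32767
quantized = struct.pack(f"{n}h", *(int(round(v * scale)) for v in field))

# Lossless compression of the quantized stream.
packed = zlib.compress(quantized, 9)

ratio = len(raw) / len(packed)
print(f"raw: {len(raw)} B, quantized+compressed: {len(packed)} B, "
      f"ratio: {ratio:.1f}x")
```

The quantization step alone cuts storage fourfold; how much more the compressor gains on top of that depends on the smoothness of the data.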

Unsteady fluid-flow simulations produce large amounts of raw data that describe the flow physics through a huge collection of time-dependent scalar, vector and tensor fields, much like real-world measurements. In this representation, however, the underlying flow phenomena (e.g. vortices) are contained only implicitly, and since every such object may be discretized by hundreds of grid points and many time steps, there is an enormous data-reduction potential if feature-based data were stored instead of a 'brute-force' dump of the raw data. The use of problem-specific filters will be investigated with the goal of reducing the amount of I/O data in situ, such that the ratio of I/O to floating-point operations at exascale is improved and physically interesting features are extracted. The findings for fluid dynamics will also be applicable to other disciplines and users of computational methods, including industry.
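To make the feature-based idea concrete, the sketch below detects vortex-like regions in a synthetic 2-D velocity field by thresholding the vorticity magnitude, and keeps only the sparse set of flagged points instead of the full field. The grid size, the Gaussian test vortex, and the threshold are all illustrative assumptions; a production in-situ filter would use a more robust vortex criterion.

```python
import math

n = 64                      # grid resolution (illustrative)
h = 2.0 / (n - 1)           # spacing on the domain [-1, 1]^2

def velocity(x, y):
    """Synthetic test field: a single Gaussian vortex centred at the origin."""
    s = math.exp(-(x * x + y * y) / 0.05)   # swirl strength decays from the core
    return -y * s, x * s                     # (u, v)

# Sample the velocity components on the grid.
u = [[velocity(-1 + i * h, -1 + j * h)[0] for i in range(n)] for j in range(n)]
v = [[velocity(-1 + i * h, -1 + j * h)[1] for i in range(n)] for j in range(n)]

# Vorticity w = dv/dx - du/dy via central differences; keep only points
# where |w| exceeds a threshold (a crude vortex indicator).
threshold = 1.0
features = []
for j in range(1, n - 1):
    for i in range(1, n - 1):
        w = ((v[j][i + 1] - v[j][i - 1])
             - (u[j + 1][i] - u[j - 1][i])) / (2 * h)
        if abs(w) > threshold:
            features.append((i, j, w))       # sparse: indices + value only

full_values = n * n * 2                      # u and v at every grid point
kept_values = len(features) * 3              # (i, j, w) per feature point
print(f"kept {kept_values} of {full_values} values "
      f"({100 * kept_values / full_values:.1f}%)")
```

Because the vortex core occupies only a small fraction of the domain, the stored feature list is orders of magnitude smaller than the raw field, which is exactly the kind of reduction the proposed in-situ filters aim for.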