The goal is to hit one quintillion floating-point operations per second, and DDN hopes this mind-boggling level of number-crunching will be reached by 2018. The company will direct its funds towards three areas:

Accelerating IO with a new file system, plus new middleware and storage tiering. This is needed to achieve "million-way application CPU parallelism".

Merging compute, network and storage to put "pre-processing and post-processing routines natively within the storage infrastructure".

Improving energy efficiency.

About that last point, DDN states:

With the emergence of storage-class memory and software tools, infrastructures can be built with fewer components compared to today’s disk-based technologies. These initiatives will serve to significantly reduce hardware acquisition costs but will also make data centres much more space and power efficient by reducing storage footprint by more than 75 per cent.

How would a shrinking footprint benefit a storage-array supplier, which after all sells that hardware? The only conceivable answer is executing exascale apps inside the data stores themselves, which relates to the second point above: a converged compute, network and storage infrastructure.

This implies that DDN has to get compute and network elements into its existing storage stack, either directly or via partnership. DDN also wants to get Big Data analytics software into its arrays, again through ownership or partnership.

Privately held DDN must have healthy revenues to prop up an average $16.6m-per-year spend on research and development. The company has been selected by Intel to collaborate with the chip giant in Lawrence Livermore National Security's FastForward programme, which is sponsored by the US Department of Energy and investigates extreme-scale computing.

The company said its efforts will "focus on evolving the state-of-the-art in parallel file systems, including the Lustre open-source parallel file system, as well as more tightly integrating compute and storage platforms to achieve greater efficiency and information insight".

This confirms DDN's future is all about running apps in its arrays that process massive amounts of data.

DDN said:

The storage and IO research and development subcontract will focus on three main areas that together cover the exascale IO stack from top-to-bottom. Included in the stack is a new storage interface that tightly integrates with the HDF5 scientific data library and data model, a next-generation flash-optimised storage tier designed to accelerate peak IO loads in HPC environments, and a massively scalable storage interface designed to support the storage foundation requirements to achieve exascale infrastructure scalability.
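The "flash-optimised storage tier designed to accelerate peak IO loads" described in that quote is, in essence, a burst buffer: a small fast tier that absorbs write spikes and destages them to slower bulk storage. A toy sketch of the idea, with entirely hypothetical names (this is an illustration of the general technique, not DDN's software):

```python
# Illustrative sketch only: a toy two-tier store modelling a fast
# flash tier absorbing peak IO in front of a slower bulk-disk tier.
# All class and method names are hypothetical, not DDN's API.

class TieredStore:
    """Write-back tiering: writes land in the fast tier and are
    destaged to the slow tier when the fast tier fills up."""

    def __init__(self, fast_capacity=2):
        self.fast = {}                    # stand-in for a flash burst buffer
        self.slow = {}                    # stand-in for bulk disk storage
        self.fast_capacity = fast_capacity

    def write(self, key, value):
        self.fast[key] = value
        if len(self.fast) > self.fast_capacity:
            self.flush()

    def flush(self):
        # Destage everything from flash to disk, freeing the fast
        # tier to absorb the next burst of writes.
        self.slow.update(self.fast)
        self.fast.clear()

    def read(self, key):
        # Serve from flash if present, otherwise fall back to disk.
        return self.fast.get(key, self.slow.get(key))


store = TieredStore(fast_capacity=2)
store.write("a", 1)
store.write("b", 2)
store.write("c", 3)      # exceeds fast capacity, triggers a destage
print(store.read("a"))   # served from the slow tier after the flush
```

A real HPC burst buffer does this at datacentre scale, with the destage scheduled to overlap compute phases so the disk tier never sees the peak load.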

We are told the FastForward programme is part of a seven-lab consortium of Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and Sandia National Laboratories. Industry partners include AMD, Intel, and NVIDIA.

A final thought: could Intel be contributing any of DDN's $100m investment bill? Intriguing, no? ®