DataDirect Networks reckons existing parallel file systems will run full tilt into a scalability wall as they get overwhelmed by concurrent file accesses and locks from HPC compute complexes with tens of thousands of cores. A bigger, better and faster cache is needed.

DDN has devised the Infinite Memory Engine (IME) concept and will develop it in a high-performance computing (HPC) IO stack optimisation effort lasting up to three years. The basic idea is dead simple: stick a solid state cache between multiple fast multi-threaded CPU cores producing random IOs on the one hand and a parallel file system storage array on the other.

Compute core explosion

DDN's thinking is that the number of compute cores in an HPC compute complex is going to grow radically. Its thinking is informed by work it's doing at the US Los Alamos National Laboratory on so-called burst buffers. Looking at the Top 500 supercomputers of 2010, it says an average compute cluster had 13,000 cores. It estimates that a Top 500 supercomputer in 2018 will have 57,772,000 cores – 57.772 million. This is what it calls exascale territory.

Let's say the 2010 processors were dual-threaded; then IO requests from 26,000 threads could hit the file system. In 2018 with, say, 12-threaded CPUs, we could have IO requests from 693.3 million threads hitting the file system – a roughly 26,664-fold increase.
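The arithmetic is easy to check; note the per-core thread counts are the article's illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope check of the thread-count projection.
# Core counts from the article; threads-per-core are illustrative.
cores_2010, threads_per_core_2010 = 13_000, 2
cores_2018, threads_per_core_2018 = 57_772_000, 12

threads_2010 = cores_2010 * threads_per_core_2010   # 26,000 threads
threads_2018 = cores_2018 * threads_per_core_2018   # 693,264,000 threads

growth = threads_2018 / threads_2010
print(f"{threads_2018:,} threads, a {growth:,.0f}-fold increase")
```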

The chances of any one lockable part of a file being locked would grow enormously, to the point where queues of locks build up waiting to be instantiated and the HPC apps slow down, with compute waiting for storage.
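A back-of-the-envelope sketch of why contention explodes with thread count (the region count and the uniform random-access model are assumptions for illustration, not DDN's analysis):

```python
import math

# Hypothetical model: N threads each touch one of S independently
# lockable file regions uniformly at random. The probability that NO
# two threads contend for the same region is the classic
# birthday-problem product, computed in log space to avoid underflow.
def p_no_contention(n_threads: int, n_regions: int) -> float:
    log_p = sum(math.log1p(-i / n_regions) for i in range(n_threads))
    return math.exp(log_p)

S = 1_000_000  # assumed number of lockable regions
for n in (100, 1_000, 10_000):
    print(f"{n:>6} threads: P(no lock collision) = {p_no_contention(n, S):.3g}")
```

Even with a million lockable regions, contention is all but certain once thread counts reach the tens of thousands, never mind the hundreds of millions DDN is projecting.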

DDN points out that adding enough spindles for the bandwidth to deal with this problem is vastly expensive, and disks fail, meaning a lot of defensive programming has to be done to cope. At some point it is cheaper to have a solid-state buffer than to carry on adding spindles, and the advantage of solid state over disk keeps on getting larger as the thread/core count in the compute complex rises.

We need a buffer to cope with slow disk duffers

So we need, DDN says, a very large and very fast buffer interposed between the compute and storage parts of an HPC system, so as to keep the compute computing without crippling the storage back-end.

DDN claims two advantages for IME: it is fully integrated with popular HPC parallel file systems like Lustre and GPFS, and its use of an NVRAM cache will simplify app developers' IO work.

DDN says that, when using IME, a Big Data or HPC app issues a write request to the IME. The IME is scalable, and these "scalable writes" require no parallel file system locks – secret sauce alert.

Writes are dynamically mapped to the solid state storage, based on its load and health, and then protected. Data is selectively drained to a parallel file system in a completely sequential manner, DDN says, and this ensures near-perfect disk utilisation.
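A minimal sketch of that absorb-then-drain pattern (class and method names are hypothetical, not DDN's API): writes land in the buffer in whatever order they arrive, with no parallel file system locks taken, and the drain replays them in file-offset order so the disks see purely sequential IO.

```python
# Hypothetical burst-buffer sketch: absorb random writes, drain sequentially.
class BurstBuffer:
    def __init__(self, backing_store: dict):
        self.buffer = {}             # (file, offset) -> data, absorbed out of order
        self.backing = backing_store # stands in for the parallel file system

    def write(self, file: str, offset: int, data: bytes) -> None:
        # Random writes are absorbed immediately; no parallel-FS lock is taken.
        self.buffer[(file, offset)] = data

    def drain(self) -> None:
        # Flush in sorted (file, offset) order: sequential from the disks' view.
        for key in sorted(self.buffer):
            self.backing[key] = self.buffer[key]
        self.buffer.clear()

store = {}
bb = BurstBuffer(store)
for off in (4096, 0, 8192):          # writes arrive out of order
    bb.write("checkpoint.dat", off, b"x")
bb.drain()
print(list(store))                   # keys now in sequential offset order
```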

Persisted reads are staged into the IME buffer, or to the client directly. DDN says the reads are fast as well since they are made from 100 per cent sequentially written data.

It claims IME can handle millions of concurrent requests per second, scales linearly for bursty applications, and scales 2,000 times better than today's HPC file system technology. While IME has not been announced as a product DDN is including it in bids for petascale systems.

Burst buffer blogging

DDN marketeer Jeff Denworth has written an IME blog. In it he says: "The hairy secret about parallel file systems is that the most truly parallel operations bring them to their knees. Shared writes are the worst, as metadata contention becomes unbearable when you get beyond a few 100K concurrent requests."

Today the IME uses NAND flash. In the future it could use a post-flash non-volatile technology such as Resistive RAM, Spin-Transfer Torque RAM or Phase Change Memory.

DDN's Jean-Luc Chatelain, EVP for strategy and technology, writes in his blog: "I happen to be a big fan of the In-Memory approach but also a believer that Next Gen NV Memory (ie, PCM, ReRam, STRam) is the “path to application salvation”."

"The economics of these upcoming technologies are such that they will allow extremely large amounts of fast, static, low power memory to be right next to DRAM (for full disclosure, I am very biased toward ReRam), and with the right middleware layer can make I/O “disappear” as seen from the application layer. This will reduce the tiers of persistence to one, while eliminating complexity – with the important side effect being that the true storage layer will allow use of slow, very fat and very green, spinning media."

EMC also has activities at Los Alamos, and is involved in burst buffer work with its ABBA appliance. ®