Multi-PFLOPS supercomputer roadmap

A post at Next Big Future got me to thinking about the shape of the known roadmap for the multi-PFLOPS (and larger) machines. I think it would be a fun insideHPC reader project to collectively document all of the extreme scale machines that are on the horizon right now.

I’m specifically not including SETI@home kinds of clusters here; to be considered, a super has to be a collection of computing resources intended to be used (at least in substantial part) on a single task or a relatively small number of tasks (relative to the number of cores in the machine). So a supercomputer commissioned to run many multi-thousand-core weather simulations (for example) would count, but Google’s datacenter doesn’t. Likewise, botnets don’t count either. Also, to be listed, the machine has to have a planned peak performance greater than 1.999 PFLOPS, and has to be further along than “wouldn’t it be cool if we could build this machine.”

I’ll get things started; post changes or additions in the comments and I’ll update the list.
