My previous blog post, "HPC Game Changer: IBM & NVidia New Architecture," was not well-received by some in the HPC community. Yesterday at SC13, I attended at least two presentations about Hadoop and HPC. One of them, "Hadoop 2 and HPC: Beyond MapReduce," presented by Cray, Inc., further illustrated some of the miscommunication between the two camps. During Q&A, someone asked, "Have you measured against Spark? Hadoop is an entire ecosystem; you have to look at Spark and streaming technologies such as Storm, too, not just Hadoop itself." The Cray representative responded that they're just about to start looking at Spark. (For more on the Apache Spark project, see my 20-minute overview video.) Right away, what a Hadoop person means by "Hadoop" (an entire ecosystem) and what an HPC person means by "Hadoop" (a specific release from Apache, in isolation) are different.

After that, though, it started looking even worse for HPC, at least for Cray in particular. The very last question from the audience came from a woman whose tone implied she was surprised no one else had asked it earlier, something to the effect of, "Isn't there a performance penalty when running Hadoop on a Cray HPC, due to the data being centralized on a single server as opposed to being distributed amongst the nodes as in a conventional Hadoop cluster?" The Cray representative responded that the performance ended up being about the same.

Questions were over by then, and the next logical and obvious question went unasked, out of both politeness to the Cray representative and a lack of time: what is the performance per dollar of a Cray running Hadoop vs. a conventional Hadoop cluster on commodity hardware?

Now, to be fair, I'm sure the Cray representative was referring to a comparison on a "Big Data" problem. To digress into this important distinction: there are (at least) three broad categories of problems.

1. Scientific simulation or processing. This is where conventional HPC is strong, because data is read at most once, and sometimes not at all (e.g. for 3D movie rendering), and computational power is paramount.

2. Big Data, where massive data from various sources is "dumped" onto a Hadoop cluster in the hopes that insights will be gleaned sometime in the future. In this second scenario, the data just sits on Hadoop and gets processed and reprocessed at various times in the future.

3. Streaming, which extends batch-oriented Big Data to real time. New streaming technologies such as Storm, Spark Streaming, and S4 address this, and I'm not aware of any HPC vendor addressing this class of problem. Indeed, when the first questioner pressed the issue of streaming technology, the Cray representative did not have an answer.

So while HPC excels in the domains where it has conventionally been applied (#1 above), there are domains (#2 and #3 above) where Hadoop, by bringing the compute to the data, excels, and does so cost-effectively. HPC vendors are now looking at Hadoop for two reasons:

1. To address the classes of problems that their systems don't handle as well or as cost-efficiently.

2. To leverage the comparative ease of programming Map/Reduce versus OpenMP/MPI, the large community and body of knowledge surrounding Hadoop, and the popularity of Hadoop and Big Data. Indeed, one SC13 panelist in another session mentioned the difficulty of attracting students into HPC programs due to the popularity of Hadoop and Big Data.
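That ease-of-programming point can be made concrete with a toy word count. This is a hypothetical single-process sketch of the Map/Reduce model, not real Hadoop code: the programmer supplies only the map and reduce functions, and the framework handles the distribution, shuffling, and grouping -- there is no explicit message passing or thread management as with MPI/OpenMP.

```python
from collections import defaultdict

# Illustrative single-process sketch of the Map/Reduce model.
# On a real Hadoop cluster, map_fn and reduce_fn would run distributed
# across many nodes; the programmer still writes only these two functions.

def map_fn(line):
    # Map: emit a (word, 1) pair for every word in the input record.
    for word in line.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce: sum all the counts emitted for one word.
    return (word, sum(counts))

def run_job(lines):
    # The framework's job: group map output by key ("shuffle"), then reduce.
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(run_job(["move compute to the data", "not data to the compute"]))
```

The equivalent MPI/OpenMP program would need explicit data partitioning, communication, and synchronization code, which is the gap the panelists were alluding to.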

It is this issue of topology that was my primary point in my previous blog post -- the idea of bringing compute closer to data. This idea was echoed by many speakers in many sessions at SC13 as a long-term goal and a way to reinvent HPC. The IBM/Nvidia announcement of their joint work was the only concrete realization of this much-shared goal that I've heard here at SC13.

To clarify my point about interconnects from my previous blog post: the interconnect speeds of HPC vs. Hadoop underscore the importance of topology. HPC often uses 40Gbps Infiniband (which has the additional advantage of remote direct memory access (RDMA) to eliminate CPU involvement in communication), whereas Hadoop has conventionally used just 1Gbps Ethernet. For the class of Big Data problems, Hadoop achieves its performance even with the much slower interconnect. There is certainly nothing wrong with Infiniband itself; the point is the opposite: that such a powerful technology is needed in HPC illustrates the weakness of conventional HPC topology, at least for some classes of problems.

But the set of Big Data problems that Hadoop is good at solving is expanding, thanks to projects like Apache Spark. Hadoop's disk-based implementation of Map/Reduce has conventionally been very poor at iterative algorithms such as machine learning. That is where Apache Spark shines: instead of distributing data across the disks of a cluster as plain Hadoop does, it distributes data across the RAM of the machines in a cluster. With Apache Spark, 10GbE or faster becomes useful -- no more waiting for data to stream off disk. The combination of RAM-based mass data storage and higher-speed interconnects is bringing Hadoop into even more domains conventionally handled by HPC.
A Hadoop cluster running Apache Spark over 10GbE, where each node has a lot of RAM (say, 512GB today in 2013), starts to look like HPC, and certainly at least starts to solve some of the same problems.

The overlap and "convergence" (as the Cray representative had a slide on) of HPC and Hadoop is growing, due to the performance improvements, expanded domains, and software infrastructure (e.g. streaming technology) in Hadoop, and due to HPC vendors adopting Hadoop for the two reasons stated above. The two communities are working to find common ground.

Going forward, that common ground is coming in the form of GPUs. Both the HPC and Hadoop communities are adopting GPU technology and heterogeneous computing at a rapid pace, and hopefully, as each community moves forward, they will be able to cross-pollinate architectures and understanding of problem domains. The Hadoop community has HPC-like problems, and the HPC community is having to deal with Big Data due to the explosion of data. While there are already many success stories of one or two racks of GPUs replacing room-sized HPCs, the IBM/Nvidia engineering partnership promises to take it to the next level beyond that, given their stated goal of moving compute closer to the data.

The big announcement at SC13 -- the international supercomputing conference sponsored by IEEE and ACM, now in its 25th year and held this year in Denver -- came from an IBM VP speaking at the Nvidia booth. I believe the VP was Dave Turek.

The announcement did make the press, even the Wall Street Journal, but the press is not reporting the magnitude of the announcement, possibly because they were working off press releases rather than the technical details relayed by the IBM VP late in the evening during tonight's opening-night gala portion of the conference.

It's more than just marrying Nvidia GPUs to IBM's forthcoming POWER8 processor. POWER7 is what powers Watson, and although the POWER series and the Xeon series leapfrog each other in raw chip benchmarks, IBM engineers its high-performance computers (HPC) holistically, with its own POWER CPUs, its own processor boards, and, most importantly, its own architecture to maximize throughput. Real-world applications run twice as fast on POWER systems as they do on Xeon systems.

No, the surprising, and welcome, proclamation from the IBM VP was "the end of the server in HPC" -- I haven't seen that quote yet in any press covering the general announcement. Anyone who has seen a modern-day "supercomputer" walks away disappointed: racks of commodity-like PCs simply strung together with Infiniband. This has led to the modern HPC lament, "We've got compute out the wazoo -- it's I/O we need more of."

That's one reason why Hadoop was a stunning upset to the HPC community. HPC usually separates compute from storage completely, with storage relegated to a system like Lustre. Oh, the file systems are so-called "parallel file systems," but all that really means is that the pipe is made fatter by having multiple parallel pipes, each of standard size. At a 30,000-foot PowerPoint view, it's still just two bubbles, one for compute and one for storage, connected by one line.

Hadoop introduced the novel idea of bringing the compute to the storage. (BTW, the other reason for Hadoop's popularity is that the Map/Reduce API and its even higher abstractions of Pig and Hive are orders of magnitude easier to program for than the marriage of MPI/OpenMP that has become the standard in the HPC world. But the easier Map/Reduce API actually comes with a performance penalty, compared to hand-tuned MPI/OpenMP software.)
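The compute-to-storage idea can be sketched as a toy scheduler. The names below are illustrative only, not the actual Hadoop scheduler API, and I'm assuming HDFS-style block replication: the scheduler tries to place each task on a node that already holds a replica of the task's data block, so bulk data never has to cross the (historically 1Gbps) network.

```python
# Hypothetical sketch of Hadoop-style data-locality scheduling.
# block_locations mimics HDFS metadata: which nodes hold a replica
# of each data block (3x replication).
block_locations = {
    "block-1": ["node-a", "node-b", "node-c"],
    "block-2": ["node-b", "node-c", "node-d"],
    "block-3": ["node-a", "node-c", "node-d"],
}

def schedule(blocks, free_nodes):
    """Prefer a node that holds the block (node-local); else fall back
    to any free node, which would require shipping data over the wire."""
    assignments = {}
    for block, replicas in blocks.items():
        local = [n for n in replicas if n in free_nodes]
        assignments[block] = local[0] if local else next(iter(free_nodes))
    return assignments

plan = schedule(block_locations, {"node-a", "node-b", "node-d"})
print(plan)  # every task lands on a node already holding its block
```

Contrast this with the two-bubble HPC picture above, where every byte the compute nodes touch must first travel from the storage system across the interconnect.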

When the IBM VP said "end of the server," he went on to explain that IBM intends to incorporate GPUs throughout the entire architecture and "workflow," as he put it, including directly into storage devices. He wouldn't elaborate on exactly where else in its architectures IBM would incorporate GPUs, but he said something to the effect of, "if you study our past architectures and see the direction we were going in and project out into the future, that probably won't be too far off."

This is quite a change from 2009, when Dave Turek said, "The use of GPUs is very embryonic and we are proceeding at an appropriate pace."

At the time, Turek believed the industry had entered a period of evaluation that would last 18 to 24 months, followed by a gradual dissemination into more conventional segments.

Putting GPUs in the storage takes the Hadoop idea of bringing compute to storage to the next level.

It's a whole new paradigm in HPC. The chapter of the past two decades of "lots of servers + Infiniband" is about to be closed, and a new one opened by IBM and Nvidia.