Cloud computing, software as a service, and infrastructure as a service are becoming concrete.

Traditional data centers are losing favor.

A new era in storage is dawning.

A reasonable enough list, but hey, everyone is a critic, and I would have added GPUs. It looks to me like the general accelerator rush is coming to an end with the demise of ClearSpeed and the reminder that FPGAs are still really hard to get performance out of. But GPUs look like they are going to run in 2009, although how far they run beyond 2009 depends upon how well they fare when (or if) manycore processors finally come to market.

The storage part of that article — about content addressable storage — is worth a quick read.

Comments

I disagree with the comment that “the general accelerator rush is coming to an end.” On the contrary, speaking with customers, it seems to be picking up. ClearSpeed had issues as a company and as a product, but it was not the only accelerator provider.

The argument for accelerators is compelling for smaller users in terms of bringing more computing power to their applications. It is also compelling for software ISVs looking to help their customers decrease hardware costs in order to free up more budget for the ISVs’ software.

Accelerators offer some combination of more processor cycles per wallclock tick and more efficient use of those cycles. As we demonstrated recently with GPU-HMMer, a single machine with 3 GPUs can outperform a more power-hungry, harder-to-maintain cluster on the same code. That is, there is a financial argument, an ease-of-use argument, and, believe it or not, a green argument to be made for using accelerators. As you note in the subsequent article, the memory wall is a problem, and curiously enough GPUs suffer from their own version of it, though it is ~10x further out than the memory wall on the host (what we call the computing substrate).
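The “green” argument above comes down to throughput per watt. A minimal back-of-the-envelope sketch, with entirely hypothetical power and throughput figures (not measured GPU-HMMer numbers), shows why a 3-GPU workstation matching a small cluster on the same code implies a large efficiency win:

```python
# Illustrative performance-per-watt comparison. All figures below are
# assumptions for the sake of the arithmetic, not measured results.

def perf_per_watt(relative_throughput, watts):
    """Relative throughput delivered per watt of power draw."""
    return relative_throughput / watts

# Hypothetical: both systems deliver the same throughput on the same code,
# as in the GPU-HMMer anecdote, but draw very different amounts of power.
workstation = perf_per_watt(relative_throughput=10.0, watts=1200.0)  # 3 GPUs + host
cluster     = perf_per_watt(relative_throughput=10.0, watts=8000.0)  # small cluster

print(f"workstation: {workstation:.5f} units/W")
print(f"cluster:     {cluster:.5f} units/W")
print(f"green advantage: {workstation / cluster:.1f}x")
```

With these assumed numbers the workstation comes out roughly 6.7x more efficient per watt; the point is the shape of the argument, not the specific figures.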

ClearSpeed failed because its business model required ~$5k per accelerator for ~10x wallclock speedup on ordinary applications, your code needed to be ported, and ClearSpeed accelerators were not ubiquitous. Programmable GPUs are in ~1E+7 devices, cost $150 to $1,800 per unit, and give 5-30x per application (not per kernel). Couple this ubiquity with the low cost of the platform (I can, and do, develop CUDA code on my laptop) and the zero cost of the tools, and you have something of interest for people who need ever more computing capability on an ever-decreasing budget.
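The economics in that paragraph can be made concrete as dollars per unit of speedup, using the figures quoted above (ClearSpeed at ~$5k for ~10x; GPUs at $150 to $1,800 for 5-30x). Pairing the cheapest GPU with the best speedup and the priciest GPU with the worst is deliberately an extreme-case sketch, not real pricing data:

```python
# Back-of-the-envelope cost per unit of application speedup, using the
# figures from the comment above. Pairings of price and speedup are
# illustrative extremes, not actual product data.

def dollars_per_x(price_usd, speedup):
    """Cost in dollars for each 1x of application speedup."""
    return price_usd / speedup

clearspeed = dollars_per_x(5000, 10)  # ~$5k for ~10x
gpu_best   = dollars_per_x(150, 30)   # cheapest GPU, best-case speedup
gpu_worst  = dollars_per_x(1800, 5)   # priciest GPU, worst-case speedup

print(f"ClearSpeed:      ${clearspeed:.0f} per x")
print(f"GPU (best case): ${gpu_best:.0f} per x")
print(f"GPU (worst case):${gpu_worst:.0f} per x")
```

Even the worst-case GPU pairing ($360 per x) undercuts the ClearSpeed figure ($500 per x), before counting the free tools and the ability to develop on a laptop.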

We may have to agree to disagree, but accelerators appear to have a very bright and long future ahead of them in HPC.

Joe – I agree entirely about GPUs (for now, at least). My “general accelerator” comment was meant to refer to the broader class that GPUs belong to, including ClearSpeed and FPGAs and whatnot. I was trying to convey with my next sentence, “But GPUs look like they are going to run in 2009…”, that I think GPUs are the specific exception to the general rule, but I think I failed to be clear!
