Everyone agrees that HPC is needed to foster innovation and strengthen competitiveness. Why, then, is HPC still reserved for a small community of top experts?

Since the days of the early Beowulf clusters in the late 1990s, esteemed colleagues in our HPC community have regularly predicted that HPC for the masses was at the door. But the door has remained firmly shut. Beowulfs were easy to build; grids connected every researcher to powerful supercomputers; and clouds now offer computing on demand at your fingertips. But where are the masses of users? Haven’t they recognized that this massive compute power is there, just waiting for them? According to the US Council on Competitiveness, fewer than 5 per cent of manufacturers use HPC servers for the computer simulations needed to design and develop their products.

It seems that, with every big step forward in computing, we have added one more layer of complexity, or one more hurdle, and so we have scared away the average end-user, who has decided instead to stick to their workstation — still regularly unhappy about its performance and memory limitations, but thoroughly familiar with it.

The pathway to HPC is strewn with challenges, many of them well-known for decades: cumbersome and time-consuming procurement; complex architectures and applications; high total cost of ownership (TCO); and so on. Anyone buying an HPC server needs an army of experts to pick the right architecture, operate it, and support the users. And all the while, what users want is an environment that looks like their own workstation. Otherwise they turn away and continue hugging their workstation.

But there may be a silver lining for HPC — in the cloud. There are some very good reasons why HPC clouds will follow enterprise clouds, at a respectful distance, in my view:

HPC clouds come with huge benefits: Users get HPC without having to buy and operate an HPC system. They can continue to use their own desktop system for daily design and development work, and then submit larger, more complex, time-consuming jobs into the cloud. In addition to on-demand access to ‘infinite’ resources they also get reduced capital expenditure with a pay-per-use charging model. Their businesses gain greater agility, and higher-quality results to keep existing customers and to win new ones, as well as reduced risk and fewer product failures.

The growing acceptance of general enterprise cloud computing services will, with some time delay, generate strong acceptance of HPC-related services for engineering simulations in the cloud. Because cloud services are widely accepted and used today at the enterprise level (e.g. for ERP, CRM, and administration), their growing acceptance and use in a company’s R&D department seems like a natural progression.

Big manufacturing companies are encouraging their supply-chain partners to perform high-quality, end-to-end simulations on HPC systems, in an effort to reduce failure rates and increase quality across the entire supply chain.

International initiatives such as the Missing Middle by the US National Center for Manufacturing Sciences; the UberCloud HPC Experiment and Marketplace; and the trade and technical media are currently creating strong awareness of the benefits of in-house HPC and HPC as a Service (HPCaaS) in the manufacturing industry.
Despite these advantages, there are well-known and widely discussed roadblocks, such as complex access processes to clouds; conservative software licensing; losing control over assets in the cloud; slow transfers of large data sets; incompatible clouds; and a jungle of cloud hardware, software, and expertise providers.

In cloud-based HPC, too, it can seem too hard for the average user to find what they are looking for. But one important recent technological development might have the power to change the world of HPC cloud: UberCloud Containers.

The UberCloud started in mid-2013 with an open platform, called Docker, that can package an application and its dependencies in a virtual container that runs on any modern Linux server. The UberCloud then enhanced Docker to suit the technical computing applications used in science and engineering.
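To make the packaging idea concrete, here is a minimal sketch of how an application and its dependencies might be described as a Docker image. The base image, package names, solver binary, and paths are all illustrative assumptions, not an actual UberCloud recipe:

```dockerfile
# Hypothetical sketch: bundle an engineering solver and its
# runtime dependencies into one portable image.
FROM centos:7

# Install the libraries the (hypothetical) solver depends on
RUN yum install -y openmpi libgfortran && \
    yum clean all

# Copy the solver binary and its input templates into the image
COPY ./bin/my_cfd_solver /opt/solver/bin/my_cfd_solver
COPY ./templates/ /opt/solver/templates/

ENV PATH=/opt/solver/bin:$PATH

# Default command: run the solver on a mounted case directory
CMD ["my_cfd_solver", "--case", "/data/case"]
```

Because the image carries its own libraries, the resulting container runs unchanged on any Linux host with a compatible kernel — the user never installs or tunes the application stack on the target machine.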

UberCloud Containers form the basis for the user-friendly online UberCloud Marketplace. They rely on Linux kernel facilities, such as cgroups and namespaces, accessed through libraries like libcontainer and LXC, which are already part of many modern Linux operating systems. The run-time components to launch UberCloud Containers are widely distributed by the Docker platform and do not require an additional capital investment.

The Containers are launched from pre-built images distributed through a central registry hosted by UberCloud. Software and operating system updates, enhancements, and fixes become instantly available for the next container launch in an automated fashion.

The notion of a pre-built image may sound familiar. The notion has been at the heart of virtualization, a popular technology for breaking down a physical computer environment into finer logical pieces. However, unlike virtualization, UberCloud Containers do not rely on a hypervisor; instead, they share the host operating system’s kernel and application libraries, leading to performance characteristics that are comparable to bare metal installations.
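The shared-kernel point can be seen from the command line. Assuming a Linux host with Docker installed (an illustrative session, not a required workflow):

```shell
# A container reports the *host's* kernel version -- there is no
# guest operating system and no hypervisor in between:
uname -r                           # kernel version on the host
docker run --rm centos:7 uname -r  # prints the same kernel version

# A virtual machine, by contrast, boots its own kernel under a
# hypervisor, paying start-up time and memory overhead that
# containers avoid.
```

This is why container start-up is near-instant and compute performance stays close to a bare-metal installation.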

UberCloud Containers have several advantages, such as:

Portability: They can run on most infrastructures with minimal modification. The required run-time environment is distributed as open source, is well documented, and is supported by a large community of users.

Manageability: UberCloud manages the contents of the containers and keeps them up to date, holding installation, tuning, maintenance, and testing costs to a minimum.

Variety: Engineering applications, tools, and operating systems are constantly being added to the portfolio.

Instant provisioning: The Containers start within seconds, with a single command. Short provisioning times ensure end-users receive the resources they need, when they need them.

High utilization: Multiple containers can be run on a single server if the individual user-jobs require a small amount of resources.

Audits: UberCloud develops its Containers with a process that’s easily understandable by any Linux user. IT audits of the components, configurations, and security settings of the Containers are easy to perform.
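The instant-provisioning and high-utilization points above can be sketched with standard Docker commands. The image name, solver, and resource values here are illustrative assumptions, not UberCloud's actual invocation:

```shell
# Illustrative only: launch a hypothetical solver container with a
# single command -- no OS install, no provisioning workflow.
# The --cpus and --memory flags translate into cgroup limits, capping
# this job at 4 cores and 8 GB so that several small jobs can share
# one server without interfering with each other.
docker run --rm \
  --cpus=4 --memory=8g \
  -v "$PWD/case:/data/case" \
  ubercloud/solver:latest \
  my_cfd_solver --case /data/case
```

The container is pulled from the central registry on first use and starts in seconds thereafter; when the job finishes, its resources are immediately freed for the next container on the same host.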

This ground-breaking container technology can reduce or even eliminate many of today’s HPC cloud hurdles, making access to the cloud as easy as accessing a workstation. Packaged in a suitable container like this, HPC will rocket high into the cloud.

Wolfgang Gentzsch is co-founder and president, the UberCloud. He is also Chairman of the ISC Cloud’14 conference, which kicks off Sept. 29 in Heidelberg, Germany.
