The UberCloud Experiment (also known as the HPC Experiment) started in July of last year to explore the process of accessing and using remote HPC resources, or HPC-as-a-Service. From an initial pool of 160 participating organizations around the world, 25 teams were created, each consisting of an industry end-user and their application, a software provider, a computational resource provider, and an HPC expert who ports the application onto the resource and serves as team manager.

Round 2, which began in December and features both CAE and life sciences applications, has attracted 300 participating organizations, with 20 established teams and more to come. The project management tool BaseCamp guides teams through 22 well-defined steps of the end-to-end process, while a new services directory, UberCloud Exhibit, lists UberCloud hardware, software and expertise services.

Now it is time to report back to the HPC community on what we have learned so far: which industry applications have been implemented on remote computing resources and in the cloud; how the teams have faced and resolved major roadblocks; what the optimal end-to-end process looks like; and what additional guidance and recommendations we can offer. While the Experiment continues, we are beginning to receive invitations to present these findings at conferences. Here are some of them – we invite interested parties to stop by and talk to us:

If you would like to participate in Round 3 of the UberCloud HPC Experiment, starting in April, you can register now and we’ll send you additional information. And if you want to learn more about some of the major services used in the Experiment, please visit the interactive UberCloud Exhibit.

Over the last four months, the Uber-Cloud Experiment has developed into a thriving community of colleagues who are increasingly benefiting from the use of remote HPC resources, including HPC in the cloud. With a successful Round One just wrapped up, the organizers of the experiment, Wolfgang Gentzsch and Burak Yenier, have opened up participation for Round Two, which will officially commence at SC12, in Salt Lake City, on November 15. Want to know more? Keep reading…

How do the members of this community benefit?

They benefit by joining the free Uber-Cloud Experiment and exploring together the end-to-end process of accessing and using remote computing resources for HPC applications. Today, this community consists of almost 200 organizations and individuals, all sharing the vision of enhancing their current computing capacity with powerful remote resources, on demand, at their fingertips. Gone are the days when computing resources were scarce, simulation models didn’t fit into memory, and computing took too long.

What is the Uber-Cloud HPC Experiment?

Round 1 of the experiment was primarily a first hands-on exploration of accessing and using compute resources remotely. For many participants, this was the first time they had gained access to remote computing resources. With minimal intervention in the process, we monitored each of the 25 teams and discovered the real roadblocks and how the teams removed them (or did not). We will soon publish a report addressing these findings.

You can see some of our current Round 1 participants here. In Round 1 they formed teams like Anchor Bolt, Resonance, Radiofrequency, Supersonic, Liquid-Gas, Wing-Flow, Ship-Hull, Cement-Flows, Sprinkler, Space Capsule, Car Acoustics, Dosimetry, Weathermen, Wind Turbine, Combustion, Blood Flow, ChinaCFD, Gas Bubbles, Side impact, and ColombiaBio. Want to read more about Round 1? Please see our first call for participation.

How does the experiment work?

Suppose an industry end-user is in need of additional compute resources, say for speeding up the design cycle, simulating a more sophisticated geometry or more complex physics, or running many more simulations for a higher-quality result. We, the experiment orchestrators, jointly look at this end-user’s application and requirements and select appropriate resources, software, and the best-suited HPC experts in our community. This ‘Team of Four’ – the end-user, software provider, resource provider, and HPC expert – then implements and runs the end-user’s task, and the results are delivered back to the end-user. Finally, the whole team extracts lessons learned and presents further recommendations, which can be published as a case study.

Experiment Round 2 starts now

Round 2 will be quite different: more advanced, more professional, and semi-automated, with more participants from CAE and the life sciences, more teams, a commercial production angle closer to reality, tools for project management, and tool-based measurement of effort and cost. One of the highlights of Round 2 will be the Uber-Cloud Services Directory, where hardware, software, and expertise providers can advertise their services to our Uber-Cloud community and to the wider HPC and Digital Manufacturing community.

The Experiment Kick-off at SC12

The final webinar of Round 1 and the kick-off of Round 2 will take place in the Intel booth at the Supercomputing Conference (SC12) in Salt Lake City at 11:00 am (local time) on November 15. If you can’t make it to SC12, the webinar will be aired live for our registered experiment attendees and as always, the slides will be made available to our registered experiment participants following the webinar.

Wolfgang Gentzsch and Burak Yenier are the creators and facilitators of the Uber-Cloud Experiment. Wolfgang is an HPC veteran. Having worked in leading positions in research, academia and industry for some 30 years, Wolfgang is now an HPC consultant and the chairman of the ISC Cloud conference series for HPC and Big Data in the Cloud. Burak is the vice president of operations at CashEdge (now part of Fiserv), a software-as-a-service company in Silicon Valley, which provides innovative payments and aggregation solutions to financial institutions.

Half-Time in the Uber-Cloud

(HPCwire, September 20, 2012) Since its inception on June 28, the Uber-Cloud Experiment has attracted over 160 industry and research organizations and individuals from 22 countries. They all have one goal: to jointly explore the end-to-end process of remotely accessing technical computing resources sitting in HPC centers and in the cloud. With Round One of the experiment wrapping up, the organizers have provided a "half-time" report of the project.

Since its first announcement on June 28 here on HPCwire, and its official start on July 20, the Uber-Cloud Experiment has attracted over 160 industry and research organizations and individuals from 22 countries. They all have one goal: to jointly explore the end-to-end process of remotely accessing technical computing resources sitting in HPC centers and in the cloud. The focus of this experiment is on engineering simulations performed by small and medium enterprises that expect a quantum leap in innovation and competitiveness by using high performance computing.

The benefits of remote access to HPC are widely recognized, and we have at our disposal most of the technology needed to access and run our engineering workloads on remote resources. But we still face challenges related to the human element: trusting the resource provider; giving up some control over our applications, data, and resources; security; provider lock-in; software licensing; an unfamiliar pay-per-use computing model; and a general lack of clarity in distinguishing hype from reality.

To explore these hurdles in detail and to learn more about this end-to-end process, we were able to build 20 teams, each consisting of an end-user and their application, the software provider, the computational resource provider, and an HPC and/or CAE expert who manages the team process. Thanks to our participants, the following teams have been established:

Team – Project Description

Anchor Bolt – Simulating steel-to-concrete fastening capacity for an anchor bolt
Resonance – Electromagnetic simulations of NMR probe heads
Radiofrequency – Radiofrequency field distribution inside a heterogeneous human body
Supersonic – Simulation of jet mixing in supersonic flow with shock
Liquid-Gas – Two-phase flow simulation of separation columns
Wing-Flow – Flow around an aerospace wing
Ship-Hull – Simulation of water flow around a ship hull
Cement-Flows – Burner simulation with different solid fuels in the mining industry
Sprinkler – Simulating water flow through an irrigation water sprinkler
Space Capsule – Aerothermodynamics and stability analysis of a space capsule
Car Acoustics – Low-frequency car acoustics
Dosimetry – Numerical EMC and dosimetry with high-resolution models
Weathermen – Large-scale, high-resolution weather and climate prediction
Wind Turbine – CFD simulations of vertical and horizontal wind turbines
Combustion – Simulating combustion in an IC engine
Blood Flow – Simulation of water/blood flow inside rotating micro channels
ChinaCFD – CFD using a homegrown C/C++ application
Gas Bubbles – Simulation of gas bubbles in a liquid mixing vessel
Side impact – Optimization of side-door intrusion bars under a crash
ColombiaBio – Analysis of biological diversity in a geographic region using R scripts

All 20 of these projects are underway today. Two teams are still defining their end-user project, 15 are in contact with their assigned computing resources and setting up the project environment, one is initiating and monitoring the end-user project execution, one is reviewing results with the end user, and one is already documenting its findings on the HPC-as-a-Service process. To illustrate the team process, we present two of the projects and their current status in more detail.

The first team’s end user faces a common problem: a periodic need for large compute capacity to simulate and refine potential product changes and improvements. The periodic nature of the HPC requirements means it is not feasible to maintain the desired capacity internally, as the company finds it difficult to justify capital expenditure on complex assets that may sit idle for long periods.

To date, the company has invested in a modest amount of internal HPC capacity, sufficient to meet base requirements. Additional HPC resources would allow the end user to greatly expand the sensitivity of current simulations and may enable product and design initiatives previously written off as “untestable.”

The HPC software being employed is CST Studio, a popular commercial application for electromagnetic simulations of many types. The application is currently operating in the Amazon cloud and the team has successfully completed a series of architecture refinements and scaling benchmarks. The hybrid cloud-bursting architecture allows local HPC resources residing at the end-user site to be utilized along with the Amazon cloud-based resources.
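The cloud-bursting logic described above can be reduced to a simple routing decision: run a job locally when free capacity suffices, otherwise send it to the cloud. The following sketch illustrates that decision only; the thresholds, the core-count model, and the function names are hypothetical, not taken from the team's actual setup.

```python
# Sketch of the bursting decision in a hybrid local+cloud HPC setup.
# The queue model and all numbers here are hypothetical, for illustration only.

def choose_backend(local_busy_cores: int, local_total_cores: int, job_cores: int) -> str:
    """Route a job to the local cluster if it fits in the free capacity,
    otherwise burst it to cloud-based resources."""
    free_cores = local_total_cores - local_busy_cores
    return "local" if job_cores <= free_cores else "cloud"

# A small job fits in the 16 free local cores; a large one bursts to the cloud.
print(choose_backend(local_busy_cores=48, local_total_cores=64, job_cores=8))   # local
print(choose_backend(local_busy_cores=48, local_total_cores=64, job_cores=32))  # cloud
```

In a real deployment this decision would live in the job scheduler and would also weigh data-transfer time and licensing, but the core routing idea is the same.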

At this point in the project, the team is still exploring the scaling limits of Amazon’s GPU-equipped EC2 instance types and is beginning new tests and scaling runs designed to test HPC task distribution via MPI. The use of MPI will enable them to leverage different EC2 instance type configurations and scale beyond technical limits imposed by the amount of memory on the NVIDIA GPU cards.
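Why does distributing work via MPI relax the per-GPU memory limit? Because each rank only has to hold its share of the mesh. The sketch below illustrates the arithmetic with a plain-Python partitioning function; the cell counts and the per-GPU capacity are hypothetical, and CST Studio's actual domain decomposition is of course internal to the product.

```python
# Sketch: why distributing a simulation across nodes relaxes per-GPU memory limits.
# All numbers are hypothetical; they do not reflect any real CST Studio model.

def partition_cells(total_cells: int, n_workers: int) -> list[int]:
    """Split a mesh of `total_cells` as evenly as possible across workers,
    returning the number of cells each worker must hold in memory."""
    base, extra = divmod(total_cells, n_workers)
    return [base + (1 if i < extra else 0) for i in range(n_workers)]

GPU_MEMORY_CELLS = 50_000_000   # hypothetical per-GPU capacity, in mesh cells
MODEL_CELLS = 180_000_000       # hypothetical model too large for a single GPU

for workers in (1, 2, 4, 8):
    per_worker = max(partition_cells(MODEL_CELLS, workers))
    fits = per_worker <= GPU_MEMORY_CELLS
    print(f"{workers} worker(s): {per_worker:,} cells each -> fits on one GPU: {fits}")
```

With one or two workers the hypothetical model exceeds a single card's memory; at four workers each share fits, which is exactly the effect the team is after with MPI-based task distribution.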

They believe they are currently at (or close to) the point where they are routinely running simulations that would not be technically possible using the end user’s local resources alone. They also intend to begin testing the Amazon EC2 Spot Market, in which cloud-based capacity is obtained from an auction-like marketplace at significant cost savings over traditional on-demand hourly prices.
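The appeal of the spot market is simple arithmetic: the same machines for the same hours at a lower hourly rate. The sketch below makes that concrete; the prices and instance counts are hypothetical placeholders, since real EC2 spot prices fluctuate with demand.

```python
# Sketch: estimating spot-market savings for a batch HPC run.
# Both hourly rates below are hypothetical, not actual Amazon prices.

def run_cost(instances: int, hours: float, hourly_price: float) -> float:
    """Total cost of running `instances` machines for `hours` at a flat hourly rate."""
    return instances * hours * hourly_price

on_demand = run_cost(instances=16, hours=12, hourly_price=2.10)  # hypothetical on-demand rate
spot      = run_cost(instances=16, hours=12, hourly_price=0.65)  # hypothetical spot rate

savings = 1 - spot / on_demand
print(f"on-demand: ${on_demand:.2f}, spot: ${spot:.2f}, savings: {savings:.0%}")
```

The trade-off, not shown here, is that spot capacity can be reclaimed when the market price rises, so it suits interruptible batch simulations better than tightly coupled runs.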

In the second project, ANSYS CFX is used to simulate a flash dryer in which hot gas evaporates water from a solid. The team consists of FLSmidth as the end user, Bull as the resource provider with its extreme factory (XF) HPC-on-demand service, ANSYS as the software provider, and science + computing ag as the team experts.

FLSmidth is the leading supplier of complete plants, equipment and services to the global minerals and cement industries. The end user currently needs about four to five days to complete a simulation run on the local IT infrastructure, and would like to reduce the total throughput time of the project and, in a second step, increase the mesh size to refine the results – without investing in hardware that may not always be fully utilized. For this, the simulation must run on more cores, with more memory, across more nodes connected by a high-speed network.
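How much can more nodes shorten a multi-day run? A rough Amdahl's-law estimate gives a feel for it. In the sketch below, the baseline runtime and the serial fraction are assumptions for illustration only, not measurements from the FLSmidth project.

```python
# Sketch: Amdahl's-law estimate of how adding cores shortens a multi-day run.
# The baseline runtime and serial fraction are hypothetical assumptions.

def amdahl_runtime(baseline_hours: float, serial_fraction: float, n_cores: int) -> float:
    """Predicted wall time when the parallel part scales ideally with core count."""
    serial = baseline_hours * serial_fraction
    parallel = baseline_hours * (1 - serial_fraction) / n_cores
    return serial + parallel

BASELINE_HOURS = 4.5 * 24   # ~4.5 days on the local cluster (hypothetical)
SERIAL_FRACTION = 0.05      # assume 5% of the run cannot be parallelized

for cores in (1, 8, 32, 128):
    hours = amdahl_runtime(BASELINE_HOURS, SERIAL_FRACTION, cores)
    print(f"{cores:4d} cores -> {hours:6.1f} h")
```

Even under these optimistic assumptions, the serial fraction caps the achievable speedup, which is why refining the mesh (more parallel work per run) often pays off better at high core counts than simply adding nodes.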

XF provides 150 teraflops of computing power with InfiniBand, GPUs, and currently about 30 installed applications; others are added on demand. Users can access XF through an easy-to-use web portal or direct login.

In this project, XF has given the end user access and integrated ANSYS CFX into a web interface for submitting jobs. ANSYS has granted licenses for the duration of the project, and the end user can manage them easily through the portal. Preparations to run the jobs are almost complete, and the first test runs should start shortly.

Announcing Round Two of the Uber-Cloud Experiment

We consider Round One proof of concept: yes, remote access to HPC resources works, and there is a real need for it. And yes, there are hurdles along the way, but we know how to overcome them.

During the half-time webinar we asked attendees whether they would like to participate in a second round of the Uber-Cloud Experiment; 97 percent said they would. We therefore decided to start a new round immediately after the first one completes. It will run from mid-November to mid-February.

Round Two of the experiment will be more professional: the end-to-end process of identifying, accessing, and using remote resources (hardware, software, expertise) will become more structured, standardized, and tool-based. We will also handle more teams and more applications beyond CAE, and offer a list of additional professional services, for example, measuring the team effort. Finally, existing teams will be encouraged to use other resources, existing participants can work in new teams, and new participants can join and form new teams.

For anyone interested in learning more about the experiment or to register for Round Two, go to the Uber-Cloud Experiment website.

About the Authors

Wolfgang Gentzsch and Burak Yenier are the creators and facilitators of the Uber-Cloud Experiment. Wolfgang is an HPC veteran. Having worked in leading positions in research, academia and industry for some 30 years, Wolfgang is now an HPC consultant and the chairman of the ISC Cloud conference series for HPC and Big Data in the Cloud. Burak is the vice president of operations at CashEdge, a software-as-a-service company in Silicon Valley, which provides innovative payments and aggregation solutions to financial institutions.