modeling and simulation – HPCwirehttps://www.hpcwire.com
Since 1987 – Covering the Fastest Computers in the World and the People Who Run Them

U.S. Air Force Taps SGI ICE
https://www.hpcwire.com/2013/07/04/sgi_shows_off_spirit_for_air_force/
Thu, 04 Jul 2013

A new SGI system has been installed at Wright-Patterson Air Force Base in Dayton, Ohio, as part of the Department of Defense's HPC Modernization Program. The Air Force will be using the new SGI ICE X machine for particular modeling and simulation efforts as well as....

On this Independence Day in the United States, we bring news of a new system set to boost the country's defense capabilities, housed at one of the nation's top Air Force facilities.

The Air Force Research Laboratory DoD Supercomputing Resource Center (AFRL DSRC) has a new addition to its fleet of supers: an SGI ICE X system called Spirit, which will be housed at Wright-Patterson Air Force Base in Dayton, Ohio.

The system, which ranks in the top 20 of the Top500 list and is capable of 1.4 petaflops, will support various research, development, test and evaluation projects, particularly on the aircraft and ship design fronts.

Spirit boasts 4,608 nodes and 73,728 Xeon cores humming at 2.6 GHz, as well as 146 TB of memory and 4.6 PB of disk space.
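Those figures hang together on a quick back-of-the-envelope check. Assuming Sandy Bridge-era Xeons retiring 8 double-precision flops per core per cycle (an assumption; the article does not name the processor generation), theoretical peak lands just above the quoted 1.4 petaflops:

```python
# Back-of-the-envelope peak for Spirit. The 8 flops/cycle figure is an
# assumption (AVX-capable Xeons); the article does not specify it.
cores = 73_728
clock_hz = 2.6e9
flops_per_cycle = 8

peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
print(f"{peak_pflops:.2f} PF peak")  # ~1.53 PF; the quoted 1.4 PF is
# consistent with a sustained (Linpack-style) figure somewhat below peak.
```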

“By providing the technology solutions behind Spirit, SGI is further powering the work of the armed forces in the success of their missions and the safety of the men and women,” said Jorge Titinger, president and CEO of SGI. “It is with great honor to see the results of the SGI ICE X system Spirit being included in the TOP500, further validating our work together and the effective supercomputing architecture we built to address complex HPC needs.”

The DSRC is one of five HPC sites in the Department of Defense’s High Performance Computing Modernization Program. The goal of the program, which is managed on behalf of the DoD by the U.S. Army Engineer Research and Development Center, is to provide the DoD with supercomputing capabilities and support.

To learn more about the new system, we talked with Jeff Graham, director of the AFRL DSRC, and John West, who heads up the HPC Modernization Program.

What does the current HPC environment look like at Wright-Patt and other bases?

Graham: The AFRL DSRC provides full-spectrum services to customers across the DoD. We recently christened the modern Information Technology Complex, which will house all of our future large-scale systems – the first of which is the SGI ICE X system, Spirit. This new complex will give us eight megawatts of battery/rotary Uninterruptible Power Supply capacity coupled with backup diesel generators, in order to ensure consistent delivery of service and minimize the potential for damaging systems due to a power loss.

All of our human resources and monitoring activities will continue to take place across the street, where our Mass Storage, Test and Development, and training resources are housed. In addition to HPCMP systems, we support several smaller, customer-funded clusters for specialized requirements, in order to support critical research from the desktop up to mid-range clusters.

Can you comment on the decision to go with an SGI ICE X system over other alternatives?

West: The DoD HPCMP has a rigorous and well-refined process for acquiring new supercomputers, one that takes into account the most recent data on DoD mission requirements and user behavior in constructing an HPC system purchase portfolio that, in total, best meets the needs of the Department.

The purchase and installation of Spirit at the AFRL DSRC is part of the most recent acquisition completed by the HPCMP, which placed large-scale computing systems from IBM, Cray, and SGI at four supercomputing centers.

Each of the new systems acquired has unique attributes that match it most effectively to a portion of our user community’s workload — when managed as a portfolio of capability, they represent the most effective solution to meeting the computational requirements of the DoD.

Modeling and simulation are likely a core use of the machine–did you consider the applicability of acceleration/coprocessors on the system?

West: Not all modeling and simulation codes benefit from accelerators, due to the round trip latency that must be overcome between the processor and co-processor and the mismatch in the memory use patterns of many well-tuned HPC codes as compared to the memory structure of a co-processor.
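West's point about round-trip latency can be made concrete with a toy cost model (all numbers are invented for illustration, not DoD measurements): offloading a kernel to a co-processor only pays off when the compute time saved exceeds the transfer and launch overhead.

```python
def offload_wins(cpu_time_s, gpu_time_s, bytes_moved,
                 pcie_bytes_per_s=8e9, launch_overhead_s=10e-6):
    """Return True if offloading a kernel beats running it on the CPU.

    Illustrative model: total accelerator cost = transfer both ways +
    launch overhead + accelerator compute. All default parameters are
    assumed, not measured.
    """
    transfer_s = 2 * bytes_moved / pcie_bytes_per_s  # to device and back
    return gpu_time_s + transfer_s + launch_overhead_s < cpu_time_s

# A long-running kernel amortizes the transfer cost...
print(offload_wins(cpu_time_s=1.0, gpu_time_s=0.1, bytes_moved=100e6))   # True
# ...while a tiny kernel is dominated by it, as West describes.
print(offload_wins(cpu_time_s=1e-4, gpu_time_s=1e-5, bytes_moved=10e6))  # False
```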

The HPCMP is examining the applicability of specific accelerators to key applications and may make strategic investments based on these results in upcoming acquisitions. The HPCMP currently does have small platforms containing some of the current commercially available options, including Tesla K20s and Xeon Phis, allowing side-by-side comparisons in order to determine the sensitivity of applications to different co-processor memory bandwidths (with and without ECC), memory sizes, hardware structures, and programming approaches.

Describe the HPCMP’s role with this system.

West: As described in the answer to question 2, the AFRL DSRC is funded by the HPCMP and managed on our behalf by AFRL. The AFRL DSRC is one of the five DoD Supercomputing Resources Centers funded and managed by the HPCMP for the DoD’s technical computing user community, and this acquisition is part of the HPCMP’s annual supercomputing procurement. The acquisition is managed by the HPCMP program office with significant input from the DSRCs and representatives of our user community; funding for the acquisition is provided by the HPCMP program.

Vulcan’s New Planet of Industrial Exploration
https://www.hpcwire.com/2013/06/26/vulcan_opens_new_planet_to_industrial_exploration/
Wed, 26 Jun 2013

The eighth-ranked Blue Gene/Q Vulcan system at Lawrence Livermore National Lab has opened its doors for business – at least to companies reliant on advanced modeling and simulation. The 5-petaflop super has already been used in a number of incubator projects but now that they are extending the focus of....

American competitiveness, particularly on the modeling and simulation front, has been a key initiative with a lot of lip service in the last decade. Several facilities, including the Ohio Supercomputer Center, have lent helping hands to bring HPC to industry–and fresh efforts are springing up, including at Lawrence Livermore National Laboratory (LLNL).

The difference between what centers like Ohio’s and LLNL’s are doing is truly a matter of scale. Those selected as industrial users will have a crack at a 5-petaflop machine that sits at number 8 on the recently updated Top500 supercomputer list.

With 390,000 cores and a new host of commercial applications to tweak, LLNL is providing a much-needed slew of software and scaling support. The lab is lining up participants to step to the high-core line to see how more compute horsepower can push modeling and simulation limits while solving specific scalability issues.

HPC Innovation Center Director Fred Streitz says that Vulcan offers “a level of computing that is transformational, enabling the design and execution of studies that were previously impossible, opening opportunities for new scientific discoveries and breakthrough results for American industries.”

“It’s common for us to have people come to us because they’re hitting the limits on what they can do with commercial codes,” says Streitz. “It’s taking them too long to get answers or they want to model and simulate a large enough system with enough physics and they want to understand what the ROI would be to acquire more computing power.”

In other words, the project isn’t about providing supported access to high-end resources as a “gimme” in the name of competitiveness; it’s about convincing potential users that their investment in high performance computing is worth the cost and effort. It’s a matter of going from a workstation/departmental-cluster approach of modeling in two dimensions to hitting warp drive with a fully realized 3D high-resolution model. The idea is that the implications for competitiveness could be big enough to tip the scales in favor of a massive investment in HPC systems – but of course that kind of warp drive comes with some practical challenges on the software side.

The lab and IBM are providing software support to help raise code to the Top 10 system bar, which is the real emphasis of the effort. In many ways, the hardware is the easy (and expensive) part of the process – it’s the software angle that has LLNL researchers scrambling for solutions.

The Blue Gene system does present a specific architectural framework that the code must be tuned for, however. While some of that tuning is specific to the IBM architecture, Streitz says that, in general, tackling the scalability issues of the hundreds-to-thousands-of-cores jump yields solutions that are machine agnostic.

When asked about the viability and usefulness of other architectures and approaches, including GPU acceleration (since that could be a prime fit on many of the modeling/sim applications), Streitz said that there is curiosity, but for now it’s a matter of getting businesses to new scale.

LLNL and its industrial HPC partners have already wrapped up six projects that cross the academia-commercial border via the LLNL HPC4Energy incubator program. The HPC Innovation Center is now connecting more users with on-demand proprietary access to Vulcan and throwing in the support of LLNL computer scientists and engineers to solve the pressing problem of dramatic scaling.

A number of companies have already tapped Vulcan’s high core counts and the in-house software expertise, including General Electric’s Energy Consulting division, which will be amping up its PSLF simulation performance and capability, and Bosch, which is targeting simulations of novel internal combustion engines.

These are large companies with existing, sizable clusters and in-house software resources of their own. While it might boost American competitiveness to have access to advanced modeling and simulation with Vulcan, smaller companies require the same opportunities. Streitz pointed us to smaller organizations that are also using Vulcan for the same purposes, including Potter Drilling, which will be improving their thermal spallation drilling processes with advanced simulation.

The system will still serve lab-specific needs through LLNL’s High Performance Computing Innovation Center as well as chew on Department of Energy and National Nuclear Security Administration projects.

Novel capability will deliver the best of high-performance computing and cloud computing

Next-generation neutron scattering requires next-generation data analysis infrastructure. And that means not just more data and accelerated reduction, translation, and analysis, but linking the neutron scattering on a beam line live to a simulation platform where modeling and simulation can guide the experiment.

As the data sets generated by the increasingly powerful neutron scattering instruments at Oak Ridge National Laboratory’s (ORNL’s) Spallation Neutron Source (SNS) grow ever more massive, the facility’s users require significant advances in data reduction and analysis tools so they can cope. SNS is the world’s most intense pulsed, accelerator-based neutron source for scientific research and development.

Funded by the US Department of Energy Office of Basic Energy Sciences, this national user facility hosts hundreds of scientists from all over the world every year, most of whom are engaged in materials science research. Now the SNS data specialists have teamed with ORNL’s Computing and Computational Sciences Directorate to form a strategic alliance to meet the neutron science users’ next-generation requirements.

The result is ADARA – the Accelerating Data Acquisition, Reduction, and Analysis Collaboration project, which comprises individuals from across ORNL spanning five divisions: the Neutron Sciences Directorate’s (NScD’s) Neutron Data Analysis and Visualization Division (NDAV) and Research Accelerator Division, the Computing and Computational Sciences Directorate’s (CCSD’s) Computer Science and Mathematics Division, National Center for Computational Sciences (NCCS) and Information Technology Services Division.

The collaboration between neutron sciences and supercomputing, two of ORNL’s most high-powered research centers, has created a new data infrastructure that will enhance users’ ability to reduce and analyze data as they are taken; create data files instantly after acquisition, regardless of size; reduce a data set in seconds after acquisition; and provide the resources for any user to do post-acquisition reduction, analysis, visualization, and modeling – not just on site – but literally from anywhere.

At neutron experimental facilities today, research scientists collect data during experiments and do an initial analysis of their findings. The detailed data analysis that follows can take from minutes to months. For maximum effect, visiting users manipulate their data – reduce it, analyze it, and, increasingly, visualize and model it on supercomputers – to fully understand the content. This is an interactive process.

Galen Shipman is data system architect for the Computing and Computational Sciences Directorate and principal investigator of the ADARA project. We asked him to tell us what improvements SNS users can expect in the coming months.

What are the data access and analysis problems that confront SNS users today?

Galen Shipman: Much of the software infrastructure for data acquisition, reduction, and analysis at SNS was designed more than a decade ago. It is a good system and has served the needs of the users, but there is a need to shorten the time from experiment to the scientific result. That is really what the ADARA project is about. It’s about decreasing this time by providing a streaming data infrastructure and an integrated high-performance computing (HPC) capability that provides users with instant feedback from experiments at SNS.

We began in October 2011 with an analysis of the current infrastructure, working with experts at SNS. We quickly found that one of the major issues was how long it took to start getting feedback from an experiment on a beam line as it is running. What the scientist often wants to see from an experiment at SNS is an energy spectrum, but the data captured and provided to the user are simply the position and time of flight of neutrons as they travel through a material and hit a bank of detectors surrounding the material.

The current process of providing this feedback entails capturing all the neutron event data and saving it to a data file. After the entire experiment is complete, the data files are translated to a common data format known as NeXus. After this translation is complete, a data reduction process uses MANTID, a data-reduction platform, to transform the raw neutron event data to an energy spectrum or diffraction pattern. Finally, then, the user starts seeing the results of the experiment.
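The core arithmetic of that reduction – raw position and time-of-flight events becoming an energy spectrum – is simple kinematics. A minimal sketch with invented numbers (the real MANTID reduction handles calibration, instrument geometry, and much more):

```python
import numpy as np

M_N = 1.674927e-27   # neutron mass, kg
MEV = 1.602177e-22   # joules per milli-electronvolt

def tof_to_energy_mev(tof_s, flight_path_m):
    """Time of flight over a known path gives the neutron's speed,
    and hence its kinetic energy: E = 1/2 * m * (L/t)^2."""
    v = flight_path_m / tof_s
    return 0.5 * M_N * v**2 / MEV

# Hypothetical events over an assumed 10 m flight path, histogrammed
# into a crude energy spectrum.
tof = np.array([2.0e-3, 2.5e-3, 3.0e-3, 4.0e-3])   # seconds
energies = tof_to_energy_mev(tof, flight_path_m=10.0)
counts, edges = np.histogram(energies, bins=5)
print(energies.round(1))   # roughly 33-131 meV, a plausible thermal range
```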

Often reduction is a short process. It can be minutes for small data sets on short experiments. In other cases, it can take a day or more – a full day from completion of the experiment and then another day to actually start getting feedback on what it meant and what the results are. This long lead time from the experiment to receiving feedback from the experiment can significantly impact the productivity of scientists at SNS.

How did the team propose to speed up data reduction and get to that energy spectrum faster?

Shipman: The concept, the leap forward, is to go from experiment to data reduction to obtaining an energy spectrum nearly instantaneously, while the experiment is still running.

Rather than the current approach of saving data in “buckets” and, once the bucket is full, handing the bucket off to the next process, we do a streaming approach. As data are being captured, we concurrently do translation. Every single event coming off a detector is translated live to a common data format. While doing translation, we are also doing data reduction, so as those events are coming off the detectors, we are also doing live data reduction into an energy spectrum.

How do you enable simultaneous translation and reduction of the neutron events coming off the detectors?

Shipman: For the architecture, we’ve leveraged some of the techniques that we were already using in HPC, as well as some of the techniques from more traditional, distributed computing. The fundamental architecture for our streaming data system is built upon a high-performance publish/subscribe system. We have a system we call the stream management service (SMS). It collects information from multiple feeds: from the neutron detectors, the experiment environment, such as temperature within the sample environment, and orientation of the sample. This information is what we call slow controls information. We also collect data from a variety of other sources such as Fermi choppers [devices that block the neutron beam for a fraction of time in milliseconds]. All of these data are “published” to the SMS, which then aggregates the data into a single, common network stream that can be sent to one or more downstream “subscribers.”

One of the downstream subscribers we have developed has been dubbed the Streaming Translation Service, which translates the unified neutron event stream on the fly and creates NeXus files live, as the experiment is conducted. The instant an experiment is over, the full NeXus file is created. It’s done. It doesn’t matter if it is a terabyte. It doesn’t matter if it is just a few megabytes.

Another downstream subscriber we have developed, known as the Streaming Reduction Service, leverages the MANTID system to transform the neutron event stream live from simple detector position and time of flight to an energy spectrum in real time. This provides scientists at SNS with real-time feedback from their experiments, coupled with the Mantid reduction and analysis platform.
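The pattern Shipman describes – many publishers aggregated by the SMS into one stream, fanned out to translation and reduction subscribers – is classic publish/subscribe. A minimal in-process sketch, with names and interfaces invented for illustration (the real ADARA services are networked, not callbacks):

```python
from collections import defaultdict

class StreamManagementService:
    """Toy aggregator in the spirit of ADARA's SMS: publishers push
    records in, the service merges them into one stream and fans that
    stream out to every subscriber."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, source, event):
        record = {"source": source, **event}
        for deliver in self.subscribers:   # one unified downstream stream
            deliver(record)

sms = StreamManagementService()

translated = []              # stand-in for the Streaming Translation Service
spectrum = defaultdict(int)  # stand-in for the Streaming Reduction Service

def reduce_live(record):
    # bin neutron events into a crude "spectrum" as they stream past
    if record["source"] == "detector":
        spectrum[record["tof_bin"]] += 1

sms.subscribe(translated.append)
sms.subscribe(reduce_live)

sms.publish("detector", {"tof_bin": 3})
sms.publish("slow_controls", {"temperature_K": 120.0})
sms.publish("detector", {"tof_bin": 3})
print(len(translated), dict(spectrum))   # 3 {3: 2}
```

Note that the translation subscriber sees every record (detector events and slow-controls data alike), while the reduction subscriber filters for detector events only – mirroring the division of labor in the article.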

What happens to all the data after the experiment is completed?

Shipman: Although much of our work has focused on providing real-time feedback from an experiment, certain tasks in the data processing chain can be conducted only after the experiment is completed. To support this, the ADARA team has developed an automated workflow engine based on the Apache ActiveMQ system for post-stream processing. This workflow engine allows for coupling of an arbitrary number of tasks to the completion of an experiment, such as cataloging of the experiment data, additional data reduction and analysis, and archiving of the experiment data to our multi-petabyte archival storage system at the NCCS.
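The shape of that workflow engine – an arbitrary list of tasks coupled to an experiment-complete event – can be sketched in a few lines. This is illustrative only; the real system brokers its messages through Apache ActiveMQ, while here the "message" is a direct method call:

```python
class WorkflowEngine:
    """Toy post-stream workflow engine: registered tasks all fire when
    an experiment-complete notification arrives for a run."""

    def __init__(self):
        self.tasks = []

    def register(self, task):
        self.tasks.append(task)

    def on_experiment_complete(self, run_id):
        # fire every registered task for the finished run, in order
        return [task(run_id) for task in self.tasks]

engine = WorkflowEngine()
engine.register(lambda run: f"cataloged {run}")
engine.register(lambda run: f"reduced {run}")
engine.register(lambda run: f"archived {run}")
print(engine.on_experiment_complete("run-0042"))
# ['cataloged run-0042', 'reduced run-0042', 'archived run-0042']
```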

Once cataloged, these data are available for subsequent reanalysis and intercomparison with previous experiments. This post-processing step can be highly interactive in which users interact with their data through the Mantid software package or through other analysis tools and custom applications. Although much of the data captured can be analyzed using a workstation computer, many of the datasets require HPC systems to provide users with timely feedback. While HPC systems can provide timely feedback and support interactive analysis, in the past these systems have only been accessible by advanced users with a background in parallel computing. To support a much broader set of users, we have integrated support of HPC systems into Mantid, effectively hiding the complexities of parallel computing while providing its benefits to our users.

So you bring the advantages of HPC systems to all the SNS users?

Shipman: Exactly. We have built an integrated HPC capability for users at SNS. Through a web service-enabled architecture, scientists at SNS – or scientists sitting in a coffee shop across the country – can seamlessly conduct a variety of analysis or reduction tasks on HPC infrastructure at the NCCS. From the users’ perspective, they are interacting with an application on their desktop. But behind the scenes, we are farming out larger reduction and analysis tasks to HPC systems running the Moab intelligence engine from Adaptive Computing through a Web Service RESTful API. These HPC systems have an order of magnitude more computational capability than their desktop. This has enabled dramatic acceleration in post-processing workloads, in which scientists reanalyze their data from a completed experiment or compare a number of completed experiments. Our workflow manager, based on Apache ActiveMQ, can also leverage this framework, farming out computationally intensive tasks to HPC systems at the NCCS as part of the experiment pipeline. We are really excited about this capability; we have in essence developed an elastic compute capability using both software-as-a-service and platform-as-a-service models that deliver the best of HPC and cloud computing to users at SNS.
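That farming-out step can be sketched as a client of a REST endpoint. The URL, payload fields, and job-template name below are hypothetical (Moab's actual API differs), and the `post` callable is injected so the sketch runs without a network or a real scheduler:

```python
import json

def submit_reduction_job(run_id, nodes, walltime_min, post):
    """Sketch of handing a reduction task to an HPC scheduler through a
    RESTful web service, in the spirit of the ADARA/Moab setup."""
    payload = json.dumps({
        "application": "mantid-reduce",   # hypothetical job template
        "arguments": [run_id],
        "nodes": nodes,
        "walltime_minutes": walltime_min,
    })
    return post("https://hpc.example.gov/api/jobs", payload)

# A stub standing in for the scheduler's REST endpoint:
def fake_post(url, body):
    return {"status": "queued", "job": json.loads(body)}

reply = submit_reduction_job("run-0042", nodes=16, walltime_min=30,
                             post=fake_post)
print(reply["status"], reply["job"]["nodes"])   # queued 16
```

The point of the indirection is the same one Shipman makes: the desktop application only sees a simple request/response, while the heavy lifting happens on the cluster behind it.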

Is neutron science research effectively partnering with supercomputing?

Shipman: Yes. The ADARA team has already built out the software and hardware infrastructure to support the use of NCCS HPC systems by scientists at SNS. Our next steps will include coupling the live streaming capability with modeling and simulation, enabling real-time analysis of experiments, such as fitting of the experiment data to a model of the material in the experiment. This will enable an entirely new level of real-time feedback from experiments at SNS. In the future, this and techniques that leverage the coupling of experiment and simulation will enable systems at the Oak Ridge Leadership Computing Facility (OLCF) to steer the experiment, providing the scientist with real-time information from a simulation of the material that they can use to more efficiently conduct the experiment at SNS. In fact, we have begun the initial steps of this work through the Center for Accelerated Materials Modeling, led by Mark Hagen, NDAV group lead.

Through this and other upcoming work, we see a future in which the Titan multi-petaflop platform at the OLCF could be steering an experiment based on intercomparison of simulation of a material with neutron data captured at SNS. This coupling of neutrons and computation could provide new breakthroughs in materials science, biology, and engineering, while significantly improving the productivity of our users.

What and who got this started?

Shipman: Jeff Nichols, the associate Laboratory director for the CCSD, and Kelly Beierschmitt, the associate Laboratory director for NScD, recognized the importance of coupling computation and neutron science. They realized that by doing so we could make significant progress in increasing the productivity of scientists at SNS and ultimately develop new capabilities in multiple science domains that use neutrons and computing.

The ADARA project has required expertise in both computing and neutron science. The computing team doesn’t have the science background in neutrons but does have the software/engineering background required to help build the system. So in collaboration, leveraging previous work that the neutron sciences data team had done, the ADARA team was able to extend those concepts and write new software to deliver a streaming infrastructure and an integrated HPC capability at SNS. Although we have made significant progress through the ADARA project, this is just the beginning of a long-term strategic partnership between computing and neutron science here at ORNL, a partnership enabled by the Laboratory’s multi-program science and technology capabilities.

The use of supercomputing to help maintain the US nuclear weapons arsenal is one of the more specialized applications of high performance computing. Simulating the behavior of these devices inside a computer has allowed the US to adhere to the Comprehensive Test Ban Treaty (CTBT), while maintaining some confidence that the country’s nuclear deterrence capabilities remain intact. The responsibility to support our nuclear arsenal virtually has fallen on the NNSA’s Stockpile Stewardship Program, under the Department of Energy.

But the ability of these supercomputing models to replace actual nuclear testing is still somewhat controversial. A report by Chris Schneidmiller at Global Security Newswire weighs some of the pros and cons of physical versus simulated nuclear testing and the ramifications of our CTBT obligations. In particular, Schneidmiller begins by pointing out that skeptics believe that “computer modeling cannot effectively replace actual testing in terms of ensuring the upkeep of today’s stockpile, nor for preparing new nuclear weapons that might one day be necessary to safeguard the United States from future threats.”

In addition, new types of weapons might need to be developed to counter new types of threats. The Bush administration’s proposal for the so-called “bunker busting” nuke is one such example. Having to develop an entirely new bomb without ever being able to detonate it is problematic at best.

The problem is that without some sort of physical testing, there is no assurance that the real-world behavior of the weapons is being reflected in the computer models. As former Defense Secretary Caspar Weinberger pointed out, the confidence that the weapons will work is the whole basis of our nuclear deterrence strategy. And the only way to demonstrate that is to test the devices.

Of course, the whole idea behind the Stockpile Stewardship Program is to demonstrate that confidence without the testing. According to Undersecretary of State for Arms Control and International Security Ellen Tauscher, the directors of national labs maintain that the program has “provided a deeper understanding of our arsenal than they ever had when testing was commonplace.”

A 2002 study from the National Academy of Sciences concluded that the US nuclear stockpile could indeed be maintained, given enough computing power and other technical resources. Particularly in the 1990s, whether supercomputers were capable of accurately simulating these weapon systems was an open question. Today, with petascale machines available, there is less concern about capability.

In March at the Carnegie International Nuclear Policy Conference, CTBT opponent Senator Jon Kyl said that the Stockpile Stewardship Program offered “both good news and bad news” regarding our nuclear arsenal, but he expressed reservations that the program was the ultimate answer to maintaining our nuclear deterrence.

The Weekly Top Five
https://www.hpcwire.com/2011/05/26/the_weekly_top_five/
Thu, 26 May 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the NC State effort to overcome the memory limitations of multicore chips; the sale of the first-ever commercial quantum computing system; Cray’s first GPU-accelerated machine; speedier machine learning algorithms; and the connection between shrinking budgets and increased reliance on modeling and simulation.

Research Technique Addresses Multicore Memory Limitations

A new technique developed by researchers at North Carolina State University promises to boost multicore chip performance by 10 to 40 percent. The new approach is two-pronged, using a combination of bandwidth allocation and “prefetching” strategies.

One of the limitations to multicore performance is the memory problem. Each core needs to access off-chip data, but there is only so much bandwidth available. With the proliferation of multicore designs, the data pathway is all the more congested. The NC State researchers developed a system of bandwidth allocation based on the fact that some cores require more access to off-chip data than others. Implementing an on-chip (cache-based) memory store allows the chip to prefetch data. When prefetching is used on an intelligent, as-needed basis, performance is further enhanced.

With both sets of criteria working in tandem, “researchers were able to boost multicore chip performance by 40 percent, compared to multicore chips that do not prefetch data, and by 10 percent over multicore chips that always prefetch data,” the release explained.
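The two-pronged policy described above – per-core bandwidth budgets plus prefetching only when it looks profitable – can be sketched as a toy decision function. The thresholds and structure here are invented for illustration; the researchers' actual criteria differ:

```python
class PrefetchGovernor:
    """Toy governor: prefetch only when the access pattern looks
    sequential, the core has bandwidth budget left, and the off-chip
    bus has headroom. (In real hardware, in_flight would be tracked
    by the memory controller; here it is just a field.)"""

    def __init__(self, bus_capacity, budgets):
        self.bus_capacity = bus_capacity   # max in-flight requests
        self.budgets = budgets             # per-core bandwidth allocations
        self.in_flight = 0

    def should_prefetch(self, core, last_addrs, stride=64):
        sequential = all(b - a == stride
                         for a, b in zip(last_addrs, last_addrs[1:]))
        has_budget = self.budgets[core] > 0
        bus_free = self.in_flight < self.bus_capacity
        return sequential and has_budget and bus_free

gov = PrefetchGovernor(bus_capacity=4, budgets={0: 2, 1: 0})
print(gov.should_prefetch(0, [0, 64, 128]))   # sequential, budget left -> True
print(gov.should_prefetch(1, [0, 64, 128]))   # no bandwidth budget     -> False
print(gov.should_prefetch(0, [0, 512, 64]))   # irregular pattern       -> False
```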

First-Ever Commercial Quantum Computing System Sold

Vancouver-based research outfit D-Wave Systems, Inc. began generating buzz in 2007 when the company announced it had built the first commercially viable quantum computer. The claim was difficult to verify and was met with a fair amount of skepticism.

Now four years later, D-Wave has announced the first sale of a quantum computing system, known as D-Wave One, to Lockheed Martin Corporation. As part of a multi-year contract, “Lockheed Martin and D-Wave will collaborate to realize the benefits of a computing platform based upon a quantum annealing processor, as applied to some of Lockheed Martin’s most challenging computation problems.” D-Wave will also be providing Lockheed with maintenance and related services.

The D-Wave One relies on a technique called quantum annealing, which provides the computational framework for a quantum processor. It was also the subject of an article published in the May 12 edition of Nature. The computer’s 128-qubit processor, known as Rainier, relies on quantum mechanics to tackle the most complex computational problems. While Lockheed Martin’s exact interest in the system was not specified, suitable applications include financial risk analysis, object recognition and classification, bioinformatics, cryptology and more.

A Physics World article cited expert corroboration of the system’s authenticity. MIT’s William Oliver, although not part of the research team, went on record as saying: “This is the first time that the D-Wave system has been shown to exhibit quantum mechanical behaviour.” Oliver characterized the development as “a technical achievement and an important first step.”

Further coverage of this historic event, including an interview with D-Wave co-founder and CTO Geordie Rose, is available here.

Cray Debuts GPU-CPU Supercomputer

The newest Cray supercomputing system, called the Cray XK6, relies on processor technology from AMD and NVIDIA to achieve a true hybrid design that offers up to 50 petaflops of compute power. Launched at the 2011 Cray User Group (CUG) meeting in Fairbanks, Alaska, the supercomputer employs a combination of AMD Opteron 6200 Series processors (code-named “Interlagos”) and NVIDIA Tesla 20-Series GPUs, and provides users with the option to run applications with either scalar or accelerator components.

The XK6 is the first Cray system to implement the accelerative power of GPU computing, and Barry Bolding, vice president of Cray’s product division, highlights this fact:

“Cray has a long history of working with accelerators in our vector technologies. We are leveraging this expertise to create a scalable hybrid supercomputer — and the associated first-generation of a unified x86/GPU programming environment — that will allow the system to more productively meet the scientific challenges of today and tomorrow.”

Cray already has its first customer; the Swiss National Supercomputing Centre (CSCS) in Manno, Switzerland, is upgrading its Cray XE6m system, nicknamed “Piz Palu,” to a multi-cabinet Cray XK6 supercomputer.

The Cray XK6, which is scheduled for release in the second half of 2011, will be available in both single and multi-cabinet configurations and scales from tens of compute nodes to tens of thousands of compute nodes. Upgrade paths will be possible for the Cray XT4, Cray XT5, Cray XT6 and Cray XE6 systems.

Researchers from the Pittsburgh Supercomputing Center (PSC) and HP Labs have figured out how to accelerate key machine-learning algorithms using the power of GPU computing. Specifically, the team has achieved speedups of nearly 10x with GPUs versus CPU-only code, and more than 1,000x versus an implementation in an unspecified high-level language. Machine learning is a branch of artificial intelligence that “enables computers to process and learn from vast amounts of empirical data through algorithms that can recognize complex patterns and make intelligent decisions based on them.”

The application the research team is working with is called k-means clustering, popular in data analysis and “one of the most frequently used clustering methods in machine learning,” according to William Cohen, professor of machine learning at Carnegie Mellon University.

Ren Wu, principal investigator of the CUDA Research Center at HP Labs, developed the GPU-accelerated cluster algorithms. Wu then teamed up with PSC scientific specialist Joel Welling to test the algorithms on a real-world problem, which used data from Google’s “Books N-gram” dataset. This type of N-gram problem is common in natural-language processing. The researchers clustered the entire dataset, with more than 15 million data points and 1,000 dimensions, in less than nine seconds. This kind of breakthrough will allow future research to explore the use of more complex algorithms in tandem with k-means clustering.
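k-means (Lloyd's algorithm) alternates between assigning every point to its nearest centroid and recomputing each centroid as the mean of its cluster; the assignment step is independent per point, which is what makes GPU acceleration pay off. A minimal NumPy sketch on synthetic data (an illustration of the algorithm, not the Wu/Welling code):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm. The assignment step computes every point-to-centroid
    distance independently, which is the part a GPU parallelizes."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment: label each point with its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update: each centroid becomes the mean of its cluster
        new = np.array([points[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# two well-separated synthetic blobs; k-means should recover them
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
centroids, labels = kmeans(data, k=2)
```

The GPU version distributes the distance computations across thousands of threads, which is how 15 million 1,000-dimensional points can be clustered in seconds.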

Lean Budget Increases Government Reliance on Modeling and Simulation

The Institute for Defense & Government Advancement (IDGA) put out a brief statement last week suggesting a link between declining budgets and growing demand for modeling & simulation (M&S) tools.

Last week, the Army and Department of Defense (DoD) awarded a $2.5 billion contract to Science Applications International Corporation (SAIC) for a combination of planning, modeling, simulation and training solutions. According to the IDGA, “this contract signifies the growing need for simulation training to prepare troops for combat. Despite budget constraints, Modeling and Simulation (M&S) is expanding as technological improvements develop. M&S is the more viable and cost-effective option for tomorrow’s armed forces.”

The IDGA also announced that its 2nd Annual Modeling and Simulation Summit will explore the latest technological advancements and look at the lessons to be learned from recent efforts. This event will have a focus on military strategies for M&S, such as Irregular Warfare and Counter-IED training.

Before you can begin to harness the natural power of the wind, you need to know where to look, and it’s at this very beginning stage that the predictive capabilities of high-end computers come into play. Over at RenewableEnergyWorld.com, Kevin Corbley, a consultant specializing in the geospatial and energy industries, has written an informative article exploring the use of computer modeling in the selection of wind farm sites. The models are used to locate the absolute best placement for wind farms to maximize their generative and economic potential. In fact, Corbley explains that “selecting the best geographic locations to build wind farms that will produce the maximum energy with the least intermittence in power generation is the perfect problem for a supercomputer to solve.”

Using supercomputing resources from the Rocky Mountain Supercomputing Center (RMSC) to run their simulations, a research team from Northrop Grumman created a regional climate model that determines how climatic conditions will impact a network of potential sites 50 years into the future. A sophisticated search algorithm ranks the sites in order to pinpoint a group of geographically-diverse farms that work together to produce the healthiest wind profile.

Earl J. Dodd of the Rocky Mountain Supercomputing Center explains that “the supercomputer is able to take the numerous variables that will impact the proposed wind farm’s efficiency — topography, land cover, historic wind data, global climate change, proximity to transmission infrastructure — and quickly model them in billions of different combinations to pick the optimal sites.”
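The site-ranking idea can be illustrated with a toy search: score every combination of candidate sites by combined output minus intermittency, so that sites with complementary wind patterns rank above individually strong but correlated ones. All the numbers below are invented; a real model would draw on simulated climate data rather than six hourly capacity factors:

```python
from itertools import combinations
import statistics

# Hypothetical hourly capacity factors for five candidate sites (all invented).
# Sites whose output peaks at different times smooth one another's lulls.
sites = {
    "ridge_a": [0.9, 0.1, 0.8, 0.2, 0.9, 0.1],
    "ridge_b": [0.8, 0.2, 0.9, 0.1, 0.8, 0.2],   # strongly correlated with ridge_a
    "plain_c": [0.1, 0.9, 0.2, 0.8, 0.1, 0.9],   # anti-correlated with the ridges
    "coast_d": [0.5, 0.5, 0.6, 0.4, 0.5, 0.5],
    "mesa_e":  [0.2, 0.8, 0.1, 0.9, 0.2, 0.7],
}

def score(combo):
    # reward high average combined output, penalize its variability over time
    totals = [sum(sites[s][t] for s in combo) / len(combo) for t in range(6)]
    return statistics.mean(totals) - statistics.pstdev(totals)

# exhaustive search over all two-site farms; a production model searches a
# vastly larger combination space with smarter algorithms
best = max(combinations(sites, 2), key=score)
```

Here the winner pairs ridge_a with the anti-correlated plain_c even though ridge_b is individually just as windy: complementarity beats raw strength, which is exactly why the multi-site approach demands a search over combinations.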

The selection process is extensive, but the multi-site approach is turning out to be more lucrative and consistently profitable compared with the single farm model. With the potential financial benefits, not to mention the “green” aspect, it’s no surprise the technology has garnered the attention of the alternative energy community as well as state governments and regional energy providers. According to Corbley, this method also promises a streamlined permitting process, the ability to get facilities up and running more quickly, and reduced operational expenses.

Oak Ridge Supercomputers Modeling Nuclear Future
https://www.hpcwire.com/2011/05/09/oak_ridge_supercomputers_modeling_nuclear_future/
Mon, 09 May 2011

The Department of Energy has backed the Consortium for Advanced Simulation of Light Water Reactors at Oak Ridge National Laboratory. This sweeping five-year effort will unleash the power of HPC to simulate innovative designs that could dramatically improve nuclear safety, output, and waste reduction.

During the annual televised “State of the Union” address at the beginning of 2011, Barack Obama sought to renew the national focus on science and technology, in part by using supercomputing capabilities to drive progress.

To highlight the role of HPC in the new generation of scientific endeavors, the President told millions of Americans about how supercomputing capabilities at Oak Ridge National Laboratory (ORNL) will lend the muscle for a Department of Energy initiative “to get a lot more power out of our nuclear facilities” via the Consortium for Advanced Simulation of Light Water Reactors (CASL).

This speech came well before the word “nuclear” was (yet again) thrown into the public perception tarpit by the Fukushima reactor disaster; otherwise it might be reasonable to assume there would have been more attention focused on the safety angle that complements CASL’s nuclear efficiency and waste reduction goals. Beyond the safety side of the story, another, perhaps more specific element was missing from his national address: that the power of modeling and simulation, not just high performance computing, might lie at the heart of a new era for American innovation.

To enact an ambitious five-year plan of design and operational improvements at nuclear facilities, CASL researchers are developing models that will simulate potential upgrades at a range of existing nuclear power plants across the United States. The effort addresses a number of direct nuclear facility challenges as well as some pressing software challenges that lie at the heart of ultra-complex modeling at extreme scale.

Despite some of the simulation challenges that are ahead for CASL, the payoff for the DOE’s five-year, $122 million grant last May to support this and two other innovation hubs could be significant. According to the team behind the effort, “these upgrades could improve the energy output of America’s existing reactor fleet by as much as seven reactors’ worth at a fraction of the cost of building new reactors, while providing continued improvements in reliability and safety.”

Director of Oak Ridge National Laboratory, Thom Mason, pointed to the power of new and sophisticated modeling capabilities that “will provide improved insight into the operations of reactors, helping the industry reduce capital and operating costs, minimize nuclear waste volume, safely extend the lifetime of the current nuclear fleet and develop new materials for next-generation reactors.”

The CASL has been designed with the goal of creating a user environment to allow for advanced predictive simulation via the creation of a Virtual Reactor (VR). This virtual reactor will examine key possibilities and existing realities at power plants at both the design and operational level. CASL leaders hope to “produce a multiphysics computational environment that can be used for calculations of both normal and off-normal conditions via the development of superior physical and analytics models and multiphysics integrators.”

The CASL team further claims that once the system has matured, the VR will be able to combine “advanced neutronics, T-H, structural and fuel performance modules, linked with existing systems and safety analysis simulation tools, to model nuclear power plant performance in a high performance computational environment that enables engineers to simulate physical reactors.”
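A standard way to link separate physics codes like those the VR combines is fixed-point (Picard) coupling: solve one physics with the other's latest output, feed the result back, and iterate until the exchanged fields stop changing. The sketch below shrinks each "physics" to a single invented scalar relation purely to show the control flow; real neutronics and thermal-hydraulics solvers would replace the two functions:

```python
# Toy fixed-point (Picard) coupling loop with made-up coefficients.

def neutronics(temp):
    # power falls as fuel temperature rises (negative Doppler-style feedback)
    return 100.0 - 0.05 * temp

def thermal_hydraulics(power):
    # coolant temperature rises with the power it must carry away
    return 300.0 + 2.0 * power

def couple(tol=1e-8, max_iters=100):
    temp = 300.0                              # initial guess for fuel temperature
    for i in range(max_iters):
        power = neutronics(temp)              # solve physics 1 with current temp
        new_temp = thermal_hydraulics(power)  # solve physics 2 with its output
        if abs(new_temp - temp) < tol:        # exchanged field stopped changing
            return power, new_temp, i
        temp = new_temp
    raise RuntimeError("coupling did not converge")
```

With these coefficients each sweep shrinks the error by a factor of ten, so the loop quickly settles at the self-consistent operating point; a multiphysics integrator of the kind CASL describes manages the same handshake for millions of unknowns.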

Many of the codes will employ pre-validated neutronics and thermal-hydraulics (T-H) codes developed by partners on the project, including universities (the University of Michigan, MIT, North Carolina State and others) as well as national laboratories (Sandia, Los Alamos, and Idaho).

During the first year CASL aims to complete a number of initial core simulations using coupled tools and models, a goal the team has for the most part already reached. This involves applying 3D transport with T-H feedback, and CFD with neutronics, to isolate key elements of the core design and configuration. In the second year the team hopes to apply a full-core CFD model to calculate 3D localized flow distributions and identify transverse flow that could result in problems with the rods.

According to a spokesperson for ORNL, by making use of the Jaguar supercomputer, “CASL will allow for large-scale integrated modeling that has only been possible in the last few years.” The challenge is not simply how to use these new capabilities, but how to make sure current programming and computational paradigms can maximize their use.

A document that covers the goals of CASL in more depth sheds light on some of the computational aspects of these massive-scale simulations. The authors note that “a cross-cutting issue that will impact the entire range of computational efforts over the lifetime of CASL is the dramatic shift occurring in computer architectures, with rapid increases in the number of cores in CPUs and increasing use of specialized processing units (such as GPUs) as computational accelerators. As a result, applications must be designed for multiple levels of memory hierarchy and massive thread parallelism.”

The authors of the report go on to note that while they can expect peak performance at the desktop to be in the 10 teraflop range and the performance at the leadership platform to be in the several hundred petaflop range, during the next five years, “it will be challenging for applications to achieve a significant fraction of these peak performance numbers, particularly existing applications that have not been designed to perform well on such machines.”
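The gap the authors describe is usually quantified as the sustained fraction of peak: count the floating-point operations a kernel performs, divide by wall-clock time, and compare with the machine's theoretical rate. A small sketch using a dense matrix multiply and the report's projected ~10-teraflop desktop figure as an assumed peak (the constant is illustrative, not a measurement of any real system):

```python
import time
import numpy as np

PEAK_FLOPS = 10e12   # the report's projected ~10 TF desktop peak (assumed figure)

# time a dense double-precision matrix multiply and count its operations
n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)
t0 = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - t0

achieved = 2.0 * n**3 / elapsed      # an n x n matmul performs ~2n^3 flops
fraction = achieved / PEAK_FLOPS     # sustained fraction of peak
```

Even a tuned BLAS matmul, the friendliest kernel there is, sustains only a fraction of such a peak on one node; irregular applications fare far worse, which is the authors' point.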

Another one of CASL’s stated goals has to do with the future of modeling and simulation-focused research. The team states that they hope to “promote an enhanced scientific basis and understanding by replacing empirically based design and analysis tools with predictive capabilities.” In other words, by harnessing high performance computing to demonstrate actual circumstances versus reflect the educated hopes of even the most skilled reactor engineers, we might be one step closer to fail-proof design in an area that will allow for nothing less than perfection.

CASL could have a chance to see its models and simulations leap to life over the course of the first five years of the project. Currently the Tennessee Valley Authority operates a total of six reactors that generate close to 7,000 megawatts. The agency is currently embarking on a $2.5 billion journey to create a second pressurized water reactor at one of its existing facilities. This provides a perfect opportunity for the CASL team to put their facility modeling research to work; thus they’ve started creating simulations focused on the reactor core, internals and the reactor vessel.

CASL claims that “much of the virtual reactor to be developed will be applicable to other reactor types, including boiling water reactors.” They hope that during the subsequent set of five-year objectives they will be able to expand to include structures, systems and components that are outside of the vessel as well as consider small modular reactors.

White House Announces Project to Spur HPC Adoption in US Manufacturing
https://www.hpcwire.com/2011/03/03/white_house_announces_project_to_spur_hpc_adoption_in_us_manufacturing/
Thu, 03 Mar 2011

The White House hosted a press conference on Wednesday to announce a new public-private partnership that aims to bring HPC technology to the have-nots of the US manufacturing sector. Using a $2 million grant from the US Department of Commerce and an additional $2.5 million investment from industrial partners, a consortium has been formed to broaden the use of HPC technology by small manufacturing enterprises (SMEs).

The new organization, named the National Digital Engineering and Manufacturing Consortium (NDEMC), will be tasked to spread adoption of advanced modeling and simulation software in a sector that is in dire need of IT modernization. As we reported last week, there is a big gap between HPC capabilities at the large manufacturers versus their smaller and much more numerous suppliers. At these less well-endowed firms, HPC capabilities are absent or in short supply. NDEMC will attempt to fill that gap by sharing its members’ expertise and resources.

The project is being led by the Council on Competitiveness, who gathered the partners and made the successful bid for the Department of Commerce grant. Besides the Council, the consortium includes the National Center for Supercomputing Applications (NCSA), the Ohio Supercomputing Center (OSC), the National Center for Manufacturing Sciences (NCMS), and Purdue University, as well as industrial partners John Deere (Deere and Company), General Electric, Procter & Gamble, and Lockheed Martin.

The central goal of this project is to bring access to HPC simulation and modeling software down into the supply chain of these major manufacturers. The reason that’s useful is that large OEMs, even with their own supercomputers and advanced software, are still dependent upon product quality and design innovation from their component suppliers. If the large firms can bring their supply chain up to the same technology level, that benefits everyone.

And from the government’s point of view, a more robust manufacturing sector benefits the nation as a whole. In that sense, this Commerce grant and the new partnership are just part of a larger industrial policy that the White House has been touting for the last couple of years. With regard to manufacturing specifically, the Obama administration’s goal is to reverse the “invented-here-manufactured-there” model and double US exports from this sector over the next five years. “Our global competitors in India and China are not waiting for America’s lead to chart a new economic path,” noted US Secretary of Commerce Gary Locke, who spoke during the press conference.

Locke said giving these SMEs access to HPC will dramatically change how these firms operate. “For small and medium sized manufacturers today, the typical product development cycle takes 14 months,” he said. “But with this new technology it can be reduced to eight months.”

Besides Secretary Locke, the press conference also included comments from Assistant to the President for Manufacturing Policy Ron Bloom, US Chief Technology Officer Aneesh Chopra, Assistant Secretary of Commerce for Economic Development John Fernandez, and Council on Competitiveness President and CEO Deborah Wince-Smith. Together they highlighted the importance of HPC technology to the manufacturing sector and took pains to emphasize that this was a collaboration between the government, academia and industry.

“What is really significant about this partnership is the recognition that the United States really cannot maintain its standard of living, drive its productivity, keep its job creation moving forward, and maintain its national security if we do not innovate and create the next generation of high-value products and services,” said Wince-Smith.

Other than that, the 30-minute press conference was a little bit light on the facts, such as how the $4.5 million investment was going to be applied. For that, HPCwire got a chance to talk with Cynthia McIntyre, senior vice president at the Council on Competitiveness and project lead for the grant. According to her, quite a bit of the money will go toward supporting trainers and educators at the non-profit partners (OSC, NCSA, NCMS, and Purdue). In some cases, these domain experts will be deployed to work on-site to help bring the selected smaller manufacturers up to speed. “We need to understand their workflows and how modeling and simulation can help them with their productivity,” explained McIntyre.

In general, the funds won’t be used to buy computing infrastructure — at least for the SMEs. Instead, the approach will be to employ the existing HPC resources of partners like OSC and NCSA. What form this takes is not exactly clear, although part of the project funds will apparently go toward building a Web environment that can be used to access the HPC modeling/simulation software remotely.

Also a little bit fuzzy is the criteria that will be used to select the supply chain manufacturers for the project. According to McIntyre, this is still to be determined, but one approach being considered is to gather a group of firms that share similar application needs such as in computational fluid dynamics (CFD) or finite element analysis (FEA).

Given the limited funding for the project, they don’t expect to bring in more than a couple dozen SMEs, at most. With just $2 million from the feds and $2.5 million from industry, the buy-in is relatively small at this point. During the press conference, White House policy expert Ron Bloom admitted that this is just a pilot project, and if successful, would require greater resources in the future. “What we can do is to use this modest amount of money to start this [project], to make some important progress in developing this software, and as this gains momentum, we would expect it to grow in size,” he said.

If it does expand, it will probably do so under another grant (and possibly another administration). The project is expected to begin within the next four to six weeks and last for just 18 months. That doesn’t leave the project a lot of time to come up with some successful case studies and proof points for follow-on funding.

Nonetheless, that didn’t dampen the enthusiasm of the project’s proponents, who repeatedly drove home the point that these technologies will need to be adopted by manufacturers if they want to be competitive in the 21st century. “This is going to change the game on how third millennium manufacturing is done,” said Wince-Smith.

Bringing Digital Manufacturing to Market
https://www.hpcwire.com/2010/10/06/bringing_digital_manufacturing_to_market/
Wed, 06 Oct 2010

Last week in Washington, D.C., the National Center for Manufacturing Sciences (NCMS) and Intersect 360 Research presented the results of a survey of over 300 manufacturing firms in the United States about the current state of digital manufacturing technologies. More specifically, the questions were aimed at identifying the key barriers and drivers for these firms’ adoption of complex technologies to drive innovation.

While we will touch on some of the findings of the research and its potential implications in a while, it should come as no surprise that the “missing middle” designation became a key, guiding phrase. Although many in the HPC community have already heard this term repeated elsewhere, it is not always a commonly understood concept for those who hold the purse strings, namely representatives in the U.S. government, some of whom might not even be familiar with the broad domain of high-performance computing and the many layers of meaning and technology that comprise it.

The goal of the research and presentation was to convince government leaders of the inherent value of making high-performance computing software and resources accessible to the manufacturing sector from the bottom up. Without having access to core technologies, particularly in the realm of modeling and simulation, many smaller design and manufacturing shops have a hard time remaining competitive—and we all know what this “trickle up” impact is on the national economy.

Again, chances are you’re well aware of the concept of the missing middle, but let’s take it one step further and enter the realm of manufacturing, HPC and this presumably vast, lost subset of the HPC-denied American economy.

The Digital Manufacturing Angle

The concept of digital manufacturing itself can appear, at first, as a bit too broad or nebulous, in part because from first glance, it implies that the final product is digital in nature or otherwise not tangible enough to apply to something as solid as manufactured products. Digital manufacturing, however, refers to the entire lifecycle of a design or product that was based on the use of advanced computational resources and technologies to deploy simulation and modeling for multiple aspects of the design and development process. As Intersect 360 Research notes, “by creating a digital model of a product, a manufacturer can perform a wide range of tests, such as manufacturability analysis or performance testing, before physically building a new design.” It is in this total solution based on technology that digital manufacturing as a term is best applied.

What this overarching concept of digital manufacturing ultimately means is that companies who feed the manufacturing supply chain are able to improve their final product through refined design and testing efforts and furthermore, many are able to speed the time to market for their products since testing engineered parts or complete products can be time-consuming and expensive.

One of the better ways to think about digital manufacturing is to consider it in the case of a manufacturing firm at the top of the food chain, heavy-equipment maker Caterpillar.

Feeding a Caterpillar

Although Caterpillar is not a missing middle company by any stretch of the imagination, an overview of how HPC works for refining the product lifecycle (and the challenges that arise when HPC is absent from it) can almost be better conveyed via a case study of a company that is thriving than via one of the many missing middle firms that plod along with legacy systems and 2D rendering software. After all, we know what their challenges are. But with a massive manufacturer that has high-end systems yet still occasionally resorts to older, more expensive testing and design methods, we have a more thorough perspective.

Keven Hofstetter, Fortune 50 heavy equipment manufacturer Caterpillar’s research program manager for virtual products, highlighted the key benefits and challenges of modeling and simulation for the company’s product cycle and bottom line last week during the manufacturing- and HPC-related HPC 360 event in Champaign-Urbana, Illinois. The equipment manufacturer is a top-tier user of HPC, so it certainly does not fall into the “missing middle” that has been clearly defined for the manufacturing sector, which comprises the smaller firms supporting the supply chain that leads up to a company like Caterpillar.

Caterpillar is a company on the bleeding edge of modeling and simulation for manufacturing, at both the software and hardware/GPU level. Caterpillar places significant emphasis on research and development projects to help refine product development and bring its line of equipment up to customer as well as environmental standards. In addition to the core elements of its manufacturing business, the company also houses other broad divisions handling equipment financing, logistics, and remanufacturing/rebuilding.

As one might imagine, Caterpillar’s needs go far beyond modeling or simulating the machine functions, since there are many parts required in advance, all of which must operate at peak performance, both individually and inside the specific machine. Accordingly, research and development at Caterpillar is the backbone of profitability from the micro level (testing pistons, for instance) to the macro level (making sure an earth moving vehicle performs on target).

While Caterpillar’s basic product cycle model is the same as at many other manufacturing companies (concept and design, then on to the build and test phase, and finally the production phase), the products it develops and tests are on a massive scale. It is not feasible to keep building giant centers to house and test actual prototypes in a repeated, 24×7 manner, as the company did before taking advantage of simulation for its design and test process. Hofstetter noted that Caterpillar built a 10-acre facility to test its earth moving equipment, but if it had to continue building out centers like this it would be unable to compete.

Caterpillar also devotes a significant amount of time and resources to the many parts and components that are critical to its earth moving equipment. For instance, Hofstetter noted that since the company is a leader in diesel and natural gas engines and industrial gas turbines, a great deal of its research and development efforts are related to computational fluid dynamics and combustion system interaction.

If it would not be viable for a company like Caterpillar, which sits at the top of the supply chain and has far more resources, to continue to competitively develop and test its products without sophisticated modeling and simulation software and the resources required to power it, why would it make sense to think that the smaller companies that help drive Caterpillar by providing components for its larger parts and final products can scavenge enough resources?

Since it all feeds into the top, if there could be a way to empower those at the lower end of the supply chain, let’s say a small engine parts maker for Caterpillar, why wouldn’t it make sense to encourage this? Let’s say for example this small, hypothetical parts design and manufacturing company could deliver high-quality products at a lower cost to Caterpillar, all due to a dramatic reduction in development and time-to-market periods because of the boost of added HPC capacity or even first-time HPC software and resource capability?

This is what lies at the heart of all of these “missing middle” debates, yet still there are no answers on how to best reach out to these smaller providers of manufactured designs and products when all they really need are the resources. And as you know, these are some very expensive resources we’re talking about here.

While HPC on demand providers are putting themselves forward as the next best thing to an in-house cluster, there are still hefty software license issues to contend with that will drive up the cost. On the related cloud side, services providers are boasting superior offerings with a performance hit that (they promise) won’t be dramatic.

While nothing seems to appeal to most quite like the good old workstation for modeling and simulation tasks and since the GPU revolution is still just in its infancy in terms of HPC resource providers with affordable solutions, one has to wonder how much longer this missing middle in manufacturing will remain lost.

Research in Context

So let’s get back to the Intersect 360 and NCMS research that started this whole conversation in the first place. Actually, let’s back way up…

What is rather unique about the study is that the 321 respondents were not told that the survey had anything to do with HPC. As Addison Snell noted during a presentation similar to the one he gave in conjunction with NCMS in Washington, D.C. the day before, the focus was kept on technology as a general concept rather than the far narrower HPC distinction in order to take care of the sticky issue of sample bias.

This does mark the study as different from several others that have emerged that are distinctly related to high-performance computing. However, if potential respondents were asked to take the survey, even if there was the possibility of marking an answer “we do not currently use HPC,” they might be far less likely to objectively consider the questions.

As it stands, 80% of the respondents came from the industrial or commercial manufacturing space (with the remaining 20% in supporting roles in academia, trade organizations and the public sector) and those on the commercial end of the spectrum were asked additional questions related to product design and development and to what extent they were deploying available high-end technologies to aid in their efforts, among other related questions.

The numbers above will do you far more good if you take time to unravel some of the study’s finer points about the distribution of opinions on securing access to advanced technologies (don’t call them HPC resources if that makes it more palatable); the important elements have only been hinted at here, and there are a number of sub-issues. The main point is that there is a combination of general resistance and a lack of insight about how these technologies can be leveraged, and to what benefit, among those in the missing middle of manufacturing.

What the research found was that “there is potential, untapped benefit to digital manufacturing technology usage among U.S. manufacturers, particularly small to mid-sized manufacturers. For these companies to get over the hurdles inherent to adoption of advanced technologies, they will seek partners and programs that mitigate risk and help defray costs so they can make investments required to improve their competitiveness technologically.”

Still, what gives some pause is to consider the inferior technology that is driving many of the manufacturers feeding the supply chain. If we take Caterpillar as a representative example of the power of access to modeling and simulation, albeit at vast scale, it is pertinent to recall that the company built a 10-acre facility where pre-built test equipment runs constantly for days on end, a task that software might have taken over.

Although companies further down the supply chain don’t have the same sized products or design challenges, this example reverberates—when scaled down, what many of these smaller companies are doing is the cost, time, and resource equivalent of the 10-acre test lot.

This past week in Champaign-Urbana, Illinois at the R Systems-sponsored HPC 360 event, the “triple-m” combination of manufacturing and the missing middle was at the heart of discussions and presentations from representatives from a number of companies ranging from the mega-sized (Caterpillar and General Motors, for instance) to manufacturing supply chain providers, including Dassault Systemes.

These conversations, coupled with presentations from Intel’s Dr. Stephen Wheat and Intersect 360 Research’s Addison Snell repeated the Council on Competitiveness-generated message about the critical economic role of the manufacturing supply chain’s ability to remain efficient and competitive via access to complex software packages and HPC resources.

In essence, the missing middle for manufacturing is the supply chain that feeds the large manufacturing companies, and it is this subset that is most in need of sophisticated applications and the computational capacity required to propel them. Somehow, these companies remain stuck in the “missing middle” category: they require such resources but lack the expertise, the finances, or the internal development capability needed to secure them.

To throw one more “m” into the missing middle and manufacturing theme, modeling (and its simulation counterpart) was prominently featured as a critical backbone of successful manufacturing and product lifecycle management. Yet nearly all the companies present, even those that provide software solutions to these firms, questioned the efficiency of their compute capability. While some were there simply to present on how modeling and simulation are key to reducing cost and time to market, others discussed the challenge of securing much-needed computational capacity when their current resources were already strained, which provided a perfect platform for HPC service provider R Systems (who, again, sponsored the event).

News from the Silicon Prairie

The “m” words just seem to keep mounting here, but there’s another crucial element that hasn’t been addressed—and it’s all about location. Enter “Midwest” as our newest addition to the parade.

Far too often, our meatiest HPC news items trickle in from the coasts of the United States and from major cities elsewhere around the world. For some reason, the American Midwest tends to get overlooked unless, of course, we start talking about manufacturing. It's an easy thing to be guilty of, this accidental, mildly apologetic sideswiping of news from the vast plains, but the region has lately been at the center of conversations about (and even within) manufacturing. Consider, for instance, the Midwest Pilot program, which emphasizes the importance of HPC for U.S. manufacturing.

HPC resource providers would be astute to look to the Midwest for a potential customer base since this is a region that is in need of such resources, as evidenced by the pilot and related backing studies showing its viability. The pilot was the product of a summit and workshop, which was held at the University of Chicago Booth School of Business in late August and was driven by the Council on Competitiveness. The goal of the small event was to lay the groundwork to create a program to leverage HPC resources for the U.S. manufacturing sector’s “missing middle” of supply chain feeders—many of whom are in the Midwest and define the needs that have been articulated by the Council repeatedly in recent years.

It's not hard to see the parallel between the “missing middle” and America's Midwest, since that same term could often apply far beyond conversations about computational resource needs. The region is the literal heart of America, driving forward on the production end of the innovation that filters in from our coastlines, and all it takes is one trip to a community like Champaign-Urbana (and the surrounding University of Illinois campus) to see it.

HPC for the Heartland

Given the era of virtualization and connectivity, it doesn't necessarily matter where your hardware resources come from when they're delivered in an on-demand capacity, but R Systems has found that more localized companies requiring HPC cycles are looking to it for support. Given that other resource providers with similar offerings for high-performance computing users are located along the coasts, this positions R Systems as a go-to provider for companies in that “missing middle” category that are part of the stream of Midwest manufacturing suppliers.

One of the main benefits R Systems claims is the ability to deliver HPC resources on demand far more quickly than users might expect from a university system. Brian Kucic, who came from NCSA and played a key role in forming the small HPC on-demand company after realizing that users there were not getting their computing needs met quickly enough, noted that potential customers like Wolfram did not want to wait in the supercomputer queue. He saw a need among users with sporadic workloads strong enough to warrant an investment in cluster resources to serve such customers.

Wolfram Research was one of the first and more notable companies to make use of the company's HPC offering, due in part to the local connection. Wolfram is headquartered in Champaign-Urbana and supplies mathematical software for engineers, researchers and other users with high-performance computing demands. Wolfram's Mathematica offering provides some of the key modeling and simulation used for manufacturing product lifecycles, but the on-demand application for which R Systems was tapped is more in the experimental realm.

Wolfram Research asked R Systems to deploy their 576-node “R Smarr” cluster to launch their cloud-based Wolfram Alpha computational search engine. This project was aimed at delivering a searchable resource for quantitative material that would become instantly accessible and functional for users. While the concept alone required an incredible amount of compute-intensive work, Wolfram was unsure about the demand versus in-house capacity for such an offering. Thus they looked to R Systems, who then partnered with Dell to deliver the needed solution by upgrading an existing cluster.

The Wolfram Alpha case study is interesting beyond its clear marketing objectives for a number of reasons, including the short time from initial request to complete access to the solution. Reading case studies is fraught with peril, given the obvious lack of objectivity any of them bring to bear. Still, for anyone trying to get to the heart of HPC resource provider business models and their handling of typical large-scale projects on short deadlines, it might serve as a template for choosing solution providers and for setting realistic expectations when requesting an on-the-fly spin-up.

It will be interesting to see how, as more on-demand HPC resource providers (and what a risk to take—this “if you build it they will come” business model) enter the space, the geographical shakeout goes down. Will companies turn to “their own” as they look to off-site companies to crunch and store their data? Is there inherent value in location alone if your location provides a steady influx for one particular sector in the same way the Midwest and manufacturing are married?

A Champaign Toast

There's something about the term “Silicon Prairie” that at once seems a little derogatory (like it's the hick cousin of the “real” center of technology innovation in the Bay Area) but also fits the region quite well. The technology campus at the University of Illinois does seem a bit out of place, jutting up as it does amid amber waves of grain, but it's also a striking, inspiring sight.

During the HPC 360 event in Champaign-Urbana, which was big enough to draw in some world-class speakers yet small enough to allow for some in-depth conversations, there was time to spend wandering around the old site of NCSA and the buildings that house startups supported by the Technology Entrepreneur Center at the University of Illinois. Here, entrepreneurs work alongside the companies that support them. Riverbed Technology, for instance, has an inexpensive presence, real estate-wise, but employs and develops student talent while getting really, really cheap highly educated labor, at least compared to Silicon Valley.

So, dear readers, behold—the Silicon Prairie, centered in the missing middle of the United States yet critical to empowering the missing middle of manufacturing. For companies looking for talent outside of the expensive confines of the Valley, perhaps looking to the region for what it might offer is realistic as the costs of doing business continue to soar.