HPCwire » Oak Ridge National Laboratory
http://www.hpcwire.com
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Application Readiness at the DOE, Part I: Oak Ridge Advances Toward Summit (April 17, 2015)
http://www.hpcwire.com/2015/04/16/application-readiness-in-full-swing-at-doe-labs-part-i/

At the 56th HPC User Forum, hosted by IDC in Norfolk, Va., this week, three panelists from major government labs discussed how they are getting science applications ready for the coming crop of Department of Energy (DOE) supercomputers, which, in addition to being five to seven times faster than today's fastest big iron machines, constitute significant architectural changes.

Titled “The Who-What-When of Getting Applications Ready to Run On, And Across, Office of Science Next-Gen Leadership Computing Systems,” the session was chaired by Suzy Tichenor, director of the Industrial Partnerships Program for the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL), and featured three perspectives on application readiness from DOE computing centers.

HPCwire has videos of all three panelists' presentations. They are relatively short, roughly 20 minutes each, and well worth watching. Here is the first one:

#1 – Summit: Code Winners (and Losers) in Readiness Program

The first speaker on the panel, Tjerk Straatsma, scientific computing group leader at Oak Ridge National Laboratory (ORNL), announced the 13 codes selected through the Center for Accelerated Application Readiness (CAAR) evaluation effort to be optimized for Summit, the system planned to succeed Titan around 2018.

Straatsma discussed the significance of optimization, portability, and early science projects for Summit, which will have far fewer nodes and use only slightly more power than Titan while vastly outperforming it. He compared the Titan and Summit architectures and performance expectations, detailed CAAR activities in preparation for Summit, and reviewed the synergy among the NERSC Exascale Science Application Program (NESAP), CAAR at the OLCF, and the Early Science Program (ESP) at the ALCF.

Summit Puts 13 Code Projects Into Readiness Program (April 15, 2015)
http://www.hpcwire.com/2015/04/15/summit-puts-13-code-projects-into-readiness-program/

When the Oak Ridge National Laboratory's Summit supercomputer powers up in 2018, it will provide the Department of Energy (DOE) research community with 150 to 300 peak petaflops of computational performance. To extract the highest benefit from this multi-million dollar machine, which will have five to ten times the capability of the current fastest US supercomputer, Titan, having optimized application sets is essential. But getting codes ready for much larger systems takes many months of planning, especially when there's an architectural change, as is the case transitioning from Titan, a GPU-accelerated hybrid x86 Cray machine, to Summit, which uses IBM POWER9 CPUs and NVIDIA Volta GPUs.

To that end, the Oak Ridge Leadership Computing Facility (OLCF) is focusing on getting 13 application codes ready for the coming Summit architecture. OLCF admitted 13 partnership projects, each of which focuses on a different code (examples include ACME, NAMD, and NWChem), into its Center for Accelerated Application Readiness (CAAR) program. Under the program, application development teams and staff from the OLCF Scientific Computing group collaborate on redesigning, porting, and optimizing application codes for Summit's hybrid CPU–GPU architecture.
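The CAAR teams work in compiled languages with CUDA and directive-based offload rather than in Python, but the single-source goal, one kernel retargeted from CPU to GPU, can be sketched with the NumPy/CuPy pairing below. This is an illustrative stand-in, not the teams' actual approach:

    import numpy as np

    # Purely illustrative of performance portability: CuPy mirrors the
    # NumPy API, so the same kernel source runs on either backend.
    try:
        import cupy as xp          # GPU backend, if CUDA hardware is present
        BACKEND = "gpu/cupy"
    except ImportError:
        xp = np                    # CPU fallback
        BACKEND = "cpu/numpy"

    def diffuse(u, alpha=0.1, steps=200):
        """Explicit 1-D heat-equation stencil, written once against `xp`."""
        for _ in range(steps):
            u = u + alpha * (xp.roll(u, 1) + xp.roll(u, -1) - 2.0 * u)
        return u

    u = xp.zeros(1 << 20)
    u[len(u) // 2] = 1.0           # point source in the middle
    print(BACKEND, float(diffuse(u).max()))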

The diverse set of CAAR application teams was chosen based on a computational and scientific review conducted by the OLCF with guidance from the ALCF, NERSC, IBM, and NVIDIA. The teams will gain access to early software development systems and leadership computing resources, along with technical support from the IBM/NVIDIA Center of Excellence at ORNL. After they have finished the porting and application work, teams will be required to demonstrate the effectiveness of their application through a scientific grand-challenge project performed on Titan.

Each of the modeling and simulation applications selected for the CAAR program is led by a principal investigator.

Stay tuned for more on this announcement tomorrow, when we will present coverage of a panel from the IDC User Forum about getting applications ready for next-generation leadership computing systems. On the panel was Tjerk Straatsma, group leader for scientific computing at the Oak Ridge Leadership Computing Facility.

Health Care Catches Data Fever (October 31, 2014)
http://www.hpcwire.com/2014/10/30/health-care-catches-data-fever/

The United States is arguably in the midst of a health care crisis, but there is hope on the horizon and it involves learning how to make sense of big data. Over at Communications of the ACM, Oak Ridge National Laboratory (ORNL) shares how it is helping the health care industry benefit from patient data using the power of graph computing.

Starting about four years ago, researchers at the lab saw an opportunity to use their data science and computing prowess for the betterment of health care. The project is unusual in that it leverages three different high-performance computing architectures. Titan, the hybrid CPU–GPU Cray XK7 supercomputer and currently the second-most powerful computer in the world, is being used to simulate outcomes of interventions. Apollo, the in-memory graph-computing Urika appliance also built by Cray, is enabling actionable pattern discovery. And cloud computing machines with distributed storage are providing further analysis.

One challenge of the American health care system is the tendency for data to end up in silos, which by their nature are not easily joined. Another problem is patient data itself: it exists in a variety of formats, both structured and unstructured, and often in massive volumes.

“ORNL computing experts found a better approach in scalable graph computing,” notes the author of the ACM article, “which allows for detailed analysis and discovery of relationships hiding in large quantities of data. By organizing health care data into relationship graphs (linked structures of interacting entities), researchers were able to mine and understand complex patterns of relationships and behaviors in health care delivery.”

The team drew on publicly available datasets from several sources, including The Cancer Genome Atlas, clinicaltrials.gov, Semantic MEDLINE, openFDA, DocGraph, and the National Plan and Provider Enumeration System, as well as data from clinical partnerships.

Graph computing proved highly effective at finding wastefulness and fraudulent activity within the federal health care system. In one case, the fraud was a form of identity theft: a health care provider created multiple identities to bill patients. In another example, the system was able to predict which providers were the riskiest, based on their associations.
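The article gives no implementation details, and ORNL's production analysis ran on the Urika appliance at far larger scale, but a toy Python sketch (data and scoring rule invented for illustration) shows what risk-by-association on a relationship graph can look like:

    import networkx as nx

    # Toy provider-patient relationship graph: an edge means the provider
    # billed that patient. Names and scores are invented for illustration.
    G = nx.Graph()
    G.add_edges_from([
        ("prov_A", "pat_1"), ("prov_A", "pat_2"), ("prov_A", "pat_3"),
        ("prov_B", "pat_1"), ("prov_B", "pat_2"), ("prov_B", "pat_3"),
        ("prov_C", "pat_4"),
    ])
    known_fraudulent = {"prov_A"}
    providers = {n for n in G if n.startswith("prov_")}

    def risk(provider):
        """Crude guilt-by-association: patients shared with known-bad providers."""
        patients = set(G.neighbors(provider))
        return sum(len(patients & set(G.neighbors(bad)))
                   for bad in known_fraudulent if bad != provider)

    for p in sorted(providers - known_fraudulent, key=risk, reverse=True):
        print(p, risk(p))   # prov_B shares all three of prov_A's patients; prov_C none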

Finding ways to extract meaningful information from patient data is a critical step toward making health care effective, efficient, affordable and sustainable. The ORNL project shows how health care data can be sliced and diced in unusual ways for pattern discovery and predictive modeling, highlighting areas that work well and those that require attention. ORNL’s Health Data Sciences Institute (HDSI), the group behind the effort, anticipates that these methods and tools will be beneficial for other partners as well, in fields ranging from genomics to electronic health records to health-sensor data.

Product Design Gets Supercomputing Treatment (August 27, 2014)
http://www.hpcwire.com/2014/08/27/product-design-gets-supercomputing-treatment/

One doesn’t normally associate their favorite shampoo or laundry detergent with science, let alone multi-million dollar supercomputers, but today many well-known consumer goods are the products of extensive R&D. By using large-scale computational modeling to facilitate advanced product design, manufacturers can improve customer satisfaction and minimize costly design flaws.

A recent feature article on the Oak Ridge National Laboratory website recounts how the lab’s supercomputing resources enabled Procter & Gamble, the consumer products giant behind such brands as Downy, Head & Shoulders, Olay and Crest, to understand the molecular interactions that control the flow, thickness, performance and stability of P&G products.


Oak Ridge science writer Dawn Levy shares how Procter & Gamble and research partners at Temple University leveraged the Oak Ridge systems Jaguar and Titan to perform challenging molecular dynamics simulations. The research team was specifically working to understand the interplay of fat-soluble molecules called lipids and of the lipid vesicles formed from lipid bilayers. Many products for the body and for laundry are composed of these types of molecules, which directly affect a product's performance and shelf life.

“For Procter & Gamble, it is crucial to understand vesicle fusion if you want to extend the shelf lives of such products as fabric softeners, body washes, shampoos, lotions, and the like,” explained Temple’s Michael Klein, a National Academy of Sciences member who has collaborated with P&G for 15 years. “Vesicle fusion is a very hard science problem.”

The addition of perfumes and dyes can also affect stability, making the problem even more complex. Simulating the reorganization of lipid systems over time is thus a very challenging computational problem, surpassing the capabilities of P&G’s in-house machines. To perform these compute-intensive simulations, they turned to what was at the time the fastest supercomputer in the world, Jaguar, which has since been upgraded and re-launched as Titan.

The P&G/Temple University team was awarded time on Jaguar through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which is jointly managed by the U.S. Department of Energy’s (DOE’s) Leadership Computing Facilities at Argonne and Oak Ridge national laboratories. The researchers accessed 69 million core hours on Jaguar over two years, enabling them to perform simulations of large, complex systems of lipid assemblies.

With Jaguar's powerful capabilities, and later the GPU-equipped Titan prototype called TitanDev, the team was able to carry out vesicle fusion simulations that had previously not been possible. As Levy concludes, the research highlights the importance of leadership computing facilities for solving “unsolvable” problems and providing a major competitive advantage.

Titan Captures Liquid-Crystal Film Complexity (April 14, 2014)
http://www.hpcwire.com/2014/04/14/titan-captures-liquid-crystal-film-complexity/

Liquid-crystal displays (familiar to most as LCDs) rely on the light-modulating properties of liquid crystals to bring images to life on a wide variety of screens. From computer monitors to televisions to instrument panels and signage, LCDs are a pervasive element of modern life.

LCDs employ high-tech films, which must be both thin and robust. The problem is that these films degrade over time as liquid-crystal “mesogens,” which make up the films, redistribute to areas of lower energy in a process called dewetting. Eventually the film ruptures.

Recently a team of scientists at Oak Ridge National Laboratory put the lab’s Titan supercomputer – packed with 18,688 CPUs and an equal number of GPUs – to work to better understand the mechanics of this process, as reported on the OLCF website.

Some of the important uses of high-tech films include protecting pills from dissolving too early, keeping metals from corroding, and reducing friction on hard drives. When the films are manufactured using liquid crystals – macromolecules with both rigid and flexible elements – the innovation potential goes through the roof.

The rigid segments support interaction with electric currents, magnetic fields, ambient light, temperature, and more. This has led to the material's wide prevalence in 21st-century flat-panel displays. Researchers are actively looking to expand the use of liquid-crystal thin films for nanoscale coatings, optical and photovoltaic devices, biosensors, and other innovative applications, but the tendency toward rupturing has stymied progress. By studying the dewetting process more closely, scientists are paving the way for a better generation of films.

For several decades, the prevailing theory held that one of two mutually exclusive mechanisms could account for dewetting. Then, about 10 years ago, experiments showed that the two mechanisms do coexist in many cases, as postdoctoral fellow Trung Nguyen of Oak Ridge National Laboratory (ORNL) explains. Nguyen, who was co-principal investigator on the project with W. Michael Brown (then at ORNL, now at Intel), ran large-scale molecular dynamics simulations on ORNL's Titan supercomputer detailing the beginning stages of ruptures forming in thin films on a solid substrate. The work appears as the cover story in the March 21, 2014, print edition of Nanoscale, a journal of the Royal Society of Chemistry.

“This study examined a somewhat controversial argument about the mechanism of the dewetting in the thin films,” stated Nguyen.

The two mechanisms thought to be responsible for dewetting are thermal nucleation, a heat-mediated cause, and spinodal dewetting, a fluctuation-driven cause. Theoretical models posited decades ago asserted that one or the other would be responsible for dewetting a thin film, depending on its initial thickness. The simulations confirmed that the two mechanisms do coexist, although one predominates depending on the thickness of the film, with thermal nucleation more prominent in thicker films and spinodal dewetting more common in thinner films.
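For readers who want the mechanism behind that thickness dependence, the textbook linear-stability argument (not spelled out in the article, and stated here under standard thin-film assumptions) goes as follows. A film of thickness h with surface tension \gamma and effective interface potential \Phi(h) is unstable to a surface ripple of wavenumber q only when

    \gamma q^{2} + \Phi''(h) < 0,

so spinodal dewetting requires \Phi''(h) < 0 and only amplifies wavelengths longer than \lambda_c = 2\pi \sqrt{\gamma / |\Phi''(h)|}. For a van der Waals film, \Phi(h) = -A/(12\pi h^{2}) gives |\Phi''(h)| = A/(2\pi h^{4}), hence \lambda_c \propto h^{2}: the spinodal instability weakens rapidly as films get thicker, leaving thermally nucleated holes as the dominant rupture route, in line with the simulation result.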

The impetus for the ruptures is the liquid-crystal molecules striving to reach lower-energy states. While the work is still at the research stage, the finding may boost innovation in using thin films for applications such as energy production, biochemical detection, and mechanical lubrication. The research was facilitated by a 2013 Titan Early Science program allocation of supercomputing time at the Oak Ridge Leadership Computing Facility. Nguyen's team went through ORNL's Center for Accelerated Application Readiness (CAAR) program, which gives early access to cutting-edge resources for codes that can take advantage of graphics processing units (GPUs) at scale. Under the CAAR program, Brown reworked the LAMMPS molecular dynamics code to leverage a large number of GPUs.

Titan, the most powerful US supercomputer and the world's second fastest, has a theoretical peak of 27 petaflops and a measured LINPACK performance of 17.59 petaflops. The Titan Cray XK7 system was also the first major supercomputing system to utilize a hybrid architecture, pairing conventional 16-core AMD Opteron CPUs with NVIDIA Tesla K20X GPUs.
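As a back-of-the-envelope check on those figures (assuming the K20X's nominal 1.31 teraflops of double precision per GPU and a 2.2 GHz clock with 4 flops per cycle per Opteron core, numbers not stated in the article):

    18{,}688 \times \bigl( 1.31\,\text{TF}_{\text{GPU}} + \underbrace{16 \times 2.2\,\text{GHz} \times 4}_{\approx\,0.14\,\text{TF}_{\text{CPU}}} \bigr) \approx 18{,}688 \times 1.45\,\text{TF} \approx 27\,\text{PF},

in agreement with the quoted theoretical peak.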

The researchers utilized Titan to simulate 26 million mesogens on a substrate micrometers in length and width, employing 18 million core hours and harnessing up to 4,900 of Titan’s nodes. The study lasted three months, but would have taken about two years without the acceleration of Titan’s GPUs.

“We’re using LAMMPS with GPU acceleration so that the speedup will be seven times relative to a comparable CPU-only architecture – for example, the Cray XE6. If someone wants to rerun the simulations without a GPU, they have to be seven times slower,” Nguyen explained. “The dewetting problems are excellent candidates to use Titan for because we need to use big systems to capture the complexity of the dewetting origin of liquid-crystal thin films, both microscopically and macroscopically.”
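That quoted factor cross-checks against the three-months-versus-two-years comparison above:

    3\ \text{months} \times 7 = 21\ \text{months} \approx 2\ \text{years}.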

This is the first study to simulate liquid-crystal thin films at experimental length- and timescales, and the first to relate the dewetting process to the molecular-level driving force that causes the film to break up.

The Nanoscale paper was also authored by postdoctoral fellow Jan-Michael Carrillo, who worked on the simulation model, and computational scientist Michael Matheson, who developed the software for the analysis and visualization work.

Ice-Repellant Materials One Step Closer (September 12, 2013)
http://www.hpcwire.com/2013/09/12/ice-repellant_materials_one_step_closer/

Scientists at GE Global Research are using the multi-petaflop Titan supercomputer at Oak Ridge National Laboratory to study the way that ice forms as water droplets come in contact with cold surfaces. They are working to develop “icephobic” materials that prevent ice formation and accumulation.

“We have observed that certain types of surfaces hinder ice formation, but the exact mechanism was unknown,” writes GE High Performance Computing Advocate Rick Arthur in a recent blog entry. “We use simulations as a means to gain insight into the conditions under which ice can be suppressed.”

Numerous industrial systems would benefit from such a technology: wind turbines and offshore oil and gas drilling and production rigs can withstand very cold climates, even rain and snow, but ice can be a game-stopper. The researchers were awarded 80 million CPU hours on Titan through the Department of Energy's ASCR Leadership Computing Challenge to advance this science.

The blog entry highlights the work of Dr. Masako Yamada, a scientist in GE's Advanced Computing Lab. Simulations help Dr. Yamada and her colleagues better understand ice resistance, with the effectiveness of candidate surfaces evaluated against four potential effects.

Modeling and simulation are crucial to help narrow down potential candidates, but as Dr. Yamada explains, the computational technique – molecular dynamics – is notoriously time-consuming.

“‘Molecular’ means we track the position of every single water molecule. ‘Dynamics’ means we calculate very short slices of time,” she says.
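Her two-sentence definition maps directly onto the structure of every molecular dynamics code. A minimal, self-contained sketch in Python (Lennard-Jones forces, velocity-Verlet integration, reduced units; no relation to GE's production code) shows where the cost comes from: every particle's position is updated at every step, and the step must stay tiny:

    import numpy as np

    # "Molecular": every particle position is tracked explicitly.
    # "Dynamics": dt must resolve the fastest vibration, so real MD
    # timesteps are ~1 fs and long trajectories need billions of steps.
    rng = np.random.default_rng(0)
    grid = np.mgrid[0:4, 0:4, 0:2].reshape(3, -1).T * 1.6   # 32 well-spaced sites
    pos = grid + 0.05 * rng.normal(size=grid.shape)
    vel = 0.2 * rng.normal(size=grid.shape)
    dt, steps = 0.005, 1000

    def forces(pos):
        rij = pos[:, None, :] - pos[None, :, :]     # all pair displacements
        r2 = np.sum(rij**2, axis=-1)
        np.fill_diagonal(r2, np.inf)                # exclude self-interaction
        inv6 = r2**-3.0
        fmag = 24.0 * (2.0 * inv6**2 - inv6) / r2   # (-dU/dr)/r for Lennard-Jones
        return np.sum(fmag[..., None] * rij, axis=1)

    f = forces(pos)
    for _ in range(steps):                          # velocity Verlet
        vel += 0.5 * dt * f                         # half kick
        pos += dt * vel                             # drift
        f = forces(pos)
        vel += 0.5 * dt * f                         # half kick
    print("mean kinetic energy per particle:",
          0.5 * np.mean(np.sum(vel**2, axis=1)))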

Only the most powerful supercomputers in the world, machines like Titan, can handle this kind of compute-intensive work. Retooling the application for GPUs was another big step: the team achieved a 5x speedup by converting their code to run on Titan's GPU accelerators.

“Even so,” says Yamada, “we can only model water droplets that are about 50 nanometers in size (far smaller than real world droplets) and we still cannot run our models to simulate as long a time period as we would like.”
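A back-of-the-envelope count, assuming bulk water density of roughly 33 molecules per cubic nanometer (a figure not in the article), shows why even a 50-nanometer droplet is a heavy lift:

    N \approx \tfrac{4}{3}\pi (25\,\text{nm})^{3} \times 33\,\text{nm}^{-3} \approx 6.5 \times 10^{4}\,\text{nm}^{3} \times 33\,\text{nm}^{-3} \approx 2 \times 10^{6}\ \text{molecules},

about two million molecules whose interactions must be recomputed at every femtosecond-scale timestep.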

The use of virtual models, as opposed to “real-life” experiments, allows for greater insight into the process:

“We can see exactly how the water molecules interact with the surfaces,” notes Yamada. “This is simply impossible using any physical test. In addition, in the virtual world, the results are not impacted by dirt, defects and other random sources of noise.”

Ultimately, the research will help establish a new class of materials. From safer aircraft engines to self-defrosting car windshields and even frustration-free ice cream scoops, the potential applications range as far as the imagination.

Ford Taps ORNL to Boost Vehicle Airflow, Fuel Efficiency (August 19, 2013)
http://www.hpcwire.com/2013/08/19/ford_taps_ornl_to_boost_vehicle_airflow_fuel_efficiency/

Anybody who drives one of Ford’s recent vehicles spends a little less money on gasoline thanks to HPC work the carmaker undertook with Oak Ridge National Laboratory, where more than one million processor hours were spent getting a handle on the complex fluid dynamics governing airflow under the hood.

Carmakers around the world are spending billions of dollars to comply with new U.S. government fuel efficiency mandates. Manufacturers are turning over every rock to find performance gains, from low-rolling-resistance tires to hybrid drivetrains.

One area of exploration that may slip by the public’s eye is the flow of air through the front grill of a car into the engine bay, which has a significant impact on the car’s fuel consumption and overall performance. However, understanding how to build for maximum cooling efficiency while simultaneously minimizing front-end drag is a very difficult task because each of the many components within the compartment can alter the airflow.

“Any change in the size and position of just one component can have a significant impact on the computational model as a whole,” said Burkhard Hupertz, the thermal and aerosystems computer-aided engineering (CAE) supervisor at Ford of Europe, in a recent story on the Oak Ridge Leadership Computing Facility website. “Making one more efficient could result in the loss of cooling or increased drag for another.”

In the past, carmakers would spend a large amount of time on trial and error to arrive at a suitable design. Several years ago, Ford decided to speed the design process and build a prototype model of airflow that could be applied to many cars across its lineup. However, this approach would require running thousands of simulations to find the optimal design parameters. This is what brought the team, headed by Hupertz and senior HPC technical specialist and lead investigator Alex Akkerman, to ORNL and the Jaguar supercomputer.

The first step in the process was porting Ford's computational fluid dynamics code, called Underhood 3D (UH3D), to Jaguar. After scaling UH3D to run on Jaguar (which has since been transformed into Titan), the Ford team used approximately 1 million processor hours to test 11 geometric and non-geometric parameters (such as cooling fan speed) against four different operating conditions, for a total of 1,600 simulation cases, according to the OLCF story.
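The article does not say how the 1,600 cases were constructed; one plausible reading is 400 sampled design points crossed with the four operating conditions. A Python sketch of that kind of case generation (condition names hypothetical, not Ford's):

    from scipy.stats import qmc

    # One plausible construction of the 1,600 cases (the article does not
    # give the breakdown): 400 space-filling design points x 4 conditions.
    N_DESIGNS, N_PARAMS = 400, 11
    CONDITIONS = ["idle", "city", "highway", "max_load"]   # hypothetical names

    # Latin hypercube sample over the 11 design parameters, scaled to [0, 1).
    designs = qmc.LatinHypercube(d=N_PARAMS, seed=42).random(n=N_DESIGNS)
    cases = [(design, cond) for design in designs for cond in CONDITIONS]
    print(len(cases))   # -> 1600 simulation cases to queue on the cluster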

The work on Jaguar has enabled Ford to find a design that strikes a happy medium between maximizing cooling airflow and minimizing front-end drag. “Access to Jaguar enabled us to develop a new methodology that allowed Ford, for the first time, to conduct engine bay analysis with the required number of design variables and operating conditions for a true design optimization,” Akkerman told the OLCF.

The results are evident in the vehicles that Ford has put on the road over the last few years. While Ford has come under fire recently for overstating the mileage of some of its vehicles, specifically the C-MAX Hybrid, for which Ford this month lowered mileage estimates, the carmaker has delivered notable fuel efficiency gains across the breadth of its lineup. At least some of those gains can be attributed to the research done on Jaguar.

Vampir Rises to the Occasion at ORNL (July 31, 2013)
http://www.hpcwire.com/2013/07/31/vampir_rises_to_the_occasion_at_ornl/

Researchers are licking their chops at the potential to speed the execution of parallel applications on the largest supercomputers using Vampir, a performance tool that traces events and identifies problems in HPC applications. The scalability breakthrough with Vampir came as the result of work done on Jaguar, the predecessor to Titan at Oak Ridge National Laboratory.

Vampir (Visualization and Analysis of MPI Resources) was developed at Technische Universität Dresden to help troubleshoot problems that develop in parallel HPC applications. The tool, which now supports OpenMP, Pthreads, and CUDA in addition to MPI, is especially useful in flushing out the myriad bugs and other problems that appear when researchers begin running their code on larger parallel clusters.

The potential to smooth the scale-up process is especially important because researchers do not start out running their parallel codes on massive machines. Instead, they start on departmental clusters or small partitions of bigger machines, perhaps 100 processors at a time. Any number of problems can appear as researchers move HPC applications to larger machines, including the overuse of barriers and I/O chokepoints, and the scale-up process is rarely linear or smooth.

This graph demonstrates how Vampir can increase bandwidth performance and maximum job size.

There were several issues with scaling Vampir itself that the researchers had to overcome, according to a story published recently on the Oak Ridge Leadership Computing Facility (OLCF) website. One hurdle involved how Vampir uses memory: the tool reserves a small portion of memory on each node, which it uses to log events as they occur in the HPC application.

However, if there isn't enough memory available to capture all the events, either because the application runs for a long time or because Vampir is set to collect events at a very fine level of detail, the program slows down as huge amounts of data are written to the file system. The team addressed this problem by modifying the flush procedure to happen quickly and without disruption, the OLCF story notes.
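Vampir's internals aren't described beyond this, but the buffering trade-off itself is easy to illustrate. A toy tracer in Python (no relation to Vampir's actual code) records timestamped events into a fixed buffer and pays a file-system cost whenever the buffer fills, exactly the behavior that hurts long runs or very fine event detail:

    import time

    class ToyTracer:
        """Per-process event log with a fixed-size in-memory buffer.
        When the buffer fills, events are flushed to disk, the costly
        step that stalls long or finely detailed traces."""

        def __init__(self, path, capacity=4096):
            self.buf, self.capacity, self.path = [], capacity, path

        def record(self, name):
            self.buf.append((time.perf_counter(), name))
            if len(self.buf) >= self.capacity:   # buffer full: must flush
                self.flush()

        def flush(self):
            with open(self.path, "a") as f:
                f.writelines(f"{t:.9f} {name}\n" for t, name in self.buf)
            self.buf.clear()

    tracer = ToyTracer("trace.log", capacity=8)
    for i in range(20):
        tracer.record("enter:compute")
        sum(range(1000))                 # stand-in for real work
        tracer.record("exit:compute")
    tracer.flush()                       # drain whatever remains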

The team successfully ran Vampir at scale on all 220,000 CPU cores of Jaguar in 2012. That was before Jaguar morphed into Titan, which sports nearly 300,000 CPU cores (and more than 18,000 GPUs) and is currently the second-fastest supercomputer in the world. Prior to this, Vampir had only proven itself on a machine with 86,400 cores, according to the OLCF story.

Terry Jones of Oak Ridge National Laboratory and Joseph Schuchart of Technische Universität Dresden were part of a team that readied the Vampir performance tool to work on extreme-scale supercomputers.

“Understanding code behavior at this new scale with Vampir is huge,” ORNL computer scientist Terry Jones told the OLCF. “For people that are trying to build up a fast leadership-class program, we’ve given them a very powerful new tool to trace events at full size, because things happen at larger scale that just don’t happen at smaller scale.”

Kraken Chews on Gribble Data for Industrial Enzyme Research (June 25, 2013)
http://www.hpcwire.com/2013/06/25/kraken_chews_on_gribble_data_for_industrial_enzyme_research/

Several years ago, a diminutive marine crustacean called the Gribble landed on the biofuel industry’s radar for its unique ability to digest wood in salty conditions. Now, researchers in the US and the UK are putting the University of Tennessee’s Kraken supercomputer to work modeling an enzyme in the Gribble’s gut, which could unlock the key to developing better industrial enzymes in the future.

Marine biologists in the UK made an important discovery about the Gribble in 2010: the wood-boring critters have so-called “family-7” enzymes living in their gut. Family-7 enzymes are usually found only in fungi, which have traditionally been the main sources of the enzymes that biofuel researchers are interested in.

Armed with this information, a group of researchers from the University of Portsmouth in the UK, the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL), and the University of Kentucky set out to better understand the Gribble and its enzymes.

The UK researchers isolated one of the family-7 enzymes in the Gribble, called Cel7B, and solved its structure with X-ray diffraction, providing a good static view of the entity. Meanwhile, NREL enlisted UT's Kraken supercomputer to perform molecular dynamics (MD) simulations on Cel7B, which provided a detailed view of the enzyme's activity.

Kraken is a Cray XT5 supercomputer housed at Oak Ridge National Laboratory and operated by UT's National Institute for Computational Sciences (NICS). In 2009, Kraken became the world's first academic supercomputer to enter the petascale range, meaning it performed more than one thousand trillion operations per second.

At the time, Kraken was only the fourth supercomputer of any kind to break the petascale barrier. The 112,800-core Opteron-based system debuted on the Top 500 list of the world’s biggest supercomputers in June 2011 at number 11. It has not run the LINPACK test again, and slipped to number 30 on the June 2013 edition of the list. The 9,400-node cluster continues to help scientists in the fields of astronomy, chemistry, and meteorology.

The MD simulations on Kraken have already led to several potentially valuable discoveries about the Gribble's enzyme, according to NREL's Gregg Beckham. For example, the researchers found “that the charge on the enzyme’s surface was immense,” Beckham tells NICS.

High negative surface charge is typically correlated with salt tolerance. Indeed, the researchers found that Cel7B remained active in water up to six times saltier than ocean water. This is potentially valuable because it means Cel7B may be hardy in high-solids, industrial environments. Enzymes with high-solids tolerance have the potential to save industrial biofuel operations money because they require a smaller reactor and less water, Beckham says.

The work with Kraken, which is funded by the National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE), is ongoing. Up next: comparing Cel7B with other family-7 enzymes, with the goal of better understanding this class of enzymes and potentially modifying them for industrial use.

Titan Didn't Redo LINPACK for June Top 500 List (June 13, 2013)
http://www.hpcwire.com/2013/06/13/titan_didnt_redo_linpack_for_june_top_500_list/

Titan, the Cray XK7 at the Oak Ridge National Lab that debuted last fall as the fastest supercomputer in the world with 17.59 petaflops of sustained computing power, will rely on its previous LINPACK test for the upcoming edition of the Top 500 list.

The 560,000-core Titan had little chance of retaining the number one spot on the June 2013 Top 500 list, which will be unveiled at the International Supercomputing Conference next week in Germany. China’s massive Tianhe-2, with 3.1 million cores and a reported 31 petaflops of sustained capacity, will take the top spot on the list, barring any unforeseen events.

While Titan likely could have improved its performance on the LINPACK test that determines a system's place on the Top 500 list, it was apparently not worth the effort.

“To be honest, we decided at this point not to waste any more time,” Jeff Nichols, Oak Ridge’s scientific computing chief, told blogger Frank Munger. “We know that the Chinese machine is going to blow us out of the water.”

Titan could have made an incremental improvement on the LINPACK test, perhaps stretching its result to 19 or 20 petaflops. The machine, which uses a combination of Opteron CPUs and NVIDIA GPUs, has a theoretical peak of 27 petaflops. Tianhe-2, which consumes 24 MW of power (compared to Titan's 8.2 MW), has a theoretical peak of 49 petaflops.
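Taken at face value, the numbers in this article put the two machines at roughly similar LINPACK efficiency, so skipping the rerun cost Titan little in relative terms:

    \text{Titan:}\ \frac{17.59}{27} \approx 65\%, \qquad \text{Tianhe-2 (reported):}\ \frac{31}{49} \approx 63\%.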

Instead of taking a week to rerun LINPACK, Nichols and others at Oak Ridge decided it was better to use Titan to do the scientific work that it was built to perform, rather than compete for ranking.

Titan burst onto the scene last November, when it pushed Sequoia, the IBM BlueGene/Q supercomputer installed at Lawrence Livermore National Lab, out of the top spot. Sequoia, which debuted on the list in November 2011 at 677 teraflops with only 65,000 Power cores running, moved up to the number one spot in June 2012, when it had its full complement of 1.57 million cores pushing 16.3 petaflops.

While not every top HPC system reruns LINPACK every six months, some top supercomputers have opted out of the Top 500 rankings altogether. Blue Waters, the Opteron- and NVIDIA-based Cray supercomputer that recently went online at the University of Illinois at Urbana-Champaign, will not run LINPACK or participate in the Top 500 list. It has 11.5 petaflops of capacity, but the administrators in charge of Blue Waters prefer to do actual research instead of taking the time to tune the system for a favorable LINPACK result.