HPCwire » Short Takes
http://www.hpcwire.com
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Market Responds to Intel-Altera Negotiations
March 30, 2015 | http://www.hpcwire.com/2015/03/30/market-responds-to-intel-altera-negotiations/

If Intel Corp. goes ahead with purported plans to purchase programmable logic peddler Altera Corp., it would be the largest-ever acquisition in the x86 chipmaker’s 46-year history. Whether or not the deal actually goes through, the negotiations, first reported by the Wall Street Journal on March 27, are interesting for several reasons.

As NASDAQ.com pointed out, word of these “advanced talks” pushed Intel stock up 6.4 percent. This is rare for an acquisition of this magnitude, which usually results in a minor sell-off as investors fret over the uncertainty involved and the prospect of all that debt.

Altera, known in the HPC space for its Stratix series FPGAs and system-on-chip FPGA devices, experienced an even stronger stock surge in the wake of buyout speculation, pushing its market value from $10.4 billion to $13.4 billion by the market’s close on Friday. As of today (Monday), both companies’ stocks retreated slightly, but are still up by about 5 percent (Intel) and 25 percent (Altera). As of its last reporting, Intel had $14.1 billion cash on hand, making this a tight-but-not-impossible purchase.

The companies already have a partnership going back to February 2013, when Intel began providing foundry services for the FPGA giant. No longer having to pay for overhead on its chip manufacturing would be a boon for Altera’s gross margin. There is also synergy to be tapped in combining Xeon processors with Altera coprocessors, giving Intel a competitive play against both the OpenPower consortium and the Heterogeneous System Architecture (HSA) Foundation.

Recall that Intel has been working on a Xeon-FPGA chip for about a year now, as announced by Intel’s Diane Bryant at Gigaom Structure 2014. The integrated design aims to provide customers with “a programmable, high performance coherent acceleration capability to turbo-charge their algorithms.” For the record, Intel didn’t say which of the two big FPGA companies (Altera or Xilinx) it was partnering with for the hybrid devices, but we’re betting it’s Altera.

The deal is largely seen by market-watchers as something Intel needs to do to boost its datacenter business in response to a tepid mobile market and a lagging PC biz. With Altera set to bring in about $2 billion in revenue this year, just shy of 4 percent of Intel’s projected revenue, the profit payoff would not happen overnight, but by strengthening its datacenter prowess, Intel fortifies its highest-earning division. The datacenter group is already responsible for about 26 percent of Intel’s business and is on track to grow to 32 percent in 2016. According to NASDAQ, bringing Altera into the fold would increase that revenue pool by another 13.4 percent, goosing earnings by 7.44 percent.
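
As a rough sanity check, the figures quoted above can be strung together as in the sketch below. All inputs are the article’s own approximations, so the result only loosely tracks NASDAQ’s 13.4 percent estimate.

```python
# Back-of-the-envelope check using only the approximate figures cited above.
altera_revenue = 2.0           # $B, Altera's expected revenue this year
altera_vs_intel = 0.038        # "just shy of 4 percent" of Intel's projected revenue
datacenter_share = 0.26        # datacenter group's current share of Intel's business

intel_revenue = altera_revenue / altera_vs_intel        # implied Intel revenue, ~$53B
datacenter_revenue = intel_revenue * datacenter_share   # implied datacenter pool, ~$14B
boost = altera_revenue / datacenter_revenue             # Altera's addition to that pool

print(f"Implied Intel revenue:      ${intel_revenue:.0f}B")
print(f"Implied datacenter revenue: ${datacenter_revenue:.1f}B")
print(f"Altera boost to the pool:   {boost:.1%}")
# With these rounded inputs the boost lands in the mid-teens, the same ballpark
# as NASDAQ's 13.4 percent figure.
```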

DARPA Seeks New Computing Paradigms
March 26, 2015 | http://www.hpcwire.com/2015/03/26/darpa-seeks-new-computing-paradigms/

May you live in interesting times, cautions the age-old proverb. As computing chips face the fundamental limitations of miniaturization, it is sure to be interesting times, indeed. One of the most pressing issues facing the scientific community is the inability of today’s best computers to process the large-scale simulations needed for understanding complex physical systems.

“Over the past half century, as supercomputers got faster and more powerful, such simulations became ever more accurate and useful,” states the Defense Advanced Research Projects Agency (DARPA). “But in recent years even the best computer architectures haven’t been able to keep up with demand for the kind of simulation processing power needed to handle exceedingly complex design optimization and related problems.”

To remedy this situation, DARPA is seeking ideas on how to speed up the computation of the complex mathematics that undergirds scientific computing. Specifically, the agency is looking for assistance with a class of equations known as partial differential equations. These equations, which describe fundamental physical principles of motion, diffusion and equilibrium, involve continuous rates of change over a large range of physical parameters, and they are not easily broken into discrete parts to be solved by individual CPUs.
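
For illustration only (the RFI does not single out any particular equation), the diffusion, or heat, equation is a textbook member of this class; it ties the continuous rate of change of a quantity in time to how that quantity varies in space:

\[ \frac{\partial u(\mathbf{x},t)}{\partial t} = \alpha\, \nabla^{2} u(\mathbf{x},t) \]

Here u might be a temperature or concentration field and α the diffusivity. Solving it digitally means discretizing space and time into a fine grid, which is precisely the step that becomes punishingly expensive at the scales DARPA describes.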

“The standard computer cluster equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem, is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office.

“A processor specially designed for such equations may enable revolutionary new simulation capabilities for design, prediction, and discovery. But what might that processor look like?” asks the DARPA invitation.

Before the digital era, equations were solved analog-style by manipulating continuously changing values instead of discrete measurements. The analog computer goes back more than 100 years but was displaced when transistor-based digital computers rose to prominence in the 1950s and 1960s based on their ability to solve a wide range of problems.

DARPA suggests that the time is right for taking another look at using analog substrates for the efficient simulation of “systems governed by complex, simultaneous, locally interacting, and non-linear phenomena,” especially given the advances that have been made in microelectromechanical systems, optical engineering, microfluidics, metamaterials and even DNA computing. If the performance advantage is significant enough, the analog coprocessor could be the next big thing in heterogeneous computing.

The RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance – analog, digital, or hybrid approaches are all welcome.

From the announcement:

The RFI invites short responses that address the following needs, either singly or in combination:

Scalable, controllable, and measurable processes that can be physically instantiated in co-processors for acceleration of computational tasks frequently encountered in scientific simulation.

Algorithms that use analog, non-linear, non-serial, or continuous-variable computational primitives to reduce the time, space, and communicative complexity relative to von Neumann/CPU/GPU processing architectures.

Technology development beyond these areas will be considered so long as it supports the RFI’s goals.

DARPA is particularly interested in engaging nontraditional contributors to help develop leap-ahead technologies in the focus areas above, as well as other technologies that could potentially improve the computational tractability of complex nonlinear systems.

DARPA’s Request for Information (RFI) – titled Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS) – is available at: http://go.usa.gov/3CV43. Responses are due by 4:00 p.m. Eastern on April 14, 2015.

India Greenlights $730 Million Supercomputing Grid
March 26, 2015 | http://www.hpcwire.com/2015/03/26/india-greenlights-730-million-supercomputing-grid/

The Indian government has approved a seven-year supercomputing program worth $730 million (Rs. 4,500-crore) intended to restore the nation’s status as a world-class computing power.

The prime mandate of the National Supercomputing Mission, first revealed last October, is the construction of a vast supercomputing grid connecting academic and R&D institutions and select departments and ministries. The National Supercomputing Grid will comprise more than 70 geographically distributed high-performance computing centers linked over a high-speed network, the National Knowledge Network (NKN).

According to an official press statement from India’s Cabinet Committee on Economic Affairs, the mission involves both capacity and capability machines. Earlier reports stated that the first order of business would be raising India’s supercomputing ranking by standing up three petascale supercomputers, some 40 times faster than the country’s current fastest system.

Once the title-holder to the world’s fourth-fastest supercomputer (“Eka”) in 2007, India has not kept up its supercomputing investment. Its current top system, a 719-teraflops IBM/Lenovo iDataPlex installed at the Indian Institute of Tropical Meteorology, has slid from 36th to 71st position since making its TOP500 debut in 2013. And the nation’s second-fastest number-cruncher, the 388-teraflops PARAM Yuva II, has gone from 69 to 131 in the same timeframe.

The nation’s first petascale systems would “boost high-performance computing for India several fold,” according to K. VijayRaghavan, secretary of the Department of Science and Technology. The large-scale cyberinfrastructure will support applications of national relevance, including grand challenge problems, advanced research and development, and home-grown Indian technologies.

“The Mission implementation would bring supercomputing within the reach of the large Scientific & Technology community in the country,” remarked the Cabinet Committee on Economic Affairs. “Currently, in the top Supercomputing machines in the world, a major share is taken from advanced countries such as the US, Japan, China and the European Union (EU). The mission envisages India to be in the select league of such nations. To provide continuity in maintaining a lead in supercomputing, the Mission also includes advanced R&D. This will create requisite expertise to build state-of-the-art next generation supercomputing. The Mission supports the government’s vision of “Digital India” and “Make in India” initiatives.”

The program will be jointly managed by the Department of Science and Technology and Department of Electronics and Information Technology and implemented through two of India’s primary science organizations: the Centre for Development of Advanced Computing (C-DAC) and the Indian Institute of Science (IISc), Bangalore.

Weekly Twitter Roundup
March 26, 2015 | http://www.hpcwire.com/2015/03/26/weekly-twitter-roundup-36/

Here at HPCwire, we want to help keep the HPC community as up-to-date as possible on some of the most captivating news items that were tweeted throughout the week. The tweets that caught our eye this past week are listed below. Check back next Thursday for an entirely updated list.

Making the Case for HPC Just Got Easier
March 24, 2015 | http://www.hpcwire.com/2015/03/24/making-the-case-for-hpc-just-got-easier/

A new study highlights the importance of locally available supercomputers to university research. Authored by a cross-disciplinary team of experts from Clemson University, the report provides compelling evidence connecting TOP500-level computing power with technical efficiency of research output.

HPC users are well aware of the advantages that leadership computing confers in terms of competitiveness, innovation and other societal benefits, but policy makers require repeated assurances that the large expenditures are justified. The aim of the Clemson study is to put hard numbers behind assumptions of HPC’s merits. To do this, the authors developed a quantitative economic model that compares HPC “haves” and “have-nots” across a range of disciplines.

Per the study design, 212 institutions were classified into “haves” — those with a TOP500 supercomputer — and “have-nots” — those without TOP500-level power.

To evaluate technical efficiency, the study relied on input variables — listed as the total number of faculty members and incoming graduate students’ average GRE scores — and output variables — the total number of publications for the academic year and the number of Ph.D. degrees awarded.
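
The article does not spell out the exact efficiency model (frontier methods such as data envelopment analysis are the usual choice for this kind of input-output comparison), but a toy calculation with invented numbers conveys the basic idea of a technical-efficiency ratio:

```python
# Toy illustration of technical efficiency as outputs produced per unit of input.
# Departments, weights, and numbers are hypothetical, not the study's data or model.
departments = {
    #                faculty, avg GRE, publications, PhDs awarded
    "chem_have":      (40, 320, 180, 22),
    "chem_have_not":  (40, 320,  95, 14),
}

def efficiency(faculty, gre, pubs, phds):
    """Crude ratio of weighted outputs to weighted inputs."""
    inputs = faculty + gre / 10.0      # arbitrary weighting for the sketch
    outputs = pubs + 5.0 * phds
    return outputs / inputs

for name, record in departments.items():
    print(f"{name}: efficiency = {efficiency(*record):.2f}")
```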

The research team, led by Amy Apon, who chairs the Computer Science Division in Clemson’s School of Computing, found the biggest effect of having a world-class supercomputer in the fields of chemistry, civil engineering, physics and evolutionary biology. Chemistry “haves” were twice as efficient as “have-nots,” and in civil engineering, a TOP500 supercomputer provided a 35 percent efficiency edge. Physics and evolutionary biology were also positively affected, but to a lesser degree.

The supercomputing effect did not extend to all domains, however; research output was not enhanced for computer science, economics or English — and for biology, results were mixed, according to the Clemson colleagues.

“For the nation, it is unequivocal that a high-performance computing system will provide an advantage in doing research in several fields,” stated Apon. “It’s not uniform across all fields. But for fields where it matters, it matters a lot.”

Paul W. Wilson, the lead economist on the study, sees it as a potential tool for helping policy makers with investment decisions relating to science and innovation as well as cyber-infrastructure.

“While many would agree that high-performance computing has a positive effect on research output, the connection has been assumed and qualitative until now,” he said. “This is a critical first step in creating a model for evaluating investments in high-performance computing.”

Jim Bottum, Clemson’s chief information officer and vice provost for computing and information technology, is encouraged by the results. Clemson is one of the supercomputing “haves” owing to its Palmetto Cluster, ranked as the sixth fastest public university supercomputer in the US on last November’s TOP500.

“Our research results provide a critical first step in a quantitative economic model for investments in HPC,” the authors wrote in the journal article describing their research, published in a special issue of Empirical Economics.

Weekly Twitter Roundup
March 19, 2015 | http://www.hpcwire.com/2015/03/19/weekly-twitter-roundup-35/

Here at HPCwire, we want to help keep the HPC community as up-to-date as possible on some of the most captivating news items that were tweeted throughout the week. The tweets that caught our eye this past week are listed below. Check back next Thursday for an entirely updated list.

Weekly Twitter Roundup
March 12, 2015 | http://www.hpcwire.com/2015/03/12/weekly-twitter-roundup-34/

Here at HPCwire, we want to help keep the HPC community as up-to-date as possible on some of the most captivating news items that were tweeted throughout the week. The tweets that caught our eye this past week are listed below. Check back next Thursday for an entirely updated list.

Hybrid Test Clusters Forge Path to ‘Summit’
March 12, 2015 | http://www.hpcwire.com/2015/03/12/hybrid-test-clusters-forge-path-to-summit/

The Department of Energy (DOE) pre-exascale supercomputer Summit is not scheduled to go live until early 2018, yet support staff at the Oak Ridge Leadership Computing Facility (OLCF) have been preparing for Summit’s arrival since the contract was announced last November. The degree of planning is only natural considering the expense and resources involved in standing up one of the first machines in its class with an expected 150-300 petaflops of performance.

To prepare for Summit, OLCF staff – including the OLCF Scientific Computing (SciComp), Technology Integration (TechInt), and High-Performance Computing Operations (HPC Ops) groups – constructed a test bed early last year composed of two clusters, Pike and Crest, each designed to represent elements of Summit’s hybrid CPU–GPU computing architecture. By probing the workings of Pike and Crest, staff and vendors have the opportunity to identify and fix problems preemptively, ensuring that the transition to Summit goes as smoothly as possible.

The clusters both employ IBM Power8 processors, the predecessor to the Power9 CPUs that will power Summit, but are otherwise distinct so that different aspects of Summit can be assessed.

Crest is a compute test bed composed of four nodes, each with the aforementioned Power chips and four GPUs, presumably the most current Tesla parts, since Summit will be built with future NVIDIA Volta GPUs. Crest will be used for scaling up scientific codes and testing early versions of software.

“We’re checking out compilers and building and running codes; that’s a good outcome of this,” said HPC Ops system administrator Don Maxwell, the team lead for Crest. “We will also begin using Crest to test new software that IBM is developing for Summit to ensure it meets our requirements.”

The other test system, Pike, has 14 Power nodes attached to non-volatile memory storage. It was designed to help OLCF become familiar with Summit’s high-speed data storage system. OLCF has primarily relied on Lustre file systems, but Summit will use IBM’s Elastic Storage System (ESS), which is based on IBM’s General Parallel File System technology. By running benchmark jobs on Pike, OLCF staff will have the opportunity to study attributes such as metadata performance, block I/O, random/sequential performance, and data management.
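
The article doesn’t name the benchmark codes being run (purpose-built tools such as IOR and mdtest are the usual choices for characterizing parallel file systems), but as a minimal sketch of what measuring one of the attributes listed above, sequential write bandwidth, looks like, something along these lines captures the basic idea; the target path is hypothetical:

```python
# Minimal sequential-write bandwidth probe -- an illustration of one attribute
# mentioned above, not OLCF's actual benchmark suite.
import os
import time

TARGET = "/tmp/iotest.dat"   # hypothetical path; a real test would target the ESS mount
BLOCK = 4 * 1024 * 1024      # 4 MiB per write
COUNT = 256                  # 1 GiB total

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(TARGET, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # force data to storage so the timing is honest
elapsed = time.perf_counter() - start

print(f"Sequential write: {BLOCK * COUNT / elapsed / 1e6:.1f} MB/s")
os.remove(TARGET)
```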

While Crest and Pike are the first test systems to enable early exploration of Summit’s proposed computing architecture, they are by no means the last. The next planned test unit will incorporate NVLink, Summit’s node integration interconnect, which NVIDIA will debut with its Pascal and Volta GPUs.

Intel’s James Reinders on Parallel Programming and MIC
March 11, 2015 | http://www.hpcwire.com/2015/03/11/intels-james-reinders-on-parallel-programming-and-mic/

On January 30, 2015 at the Colfax International headquarters in Sunnyvale, Calif., Intel’s parallel computing savant James Reinders sat down with Vadim Karpusenko, principal HPC research engineer at Colfax International, for an enlightening discussion on the future of parallel programming and Intel MIC architecture products. As Director and Chief Evangelist at Intel Corporation, Reinders is responsible for communicating Intel’s message of how to get the best performance out of its hardware.

“At Intel we build great products with a lot of capabilities, but the challenge is how do you explain how to use it, how do you get standards that support it, tools that support it, and how do you get software developers trained in it,” says Reinders of his ambassador-like role.

The dynamics that led to today’s manycore era can be traced back to 2005. Traditional approaches to boosting CPU performance, like driving up clock speeds, hit a wall, and chipmakers made up for the lost performance gains by moving to hyperthreading and multicore architectures. But the hardware changes wouldn’t be fruitful without software that could leverage the additional cores. This necessitated a rethinking of algorithms and approaches, says Reinders.
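
As a small, generic illustration of that rethinking (plain Python here, not Intel’s tools or anything Xeon Phi specific), the same reduction can be written as a serial loop or restructured so that independent chunks run on separate cores:

```python
# A serial sum versus the same work divided across cores -- a generic sketch of
# the algorithmic rethinking described above, not code targeting any Intel product.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial(n):
    return sum(i * i for i in range(n))

def parallel(n, workers=4):
    # Split [0, n) into contiguous chunks, one per worker process.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 10_000_000
    assert serial(n) == parallel(n)
    print("serial and parallel results agree")
```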

In this 50-minute video, Reinders reviews the path of Intel’s MIC (Many Integrated Core) architecture from the first-generation Xeon Phi (Knights Corner) to the imminent launch of Knights Landing (KNL) to the expected third-generation product, codenamed Knights Hill (KNH). It takes quite a few years for a design to go from concept to product, says Reinders, but he confirms that the work on Knights Landing is nearly complete and that a team is already working on the third-generation Xeon Phi, Knights Hill. He also provides some interesting details about the underlying process technology.

“We have a lot of innovations up our sleeve and the one that we’ve definitely confirmed is that [Knights Hill] will be on the next-generation process technology,” Reinders shares. “Knights Landing is exciting in coming to 14 nm for the first time for Xeon Phi. Knights Hill will be on the 10 nm process, which gives us more density, more performance, power, and capabilities. You’ll have to wait to see what we’ve done on the cores. But it’s a collection of x86 devices, Intel architecture, so we’ll carry forward the programming story that it has this high-level compatibility with standards and with Intel architecture.”

The point that Reinders really drives home is that the Phi chips were engineered to enable dramatic performance gains for highly parallel codes.

“The MIC architecture…is our approach when we architect the chip assuming you are going to run a parallel program on it,” he states. “That’s what really differentiates it from our other products. We’ve optimized it to run a parallel program as fast as possible and it’s absolutely terrible at running a non-parallel program. Whereas our regular processors – our Xeons, our Core processors and our Atom processors – they are designed to balance with the real world. They are designed to allow you to write parallel programs on them and get benefits but they are also designed to handle things like server workloads and multi-tasking workloads that you might find in a tablet or desktop or so forth.

“For the MIC architecture, we threw that out the window and said, what if we designed knowing that the programmer’s only going to throw a parallel program at us, that they are going to try to take advantage of all 61 cores on the current Xeon Phi, what if we designed for that, and it turns out we can put more cores on a device because we can get rid of some of the functionality that can take care of serial, non-parallel programs. And as an engineer, that excites me because you are designing for a different design point and that’s what MIC architecture is all about.

“At Intel, we took the approach of making it compatible, so it really is an SMP cluster on a chip of x86s, it really is Intel architecture on lots of cores, and the only thing we’ve given up is we didn’t design it to run serial workloads well; we designed it assuming you are going to do parallel workloads, so of course it’s a natural fit into technical computing and HPC, but you’re not going to see it on your cell phone or your tablet anytime soon because there just isn’t that level of parallelism being used there.”

Mira Supercomputer Propels High-Intensity Beam Science
March 6, 2015 | http://www.hpcwire.com/2015/03/06/mira-supercomputer-propels-high-intensity-beam-science/

As CERN’s Large Hadron Collider (LHC) prepares to restart this March, a team of Fermilab physicists is using powerful Department of Energy supercomputing resources to reduce the risks and costs associated with producing high-intensity particle beams.

Led by Fermilab physicist James Amundson, the team is working with the Argonne Leadership Computing Facility (ALCF) to port and optimize Synergia, the accelerator simulation software package developed at Fermilab, to run on ALCF’s Mira supercomputer. The hybrid Python code harnessed 100,000 cores on Mira, enabling researchers to simulate complex internal accelerator interactions. The team is especially interested in studying the effects that accelerator components exert on particles inside high-intensity, low-energy machines.
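
Synergia itself is a far more sophisticated package, but the general shape of a Python-driven simulation spread across many MPI ranks looks roughly like the sketch below; the particle counts, the trivial “drift” step, and the diagnostic are invented for illustration, and this is not Synergia’s API.

```python
# Generic mpi4py sketch of a particle code's main loop -- illustrative only.
# Run with e.g.: mpiexec -n 4 python particle_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

PARTICLES_PER_RANK = 100_000                             # hypothetical problem size
rng = np.random.default_rng(seed=rank)
positions = rng.normal(0.0, 1e-3, PARTICLES_PER_RANK)    # each rank owns its slice
momenta = rng.normal(0.0, 1e-6, PARTICLES_PER_RANK)

for step in range(100):
    positions += momenta                  # toy "drift"; real codes apply beamline maps
    local_sq = np.sum(positions ** 2)     # collective diagnostic: RMS beam size
    total_sq = comm.allreduce(local_sq, op=MPI.SUM)
    if rank == 0 and step % 20 == 0:
        rms = np.sqrt(total_sq / (PARTICLES_PER_RANK * size))
        print(f"step {step:3d}: rms beam size = {rms:.3e}")
```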

“The more particles you have, the more intensity you have, the more pronounced the effects become,” says Amundson. “So, since we want to move toward higher intensities, it’s important for us to understand these intensity-dependent effects which, of course, are the things that are computationally difficult to do.”

Thanks to the 10-petaflops Mira supercomputer, one of the world’s fastest, researchers identified an instability issue with the potential to jeopardize the operation of the beam.

This insight has important implications for Fermilab and for the next phase of CERN’s LHC program, which will be upping the intensity of experiments. It is also relevant for the upcoming high-intensity beam project called the Fermilab Proton Improvement Plan II, which is tasked with creating neutrino beams for the Long Baseline Neutrino Facility.