Blue Gene/Q – HPCwire
Since 1987 – Covering the Fastest Computers in the World and the People Who Run Them

Utah University Turns to HPC for Safer Explosives Transport
Thu, 08 Jan 2015

In 2005, a semi-truck caught the nation’s attention when it crashed and caught fire, igniting 35,000 pounds of explosives it was carrying through Utah’s Spanish Fork Canyon.
Photo: Utah Department of Transportation

Thanks to a brief delay between the truck’s crash and the subsequent explosion, there were no fatalities. But, as evidenced by a number of injuries and a 30-by-70-foot crater blasted out of the highway, the consequences of such accidents can be crippling.

To shed light on the mechanism that caused the chain reaction and help prevent future occurrences, Professor Martin Berzins and his research team from the University of Utah turned to the Argonne Leadership Computing Facility’s 10-petaflop IBM Blue Gene/Q system, Mira. Their research was the subject of a feature article by Jim Collins on the ALCF website.

In the case of the Utah highway incident, the 8,400 cylinders of explosives in transport should have burned away more slowly in an accidental fire, through a process called deflagration. Instead the cylinders detonated, combusting at supersonic speeds and generating a shockwave that blew out the windows of nearby cars.

The research project is using INCITE funding to recreate the detonation virtually. Getting the simulation to reach the desired state has proved particularly challenging due to the incorporation of multiple spatial and temporal scales, but the team’s perseverance has paid off.

“We set out to simulate one-eighth of the actual semi-truck with the explosives in their original packing configuration, but it was not an easy feat,” says Jacqueline Beckvermit, a PhD student at the University of Utah. “After two years of work and more than 100 million computing hours, we finally reached detonation this fall.”

Based on current simulations, the team has identified two possible scenarios that could have led to the explosion: one involving a high-pressure environment caused by trapped gases from the cylinders, and a similar high-pressure scenario caused by the impact of exploding cylinders.

Optimizing and scaling their Uintah Computational Framework to harness a large number of Mira’s cores was key to the group’s success and plans are in place to scale even higher in the future.

Berzins says their ultimate goal will be to enable strategies to prevent similar accidents from occurring in the future.

Hartree Centre Puts $45 Million Toward UK Innovation
Thu, 21 Feb 2013

With $45 million in government funding, the research center will develop software to make supercomputers more efficient and to help process data from the SKA, the world’s largest radio telescope. The technology is being developed with industry partners and will be made available to scientific and industrial organizations in the UK.

On February 1, 2013, the UK Chancellor of the Exchequer, Rt. Hon. George Osborne, visited the Science and Technology Facilities Council (STFC) site in Daresbury to formally open the Hartree Centre, which will focus on developing software to improve the energy efficiency of supercomputers. Or, to put it another way, “Osborne pulled the string and opened the curtain and unveiled the plaque,” says Mike Ashworth, head of the Hartree Centre.

The ceremonial opening of the Hartree Centre marks a new phase of government and industry collaboration in the development of high-performance computing in the UK. A primary goal is to bring together industry, academia and government organizations to use supercomputers to increase the competitiveness of UK industry.

The ceremony also came with a pledge for funding: more than $45 million to develop energy-efficient computing technologies for industrial and scientific applications, especially for supercomputers handling big data projects. About $17 million will go to creating software for the Square Kilometer Array (SKA), the world’s largest radio telescope. The rest goes into two camps: next-generation software for Grand Challenge science projects, and software to allow industry to make better use of high-performance computing and computational science.

The software research will focus on creating new code to efficiently exploit new computer architectures that will be emerging in the next five to 10 years. “We’re trying to structure that code in a flexible way so that it’s not tied into any one architecture, but reveals multiple levels of parallelism, so that we’re ready to exploit large numbers of lightweight cores [used as] accelerators,” says Ashworth.

The Hartree Centre has not yet decided how the money will be finally allocated, but it’s likely to include research on Xeon Phi processors, possibly NVIDIA’s latest generation of Kepler GPUs, and very probably FPGAs.

Yes, Ashworth sees new potential in FPGAs for supercomputing. STFC researchers first looked at using FPGAs for HPC about 10 years ago, but the chips weren’t very fast and were difficult to program. They required programming at the hardware level using VHDL. Now, of course, the chips are much faster and support double-precision, which is required for a lot of scientific applications. Ashworth notes that he’s “very keen on exploring” technology from Maxeler, which has high-level interfaces to FPGAs. He wants to explore how to make FPGAs useful for the kinds of research that Hartree will emphasize.

Energy efficiency is a very prominent part of the center’s mandate. “We’re interested in looking at how key applications perform in terms of their energy efficiency,” says Ashworth. “In the past, computing efficiency meant FLOPS. Now it’s FLOPS per watt. In the past it was time to a solution. Now we’re more interested in the number of watts to achieve a certain solution.”

This is inspired both by government targets to reduce carbon emissions and to save money – which, of course, go hand in hand, since both involve reducing energy consumption.

The center has some pretty heavy-duty hardware to work with: the UK’s most powerful supercomputer, already being made available for research by industry and scientific organizations through STFC. In mid-2012, STFC installed an IBM Blue Gene/Q system, named Blue Joule. It consists of seven racks of servers with 114,688 1.6 GHz cores and 112 TB of RAM. When it was fired up last summer, it reached 1.2 petaflops, making it the first computer in the UK to pass 1 petaflop. That rated it 13th on the TOP500, though it has since slipped to 16th on the most recent list.
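Blue Joule’s quoted core count lines up with standard Blue Gene/Q packaging (1,024 compute nodes per rack, one 16-core chip per node, figures that appear later in this feed’s description of the Q architecture). A quick sanity check, treating those packaging numbers as assumptions:

```python
# Sanity-check Blue Joule's quoted core count from its rack count.
# Assumes standard Blue Gene/Q packaging: 1,024 compute nodes per
# rack, one 16-core Power A2 chip per node.
racks = 7
nodes_per_rack = 1024
cores_per_node = 16

total_cores = racks * nodes_per_rack * cores_per_node
print(total_cores)  # 114688, matching the article's 114,688 cores
```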

That equipment is accompanied by an IBM iDataPlex system, dubbed Blue Wonder, with 8,192 Sandy Bridge cores for 158.7 teraflops of processing power.

STFC didn’t just buy the computer, however, it got IBM as a partner. “Rather than having vendors just supply us with hardware, we specifically said in the procurement that they must enter into a collaboration with us,” says Ashworth.

In fact, there are several corporate collaborators involved, including Intel, OCF, Mellanox, DataDirect Networks and ScaleMP. Each is contributing some combination of components, services, technical expertise and/or business development expertise. IBM and OCF, for example, help the Hartree Centre find corporate partners to set up joint projects. “When we go into a room with an industrial potential partner, we’ll go in with somebody from IBM,” says Ashworth. “That adds very much to the prospects of landing that business.”

Those partnerships work both ways. One of Hartree’s mandates is to help UK companies make better use of high-performance computing and computational science. To that end, he wants to focus research on accelerators that can help achieve higher performance at lower cost.

“We see the Hartree Center as a testing ground for novel architectures,” says Ashworth. “We can buy a piece of hardware, a development platform, and make it available to academics, make it available to industry. In collaboration with our expertise, we learn how to use the hardware, and set up joint projects with people we believe would benefit from that hardware, and push forward the UK’s ability to exploit these new technologies for the future. We’re looking at a 5-10 year time frame to leverage a lot of these technologies.”

The research priorities are environment, energy, developing new materials, life sciences and human health, and security. One of the Grand Challenge projects at STFC, for example, is a three-way collaboration between STFC, the Met Office and the Natural Environment Research Council (NERC) to develop brand new code for weather forecasting and for climate change studies using supercomputers. Industrial applications might include projects such as computer modeling to create, say, new industrial adhesives or new drugs.

The UK government expects the money invested in Hartree will pay off many-fold by helping industry exploit supercomputing technology to become more competitive.

Rensselaer Orders Up Blue Gene/Q for Exascale and Data-Intensive Research
Tue, 25 Oct 2011

Last month Rensselaer Polytechnic Institute (RPI) announced it had been awarded a $2.65 million grant to acquire a 100 teraflop Blue Gene/Q supercomputer for its Computational Center for Nanotechnology Innovations (CCNI). The new system will also include a multi-terabyte RAM-based storage accelerator, petascale disk storage, and rendering cluster plus remote display wall system for visualization.

Even though the as-yet-unnamed Q machine is just a microcosm of a true petascale supercomputer, it is designed to be used for exascale research: scaling codes, exploring alternative approaches to checkpointing, and dealing with I/O bottlenecks. The supercomputer will also provide a home for a variety of research applications at Rensselaer.

According to the press release these projects include: “developing new methods for the diagnosis of breast cancer using data from non-invasive techniques; modeling plasmas to aid the design and safety of future fusion reactors; modeling wind turbine design to increase efficiencies and reduce maintenance; application of new knowledge discovery algorithms to very large semantic graphs for climate change and biomedical research; modeling heat flow in the world’s oceans; integrating data and computations across scales to gain a better understanding of biological systems and improve health care; and many others.”

This is the first machine CCNI will deploy with NSF funding behind it and the first new supercomputer at the center since it launched five years ago. CCNI was kicked off in 2006 with a $100 million investment from New York State, RPI, and IBM, using the initial cash to build out the center, hire staff, and acquire HPC resources. Its stated mission: to advance the science of semiconductor manufacturing and related nanotechnology applications for academia and industry.

The NSF money to buy the Blue Gene/Q system came out of the agency’s Major Research Instrumentation Program, which, as the name implies, funds instruments for scientific and engineering research. These include devices such as mass spectrometers, X-rays, laser systems, microscopes, as well as a variety of computational resources. Because of NSF’s involvement, time on the system will be available to researchers nationally. Rensselaer scientists and engineers, as well as those at other New York state universities, will also be able to bid for cycles on the system.

Rensselaer’s first deployment paired a Blue Gene/L system with a Power-based Linux system and some smaller AMD Opteron clusters. The Blue Gene/L system, which is still operational, delivers 90 teraflops and represents most of the computation capacity at CCNI. When installed in 2007 it was the seventh most powerful system in the world. Despite CCNI’s rather modest computational capacity by 2011 standards, more than 700 researchers spread out across 50 universities, government labs, and commercial organizations have used the center’s HPC resources to run their science and engineering workloads.

Although the upcoming Blue Gene/Q is relatively small as supercomputers go — a mere 100 teraflops — it will provide as much computational horsepower as the older L system plus all the remaining clusters at the center. According to CCNI, the upcoming system will fit into just half a rack — about 1/30 the space of the center’s original Blue Gene machine.

And, because it’s a Blue Gene/Q, it should provide some of the best performance per watt on the planet. A similar 100 teraflop Blue Gene/Q prototype system, which is housed at IBM’s T. J. Watson Research Center, delivered 2,097 megaflops/watt (making it the number one system on the latest Green500 list) while consuming just 41 kW. To put that in perspective, the 2005-era ASC Purple supercomputer also delivered 100 peak teraflops, but consumed a whopping 7,500 kW.
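A back-of-envelope calculation makes the gap concrete. The sketch below uses only the figures quoted above; note that the 2,097 megaflops/watt Green500 number is measured on Linpack, while these ratios use peak flops, so they are rough upper bounds rather than benchmark results:

```python
# Rough efficiency comparison from the article's published figures.
# Peak flops / power draw, expressed in megaflops per watt.

def mflops_per_watt(peak_flops, power_watts):
    return peak_flops / power_watts / 1e6

bgq_prototype = mflops_per_watt(100e12, 41e3)   # ~2439 MF/W peak
asc_purple    = mflops_per_watt(100e12, 7.5e6)  # ~13 MF/W peak

print(round(bgq_prototype), round(asc_purple))  # roughly a 180x gap
```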

According to CCNI Director James Myers, the center will, for the time being, keep its other HPC systems, including the Blue Gene/L, operational. But he admits that it will probably make sense at some point to decommission the older machines, considering how little performance per watt they are delivering. In general, the money spent operating five-year-old HPC machines these days is often better spent on adding newer, more energy-efficient capacity. “We are certainly paying attention to those lifecycle costs,” says Myers.

The new Blue Gene/Q system is scheduled to be installed in 2012, in the same general timeframe that Argonne and Lawrence Livermore National Labs are expected to deploy their much larger Q machines: the 10 petaflop “Mira” system and the 20 petaflop “Sequoia,” respectively.

It is also designed to be a platform for data-intensive applications. The RAM-based storage accelerator that is to be integrated into the system will be a critical component for data-intensive research. Essentially, the accelerator is a 2-4 terabyte RAM disk that will be used to greatly speed up I/O for disk-bound applications. It will also be used to support interactive visualization by streaming data from the RAM disk to the visualization cluster without going through the bottleneck of disk storage. According to Myers, the RAM disk is to be based on commodity components, although its exact makeup is still to be worked out.

Argonne National Laboratory is planning to move up to a 10-petaflop Blue Gene/Q supercomputer next year, supporting the DOE lab’s scientific research. The new machine continues Argonne’s six-year Blue Gene tradition; the lab has installed every iteration of the architecture in IBM’s BG franchise.

Argonne installed its first Blue Gene supercomputer, a 5-teraflop Blue Gene/L system, in 2005, which garnered a number 58 placement on the TOP500 list that year. In 2008, the lab upgraded to a 500-teraflop Blue Gene/P, which initially placed it at number 4 on the list. The upcoming Blue Gene/Q, called “Mira,” will almost certainly give Argonne a top 10 spot in 2012. More importantly, Mira will represent a 2,000-fold increase in peak processing power in the space of six years.

The new Argonne machine will join another DOE Blue Gene/Q, the 20-petaflop “Sequoia” supercomputer, to be installed at Lawrence Livermore National Laboratory in 2012. That system is slated to run weapons simulations in support of the National Nuclear Security Administration’s program to maintain the US nuclear stockpile.

By contrast, Argonne’s Mira will be devoted entirely to open science applications like climate studies, battery research, engine design, and cosmology. The DOE has selected 16 projects that will have first crack at the Q system when it’s booted up next year. Like its predecessors, Mira will be available as an INCITE and ASCR Leadership Computing Challenge (ALCC) resource, where CPU-hours are awarded to what the DOE determines are the most deserving researchers, based on a peer-reviewed competitive process.

“Argonne’s new IBM supercomputer will help address the critical demand for complex modeling and simulation capabilities, which are essential to improving our economic prosperity and global competitiveness,” said Rick Stevens, associate laboratory director for computing, environment and life sciences at Argonne National Laboratory.

The Mira system is based on IBM’s next-generation PowerPC SoC, in this case the 16-core Power A2 processor, a 64-bit CPU capable of handling four threads per core simultaneously. The processor has 32 KB of L1 cache — 16 KB for data and 16 KB for instructions. L2 cache is made up of 8 MB of embedded DRAM (eDRAM), a high-density on-chip memory technology that IBM uses for Blue Gene and its latest Power7 processors. Memory and I/O controllers are integrated on-chip.

Each server node will contain a single A2 processor and sport either 8 or 16 GB of memory. A fully populated Blue Gene/Q rack contains 1024 nodes, representing 16K cores. I/O has been split from the server nodes so that configurations can scale compute and I/O independently. A rack can accommodate between 8 and 128 I/O nodes. Conveniently, the I/O nodes use the same Power A2 chip as the compute servers.

Server-to-server communication is performed over a 5D Torus, which is capable of up to 40 gigabits per second, four times the speed of the Blue Gene/P interconnect. The 5D Torus employs fiber optics, the first Blue Gene design to do so.

Compute performance is delivered by using a large number of relatively low-speed cores — a hallmark of the Blue Gene architecture. Unlike the speedy 3.3 GHz Power7 chips that will go into the future Blue Waters supercomputer at the NCSA, the A2 processor for Blue Gene hums along at a modest 1.6 GHz (although faster versions of this chip can hit 3 GHz). According to IBM, Mira will encapsulate 750K cores, which works out to about 48,000 CPUs. Total memory is 750 TB, backed by 70 petabytes of disk storage.
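Taking the article’s round numbers at face value (IBM’s official Mira figures differ slightly), the chip count and per-node memory can be sanity-checked against the node specs given above:

```python
# Rough consistency check of Mira's quoted totals, using the
# article's figures rather than IBM's official spec sheet.
cores = 750_000
cores_per_chip = 16              # one Power A2 chip per node
chips = cores // cores_per_chip
print(chips)                     # 46875 -- the article's "about 48,000 CPUs"

total_memory_tb = 750
gb_per_node = total_memory_tb / chips * 1024
print(round(gb_per_node, 1))     # ~16.4 GB, matching the 16 GB node option
```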

The low-speed, high-core approach makes for a very energy-efficient package. A Blue Gene/Q prototype grabbed first place on the November 2010 Green500 list, with a Linpack rating of 1684.2 megaflops/watt. That bested even the latest Fermi GPU accelerated supers, like the TSUBAME 2.0 system recently installed at Tokyo Tech, as well as IBM’s fastest Cell (PowerXCell 8i) processor-accelerated QS22 clusters. To further boost energy efficiency and maintain reliability, all Blue Gene/Q racks are water cooled.

Because of its size, Argonne is looking at Mira as a stepping stone to exaflop supercomputing. With fewer than a million cores, though, programmers will have to use some imagination to scale their codes to the hundreds of millions of cores envisioned in a true exascale system.

However, by the time IBM and others start building such machines, the Blue Gene PowerPC-based architecture is likely to be subsumed into the company’s Power-based line-up (which at the processor ISA level, at least, is quite similar). Based on a recent conversation with Herb Schultz, marketing manager for IBM’s Deep Computing unit, the Power and Blue Gene lines may merge around the middle of this decade. That would suggest that Blue Gene/Q could very well be the last in the Blue Gene lineage.

IBM’s HPC business unit, aka Deep Computing, has always been more about fielding cutting-edge platforms than making profits. Although the company has produced some ground-breaking supercomputing systems over the years and has captured a large chunk of the HPC server market, the business proposition was not always clear. But according to a conversation I had recently with Herb Schultz, marketing manager for IBM’s Deep Computing unit, that looks to be changing.

According to Schultz, the company is revamping its approach to its high performance computing business in several dimensions. These include new alliances, sales strategies, and solutions, as well as a shift in HPC market segment focus. Overall, says Schultz, that will involve transitioning from a model that relies on selling hardware parts to one that offers complete integrated solutions.

Schultz admits IBM Deep Computing has probably put forth this story before, but according to him the incentives are now changing. And by that, he means monetary incentives. Schultz remembers that as recently as two or three years ago the main metric for the HPC business was revenue. So there would be an awful lot of pressure, for example, to sell a $50 million supercomputer, even if it cost IBM $49.99 million to make. “There is really no appetite in IBM anymore — with some of the leadership changes over the last few years — for revenue that has no profit with it,” says Schultz.

At the core of the strategy shift is the realization that some of the industry’s fastest growing segments, like cloud computing and business analytics, are underpinned by high performance computing technology. Even IBM’s “Smarter Planet” campaign, which covers segments like education, public safety and retail, will draw on HPC technologies. That includes hundreds of new applications, everything from optimizing city traffic flow in real-time using video streams to managing retail inventory with RFID tracking. HPC permeates this class of data-intensive applications.

IBM has decided these new application areas position Deep Computing as a growth engine (read: profit center) for the company. At the same time, even traditional HPC — science applications, financial analytics, seismic codes, bioinformatics, etc. — is poised for robust growth. According to IDC, HPC server revenue is growing at more than twice the rate of the overall server space (6.3 percent CAGR versus 2.6 percent) and will represent 76 percent of the total market increase over the next four years.

But IBM plans to be a bit more particular about market segments. Specifically, they intend to give more attention to customers interested in “better reward for value” — in other words, verticals that are willing to pay more for premium products. In the higher education market, where IBM is traditionally strong, customers are generally reluctant to pay for value; they tend to be very price-sensitive. On the other hand, the financial services industry and some manufacturing firms are much more willing to shell out some serious cash if the solution adds to their bottom line.

Schultz says Whirlpool, for example, was able to save a significant amount of money because of better packaging, modeled and designed via HPC. In this case, the number of damaged goods that had to be returned due to faulty packaging was greatly reduced. Schultz estimates that Whirlpool was able to recoup its HPC investment in a matter of weeks.

Devoting more attention to commercial HPC means the company will simultaneously be shifting the Deep Computing product mix, which has skewed heavily toward the high end. Schultz estimates that 70 to 80 percent of IBM’s current HPC revenue is derived from supercomputing systems that cost over $500,000. “They’re tremendous revenue producers, but the profit profile is not all that great.”

The goal is to move a much greater proportion of the HPC sales into the mainstream HPC market — that is, systems under $500K. According to Schultz, they’re looking to increase revenue in this area from around 20 percent today to something closer to 50 percent. In other words, become less like Cray and more like HP and Dell. This is somewhat uncharted territory for the Deep Computing folks, though. “We’ve never been really good at this,” admits Schultz. “We’ve never even tried to be good at this, actually.”

They do have products that serve that market today, namely the System X (x86 server) products, but that group is more geared toward retail and telecom, where performance is not the driving criterion. Some of the System X shift to HPC is occurring organically. For example, the iDataPlex product, a dense x86 server design, was principally aimed at the Web 2.0 market — the i in iDataPlex stands for Internet. But as it turns out, that product is garnering plenty of attention from HPC customers.

The plan is for the Deep Computing group to work closely with the System X team so that more HPC-specific x86-based machinery can be offered. Some of this is already in motion. The recent announcements of a GPU-equipped BladeCenter variant and the iDataPlex dx360 M3 suggest a more purposeful x86 HPC strategy.

But selling hardware alone is not in IBM’s interest and is certainly not where the company’s strength lies. There are already plenty of “value” server vendors out there for do-it-yourself HPC customers. From a company perspective, Big Blue has always made its best margins selling software, services and highly-integrated systems, and it wants to duplicate that model in the Deep Computing group.

High value software like IBM’s General Parallel File System (GPFS), math libraries, and Tivoli Workload Scheduler LoadLeveler have never been marketed or sold aggressively, and were sometimes just given away as incentives to buy the hardware. “We’re leaving 3 to 4 billion dollars on the table every year by not aggressively selling the system software that we’ve got,” says Schultz.

At the same time, IBM plans to implement a better go-to-market strategy for the Deep Computing offerings, using stronger channel partner relationships, as well as greater incentives with business partners and ISVs. The company also plans to find a new route to the market via large system integration firms. The idea is that commercial customers will be able to buy HPC more like appliances, encompassing compute, storage and software, rather than as individual pieces to be cobbled together on-site.

None of this means they’ll be ceding the high-end supercomputer business to Cray. Especially for top 10 systems and future exascale machines, IBM is committed to being a player. Schultz concedes that the initial return on these elite projects is not very good, but the ROI is there for future IBM products. It’s certainly likely that IBM research projects in areas like phase change memory, 3D chip stacking, silicon photonics and advanced software technology will first show up in high-end HPC systems.

The fact that this strategy is somewhat at odds with Deep Computing’s more pragmatic business approach is worth noting, though. But the way Schultz tells it, the company is committed to getting a good chunk of this high-end R&D funded via government programs, as was done with its DARPA HPCS-funded PERCS work. Down the road, the company is counting on getting a generous slice of the more than $1 billion that the US government plans to spend on exascale technologies over the next five to eight years.

As far as the HPC product mix goes, IBM will stick with its trio of Power-based systems, Blue Gene, and the aforementioned x86 product line. Schultz thinks the Power and Blue Gene lines may converge in five years or so, but for now they’re keeping those products distinct. The first Blue Gene/Q system, Sequoia, is scheduled for delivery to the NNSA in 2011, with the main pipeline expected in 2012. Meanwhile, the Power7-based servers have already been out for a year, although the first really large deployment of the souped-up IH supercomputing variant of that server will be the NCSA’s Blue Waters system, also in 2011.

The only product that failed to make the cut was the Cell-based (PowerXCell 8i) HPC QS22 blades, which one assumes will be phased out at some point. Although that blade was used in the Roadrunner, the first petaflop supercomputer, the Cell processor turned out to be too specialized a solution, especially as GPGPU-based acceleration took hold in the last couple of years.

Whether this Deep Computing makeover works or not remains to be seen. But every large HPC server vendor is tweaking its strategy to one degree or another: Cray is dipping into the mainstream market with its CX1 and CX1000 lines; Dell is ramping up its product line with purpose-built performance gear; HP is doing likewise. All of this is being done to tap into what looks to be a burgeoning commercial HPC market. Like its rivals, IBM doesn’t want to miss that opportunity. “This is one of the higher points for this business over the last 15 years,” says Schultz.