In the News

The amount of data processed at CERN's Large Hadron Collider (LHC) will grow significantly when CERN transitions to the High-Luminosity LHC, a facility upgrade now underway for operations planned in 2026. To help meet the LHC's growing computing needs, scientists from the ATLAS experiment are working with the Argonne Leadership Computing Facility (ALCF) to optimize ATLAS simulations on the ALCF's Intel-Cray supercomputer, Theta, improving processing efficiency on supercomputing resources.

For the first time, scientists have been able to trace the origins of a ghostly subatomic particle that traveled 3.7 billion light-years to Earth. The tiny, high-energy cosmic particle is called a neutrino, and it was found by sensors deep in the Antarctic ice in the IceCube detector, which uses Globus for data archiving.

If you want to know how a machine works, it helps to look inside. Crack open the case and look at how it’s wired together; you might need an engineering degree, a microscope and a lot of time, but eventually you can puzzle out what makes any given device tick.

But can that same approach work for the most amazing machine we know—one capable of making complex calculations in a fraction of a second, while using less energy than a common light bulb?

The National Science Foundation (NSF) is announcing a $1.8 million grant for the initial development of the Open Storage Network (OSN), a distributed storage system for science that uses Globus for data management. Over the next two years, a collaborative team will combine their expertise, facilities, and research challenges to develop the OSN, which will enable academic researchers across the nation to work with and share their data more efficiently than ever before. Get the full story here.

The Computation Institute at the University of Chicago covers GlobusWorld 2018:

Most of us are now comfortable with cloud computing, enough to often take it for granted. Whether it’s saving our photos in cloud storage, accessing our email from multiple devices, or streaming a high-definition video on the bus, moving data to and from a distant computing center has become second nature.

HPC and computing resources now reach an ever wider audience, extending beyond the "usual suspects" of traditional modeling and simulation into what is often called the "long tail of science". Supporting this long tail requires a new breed of research software engineers, research computing facilitators, and scientists. Even the computer systems themselves are acquiring long-tail monikers: the recent NSF machines Jetstream and Comet, two very visible examples, are each named as a hat tip to the concept.

Workshops reach 750 engineers at 360 institutions

Although ESnet is well known for its expertise in supporting the transfer of datasets across the country and around the globe, for the past four years the facility's staff have also been transferring their networking expertise to staff at other research and education organizations.

A team of networking experts from the Department of Energy's Energy Sciences Network (ESnet), together with the Globus team from the University of Chicago and Argonne National Laboratory, has designed a new approach that makes data sharing faster, more reliable, and more secure. In an article published Jan. 15 in PeerJ Computer Science, the team describes "The Modern Research Data Portal: a design pattern for networked, data-intensive science."

For more than 50 years, HPC has supported tremendous advances in all areas of science. But densely populated communities can more easily support the subscription-based commodity networks and energy infrastructure that make it affordable for urban universities to engage globally, leaving research centers in sparsely populated regions at a serious disadvantage. HPCwire describes how researchers in far-flung places are dealing with these challenges, and how Globus facilitates fast, reliable file transfer irrespective of distance and network conditions.