Today Panasas announced that the Science and Technology Facilities Council’s (STFC) Rutherford Appleton Laboratory (RAL) in the UK has expanded its JASMIN super-data-cluster with an additional 1.6 petabytes of Panasas ActiveStor storage, bringing total storage capacity to 20PB. The expansion created the largest single realm of Panasas storage worldwide, managed by one systems administrator. Thousands of users worldwide find, manipulate, and analyze data held on JASMIN, which processes an average of 1-3PB of data every day.

In this RichReport slidecast, James Coomer from DDN presents an overview of the Infinite Memory Engine (IME). "IME is a scale-out, flash-native, software-defined storage cache that streamlines the data path for application IO. IME interfaces directly to applications and secures IO via a data path that eliminates file system bottlenecks. With IME, architects can realize true flash-cache economics with a storage architecture that separates capacity from performance."

Today Nimbix announced the immediate availability of a new high-performance storage platform in the Nimbix Cloud specifically designed for the demands of artificial intelligence and deep learning applications and workflows. "As enterprises, researchers and startups begin to invest in GPU-accelerated artificial intelligence technologies and workflows, they are realizing that data is a big part of this challenge,” said Steve Hebert, CEO of Nimbix. “With the new storage platform, we are helping our customers achieve performance that breaks through the bottlenecks of commodity or traditional platforms and does so with a turnkey deep learning cloud offering.”

Today Cray announced it has completed the previously announced transaction and strategic partnership with Seagate centered around the addition of the ClusterStor high-performance storage business. “As a pioneer in providing large-scale storage systems for supercomputers, it’s fitting that Cray will take over the ClusterStor line.”

In this video from the HPC User Forum in Milwaukee, Earl Joseph and Steve Conway from Hyperion Research present an update on the HPC, AI, and storage markets. "Hyperion Research forecasts that the worldwide HPC server-based AI market will expand at a 29.5% CAGR to reach more than $1.26 billion in 2021, up more than three-fold from $346 million in 2016."

Today One Stop Systems announced that the company ranks in the top half of the Inc. 5000 list of Fastest Growing Private Companies. This is the 7th time OSS has been on the Inc. 5000 list. Of the tens of thousands of companies that have applied to the Inc. 5000 over the years, only a fraction have made the list more than once. A mere two percent have made the list seven times. "We’re proud to be part of the small fraction of companies that have made the Inc. 5000 list seven times," said Steve Cooper, CEO of OSS. "One Stop Systems continues to provide leading edge technology products to the high performance computing market, propelling its growth."

Justin Glen and Daniel Richards from DDN presented this talk at the HPC Advisory Council Australia Conference. "Burst Buffer was originally created for checkpoint-restart of applications and has evolved to help accelerate applications and file systems and make HPC clusters more predictable. This presentation explores regional use cases, recommendations on burst buffer sizing and investment, and where it is best positioned in an HPC workflow."

Ace Computers and BeeGFS have teamed up to deliver a complete parallel file system that solves the storage access speed issues that slow down even the fastest supercomputers. BeeGFS closes the gap between compute speed and the limited speed of storage access, which leaves clusters stalling on disk access while reading input data or writing intermediate or final simulation results. "We are building clusters that are more and more powerful," said Ace Computers CEO John Samborski. "So we recognized that storage access speed was becoming an issue. BeeGFS has proven to be an excellent, cost-effective solution for our clients and a valuable addition to our portfolio of partners."

Today Globus.org announced general availability of Globus for Google Drive, a new capability that lets users seamlessly connect Google Drive with their existing storage ecosystem, enabling a single interface for data transfer, sharing and publication across all storage systems. "Our researchers wanted to use Google Drive for data storage, but found that they had to babysit the data transfers,” said Krishna Muriki, Computer Systems Engineer, HPC Services, at Lawrence Berkeley National Laboratory. “They were already familiar with using Globus so we thought it would make a good interface for Google Drive; that’s why we partnered with Globus to develop this connector. Now our researchers have a familiar, web-based interface across all their storage resources, including Google Drive, so it is painless to move data where they need it and share results with collaborators. Globus manages all the data movement and authorization, improving security and reliability as well."

Researchers are tapping Argonne and NCSA supercomputers to tackle the unprecedented amounts of data involved with simulating the Big Bang. "Researchers performed cosmological simulations on the ALCF’s Mira supercomputer, and then sent huge quantities of data to UI’s Blue Waters, which is better suited to perform the required data analysis tasks because of its processing power and memory balance."