SKA Astronomy Project Gets Boost with Hybrid Memory Cube

By George Leopold

June 14, 2017

An ambitious astronomy effort designed to peer back to the origins of the universe and map the formation of galaxies is underpinned by an emerging memory technology that seeks to move computing resources closer to huge astronomy data sets.

The Square Kilometre Array (SKA) is an international initiative to build the world’s largest radio telescope. A “precursor project” in the South African desert called MeerKAT consists of 64 “receptor” dishes, each roughly 44 feet across. The array gathers and assembles faint radio signals used to create images of distant galaxies.

Once combined with other sites, SKA would be capable of peering back further in time than any other Earth-based observatory. As with most advanced science projects, SKA presents unprecedented data processing challenges. With daily data volumes reaching 1 exabyte, “The data volume is becoming overwhelming,” astronomer Simon Ratcliffe noted during a webcast this week.
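A quick back-of-envelope calculation makes the scale of that figure concrete: sustaining 1 exabyte per day implies a continuous throughput on the order of terabytes per second. The arithmetic below is illustrative only; actual SKA data rates differ by pipeline stage.

```python
# Sustained throughput implied by 1 exabyte of data per day.
EXABYTE = 10**18          # bytes (decimal definition)
SECONDS_PER_DAY = 86_400

rate_bytes_per_s = EXABYTE / SECONDS_PER_DAY
print(f"{rate_bytes_per_s / 1e12:.1f} TB/s sustained")  # ≈ 11.6 TB/s
```

At roughly 11.6 TB/s around the clock, it is easy to see why the project treats memory bandwidth, not capacity, as the binding constraint.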

In response, Micron Technology Inc. has come up with a processing platform called the Hybrid Memory Cube (HMC) to handle the growing data bottleneck. The memory specialist combined its fast logic process technology with new DRAM designs to boost badly needed bandwidth in its high-density memory system.

Steve Pawlowski, Micron’s vice president of advanced computing, claimed its memory platform delivers as much as a 15-fold increase in bandwidth, a capability that addresses next-generation networking and exascale computing requirements.

Applications such as SKA demonstrate “the ability to put [computing] at the edge” to access the most relevant data, Pawlowski added.

The radio telescope array uses a front-end processor to convert faint analog radio signals to digital. Those signals are then processed using FPGAs. Memory resources needed to make sense of all that data can be distributed using relatively simple algorithms, according to Francois Kapp, a systems engineer at SKA South Africa. The challenge, Kapp noted, is operating the array around the clock along with the “increasing depth and width of memory” requirements. “You can’t just add more memory to increase the bandwidth,” he noted, especially as FPGAs move to faster interfaces.
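The digitize-then-channelize step Kapp describes can be sketched in a few lines of NumPy: an FPGA “F-engine” splits each digitized voltage stream into frequency channels before correlation. The sketch below uses a plain blockwise FFT with made-up sizes; real designs use polyphase filter banks, and none of the names or parameters come from the MeerKAT design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 4096, 256

# Simulated digitized voltage stream from one receptor dish.
voltages = rng.standard_normal(n_samples)

# Break the time series into blocks and FFT each block,
# turning (time,) into (time block, frequency channel).
blocks = voltages.reshape(-1, n_channels)
spectra = np.fft.rfft(blocks, axis=1)

print(spectra.shape)  # (16, 129): 16 time blocks, 129 channels
```

Cross-multiplying such spectra from pairs of dishes (the “X-engine” step) produces the visibilities from which sky images are reconstructed, and it is that all-pairs traffic that drives the memory-bandwidth pressure Kapp describes.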

Hence, the SKA project is wringing out Micron’s HMC approach as it maps the universe and seeks to determine how galaxies were formed. The resulting daily haul of data underscores what Jim Adams, former NASA deputy chief scientist, called “Big Science.”

The exascale computing requirements of projects such as SKA exceed those of previous planetary missions such as the 2015 New Horizons flyby of Pluto. Adams said it took NASA investigators a year to download all the data collected by New Horizons.

The technical challenges are similar for Earth-bound observatories. “Astronomy is becoming data science,” Ratcliffe added.

Micron positions its memory platform as a “compute building block” designed to provide more bandwidth between memory and computing resources while placing processing horsepower as close as possible to data so researchers can access relevant information.

Meanwhile, researchers at the University of Heidelberg are attempting to accelerate adoption of new memory approaches like Micron’s through open-source development of a configurable HMC controller that would serve as a memory interface.

Researcher Juri Schmidt noted that the German university’s network-attached memory scheme was another step toward pushing memory closer to data by reducing the amount of data movement.

Micron’s Pawlowski noted that the current version of the memory platform is being used to sort and organize SKA data as another way to reduce data movement. The chipmaker is also investigating how to incorporate more logic functionality, including the use of machine learning to train new analytics models.

Computing, memory and, eventually, cloud storage could be combined with Micron’s low-power process technology for energy-efficient high-performance computing. While the company for now doesn’t view HMC as an all-purpose platform, it would be suited to specific applications such as SKA, Pawlowski noted.

The astronomy initiative will provide a major test for exascale computing since, according to Adams, SKA “is a time machine,” able to look back just beyond the re-ionization period after the Big Bang when galaxies began to form.
