Today DDN appointed Eric Barton as the company’s chief technology officer for software-defined storage. In this role, Barton will lead the company’s strategic roadmap, technology architecture and product design for DDN’s newly created Infinite Memory Engine business unit. Barton brings with him more than 30 years of technology innovation, entrepreneurship and expertise in networking, distributed systems and storage software.

“Delivering an industry-leading combination of low latency, ultra-high endurance, high QoS, and high throughput, the Intel Optane SSD DC P4800X Series is the most responsive data center SSD. Built with the revolutionary new 3D XPoint memory media, the SSD DC P4800X is the first product to combine the attributes of memory and storage. This innovative solution is optimized to break through storage bottlenecks by providing a new data tier.”

“2017 will see the introduction of many technologies that will help shape the future of HPC systems. Production-scale ARM supercomputers, advancements in memory and storage technology such as DDN’s Infinite Memory Engine (IME), and much wider adoption of accelerator technologies from Nvidia, Intel, and FPGA manufacturers such as Xilinx and Altera are all helping to define the supercomputers of tomorrow.”

“The SAGE project, which incorporates research and innovation in hardware and enabling software, will significantly improve the performance of data I/O and enable computation and analysis to be performed more locally to data wherever it resides in the architecture, drastically minimizing data movement between compute and data storage infrastructures. With a seamless view of data throughout the platform, incorporating multiple tiers of storage from memory to disk to long-term archive, it will provide APIs and programming models that make it easy to apply the data analytics techniques best suited to the problem space.”
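The multi-tier idea behind SAGE can be illustrated with a minimal sketch: one logical namespace whose objects live in a tier (memory, disk, or archive) and are promoted toward memory on access. All class and method names here are hypothetical illustrations of the concept, not SAGE’s actual API or placement policy.

```python
# Hypothetical sketch of a seamless multi-tier namespace: objects are
# addressed by key regardless of tier, and access promotes them toward
# memory. Policy and names are illustrative only, not SAGE's design.
class TieredStore:
    TIERS = ("memory", "disk", "archive")

    def __init__(self, memory_capacity=2):
        self.memory_capacity = memory_capacity
        self.data = {}   # key -> value: one namespace spanning all tiers
        self.tier = {}   # key -> name of the tier currently holding it

    def put(self, key, value, tier="disk"):
        self.data[key] = value
        self.tier[key] = tier

    def get(self, key):
        # Access promotes the object to memory; if memory is full,
        # demote the first resident object found back to disk.
        if self.tier[key] != "memory":
            resident = [k for k, t in self.tier.items() if t == "memory"]
            if len(resident) >= self.memory_capacity:
                self.tier[resident[0]] = "disk"
            self.tier[key] = "memory"
        return self.data[key]

store = TieredStore(memory_capacity=2)
for key in ("a", "b", "c"):
    store.put(key, key.upper())           # all start on the disk tier
store.get("a")                            # promoted to memory
store.get("b")                            # promoted to memory
store.get("c")                            # memory full: one object demoted
```

The point of the sketch is the "seamless view": callers only ever say `get(key)`, while tier placement is an internal policy decision.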

In this special guest feature, Rob Farber writes that a study by the Kyoto University Graduate School of Medicine shows that code modernization can help Intel Xeon processors outperform GPUs on machine learning code. “The Kyoto results demonstrate that modern multicore processing technology now matches or exceeds GPU machine-learning performance, but equivalently optimized software is required to perform a fair benchmark comparison. For historical reasons, many software packages like Theano lacked optimized multicore code, as all the open source effort had been put into optimizing the GPU code paths.”
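The "equivalently optimized software" point can be shown with a minimal sketch (hypothetical, not the Kyoto benchmark code): the same reduction written serially and with the work chunked across a pool of workers. Real numeric kernels (e.g. NumPy/MKL routines) release the GIL, so a thread pool can scale across cores; this sketch only demonstrates the chunk-and-map structure of the modernized version.

```python
# Minimal illustration of code modernization: a serial reduction versus
# the same work split into chunks and mapped across a worker pool.
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares_serial(values):
    total = 0.0
    for v in values:
        total += v * v
    return total

def sum_of_squares_parallel(values, workers=4):
    # Split the input into roughly equal chunks, one per worker,
    # then combine the per-chunk partial sums.
    n = len(values)
    step = (n + workers - 1) // workers
    chunks = [values[i:i + step] for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum_of_squares_serial, chunks)
    return sum(partials)

data = list(range(10_000))
serial = sum_of_squares_serial(data)
parallel = sum_of_squares_parallel(data)
```

Both paths compute the same answer; the benchmarking lesson is that only the chunked version gives the hardware a chance to use all of its cores.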

In this Intel Chip Chat podcast, Alyson Klein and Charlie Wuischpard describe Intel’s investment to break down walls to HPC adoption and move innovation forward by thinking at a system level. “Charlie discusses the announcement of the Intel Xeon Phi processor, which is a foundational element of Intel Scalable System Framework (Intel SSF), as well as Intel Omni-Path Fabric. Charlie also explains that these enhancements will make supercomputing faster, more reliable, and more power-efficient; Intel has achieved this by combining the capabilities of various technologies and optimizing ways for them to work together.”

In this podcast from ISC 2016 in Frankfurt, Steve Pawlowski from Micron discusses the latest memory technology trends for high performance computing. “When you look at a technology like 3D XPoint and some of the new materials the industry is looking at, those latencies are becoming more DRAM-like, which makes them a more attractive option to look at. Is there a way we can actually inject persistent memory that’s fairly high-performance so we don’t take a performance hit but we can certainly increase the capacity on a cost-per-bit basis versus what we have today?”

Today Italy’s A3Cube announced the F-730 Family of EXA-Converged parallel systems built on Dell servers and achieving sub-microsecond latency through bare metal data access. “A3Cube’s EXA-Converged infrastructure represents the next step in the evolution of converged systems,” said Emilio Billi, A3Cube’s CTO, “while keeping and improving on the scalability and resilience of Hyper-Converged infrastructure. It is engineered to converge all system resources and provide parallel data access and inter-node communication at the bare metal level, eliminating the need for, and the limits of, traditional Hyper-Converged systems. The system can efficiently use all the fastest storage devices currently on the market or planned to come to market, and puts all existing solutions in the rear-view mirror.”

In this podcast, the Radio Free HPC team looks at the Top Technology Stories for High Performance Computing in 2015. “From 3D XPoint memory to Co-Design Architecture and NVM Express, these new approaches are poised to have a significant impact on supercomputing in the near future.” We also take a look at the most-shared stories from 2015.

“For decades, the industry has searched for ways to reduce the lag time between the processor and data to allow much faster analysis,” said Rob Crooke, senior vice president and general manager of Intel’s Non-Volatile Memory Solutions Group. “This new class of non-volatile memory achieves this goal and brings game-changing performance to memory and storage solutions.”

In this video, researchers describe how the new HPC facility at Rockefeller University will power bioinformatics research and more. This is the first time that Rockefeller University has purpose-built a datacenter for high performance computing.