In this video from ISC 2017, Addison Snell from Intersect360 Research fires back at industry leaders with hard-hitting questions about the state of the HPC industry. “Listen in as visionary leaders from the supercomputing community comment on forward-looking trends that will shape the industry this year and beyond.”

In this podcast, the Radio Free HPC team looks at the problems with IEEE Floating Point. “As described in a recent presentation by John Gustafson, the flaws and idiosyncrasies of floating-point arithmetic ‘constitute a sizable portion of any curriculum on Numerical Analysis.’ The whole thing has Dan pretty worked up, so we hope that the news of Posit Computing coming to the new processors from Rex Computing will help.”
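The "flaws and idiosyncrasies" Gustafson refers to are easy to demonstrate. The following sketch (my own illustration, not taken from the talk) shows three well-known IEEE 754 behaviors: decimal fractions that are not exactly representable in binary, addition that is not associative, and NaN comparing unequal to itself:

```python
# Well-known idiosyncrasies of IEEE 754 binary floating point.
# (Illustrative examples; not from Gustafson's presentation.)

# 1. Decimal fractions are not exactly representable in binary.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# 2. Addition is not associative: the large terms cancel first in
#    one grouping, but absorb the small term first in the other.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)        # 1.0
print(a + (b + c))        # 0.0

# 3. NaN ("not a number") is never equal to itself.
nan = float("nan")
print(nan == nan)         # False
```

Posit arithmetic, as advocated by Gustafson, aims to address exactly this class of surprises.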

Oak Ridge National Laboratory announced they’re bringing on D-Wave to use quantum computing as an accelerator for future exascale applications. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” said Robert “Bo” Ewald, president of D-Wave International. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.”

“Following an introduction to the exceptional computational power of quantum computers using analogies with classical high performance computing systems, I will discuss real-world application problems that can be tackled on medium-scale quantum computers but not on post-exascale classical computers. I will motivate hardware-software co-design of quantum accelerators to classical supercomputers and the need for educating a new generation of quantum software engineers with knowledge both in quantum computing and in high performance computing.”

“Shrinking the feature sizes of wires and transistors has been the driver for Moore’s Law for the past 5 decades, but what might lie beyond the end of current lithographic roadmaps and how will it affect computing as we know it? Moore’s Law is an economic theory after all, and any option that can make future computing more capable each new generation (by some measure) could continue Moore’s economic theory well into the future.” The goal of this panel session is to communicate the options for extending computing beyond the end of our current silicon lithography roadmaps.

Nicola Marzari from EPFL gave this public lecture at PASC17. “The talk offers a perspective on the current state-of-the-art in the field, its power and limitations, and on the role and opportunities for novel models of doing computational science – leveraging big data or artificial intelligence – to conclude with some examples on how quantum simulations are accelerating our quest for novel materials and functionalities.”

“The von Neumann design has also led computing to its current limits in efficiency and cooling. As engineers built increasingly complex chips to carry out sequential operations faster and faster, the speedier chips have also been producing more waste heat. Recognizing that modern computing cannot continue on this trajectory, a number of companies are looking to the brain for inspiration and developing ‘neuromorphic’ chips that process data the way our minds do. One such technology is IBM’s TrueNorth Neurosynaptic System.”
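The spike-based, event-driven processing that neuromorphic chips borrow from biology can be sketched with a leaky integrate-and-fire (LIF) neuron, a standard textbook model. This is purely illustrative; it is not TrueNorth’s actual neuron model, and the leak and threshold values are made up for the example:

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- an illustrative
# sketch of spike-based processing, not TrueNorth's actual equations.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate input current each timestep with exponential leak;
    emit a spike (1) and reset when the potential crosses threshold."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current   # leaky integration of input
        if v >= threshold:
            spikes.append(1)     # fire a spike
            v = 0.0              # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # [0, 0, 1, 0, 0, 1]
```

Because neurons only communicate when they spike, such architectures avoid the constant instruction-fetch and data-movement traffic of a von Neumann design, which is where the efficiency gains come from.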

This year the ISC conference dedicated an entire day, Wednesday, June 21, to deep learning, discussing recent advances in artificial intelligence based on deep learning technology. However, it was not just the conference sessions where deep learning dominated the conversation: the show floor of the exhibition hosted many new products dedicated to optimizing HPC hardware for deep learning and AI workloads.

In this video from ISC 2017, Mike Vildibill of Hewlett Packard Enterprise describes why we need Exascale and how the company is pushing forward with Memory-Driven Computing. “At the heart of HPE’s exascale reference design is Memory-Driven Computing, an architecture that puts memory, not processing, at the center of the computing platform to realize a new level of performance and efficiency gains. HPE’s Memory-Driven Computing architecture is a scalable portfolio of technologies that Hewlett Packard Labs developed via The Machine research project. On May 16, 2017, HPE unveiled the latest prototype from this project, the world’s largest single memory computer.”

“There is real excitement about new technologies like Quantum Computing, however many things are needed to help users learn how to program and use Quantum computers,” said Earl Joseph, CEO at Hyperion Research. “By providing universities, research institutes and both large and small businesses across the world access to its QLM, Atos is enabling organizations to experiment with Quantum computing and prepare for this potential revolution in computing.”

In this video, researchers describe how the new HPC facility at Rockefeller University will power bioinformatics research and more. This is the first time that Rockefeller University has purpose-built a datacenter for high performance computing.