What is Supercomputing?

Supercomputing definition

The term "supercomputing" refers to the processing of massively complex or data-intensive problems using the concentrated compute resources of multiple computer systems working in parallel (i.e., a "supercomputer"). A supercomputer operates at the highest performance level currently attainable by any computer, typically measured in petaflops (quadrillions of floating-point operations per second). Sample use cases include genomics and astronomical calculations.
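To make the petaflop figure concrete, here is a minimal back-of-the-envelope sketch in Python comparing a hypothetical 1-petaflop system against an ordinary desktop CPU. The workload size and the desktop's rating are illustrative assumptions, not published benchmarks.

```python
# Why performance is measured in FLOPS: the same workload at two speeds.
# All figures below are illustrative assumptions.

WORKLOAD_FLOP = 1e18      # a hypothetical job: 10^18 floating-point operations

DESKTOP_FLOPS = 100e9     # ~100 gigaflops, a rough desktop-CPU figure
SUPER_FLOPS = 1e15        # a 1-petaflop supercomputer

desktop_seconds = WORKLOAD_FLOP / DESKTOP_FLOPS   # 1e7 s, roughly 116 days
super_seconds = WORKLOAD_FLOP / SUPER_FLOPS       # 1e3 s, roughly 17 minutes

print(f"Desktop:       {desktop_seconds / 86400:.0f} days")
print(f"Supercomputer: {super_seconds / 60:.0f} minutes")
```

The four-orders-of-magnitude gap in speed turns a job measured in months into one measured in minutes, which is the practical meaning of the petaflop rating.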

Why supercomputing?

Supercomputing enables problem solving and data analysis that would be impossible, too time-consuming, or too costly with standard computers; fluid dynamics calculations are one example. Today, big data presents a compelling use case: a supercomputer can discover insights in vast troves of otherwise impenetrable information. High Performance Computing (HPC) offers an accessible variant, making it possible to focus compute resources on data analytics problems without the cost of a full-scale supercomputer.

HPE supercomputing

HPE approaches supercomputing through a High Performance Computing (HPC) architecture. HPC makes it possible to overcome traditional cost barriers to supercomputing: you can choose how much compute power to concentrate in HPC clusters. Our HPC solutions empower innovation at any scale, building on our purpose-built HPC systems and technologies, solutions, applications, and support services.
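The cluster idea behind HPC, splitting one large job across many workers, can be sketched with Python's standard multiprocessing module. This is only a toy analogy (real HPC codes typically use MPI or similar frameworks), and the workload and worker count are illustrative assumptions.

```python
# Toy sketch of data parallelism: split one large sum across worker
# processes, much as an HPC cluster splits a job across nodes.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of i*i over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 1_000_000, 4          # illustrative sizes
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))   # combine partial results
    print(total == sum(i * i for i in range(n)))     # parallel == serial
```

Each worker handles an independent slice of the range, and the partial results are combined at the end; scaling this pattern from four processes on one machine to thousands of nodes is, in essence, what an HPC cluster does.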

As a leader in the HPC market, Hewlett Packard Enterprise provides unique capabilities for driving innovation into the future. Learn how HPE is approaching the many challenges on the path to exascale, the next generation of HPC. Register to download the Technical White Paper.