An astonishing 95% of our universe is made up of dark energy and dark matter. Understanding the physics of this sector is the foremost challenge in cosmology today. Sophisticated simulations of the evolution of the universe ...

Researchers at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, have built a Linux cluster using 16 Raspberry Pi computers as part of a program to teach children and adults the basics ...

A new study by a researcher at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, says that by 2015, the sum of media asked for and delivered to consumers on mobile devices and to their ...

A newly published paper by three UC San Diego astrophysics researchers for the first time provides an explanation for the origin of three observed correlations between various properties of molecular clouds in the Milky Way ...

The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, has been awarded a $12-million grant from the National Science Foundation (NSF) to deploy Comet, a new petascale supercomputer designed ...

(Phys.org) —To get a better understanding of the subatomic soup that filled the early universe, and how it "froze out" to form the atoms of today's world, scientists are taking a closer look at the nuclear phase diagram. ...

Three research organizations at the University of California, San Diego, have been awarded a multi-year National Science Foundation (NSF) grant to build an end-to-end cyberinfrastructure to perform real-time data-driven assessment, ...

(Phys.org) —Officials at Facebook have apparently decided to get serious about making sense of posts by its vast user base—according to MIT's Technology Review, officials with the company (specifically Chief Technical ...

Sequencing the DNA of an organism, whether human, plant, or jellyfish, has become a straightforward task, but assembling the information gathered into something coherent remains a massive data challenge. Researchers using ...

North Carolina State University researchers have developed a way for search engines to provide users with more accurate, personalized search results. The challenge in the past has been how to scale this approach up so that ...

Supercomputer

A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, paralleling the creation of the minicomputer market a decade earlier, but many of them disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, which had purchased many of the 1980s companies to gain their experience. As of July 2009, the IBM Roadrunner, located at Los Alamos National Laboratory, was the fastest supercomputer in the world.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard; typical processor counts ranged from four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some of them off-the-shelf units and others custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.
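The shift described above, from a single fast vector processor to many ordinary CPUs each working on a slice of the data, can be sketched in miniature. The example below is only an illustration of the divide-and-reduce idea, not actual supercomputer code: real clusters coordinate separate nodes over a custom interconnect using message-passing libraries such as MPI, whereas this sketch stands in for those nodes with a single machine's CPU cores via Python's standard `multiprocessing` module. The function names and the choice of workload (a sum of squares) are illustrative assumptions.

```python
# Illustrative sketch of data-parallel computation, in the spirit of
# massively parallel clusters built from "ordinary" CPUs. Real systems
# distribute work across nodes with MPI; here, worker processes on one
# machine play the role of cluster nodes.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" independently computes a partial result on its slice.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the data into one slice per worker, as a job scheduler
    # would distribute work units across cluster nodes.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # Combine the partial results: the analogue of a reduction step
    # (e.g. MPI_Reduce) on a real cluster.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1_000))
    print(parallel_sum_of_squares(data))
```

The design mirrors the cluster architecture in spirit only: the expensive part (per-chunk computation) is embarrassingly parallel, while the cheap final reduction is serial. On real machines, the custom interconnects mentioned above exist precisely to make the communication and reduction phases fast relative to the compute phase.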