On the NextBigFuture blog, Sander Olson interviews Partha Ranganathan, Hewlett-Packard's principal investigator for the company's exascale datacenter project, on a host of issues relevant to the HPC industry. Ranganathan's frontline commentary covers a wide range of topics, from datacenter design to cloud computing to the challenges of exascale computing. The same design challenges that affect top-tier supercomputers also affect modern datacenters, and according to the HP researcher, creating a feasible path to exascale computing in either setting will require serious efficiency improvements.

Part of the big push to increase computing power, including expanding into the cloud, comes from the need to examine and dissect an ever-increasing data flow. This trend has led Ranganathan to assert that the “industry is transitioning from the information era to the insights era.” Ranganathan explains that the ability to derive “useful information from the deluge of digital data…will provide valuable insights into many things.” He sees this as the “killer app” for cloud computing.

On the subject of exascale computing, Ranganathan describes the technical challenges as daunting, and explains that HP created the exascale datacenter project to help find solutions. Specifically, he sees power as the biggest showstopper, citing the statistic that the carbon footprint of the world's datacenters is comparable to that of the aviation industry. To address the "power wall," designers will need to create systems that are 10-100 times more power efficient. Servers, which sit at the heart of any datacenter, will also need to use memory far more efficiently. On this point, Ranganathan believes memristors could hold the key:

Memristors are two-terminal components developed by HP Labs, the company’s central research arm, that could serve either as memory or as logic. Memristors could be used to combine the logic and the memory in a single area, instead of having memory in a separate section. By combining logic and memory, we could essentially eliminate the memory bottleneck which plagues current computer architectures.

Magnetic hard drives won’t disappear. They will simply go to the next level of storage, which is archival. The cost per bit for magnetic hard drives will probably always be lower than for a comparable solid-state drive. But by eliminating spinning disks from most mainstream usage, we can simultaneously reduce power consumption and increase overall performance.

Looking forward into the next decade, Ranganathan envisions a paradigm shift in the way computers and datacenters operate. In this future scenario, data and computing will be aligned by “intelligent disks,” and “the data will be able to examine itself and derive insights.” He explains that this new model will be more efficient than current memory techniques and will change the way that datacenters relate to data. It’s a more organic approach, “similar to the way the human brain works,” Ranganathan adds.
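To make the “intelligent disk” idea concrete, here is a minimal, purely illustrative sketch in Python. The class and function names are invented for this example and are not part of HP's design: instead of shipping every record to a central processor, a small query is shipped to the storage node, and only the derived result travels back.

```python
# Toy contrast between the processor-centric and data-centric ("intelligent disk") models.
# All names and numbers here are invented for illustration; this is not HP's actual design.

class IntelligentDisk:
    """A storage node that can run a small analysis function next to its own data."""

    def __init__(self, records):
        self.records = records

    def pull_all(self):
        # Processor-centric model: every record crosses the interconnect to the host.
        return list(self.records)

    def run_near_data(self, query):
        # Data-centric model: the query moves to the data; only the result moves back.
        return query(self.records)


disk = IntelligentDisk(records=[{"sensor": i, "temp": 20 + (i % 7)} for i in range(1_000_000)])

# Processor-centric: a million records travel across the wire, then get reduced centrally.
hot_central = sum(1 for r in disk.pull_all() if r["temp"] > 24)

# Data-centric: only a single integer travels across the wire.
hot_near_data = disk.run_near_data(lambda recs: sum(1 for r in recs if r["temp"] > 24))

assert hot_central == hot_near_data
```

The computation and the answer are identical in both cases; what changes is how much data has to move, which is precisely the cost the near-data approach is meant to avoid.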

The current generation of microprocessors is ill-equipped to meet the growing demands of data processing. Stated another way, the amount of data that needs to be processed is quickly outpacing the performance of chips. With the current processor-centric design, data is shuttled back and forth between processor and memory, a time-consuming activity. Additionally, all that back-and-forth movement requires a lot of electricity, far more than is consumed by the actual processing. Employing the current processor design in future exascale supercomputers (machines that will require about a billion cores) would create an impossible energy demand.
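To see why the numbers are so punishing, here is a rough back-of-the-envelope calculation in Python. The per-core performance and energy figures are illustrative, order-of-magnitude assumptions, not values taken from the article.

```python
# Why exascale implies about a billion cores, and why data movement dominates the energy bill.
# All figures are illustrative assumptions (roughly circa-2010 technology), not from the article.

EXAFLOP = 1e18                  # target: 10^18 floating-point operations per second
FLOPS_PER_CORE = 1e9            # assume ~1 GFLOPS sustained per core

cores_needed = EXAFLOP / FLOPS_PER_CORE
print(f"Cores needed at ~1 GFLOPS each: {cores_needed:.0e}")  # ~1e+09, about a billion

ENERGY_PER_FLOP_PJ = 20         # assumed picojoules per double-precision operation
ENERGY_PER_DRAM_WORD_PJ = 2000  # assumed picojoules to fetch one 64-bit word from off-chip DRAM

ratio = ENERGY_PER_DRAM_WORD_PJ / ENERGY_PER_FLOP_PJ
print(f"Fetching an operand from DRAM costs roughly {ratio:.0f}x the energy of computing with it")
```

Under these assumptions, moving a single operand from off-chip memory costs on the order of a hundred times more energy than the arithmetic performed on it, which is why the processor-centric design does not scale to exascale power budgets.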

If computing progress is to continue its forward march and the next big goal of exaflop-scale machines is to be achieved, the current processor architecture will need to be revamped. And thanks to advances in the field of nanoelectronics, the time for such a redesign may finally be at hand. New York Times reporter John Markoff explores the issue in a recent article.

Writes Markoff:

The semiconductor industry has long warned about a set of impending bottlenecks described as “the wall,” a point in time where more than five decades of progress in continuously shrinking the size of transistors used in computation will end. If progress stops it will not only slow the rate of consumer electronics innovation, but also end the exponential increase in the speed of the world’s most powerful supercomputers — 1,000 times faster each decade.

Researchers from industry and academia alike have begun to address the challenge, as Markoff notes. Hewlett-Packard researchers are designing stacked chip systems that bring the memory and processor much closer together, reducing the distance the data must travel and in doing so greatly reducing energy demands.

Parthasarathy Ranganathan, a Hewlett-Packard electrical engineer, explains that the “systems will be based on memory chips he calls ‘nanostores’ as distinct from today’s microprocessors. They will be hybrids, three-dimensional systems in which lower-level circuits will be based on a nanoelectronic technology called the memristor, which Hewlett-Packard is developing to store data. The nanostore chips will have a multistory design, and computing circuits made with conventional silicon will sit directly on top of the memory to process the data, with minimal energy costs.”

The science of nanoelectronics has generated other promising technologies designed to make the energy demands of future systems more manageable. Researchers at Harvard and the Mitre Corporation have developed nanoprocessor “tiles” based on electronic switches made from ultrathin germanium-silicon wires.

I.B.M. and Samsung have partnered on phase-change memories, in which an electric current is used to switch a material from a crystalline to an amorphous state and back again. I.B.M. researchers are also looking at carbon nanotube technology to create hybrid systems that draw on advancements in both nanoelectronics and microelectronics.

HP Sees a Revolution in Memory Chip (HPCwire, April 8, 2010): Memristors can be used to store and process data on the same chip.

Hewlett-Packard scientists on Thursday are to report advances in the design of a new class of diminutive switches capable of replacing transistors as computer chips shrink closer to the atomic scale. The devices, known as memristors, or memory resistors, were conceived in 1971 by Leon O. Chua, an electrical engineer at the University of California, Berkeley, but they were not realized in practice until 2008, at HP Labs.