HP considers nanostores with memristors in future data centers

Several months ago, Hewlett-Packard demonstrated memristor
technology, a new system architecture whose components can be dynamically
switched between performing logic operations and storing data. Memristor
innovator Stan Williams, a senior HP fellow and director of its Information and
Quantum Systems Lab, claimed that the new computing paradigm could enable
calculations to be performed “in the same chips where data is stored, rather
than in a specialized CPU.”

The company also revealed that it had designed the
architecture to allow multiple layers of memristive logic to be stacked in a 3D
fashion, resulting in a tenfold increase in memory density. Many of the authors
and analysts covering the news reflected on the history of Moore’s Law,
marveling at how it has held up through both prosperous periods in the IT
industry and economic downturns that stifled advances in chip logic.
Now, HP researchers are exploring ways to make their memristor architecture
useful in future server and data center designs, where data is simultaneously
an exponentially growing asset and a mounting management problem.

“Re-thinking the balance of computer, storage and
communications will happen, and it will have big implications,” said Partha
Ranganathan, a principal investigator and distinguished technologist in the
exascale data center project at HP Labs.

Researchers working with memristor technology intend to
rebalance these fundamental core system components with
a new chip called a “nanostore.” From an architectural perspective, a nanostore
is a 3D stack of processor cores connected to non-volatile memristor memory
(NVRAM). The new processor-memory design essentially places data, rather than
the CPU itself, at the heart of the computing transaction.

With a new “stateful
logic” paradigm that shifts the view of a computer system from the CPU as its
“brain” to the data itself as the system’s center (the nanostore concept), HP
Labs has found the new design approach to deliver roughly ten times the
performance for the same energy cost. “This is early work in [3D stacks
and memristors], and we definitely think we can get better [performance]
factors,” said Ranganathan.
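“Stateful logic” here refers to memristors acting as both the gates and the latches of a computation. HP’s published memristor research demonstrated logic via material implication (IMP), in which the result of p IMP q overwrites q in place; two IMP steps plus a cleared working cell yield NAND, which is functionally complete. A minimal truth-table sketch in Python (the function names are illustrative, not HP’s actual interface):

```python
def imp(p, q):
    """Material implication: p IMP q = (NOT p) OR q.
    In a memristive circuit the result overwrites q in place,
    so the memory cell doubles as the gate's output latch."""
    return (not p) or q

def nand(p, q):
    """NAND from two IMP steps and one working cell, mirroring
    the three-memristor construction: clear the working cell s,
    then apply IMP twice."""
    s = False          # clear the working memristor
    s = imp(p, s)      # s = NOT p
    s = imp(q, s)      # s = (NOT q) OR (NOT p) = NAND(p, q)
    return s

# NAND is functionally complete, so an array of memristive cells
# can in principle compute any Boolean function in memory.
for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
```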

He also mentioned that it could take roughly five years
before nanostore devices are ready for commercial use. He and other researchers
at HP Labs plan to publish papers later this year on the initial ideas for
nanostores and for a new low-power processor called a “microblade.” HP Labs has
identified three unique kinds of server designs that could be optimized for
different types of processing workloads. In an energy proportional design, server
performance is dynamically scaled up or down based on an application’s
particular needs. In a consolidated design, multiple jobs are packed into a system.
In a microblade design, jobs are broken down into highly parallel tasks
that can be handled by multiple low-power processors (think ARM and Intel Atom
chips). This last type of design is known in the HPC space as “physicalization”:
the concept of building high-density compute nodes out of clusters of very
cheap, low-power processors.
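As a toy illustration of the microblade idea (assuming a trivially parallel workload; the names below are hypothetical, not HP’s design), the sketch fans independent chunks of a job out across several worker processes standing in for low-power cores:

```python
from multiprocessing import Pool

def checksum(chunk):
    """A stand-in for a small, independent unit of work
    well suited to a cheap low-power core."""
    return sum(chunk) % 65521

def run_parallel(data, n_workers=4):
    # Split the job into per-worker tasks, mimicking how a
    # microblade design fans work out across many simple cores.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        return pool.map(checksum, chunks)

if __name__ == "__main__":
    print(run_parallel(list(range(1000))))
```

The design only pays off when the tasks really are independent; any coordination between chunks would reintroduce the serial bottleneck the next paragraph describes.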

Of course, microblades using low-power processors are
limited by the extent to which underlying algorithms can be split into separate
tasks. In a recent Ars Technica article, Jon Stokes notes that a combination
of high margins on server chips and the overall organization of the hardware
on the die has left an opening for simpler, cheaper solutions like ARM chips
and Intel’s Atom lineup.
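That limit on splitting work into separate tasks is commonly quantified by Amdahl’s law: with a parallelizable fraction f of the work and n processors, the best achievable speedup is 1 / ((1 − f) + f/n). A quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Amdahl's law: the serial remainder caps overall speedup
    no matter how many cheap cores a microblade throws at a job."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# Even with 90% of the work parallelizable, 64 low-power cores
# deliver well under a 10x speedup:
print(round(amdahl_speedup(0.9, 64), 2))  # → 8.77
```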

There is also new potential for Intel to reassess its stake in the HPC market
with the introduction of its MIC (Many Integrated Core) server architecture.
But whether MIC will be the last highly parallel, low-power solution powering
the data centers of tomorrow before HP mass-produces nanostores with memristor
technology remains to be seen.