HPCwire » Moore’s Law (http://www.hpcwire.com)
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Pushing Back the Limits of Microminiaturization
http://www.hpcwire.com/2015/03/04/transistors-head-for-nanoscale/ (March 4, 2015)

Over the last half a century, computers have transformed nearly every facet of society. The information age and its continuing evolution can be traced to the invention of the integrated circuit and the reliable progression of smaller feature sizes – enabling generation after generation of smaller, faster and cheaper microprocessors.

But now that foundational trend of modern computing, commonly referred to as Moore’s law, is in jeopardy. With the US semiconductor industry valued at more than $65 billion a year, and the full promise of big data and the Internet of Things yet to come, there is great motivation to find a replacement technology that can keep this momentum going for another half-century.

In a recent feature piece, NSF science writer Aaron Dubrow discusses some of the work being done to help mitigate the disruption of a plateauing exponential. The International Technology Roadmap for Semiconductors (ITRS), which charts the course for the semiconductor industry, anticipates another decade of shrinking CMOS geometries, yielding transistors that are 5 nanometers long and 1 nanometer wide – that’s 5 silicon atoms across.
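That last figure is easy to sanity-check with a quick back-of-the-envelope calculation; the ~0.235 nm Si-Si bond length used below is a standard textbook value, not taken from the article:

```python
# How many silicon atoms span a 1 nm transistor width?
# Roughly n bonds of ~0.235 nm fit across 1 nm, and n bonds connect n + 1 atoms.
si_si_bond_nm = 0.235   # typical Si-Si bond length (textbook value)
width_nm = 1.0

atoms_across = round(width_nm / si_si_bond_nm) + 1
print(atoms_across)  # 5
```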

At the atomic scale, predictability breaks down and there is a rise in strange behaviors, such as quantum tunneling and atomistic disorder. Understanding these phenomena requires specialized modeling software, like NEMO5 (the fifth edition of the NanoElectronics MOdeling Tools). Developed by a team of researchers at Purdue University, led by Gerhard Klimeck, NEMO5 models multiscale, multiphysics phenomena. This makes it possible to design future nanoelectronic devices, including transistors that are only a few atoms wide.

Klimeck and his colleagues received an NSF Petascale Computing Resource Allocation award that enabled them to use their NEMO5 software in tandem with the powerful Blue Waters supercomputer to assess the limits of current semiconductor technologies and explore alternative materials and approaches.

The team’s findings depict some of the challenges that arise with the continued shrinking of CMOS geometries, and were used to inform the 2014 ITRS roadmap.

Growing HPC Beyond Moore’s Law
http://www.hpcwire.com/2015/03/02/growing-hpc-beyond-moores-law/ (March 3, 2015)

Rice University’s Dr. Jan E. Odegard recently added his voice to a growing chorus of HPC experts weighing in on the changing HPC landscape. As the featured speaker at the first Lunch’n Learn event put on by the Norwegian Consulate General in Houston, Odegard spoke about the inevitable death of Moore’s law and what it will mean to no longer have this exponential driving chip performance.

Since the invention of the integrated circuit in 1958, the computing industry has been making “diamonds” out of sand, observed Odegard, referring of course to silicon-based microchips. Moore’s law and Dennard scaling fueled five decades of smaller, faster and cheaper microelectronics, and delivered thousand-fold performance increases roughly every ten years, setting the stage for the information age and everything that goes along with it, from supercomputers to iPhones.
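The thousand-fold-per-decade figure follows directly from the doubling cadence, as a quick piece of arithmetic shows (plain exponent math, not data from the talk):

```python
# Growth multiplier over a decade for a given doubling period (in years).
def decade_multiplier(doubling_period_years: float) -> float:
    return 2 ** (10 / doubling_period_years)

# Transistor count alone, doubling every ~2 years (classic Moore's law):
print(round(decade_multiplier(2.0)))   # ~32x per decade

# Delivered performance, which historically doubled roughly yearly once
# Dennard scaling (faster *and* cheaper transistors) is factored in:
print(round(decade_multiplier(1.0)))   # ~1024x -- the "thousand-fold" figure
```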

But Moore’s law won’t last forever. Feature sizes are hitting the limits of feasibility from a physics, power and cost perspective. Dr. Odegard predicted that Moore’s law will only hold out for another three or four generations (6-8 years) using current fab techniques. Currently no replacement for silicon-based CMOS exists, but researchers are hot on the trail of candidates, including spintronics, nanotubes, graphene, and other exotic technologies.

Moore’s law created a “free lunch” situation, where performance increases were a matter of waiting for the next generation of chips. But chips aren’t the only place to exploit performance. Dr. Odegard reports that at Rice, where he is the executive director of the Ken Kennedy Institute for Information Technology, there is increased focus on better software and tools, and on system-level optimizations. This is highly skilled work, however, which explains why institutions like Rice are doubling down on outreach and training efforts.

HPC is facing the backend of an exponential that has propelled five decades of progress. That’s a pretty big loss to overcome, but there’s also cause for optimism because innovation flourishes under adversity.

Dr. Odegard will also be speaking at the Rice University Oil and Gas High Performance Computing (OG HPC) Workshop, taking place March 4-5, 2015.

Researchers Devise Promising Spintronics Semiconductor
http://www.hpcwire.com/2015/02/23/researchers-devise-promising-spintronics-semiconductor/ (February 23, 2015)

Spintronics — the practice of using the spin of electrons to read, write and manipulate data — has long been hailed as a promising avenue for post-CMOS exploration, but imbuing a substrate with the necessary levels of magnetism and conductivity has proved challenging.

A cross-disciplinary team of researchers at the University of Michigan has created a semiconductor compound that is more conducive to this level of control.

The new compound shows promise as a base material for spintronic-based devices, in the same way that silicon is the base for electronic computing devices. It’s a breakthrough that could hold the key to smaller, faster, more energy-efficient computing devices.

Circuits that use spin have a smaller footprint than charge-based circuits, which means that more of them can be squeezed onto a single processor. In this way, spintronics offers a path beyond the physical limits of silicon-based microelectronics. Additionally, spintronic devices store information using both the “on” or “off” electrical charge and the “up” or “down” magnetic spin of electrons. This is an advantage because the spin of electrons remains stable at much smaller scales of miniaturization.

“You can only make an electronic circuit so small before the charge of an electron becomes erratic,” explains Ferdinand Poudeu, assistant professor of materials science and engineering at the University of Michigan. “But the spin of electrons remains stable at much smaller sizes, so spintronic devices open the door to a whole new generation of computing.”

Another benefit of spintronics is the ability to combine logic, storage and communication onto a single chip, again enabling a much smaller footprint and lower power consumption.

For years, researchers in the field have sought to make spintronic semiconductors by working to tweak existing materials, but Poudeu’s team went back to the drawing board, and created a new crystal structure made from a mixture of iron, bismuth and selenium. The result is a material that offers the ability to manipulate conductivity and magnetism independently.

Based at the University of Michigan, the project drew from chemistry, crystallography and computer science to create a novel semiconductor spintronics substrate. While the initial research was based on a powder form of the material, the next step will be to manufacture the thin film that would be required for a spintronic device. The process is expected to take about a year.

Horst Simon on the HPC Slowdown
http://www.hpcwire.com/2015/02/13/horst-simon-hpc-slowdown/ (February 14, 2015)

At an HPC meetup event in San Francisco on Feb 10, Berkeley Lab Deputy Director Horst Simon makes the case that Moore’s law and parallelism can no longer be counted on to provide the exponential growth that has been driving high-performance computing for six decades.

If indeed Moore’s law is coming to an end, there will be a need for new architectures and new technologies, he says, citing examples from the post-CMOS space, or even non-von Neumann options like quantum computing and neuromorphic (brain-like) computing.

Horst gives measurable evidence of his claim that Moore’s law is running out of steam using TOP500 metrics as well as other data.

Looking at a slide of projected performance development using TOP500 data, it may appear obvious that an exaflops system is on track for 2020, says Horst, but that would be a mistake.

“Even if you don’t know anything about high-performance computing at all, you should be very much concerned about [making these assumptions],” he adds, “because what you are doing here is extrapolations on a semi-log scale. And whenever you’re dealing with exponential growing data, very small perturbations in the beginning can give you a big variation in the end.”

Horst goes on to identify two such perturbations. By zeroing in on the graph, especially the line representing the sum of total list flops over time, it can be seen that in June 2008 something happened to cause a leveling out of the slope of this extrapolation. A similar break point also appears in June 2013.
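Simon’s warning about semi-log extrapolation is easy to reproduce numerically: a modest change in annual growth rate compounds into a large gap by the end of the projection. The starting value and rates below are illustrative stand-ins, not fitted TOP500 data:

```python
# Compound two slightly different annual growth rates over the same window.
start_pflops = 250.0           # illustrative list-wide sum for 2013, in petaflops
years = 2020 - 2013            # extrapolation window

historical = start_pflops * 1.9 ** years   # pre-break trend (~1.9x/year)
flattened = start_pflops * 1.4 ** years    # post-2013 flattened trend

print(f"historical-slope projection: {historical:,.0f} PF")
print(f"flattened-slope projection:  {flattened:,.0f} PF")
print(f"gap by 2020: {historical / flattened:.1f}x")
```

Even though the two slopes look close on a semi-log chart, the projections end up nearly an order of magnitude apart by 2020.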

This leads Horst to the conclusion that this five-year span marks a turning point where the growth attributed to Moore’s law and parallelism are no longer there. It’s a case that is supported further by the lack of turnover in the top ten machines, with this grouping remaining virtually unchanged for two years.

Even the CORAL announcement, the joint collaboration of Oak Ridge, Argonne, and Lawrence Livermore, which is noteworthy for funding three 150-petaflops systems, can be seen as a marker of slowdown. Horst says these machines would have had to be implemented already to put the US on track for a 2020 exascale timeline, yet they are still two to three years away.

Horst goes on to address some of the possible reasons for the stagnation, including a lack of investment stemming from the worldwide recession, and a lack of engagement by key vendors. There are also the steep technical challenges associated with exascale, such as overcoming data bottlenecks and power constraints. But in the end, Horst maintains that the performance decline is primarily due to the limits of Moore’s law.

It’s not all doom and gloom, however, as HPC is currently thriving around the world as a driver of innovation. High-level supercomputing is no longer the purview of one or two nations, and the first to field an exascale system will be the nation that makes the most targeted funding commitment, provided it acts soon.

As Horst puts it: “the only thing standing between us and an exascale machine is a lot of money – billions of dollars of investment and maybe a power bill of $50-100 million a year.”

One Atom Thin Silicon Transistors Are Here
http://www.hpcwire.com/2015/02/07/one-atom-thin-silicon-transistors/ (February 7, 2015)

Move over graphene, there’s a new 2D wonder material being hailed as a potential Moore’s law extender, called silicene. This one-atom-thick two-dimensional crystal of silicon could be the ultimate miniaturization enabler, setting the stage for future generations of faster, more energy-efficient microchips.

A cousin to graphene, silicene consists of a single layer of atoms arranged in a honeycomb pattern, but where graphene is carbon-based, silicene is made from silicon atoms. Thus it offers an easier path to integration for a microelectronics industry already dominated by silicon.

Traditional silicon-based transistors can only get down to about 5 atoms thick before becoming unpredictable and losing performance; the invention of a silicene transistor pushes that limit back to just one atom. And in computing, smaller means faster.

Silicene has had a rather remarkable journey. First theorized in 1994, it wasn’t until 2010 that researchers began making headway in developing the silicon analogue to graphene. Two years later, several teams around the world independently succeeded in creating silicene in the lab.

Now researchers at the University of Texas at Austin’s Cockrell School of Engineering are unveiling details of the first silicene transistor, the kind of device that is crucial for logic operations.

The devices were developed by Deji Akinwande, an assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering, and his team, which includes lead researcher Li Tao. Their demonstration that silicene can be made into transistors is a major advancement in the search for alternative CMOS materials.

“Nobody could have expected that in such a short time, something that didn’t exist could make a transistor,” says Guy Le Lay, a materials scientist at Aix-Marseille University in France, who was one of the first scientists to create silicene.

Despite its promise, there are major challenges associated with silicene, such as its instability when exposed to air.

To reduce exposure to air, the researchers formed a silicene sheet by allowing a hot vapor of silicon atoms to condense onto a crystalline block of silver inside a vacuum chamber, then added a 5-nanometer-thick layer of alumina on top. Thus protected, the team was able to safely peel it off its base and transfer it to an oxidized-silicon substrate. By scraping off some of the silver, the team exposed two islands of metal (acting as electrodes) with a strip of silicene between them.

The exposed silicene still degrades in about two minutes, but that window of time was sufficient to measure its properties. While its electrons were “sluggish” in comparison with graphene’s, silicene’s buckled structure endows it with a tuneable band gap, which graphene, being flat, lacks. Since band gaps are what give semiconductors the ability to switch currents on and off, they are the foundation of transistors.

The technique shows promise for other paper-thin, air-sensitive materials too, but its silicon nature makes silicene a serious contender for commercial adoption.

“Apart from introducing a new player in the playground of 2-D materials, silicene, with its close chemical affinity to silicon, suggests an opportunity in the road map of the semiconductor industry,” Akinwande said. “The major breakthrough here is the efficient low-temperature manufacturing and fabrication of silicene devices for the first time.”

IBM Advances Self-Assembly in 3D Transistors
http://www.hpcwire.com/2015/01/28/ibm-advances-self-assembly-3d-transistors/ (January 28, 2015)

The key to Moore’s law is the ability to incorporate ever-smaller feature sizes into each new chip generation. While the exponential progress ensconced in the “law” has slowed down in the last decade, the payoff is still compelling enough that chip engineers will go to great lengths to forestall its long-predicted demise. One of these workarounds is to expand transistors into the third dimension, which has the effect of speeding switching while reducing power. On the manufacturing side, a technique called self-assembly holds promise as a way to achieve smaller circuit elements by getting molecules to automatically arrange themselves into tiny but useful patterns.

According to a recent article at MIT Technology Review by guest contributor Katherine Bourzac, researchers at IBM have combined these approaches to create the first 3-D transistor made with molecular self-assembly.

As Bourzac explains, one of the primary enablers for chipmaking, photolithography, has hit a roadblock when it comes to the fastest microchips. Generally regarded as acceptable down to 14 nm, conventional photolithography is expected to become too expensive and complex past that point due to limits imposed by the wavelength of light.
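The limit imposed by the wavelength of light can be estimated with the Rayleigh resolution criterion, half-pitch ≈ k1 · λ / NA. A minimal sketch using typical 193 nm immersion-scanner parameters (textbook ballpark values, not figures from the article):

```python
# Rayleigh criterion: smallest printable half-pitch in optical lithography.
def min_half_pitch_nm(k1: float, wavelength_nm: float, na: float) -> float:
    return k1 * wavelength_nm / na

# ArF immersion: 193 nm light, NA ~1.35, aggressive process factor k1 ~0.27.
print(round(min_half_pitch_nm(0.27, 193.0, 1.35), 1))  # 38.6
```

A single exposure bottoms out near 40 nm of half-pitch, which is why denser features call for multiple patterning or, as here, directed self-assembly.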

The technique that IBM researchers employed involves solutions of compounds known as block copolymers that are coaxed to assemble themselves into complex structures. In this way, it is possible to create patterns that are much denser than what would be achievable with lithography. The technique is directed at the smallest elements of the integrated circuit that are most difficult to achieve with conventional methods, i.e., the channels in silicon transistors and the fins in 3-D transistors. For the rest of the circuit, standard manufacturing technologies would still suffice.

The IBM research group used photolithography to create deep, parallel trenches in a silicon wafer. The trenches guide the assembly of block copolymers, which then act as a template for a chemical process that etches even tinier features. The end result was a working device with transistor fins smaller and more densely-packed than would be possible with just lithography. The features were just 29 nanometers apart, much smaller than the 80 nanometers that is currently possible, writes Bourzac.

There is a lot of buzz around self-assembling circuits in the chip industry, where it is seen as a potential Moore’s law extender. However, according to Kwok Ng, director of nanomanufacturing at the Semiconductor Research Corp. (interviewed for the source article), the process is still prone to defects that will need to be controlled before it is suitable for large-volume production.

Stanford Group Creates Four-Layer Stacked Chip
http://www.hpcwire.com/2014/12/18/stanford-group-creates-four-layer-stacked-chip/ (December 18, 2014)

With transistor scaling slated to come up against some fundamental limits over the next five to seven years, chip designers are hot on the trail of technologies to extend the exponential advances described by Moore’s law. One of the most promising areas of R&D involves stacking layers of logic and memory into 3D chips. The implementation faces many challenges, however, including how to remove heat from the inner layers, yet the energy-efficiency and bandwidth payoffs are compelling. Quite a few research groups and all of the big semiconductor makers are working on the technique, but so far cost and risk are holding up progress.

Stanford University is home to one of the groups working to usher in a new breed of chips that are smaller, faster, and more energy-efficient than today’s traditional ICs. Stanford engineers recently announced the successful development of a four-layer prototype that employs two logic transistor layers on the outside and two memory layers on the inside. Thousands of nanoscale electronic “elevators” carry information between logic and memory much faster, and with less electricity, than is possible in traditional ICs.

Project leaders Subhasish Mitra, a Stanford associate professor of electrical engineering and of computer science, and H.-S. Philip Wong, the Willard R. and Inez Kerr Bell Professor in Stanford’s School of Engineering, discuss the new approach in a paper, which the duo presented at the IEEE International Electron Devices Meeting held in San Francisco this week.

As explained in an article from Stanford, the researchers credit three breakthroughs with enabling the technology: a novel transistor design based on carbon nanotubes; memory fabricated using titanium nitride, hafnium oxide and platinum that is more conducive to stacking; and an innovative high-rise technique that employs a “multiplicity of connections.”

Although other research groups have experimented with carbon nanotubes (CNTs), which are considered less leakage-prone than silicon due to their small diameter, the Stanford team did something different. They started out the usual way, growing CNTs on round quartz wafers. Then, using an adhesive process they developed, they transferred the CNTs off the quartz growth medium and onto a silicon wafer. This wafer was used as the foundation of the high-rise chip.

The technique provided a way to get lots of CNTs into a small area, which the team claims gives them some of the highest-density, highest-performance CNT transistors ever built.

“This research is at an early stage, but our design and fabrication techniques are scalable,” Mitra said. “With further development this architecture could lead to computing performance that is much, much greater than anything available today.”

“Paradigm shift is an overused concept, but here it is appropriate,” Wong added. “With this new architecture, electronics manufacturers could put the power of a supercomputer in your hand.”

Will Magnets Be the Cure for What Ails Moore’s Law?
http://www.hpcwire.com/2014/10/01/will-magnets-cure-ails-moores-law/ (October 1, 2014)

With silicon-based processors facing some inexorable limits, scientists are looking elsewhere to keep computing on its exponential growth track. One potential alternative that is getting some traction is magnet-based computing. A group of electrical engineers at the Technische Universität München (TUM) is studying the feasibility of using miniature magnets as the building block for integrated circuits.

The group ran experiments using three-dimensional arrangements of nanometer-scale magnets instead of transistors. Their results are detailed in the journal Nanotechnology.

The 3D stack of nanomagnets functions as a majority logic gate, which could act as a programmable switch in a digital circuit. The mechanism is a lot like ordinary bar magnets: when you bring them near each other, opposite poles attract and like poles repel. If you bring together several bar magnets and hold all but one in a fixed position, the orientation that the free magnet flips to will be determined by the orientation of the majority of the fixed magnets.

Gates made from field-coupled nanomagnets work in a similar way, with the reversal of polarity representing a switch between Boolean logic states, i.e., 1 and 0. In the 3D majority gate created by the research team, the state is determined by three input magnets, one of which sits 60 nanometers below the other two, and is read out by a single output magnet.
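The logic of such a gate is easy to model: the output adopts whichever state the majority of the three inputs hold, and pinning one input turns the gate into AND or OR. A toy sketch of the logic only (spin orientations encoded as 0/1; the magnetics themselves are not modeled):

```python
# Majority gate: output = majority state of the three input magnets.
def majority(a: int, b: int, c: int) -> int:
    return 1 if a + b + c >= 2 else 0

# Pinning one input "programs" the gate: fixed 1 gives OR, fixed 0 gives AND.
for a in (0, 1):
    for b in (0, 1):
        assert majority(a, b, 1) == (a | b)
        assert majority(a, b, 0) == (a & b)
print("majority gate acts as a programmable AND/OR")
```

This programmability is what lets a single gate primitive stand in for several Boolean functions in a circuit.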

Nanomagnetic logic is one of the technologies being tracked by the industry group behind the International Technology Roadmap for Semiconductors. Magnetic circuits are non-volatile, so they maintain state without power. They also consume extremely little energy, operate at room temperature and resist radiation.

Perhaps most importantly, nanomagnetic logic can support very dense packing. The building blocks, the individual nanomagnets, are comparable in size to individual transistors, but whereas transistors require contacts and wiring, nanomagnets operate purely through coupling fields.

The 3D design also works to make nanomagnetic logic competitive. TUM doctoral candidate Irina Eichwald, lead author of the Nanotechnology paper, explains: “The 3D majority gate demonstrates that magnetic computing can be exploited in all three dimensions, in order to realize monolithic, sequentially stacked magnetic circuits promising better scalability and improved packing density.”

“It is a big challenge to compete with silicon CMOS circuits,” adds Dr. Markus Becherer, leader of the TUM research group within the Institute for Technical Electronics. “However, there might be applications where the non-volatile, ultralow-power operation and high integration density offered by 3D nanomagnetic circuits give them an edge.”

Quick Change Material Begs Post-Silicon Consideration
http://www.hpcwire.com/2014/09/22/quick-change-material-begs-post-silicon-consideration/ (September 23, 2014)

With Moore’s law in peril, the search is on for the next computing substrate. Keeping up the pace of progress in an ever more compute- and data-driven world will likely require a post-silicon invention that can satisfy humanity’s need for faster, smaller, greener and more powerful computers.

In recognition of the size and speed limitations of current compute and memory technology, scientists from the University of Cambridge, the Singapore A*STAR Data-Storage Institute and the Singapore University of Technology and Design are exploring the feasibility of replacing silicon with a material that can switch back and forth between different electrical states. Phase-change materials (PCMs), as these promising substrates are known, can switch between two structural phases with different electrical states – one crystalline and conducting, the other glassy and insulating – in billionths of a second. It’s a technology that could one day enable processing speeds one thousand times faster than current systems, while using less energy.

As described in the journal Proceedings of the National Academy of Sciences, the researchers created a PCM-based processor using chalcogenide glass, which can be melted and recrystallized in as little as half a nanosecond when the correct voltage is applied. The team showed that logic-processing operations can be performed in non-volatile memory cells using particular combinations of ultra-short voltage pulses, which is not possible with silicon-based technology. This works because the PCM devices put logic operations and memory in the same location.
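The “logic and memory in the same location” idea can be caricatured in a few lines of code. The class and its pulse interface below are entirely invented for illustration (real PCM cells switch phase under voltage pulses, not method calls), but they show the architectural point: the stored bit is both operand and result, so nothing has to move:

```python
# Toy model of in-memory logic: the cell's phase encodes a bit, and a logic
# operation updates that bit in place, so compute and storage are colocated.
class PCMCell:
    def __init__(self, bit: int = 0):
        # 1 = crystalline (conducting), 0 = amorphous (insulating)
        self.bit = bit

    def pulse_or(self, incoming: int) -> None:
        # A voltage pulse ORs the incoming bit into the stored state.
        self.bit |= incoming

    def read(self) -> int:
        return self.bit  # non-volatile: the state persists without power

cell = PCMCell(0)
cell.pulse_or(1)
print(cell.read())  # 1 -- the result lives where it was computed
```

Contrast this with a conventional pipeline, where the result of an ALU operation must be written back to a separate memory, costing time and energy on every transfer.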

Silicon-based systems are movement-intensive, and that takes time and energy. “Ideally, we’d like information to be both generated and stored in the same place,” said Dr. Desmond Loke of the Singapore University of Technology and Design, the paper’s lead author. “Silicon is transient: the information is generated, passes through and has to be stored somewhere else. But using PCM logic devices, the information stays in the place where it is generated.”

“Eventually, what we really want to do is to replace both DRAM and logic processors in computers by new PCM-based non-volatile devices,” added Professor Stephen Elliott of Cambridge’s Department of Chemistry, who led the research. “But for that, we need switching speeds approaching one nanosecond. Currently, refreshing of DRAM leaks a huge amount of energy globally, which is costly, both financially and environmentally. Faster PCM switching times would greatly reduce this, resulting in computers which are not just faster, but also much ‘greener’.”

A drawback of current PCM devices is that they are not as fast as their silicon-based counterparts. There is also a stability issue affecting the amorphous phase. But the researchers found that by performing logic operations in reverse, putting the crystalline phase first, they can facilitate faster, more stable performance.

PCMs were developed in the 1960s and found their way into optical-memory devices and more recently in electronic-memory applications. They are just now starting to replace silicon-based flash memory in some smart phones. As researchers continue to identify speed enhancements, non-volatile PCM could eventually supplant the more energy-intensive DRAM.

Deconstructing Moore’s Law’s Limits
http://www.hpcwire.com/2014/08/18/deconstructing-moores-laws-limits/ (August 18, 2014)

For the past five decades, computers have progressed on a predictable trajectory, doubling in speed roughly every two years in tune with Gordon Moore’s oft-cited observation-turned-prophecy. Although semiconductor scaling continues to yield performance gains, many perceive that a tipping point is nigh, where the cost-benefit analysis of further miniaturization breaks down.

The latest researcher to weigh in on this tipping point, commonly referred to as the death of Moore’s law, is University of Michigan computer scientist Igor Markov. In a recent article in the journal Nature, Markov tackles the issue not just in terms of the physical limits of integrated-circuit scaling, but as a culmination of various limiting factors in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. With consideration of these limitations, as well as to emerging alternative technologies, Markov outlines “what is achievable in principle and in practice.”

“What are these limits, and are some of them negotiable?” he asks. “On which assumptions are they based? How can they be overcome?”

“Given the wealth of knowledge about limits to computation and complicated relations between such limits, it is important to measure both dominant and emerging technologies against them.”

[Figure caption: Advanced techniques such as “structured placement,” developed by Markov’s group, are currently being used to wring out optimizations in chip layout. Different circuit modules on an integrated circuit are shown in different colors. Algorithms for placement optimize both the locations and the shapes of modules; some nearby modules can be blended when this reduces the length of the connecting wires.]

The Nature article addresses the more obvious physical limitations in materials and manufacturing as well as limits related to design and validation, power and heat, time and space, and information and computational complexity. Markov recounts how certain previous limits were circumvented, and compares loose and tight limits. An overview of emerging technologies includes the reminder that these can also indicate as yet unknown limits.

“When a specific limit is approached and obstructs progress, understanding the assumptions made is key to circumventing it,” remarks an NSF writeup of the research. “Chip scaling will continue for the next few years, but each step forward will meet serious obstacles, some too powerful to circumvent.”

“Understanding these important limits,” says Markov, “will help us to bet on the right new techniques and technologies.”