HPCwire » Electronics
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Unmasking the Speed Limit of Modern Electronics
December 12, 2014 | http://www.hpcwire.com/2014/12/11/unveiling-speed-limit-modern-electronics/

For the first time, scientists have captured the essence of semiconductor computing on film by taking snapshots of the electron transfer from valence to conduction band states. It is this leap that forms the basis for the entire semiconductor industry, digital electronics and modern computing as we know it.

Using attosecond extreme ultraviolet (XUV) spectroscopy much like a stopwatch, the team of physicists and chemists based at UC Berkeley was able to time the step rise at approximately 450 attoseconds, shedding light on the fundamental speed limit of modern electronic circuitry.

Just how fast is this microscopic event? Consider that an attosecond is equal to one quintillionth of a second. Put another way, an attosecond is to a second what a second is to approximately 31.7 billion years.
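The analogy is simple arithmetic to verify: one second contains 10^18 attoseconds, and 10^18 seconds comes out to roughly 31.7 billion years. A quick check (assuming a Julian year of 365.25 days):

```python
# Sanity check on the analogy: 1 second contains 1e18 attoseconds,
# so an attosecond is to a second as a second is to 1e18 seconds.
ATTOSECONDS_PER_SECOND = 1e18
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, ~3.156e7 seconds

years = ATTOSECONDS_PER_SECOND / SECONDS_PER_YEAR
print(f"1e18 seconds is about {years / 1e9:.1f} billion years")
# -> 1e18 seconds is about 31.7 billion years
```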

As explained by Berkeley science writer Robert Sanders, the age of digital electronics is based on mobile electrons making a semiconductor material conductive so that the application of light or voltage results in a flowing current. In a computer chip, electronic current flowing across transistors facilitates the switch between two binary states, zero and one, giving rise to the fundamental language of computers.

The key event occurs when electrons attached to atoms in the crystal lattice jump from the valence shell of the silicon atom across the band gap into the conduction electron region. The previous generation of femtosecond lasers was unable to glimpse this event, which takes place in less than a quadrillionth of a second after laser excitation, too quickly to be distinguished from the slower motion of the silicon atomic nuclei.

“Though this excitation step is too fast for traditional experiments, our novel technique allowed us to record individual snapshots that can be composed into a ‘movie’ revealing the timing sequence of the process,” said Stephen Leone, UC Berkeley professor of chemistry and physics.

The attosecond extreme ultraviolet (XUV) spectroscopy responsible for the breakthrough recording was developed in the Attosecond Physics Laboratory, which is operated by Leone and Daniel Neumark, UC Berkeley professor of chemistry.

The experimental data was supported by supercomputer simulations of the excitation process and the subsequent interaction of X-ray pulses with the silicon crystal. A team from the University of Tsukuba and the Molecular Foundry at the Department of Energy’s Lawrence Berkeley National Laboratory performed the computing using resources provided by Lawrence Berkeley National Laboratory, the National Energy Research Scientific Computing Center (NERSC) and the Institute of Solid State Physics, University of Tokyo. Funding for the project was provided by the US Department of Defense and the Defense Advanced Research Projects Agency’s PULSE program.

The UC Berkeley colleagues together with researchers from Ludwig-Maximilians Universität in Munich, Germany, the University of Tsukuba, Japan, and the Molecular Foundry at Lawrence Berkeley National Laboratory describe their findings in the Dec. 12 issue of the journal Science.

Will Magnets Be the Cure for What Ails Moore’s Law?
October 1, 2014 | http://www.hpcwire.com/2014/10/01/will-magnets-cure-ails-moores-law/

With silicon-based processors facing some inexorable limits, scientists are looking elsewhere to keep computing on its exponential growth track. One potential alternative that is getting some traction is magnet-based computing. A group of electrical engineers at the Technische Universität München (TUM) is studying the feasibility of using miniature magnets as the building block for integrated circuits.

The group ran experiments using three-dimensional arrangements of nanometer-scale magnets instead of transistors. Their results are detailed in the journal Nanotechnology.

The 3D stack of nanomagnets functions as a majority logic gate, which can act as a programmable switch in a digital circuit. The mechanism works much like ordinary bar magnets: when you bring them near each other, opposite poles attract and like poles repel. If you bring together several bar magnets and hold all but one in a fixed position, the orientation of the free magnet will be determined by the orientation of the majority of the fixed magnets.

Gates made from field-coupled nanomagnets work in a similar way, with the reversal of polarity representing a switch between Boolean logic states, i.e., 1 and 0. In the 3D majority gate created by the research team, the state is determined by three input magnets, one of which sits 60 nanometers below the other two, and is read out by a single output magnet.
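The voting behavior described above can be sketched in plain Boolean terms. The Python snippet below is an illustrative model only (it ignores the magnetic physics entirely); it shows why a majority gate acts as a programmable switch: pinning one input to 0 or 1 turns the same gate into AND or OR, a standard trick in majority-logic design rather than a detail from the TUM paper.

```python
def majority(a: int, b: int, c: int) -> int:
    """Three-input majority vote: output 1 if at least two inputs are 1."""
    return int(a + b + c >= 2)

# Pinning one input makes the same gate programmable:
def and_gate(a: int, b: int) -> int:
    return majority(a, b, 0)  # pinned 0 -> output needs both a and b

def or_gate(a: int, b: int) -> int:
    return majority(a, b, 1)  # pinned 1 -> output needs either a or b
```

In the nanomagnet version, the "pinned" input corresponds to a magnet held in a fixed orientation while the output magnet is free to flip.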

Nanomagnetic logic is one of the technologies being considered by the International Technology Roadmap for Semiconductors (ITRS) industry group. Magnetic circuits are non-volatile, so they maintain their state without power. They also consume extremely little energy, operate at room temperature, and resist radiation.

Perhaps most importantly, nanomagnetic logic can support very dense packing. The building blocks, the individual nanomagnets, are equivalent in size to individual transistors, but whereas transistors require contacts and wiring, nanomagnets operate purely through coupling fields.

The 3D design also works to make nanomagnetic logic competitive. TUM doctoral candidate Irina Eichwald, lead author of the Nanotechnology paper, explains: “The 3D majority gate demonstrates that magnetic computing can be exploited in all three dimensions, in order to realize monolithic, sequentially stacked magnetic circuits promising better scalability and improved packing density.”

“It is a big challenge to compete with silicon CMOS circuits,” adds Dr. Markus Becherer, leader of the TUM research group within the Institute for Technical Electronics. “However, there might be applications where the non-volatile, ultralow-power operation and high integration density offered by 3D nanomagnetic circuits give them an edge.”

Research Holds Promise for Atomic-Scale Circuitry
April 24, 2014 | http://www.hpcwire.com/2014/04/23/research-holds-promise-atomic-scale-circuitry/

As integrated circuits get smaller, past the current 20nm-22nm process technology, they increasingly come up against quantum mechanical quirks such as electron tunneling and current leakage. Chip designers and academic researchers mainly pursue the next nanometer threshold, in adherence to the International Technology Roadmap for Semiconductors (ITRS), using all kinds of clever workarounds to get there. However, another segment of ambitious researchers has chosen to focus on fundamental redesigns to create circuits at the atomic scale. (For comparison’s sake, one nanometer is roughly one 50,000th of the width of a human hair, and a silicon atom has a diameter of 0.22 nanometers.)

A single layer of organic molecules connects the positive and negative electrodes in a molecular-junction OLED. (Graphic by Alexander Shestopalov/University of Rochester.)

Researchers from the University of Rochester and Duke University have achieved a breakthrough in this exciting space, using a bi-layered molecular interface to send an electric charge across a circuit one molecule wide. Their work appears in the April edition of the journal Advanced Materials Interfaces.

Led by Alexander Shestopalov, an assistant professor of chemical engineering at the University of Rochester with a focus on unconventional nanoscale electronics, the team used a single layer of organic molecules to connect the positive and negative electrodes in a molecular-junction OLED (organic light-emitting diode).

One of the main problems that scientists face in developing circuits at the atomic scale is how to control the current flowing through the circuit. Shestopalov responded to the challenge by adding a second, inert layer of molecules. This inert layer acts like the plastic casing on electric wires that insulates the live wires from the surrounding environment.

“Until now, scientists have been unable to reliably direct a charge from one molecule to another,” said Shestopalov in an official release. “But that’s exactly what we need to do when working with electronic circuits that are one or two molecules thin.”

The inert layer is composed of a microscopic chain of organic molecules. On top of that sits the active layer, a sheet of organic material one molecule thin. Following the wire analogy, the top layer conducts the charge while the lower inert layer insulates it, thus reducing interference.

Shestopalov was able to control the current by making small changes to the organic molecules’ functional groups – using some functional groups to accelerate the charge transfer and others to slow it down.

The ability to alter the functional group enables fine-tuning of the charge to support different applications. For example, an OLED may need a faster charge transfer to output a certain luminescence, while a biomedical injection device may require a slower rate for delicate procedures.

The accomplishment is a significant milestone for molecular electronic devices; however, there is still work ahead, namely with respect to durability.

“The system we developed degrades quickly at high temperatures,” said Shestopalov. “What we need are devices that last for years, and that will take time to accomplish.”

The applications for such nanoscale circuitry are numerous, ranging from solar cells and other photovoltaics to drug delivery and bioimaging – not to mention the potential for atomic-scale computing.

Computing at the Speed of Light
February 27, 2014 | http://www.hpcwire.com/2014/02/27/computing-speed-light/

With Moore’s law-type advances showing signs of stagnation and decline, researchers around the world are hard at work on techniques to improve the speed of computing. A research duo from Northeastern University has come up with a breakthrough that could lay the groundwork for a new generation of fast, powerful computing devices.

Assistant professor of physics Swastik Kar and associate professor of mechanical and industrial engineering Yung Joon Jung have created a device that uses optical and electronic signals to perform basic switching operations more efficiently.

At the most essential level, computing consists of a series of on-off switches. It takes billions of these operations to carry out even the simplest of computing tasks, so making this switching process even the tiniest bit faster can have a strong net positive effect on overall efficiency and productivity.

“People believe that the best computer would be one in which the processing is done using electrical signals and the signal transfer is done by optics,” Kar said. “It would save precious nanoseconds.”

The partnership began a couple of years ago. Kar’s specialty was graphene, an emerging carbon-based material prized for its strength and conductivity, while Jung’s research centered on carbon nanotubes, nanometer-sized cylinders of carbon atoms.

Early on, the research team made a startling discovery. They found that by taking the metal out of traditional nanotube photodiode devices and replacing it with carbon, light-induced electrical currents rose much more sharply. “That sharp rise helps us design devices that can be turned on and off using light,” Kar said.

To better understand the curious phenomenon, the Northeastern team collaborated with Young-Kyun Kwon, a professor from Kyung Hee University, in Seoul, Korea, on the computational modeling. They then got to work building logic circuits that could be manipulated both electrically and optically. The resulting prototype marks the first time that electronic and optical properties have been integrated onto a single electronic chip.

“What we’ve done is built a tiny device where one input can be a voltage and the other input can be light,” Kar told IEEE Spectrum.

The team actually developed three devices: an AND gate, which requires both an electronic and an optical input to generate an output, and an OR gate, which generates an output if either input is engaged. The third device works like the front end of a camera sensor and consists of an array of 250,000 photoactive elements assembled over a centimeter-scale wafer. It functions as a four-bit digital-to-analog converter.
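The article does not detail how the converter maps its inputs, but the sketch below shows what a four-bit digital-to-analog conversion means numerically: four binary (here, optical) inputs select one of 16 evenly spaced output levels. The full-scale value and the even spacing are illustrative assumptions, not figures from the paper.

```python
def dac4(bits, v_full_scale=1.0):
    """Idealized 4-bit DAC: map (b3, b2, b1, b0), MSB first, to one of 16 levels."""
    assert len(bits) == 4 and all(b in (0, 1) for b in bits)
    code = sum(b << i for i, b in enumerate(reversed(bits)))  # integer 0..15
    return code * v_full_scale / 15  # evenly spaced between 0 and full scale

print(dac4((0, 0, 0, 0)))  # -> 0.0 (all inputs dark)
print(dac4((1, 1, 1, 1)))  # -> 1.0 (all inputs lit: full scale)
```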

The nanotubes are created in a solution and placed on a patterned silicon/silicon oxide substrate, which should make the technology compatible with existing CMOS processes, according to Jung.

By using light for data movement and some of the logic operations, the technique could pave the way for a new generation of faster computing chips, according to the researchers. Computers process billions of steps each second, so improving their capability begins with the “demonstration of improving just one,” notes Kar.

A paper describing their research appears in a recent edition of the journal Nature Photonics.

The Week in Review
February 25, 2010 | http://www.hpcwire.com/2010/02/25/the_week_in_review/

A safer nuclear reactor could be on the horizon thanks to computer modeling, and the National Science Foundation awards $24.5 million to UC Berkeley researchers engaged in reducing the power draw of electronics. We recap those stories and more in our weekly wrapup.

A New York Times article this week reports on the development of a new kind of nuclear reactor that uses depleted uranium for fuel, posing a much lower risk than traditional nuclear reactors. The so-called traveling wave nuclear reactor is emerging as a potential game-changer according to top science and energy officials.

The article explains the design:

This reactor works something like a cigarette. A chain reaction is launched at one end of a closed cylinder of spent uranium fuel, creating a slow-moving “deflagration,” a wave of nuclear fission reactions that keeps breeding neutrons as it makes its way through the container, keeping the self-sustaining reaction going.

Usually, these types of projects are publicly funded, but in this case a private research firm, TerraPower LLC, is running the show. And although this is a private venture, the team gets support from MIT, DOE’s Argonne National Laboratory and other scientific centers.

According to the head of TerraPower, former Bechtel Corp. physicist John Gilleland, the reactor, once ignited, could continue to react for 100 years.

“We believe we’ve developed a new type of nuclear reactor that can represent a nearly infinite supply of low-cost energy, carbon-free energy for the world,” Gilleland said.

The project relies on supercomputing resources to simulate and verify the traveling wave concept. The supercomputers are also engaged in finding alloys for the reactor cylinders that can withstand the heavy damage caused by neutron impacts.

The story is replete with “ifs” and “whens” and acknowledges that no one has actually created a working deflagration wave. However, the Massachusetts Institute of Technology’s Technology Review magazine selected the traveling wave reactor last year as one of 10 emerging technologies with the highest potential impact.

Gilleland said that we may see a commercial version of the reactor in 15 years, pending a working physical prototype.

NSF Award to Create Center Dedicated to Reducing Power Consumption

The National Science Foundation (NSF) has awarded $24.5 million to UC Berkeley researchers for the development of a multi-institutional center whose aim is to increase the energy-efficiency of electronics. The lofty goal? A million-fold reduction in the power consumption of electronics. The five-year NSF grant will be used to establish the Center for Energy Efficient Electronics Science, or E3S.

To reduce the energy requirement of electronics, researchers will focus on the basic logic switch, the decision-maker in computer chips. The logic switch function is primarily performed by transistors, which demand about 1 volt to function well. There are more than 1 billion transistors in multi-core microprocessor systems.

Eli Yablonovitch, UC Berkeley professor of electrical engineering and computer sciences and the director of the Center for E3S, explains that the transistors in the microprocessor are what draw the most power in a computer, giving off heat in the process.

According to Moore’s Law, named after Intel co-founder Gordon E. Moore, the number of transistors on an integrated circuit doubles every two years. But Moore also predicted that the power consumption of electronic components would drop dramatically.

Researchers plan to design lower-voltage transistors, noting that the wires of an electronic circuit could operate on as little as a few millivolts. Power needs drop by the square of the voltage, so a thousand-fold reduction in voltage requirements adds up to a million-fold reduction in power consumption, says Yablonovitch.
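Yablonovitch’s arithmetic follows from the standard expression for dynamic switching power in CMOS circuits, P ≈ C·V²·f. Holding capacitance and switching frequency fixed (the specific values below are illustrative, not from the article), a thousand-fold drop in voltage yields a million-fold drop in power:

```python
def dynamic_power(c_farads: float, v_volts: float, f_hertz: float) -> float:
    """Approximate CMOS dynamic switching power: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hertz

p_at_1v = dynamic_power(1e-15, 1.0, 1e9)    # today's ~1 V transistor
p_at_1mv = dynamic_power(1e-15, 1e-3, 1e9)  # hypothetical millivolt-scale switch

# The ratio is a million: the reduction Yablonovitch describes.
print(p_at_1v / p_at_1mv)
```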

With the increase in information processing needs skyrocketing, the importance of changing the underlying power requirements at the most basic level of our computational technology cannot be overstated.

Designing energy-efficient electronics will require architectural changes at every level, from post-CMOS circuits to smart building networks, according to speakers at a symposium on the topic hosted by the University of California at Berkeley.