spintronics – HPCwire
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Researchers Devise Promising Spintronics Semiconductor
February 23, 2015

Spintronics — the practice of using the spin of electrons to read, write and manipulate data — has long been hailed as a promising avenue for post-CMOS exploration, but imbuing a substrate with the necessary levels of magnetism and conductivity has proved challenging.

A cross-disciplinary team of researchers at the University of Michigan has created a semiconductor compound that is more conducive to this level of control.

The new compound shows promise as a base material for spintronic-based devices, in the same way that silicon is the base for electronic computing devices. It’s a breakthrough that could hold the key to smaller, faster, more energy-efficient computing devices.

Circuits that use spin have a smaller footprint than charge-based circuits, which means that more of them can be squeezed onto a single processor. In this way, spintronics offers a path beyond the physical limits of silicon-based microelectronics. Additionally, spintronic devices store information using both the “on” or “off” electrical charge and the “up” or “down” magnetic spin of electrons. This is an advantage because the spin of electrons remains stable at much smaller scales of miniaturization.
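To make the charge-plus-spin point concrete, here is a toy sketch (my own illustration, not drawn from the article): treating charge and spin as two independent binary properties gives four distinguishable states, i.e. two bits, per storage element.

```python
from itertools import product

# Toy illustration (not from the article): if an element encodes both a
# charge state and a spin state, it distinguishes 2 x 2 = 4 combined states,
# i.e. two bits instead of the single bit a charge-only cell provides.
charge_states = ["off", "on"]   # conventional charge bit
spin_states = ["down", "up"]    # additional spin bit

combined = list(product(charge_states, spin_states))
print(combined)
print(f"{len(combined)} states -> {len(combined).bit_length() - 1} bits per element")
```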

“You can only make an electronic circuit so small before the charge of an electron becomes erratic,” explains Ferdinand Poudeu, assistant professor of materials science and engineering at the University of Michigan. “But the spin of electrons remains stable at much smaller sizes, so spintronic devices open the door to a whole new generation of computing.”

Another benefit of spintronics is the ability to combine logic, storage and communication onto a single chip, again enabling a much smaller footprint and lower power consumption.

For years, researchers in the field have sought to make spintronic semiconductors by tweaking existing materials, but Poudeu’s team went back to the drawing board and created a new crystal structure made from a mixture of iron, bismuth and selenium. The result is a material that offers the ability to manipulate conductivity and magnetism independently.

Based at the University of Michigan, the project drew from chemistry, crystallography and computer science to create a novel semiconductor spintronics substrate. While the initial research was based on a powder form of the material, the next step will be to manufacture the thin film that would be required for a spintronic device. The process is expected to take about a year.

Moore’s Law Meets Exascale Computing
June 29, 2011

Moore’s Law is projected to come to an end sometime around the middle of the next decade — a timeframe that coincides with the epoch of exascale computing. A white paper by Marc Snir, Bill Gropp and Peter Kogge discusses what we should be doing now to prepare high performance computing for the post-Moore’s Law era.

There are no exascale supercomputers yet, but there are plenty of research papers on the subject. The latest is a short but intense white paper centering on some of the specific challenges related to CMOS technology over the next decade and a half. The paper’s principal focus is dealing with the end of Moore’s Law, which, according to the best predictions, will occur during the decade of exascale computing.

Titled Exascale Research: Preparing for the Post-Moore Era (PDF), the paper is authored by HPC experts Marc Snir, Bill Gropp and Peter Kogge, who argue that we need to start using CMOS technology much more efficiently, while simultaneously accelerating the development of its replacement.

One of the tenets of supercomputing, and information technology in general, is that processors are expected to get more powerful and less expensive each year. Like the shark that needs to keep swimming to stay alive, the IT industry is based on the assumption that the hardware has to keep moving forward to support the expectations of the market.

This is certainly true for exascale proponents, who see the next level of HPC capability as a way to move forward on big science problems and help solve global challenges like climate change mitigation and the development of alternative energy sources. In the US, there is also the need to support our nuclear stockpile with compute-intensive virtual simulations — a task that is becoming increasingly difficult as the original expertise in designing and testing nuclear weapons disappears.

National security, too, has become very dependent on supercomputing. As the authors state, “In an era where information becomes the main weapon of war, the US cannot afford to be outcomputed any more than it can afford to be outgunned.”

It’s a given that the semiconductors behind exascale computing will, at least initially, use CMOS, a technology that’s been in common use since the 1970s. The problem is that CMOS (complementary-symmetry metal–oxide–semiconductor) is slowly giving way to the unrelenting laws of physics. Due to increasing leakage current, voltage scaling has already plateaued. That occurred nearly a decade ago when transistor feature size reached 130 nm. The result was that processor speeds leveled off.

And soon feature scaling will end as well. According to the white paper, CMOS technology will grind to a halt sometime in the middle of the next decade when the size of transistors reaches around 7 nm — about 30 atoms of silicon crystal. As the authors put it:

We have become accustomed to the relentless improvement in the density of silicon chips, leading to a doubling of the number of transistors per chip every 18 months, as predicted by “Moore’s Law”. In the process, we have forgotten “Stein’s Law”: “If something cannot go on forever, it will stop.”
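As a quick back-of-envelope check of the “7 nm is about 30 silicon atoms” figure quoted above (my own arithmetic, assuming the commonly cited Si–Si nearest-neighbor spacing of roughly 0.235 nm; this is not a calculation from the paper):

```python
# Back-of-envelope check: how many silicon atoms span a 7 nm feature?
# Assumes a Si-Si nearest-neighbor distance of ~0.235 nm (an approximation).
feature_size_nm = 7.0
si_si_spacing_nm = 0.235

atoms_across = feature_size_nm / si_si_spacing_nm
print(f"~{atoms_across:.0f} atoms across a {feature_size_nm:.0f} nm feature")
# prints ~30, consistent with the white paper's "about 30 atoms" estimate
```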

And unfortunately there is currently no technology to take the place of CMOS, although a number of candidates are on the table. Spintronics, nanowires, nanotubes, graphene, and other more exotic technologies are all being tested in the research labs, but none are ready to provide a wholesale replacement of CMOS. To that end, one of the principal recommendations of the authors is for more government funding to accelerate the evaluation, research and development of these technologies, as a precursor to commercial production 10 to 15 years down the road.

It should be noted, as the authors do, that the peak performance of supercomputers has increased faster than CMOS scaling alone would allow, so merely switching technologies is not a panacea for high performance computing. In particular, HPC systems have gotten more powerful by increasing the number of processors, on top of gains realized by shrinking CMOS geometries. That has repercussions for the failure rate of the system, which grows in concert with system size.

The larger point is that the end of CMOS scaling can’t be compensated for just by adding more chips. In fact, it’s already assumed that the processor count, memory capacity, and other components will have to grow substantially to reach exascale levels, and the increased failure rates will have to be dealt with separately.
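A minimal sketch of why failure rates scale with component count (my own toy model, assuming independent, exponentially distributed node failures and a hypothetical five-year per-node MTBF; none of these numbers come from the paper):

```python
# Toy reliability model: with independent exponential failures, the system
# mean time between failures (MTBF) shrinks roughly as 1/N in node count N.
def system_mtbf_hours(node_mtbf_hours: float, num_nodes: int) -> float:
    return node_mtbf_hours / num_nodes

node_mtbf = 5 * 365 * 24  # hypothetical 5-year MTBF for one node, in hours
for nodes in (1_000, 10_000, 100_000):
    print(f"{nodes:>7} nodes -> system MTBF ~ {system_mtbf_hours(node_mtbf, nodes):7.1f} hours")
```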

On the CMOS front, the main issue is power consumption, most of which is not strictly related to computation. The paper cites a recent report that projected a 2018-era processor will use 475 picojoules/flop for memory access versus 10 picojoules/flop for the floating point unit. The memory access includes both on-chip communication associated with cache access and off-chip communication to main memory.
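Translating those per-flop energy figures into sustained power at the exascale makes the imbalance plain (my own arithmetic using the numbers quoted above, not a projection from the paper itself):

```python
# Rough projection: at one exaflop/s, picojoules per flop map directly to watts.
EXAFLOPS = 1e18  # flop/s
cited_pj_per_flop = {"memory access": 475, "floating point unit": 10}

for label, pj in cited_pj_per_flop.items():
    megawatts = pj * 1e-12 * EXAFLOPS / 1e6
    print(f"{label:>19}: ~{megawatts:5.0f} MW at one exaflop/s")
# memory access dominates (~475 MW vs ~10 MW), hence the focus on data movement
```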

To mitigate this, the authors say that smarter use of processor circuitry needs to be pursued. That includes both hardware (e.g., lower power circuits and denser packaging) and software (e.g., algorithms that minimize data movement and languages able to specify locality). More energy-aware communication protocols are also needed.

The good news is that most of the performance/power improvements discussed in the paper will also benefit the commodity computing space. But the authors also say that some of the technology required to support future HPC systems will not be needed by the volume market:

We need to identify where commodity technologies are most likely to diverge from the technologies needed to continue the fast progress in the performance of high-end platforms; and we need government funding in order to accelerate the research and development of those technologies that are essential for high-end computing but are unlikely to have broad markets.

The authors aren’t suggesting we need to build graphene supercomputers, while the rest of the world moves to spintronics. But there may be certain key technologies that can be wrapped around post-CMOS computing that will be unique to exascale computing. As always, the tricky part will be to find the right mix of commodity and HPC-specific technologies to keep the industry moving forward.

The Weekly Top Five
June 2, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover NERSC’s acceptance of its first petascale supercomputer; the potential for magnets to revolutionize computing; NCSA’s private sector supercomputer; the official debut of Australia’s MASSIVE supercomputer; and PRACE’s biggest supercomputing allocation yet.

NERSC Accepts ‘Hopper’ Supercomputer

The National Energy Research Scientific Computing Center (NERSC) has officially accepted its first petascale supercomputer. The Cray XE6 system was named “Hopper” in honor of the renowned American computer scientist Grace Murray Hopper. The supercomputer will benefit more than 4,000 researchers and will support advancements in the fields of wind energy, extreme weather, and materials science.

NERSC Director Kathy Yelick commented on this latest achievement:

“We are very excited to make this unique petascale capability available to our users, who are working on some of the most important problems facing the scientific community and the world. With its 12-core AMD processor chips, the system reflects an aggressive step forward in the industry-wide trend toward increasing the core counts, combined with the latest innovations in high-speed networking from Cray. The result is a powerful instrument for science. Our goal at NERSC is to maximize performance across a broad set of applications, and by our metric, the addition of Hopper represents an impressive five-fold increase in the application capability of NERSC.”

NERSC is the U.S. Department of Energy’s primary high-performance computing facility for scientific research. A pictorial journey of the delivery and installation process can be found here.

Chameleon Magnets Hailed as Potential Game Changers

Researchers at the University at Buffalo (UB) are studying the behavior of magnets and exploring their potential to revolutionize the field of computing. The researchers are asking questions about the nature of magnets and whether it’s possible to control their behavior to create more versatile transistors.

In the current issue of Science, University at Buffalo researcher Igor Zutic, a theoretical physicist, together with fellow UB physicist John Cerne, discusses the results of a Japanese study that demonstrates the potential to turn a material’s magnetism on and off at room temperature.

The release explains the basis for the research:

A material’s magnetism is determined by a property all electrons possess: something called “spin.” Electrons can have an “up” or “down” spin, and a material is magnetic when most of its electrons possess the same spin. Individual spins are akin to tiny bar magnets, which have north and south poles.

Zutic explains that the ability to switch a magnet “on” or “off” is revolutionary, bringing with it the promise of magnet- or spin-based computing technology — called “spintronics.” Spintronics-based devices will store and process data by exploiting electrons’ “up” and “down” spins. These spin states are similar to the ones and zeros found in standard digital transmission, but the technology makes it possible for more data to be stored using less energy.

Chameleon magnets could set the stage for a new era in processor design, and according to the researchers, may one day bring about the “seamless integration of memory and logic by providing smart hardware that can be dynamically reprogrammed for optimal performance of a specific task.”

NCSA Brings Supercomputing to Industry with iForge

The National Center for Supercomputing Applications (NCSA) is launching a supercomputer, called iForge, which will be dedicated to the center’s industrial partners. Rolls-Royce, Boeing, and Caterpillar are a few of the companies that will be putting its compute cycles to work on a range of modeling and simulation problems.

A 22-teraflop high-performance computing cluster, iForge employs 121 Dell servers and a mix of Intel Xeon and AMD Opteron processors designed to optimize workflows. Forty-eight cores’ worth of high-end AMD parts are on hand to support memory-intensive pre- and post-processing jobs and highly threaded applications. The system’s nodes are connected with 40-gigabit QDR InfiniBand from Mellanox. iForge doubles as a Linux cluster or a Windows machine, since it runs both Red Hat Enterprise Linux and Windows HPC Server 2008 R2 operating systems.

According to the announcement:

“iForge is a unique resource at NCSA, as it is designed specifically for commercial and open-source applications widely used by industry. This machine offers our Private Sector Partners several platforms to reach higher and higher levels of scaling and performance for physics-based modeling and simulation applications.”

Australia’s MASSIVE Supercomputer Debuts

Australia’s MASSIVE (Multi-modal Australian ScienceS Imaging and Visualisation Environment) supercomputer is now open for general use. The resource is part of a collaboration that includes the Victorian Partnership for Advanced Computing (VPAC), the Australian Synchrotron, CSIRO, Monash University, and the NCI. The State Government of Victoria also provided funding for the project.

The MASSIVE supercomputer comprises two tightly coupled high performance computers — two 42-node IBM iDataPlex systems, each with 84 NVIDIA M2070 GPUs, 504 Intel Westmere compute cores, and 2 TB of memory. The combined resource offers 1,008 CPU cores and 168 NVIDIA M2070 GPUs. Ten nodes have been upgraded to the more advanced M2070Q GPUs and 192 GB of memory each to address the specific requirements of interactive visualization workloads. Each system uses a high performance GPFS parallel file system, and both Linux and Windows HPC cluster-based services are available.
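A quick consistency check of the figures above (the per-node breakdown is derived arithmetic, not stated in the announcement):

```python
# Derived per-node breakdown for the two MASSIVE iDataPlex systems.
systems, nodes_per_system = 2, 42
cores_per_system, gpus_per_system = 504, 84

print(systems * cores_per_system, "CPU cores and",
      systems * gpus_per_system, "GPUs combined")             # 1,008 cores, 168 GPUs
print(cores_per_system // nodes_per_system, "cores and",
      gpus_per_system // nodes_per_system, "GPUs per node")   # 12 cores, 2 GPUs
```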

The allocation process is open to the Australian research community and is managed by the NCI Merit Allocation Scheme. Researchers with a need for MASSIVE’s extensive rendering and visualization capabilities will be given priority, as will those whose applications leverage GPU acceleration. The next call for proposals starts in November for access in 2012, but early access may be sought by sending an email request to info@massive.org.au. Additional information regarding the allocation process is available at www.massive.org.au/access.

PRACE Now Accepting Applications for Supercomputing Time

The Partnership for Advanced Computing in Europe (PRACE), which provides Europe with access to cutting-edge supercomputing resources, is now accepting submissions for its third call for proposals. Successful applicants will be able to access a total of 3 Tier-0 supercomputers and 17 national Tier-1 systems.

This call marks the first time that PRACE affiliates will get to use the Tier-0 “HERMIT” supercomputer. This Cray XE6 system offers one petaflop of peak performance and will be installed in the fall at the High Performance Computing Center Stuttgart (HLRS) at the University of Stuttgart. A planned upgrade is already in the works for 2013, which will supply “HERMIT” with an additional 3-4 petaflops of power, creating a system with a possible 5 petaflops of peak performance.

The one-petaflop IBM BlueGene/P system, JUGENE, based at Germany’s Jülich Supercomputing Centre, and the 1.6 petaflop Bull Bullx cluster, CURIE, hosted by the French research agency, CEA, will also be available as part of this allocation. And for the first time, seventeen Tier-1 systems are also being included in the PRACE call. These Tier-1 resources were previously overseen by DEISA (the Distributed European Infrastructure for Supercomputing Applications) and were part of DECI calls, which now fall under the purview of PRACE.

More information about the PRACE allocation process is available at www.prace-ri.eu/hpc-access. The current application period runs from May 2 – June 22, 2011.

The Weekly Top Five
April 14, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Bull’s third petascale computing contract; IBM’s new POWER7 servers; the first hybrid spintronics computer chips; Bull and Whamcloud’s beefed-up Lustre support; and Tilera’s latest manycore development tools.

Bull to Provide Supercomputer for Fusion Research

The Paris-based Commissariat à l’Energie Atomique et aux Energies Alternatives (CEA) has selected Bull to provide a supercomputer for the International Fusion Energy Research Center (IFERC) in Rokkasho, Japan. The petaflop-class system will support advanced modeling and simulation in the field of plasmas and controlled fusion equipment. The contract marks the third time Bull will create a system with this level of performance.

The new supercomputer is designed to be operational 24 hours a day. Its peak performance of almost 1.3 petaflops places it among the most powerful systems in the world. The compute portion combines, within a cluster architecture, 4,410 bullx series B blades housing 8,820 Intel Xeon “Sandy Bridge” processors and 70,560 cores. The supercomputer is equipped with more than 280 terabytes of memory and a high-bandwidth storage system of more than 5.7 petabytes, supplemented by a secondary storage system designed to hold 50 petabytes. The cluster’s interconnect is based on InfiniBand technology.

In addition to the above specs, 36 bullx series S systems and 38 bullx series R systems will be dedicated to the cluster’s administration, management of the Lustre file systems and user access. Bull will also provide 32 bullx series R systems with high-performance graphics cards for pre- and post-processing and visualization. The high-end cluster will be equipped with the bullx supercomputer suite advanced edition, which was developed and optimized by Bull for petascale computers.
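The quoted compute figures hang together arithmetically; the breakdown below is my own derivation from the numbers above (the per-core value is simply the quoted system figure divided by the core count, not a statement about the processor's clock speed):

```python
# Derived breakdown of the IFERC compute cluster figures quoted above.
blades, processors, cores = 4_410, 8_820, 70_560
peak_pflops = 1.3

print(processors // blades, "processors per blade")                   # 2
print(cores // processors, "cores per processor")                     # 8
print(f"~{peak_pflops * 1e15 / cores / 1e9:.1f} GF/s peak per core")  # ~18.4
```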

The installation process will begin in June. The supercomputer will be available to European and Japanese researchers for a period of five years, beginning January 2012. Bull will be responsible for the machine’s installation, maintenance and operation, and will receive support from local partner SGI Japan.

IBM Boosts POWER7 Systems

IBM has unveiled its latest POWER7 systems, including a performance bump to the Power 750, the server used in the famous Watson supercomputer. However, the new and improved Power 750 servers are even more powerful than the ones used in the Jeopardy-winning AI darling.

The new Power blades and Power servers will be used in mission-critical application areas, such as healthcare management, financial services, and scientific research. According to the release, “the specialized demands of these new applications rely on processing an enormous number of concurrent transactions and data while analyzing that information in real time.”

At the heart of the announcement are two new blades and two upgrades. The new blade servers, which IBM touts as providing an alternative to sprawling racks, include the two-socket (16-core), single-wide PS703, and the 32-core, double-wide PS704. Also debuting is the enhanced IBM Power 750 Express, like the one used in the Watson system. This server offers more than three times the performance of comparable 32-core offerings, such as Oracle’s SPARC T3-2 server, and more than twice the performance of HP’s Integrity BL890c i2. Last up is the enhanced IBM Power 755, a high-performance computing cluster node with 32 POWER7 cores and a faster processor.

A full accounting can be found in Editor Michael Feldman’s feature coverage. Here’s a sampling of what you’ll read:

Both the 750 and the 755 are four-socket Power7 servers that were introduced last year. The 750 is built for database serving and general enterprise consolidation/virtualization, while the InfiniBand-equipped 755 is aimed specifically at HPC users. The additional options on the 750 include new four-core and six-core Power7 CPUs running at 3.7 GHz, and two new eight-core Power7s running at 3.2 GHz and 3.6 GHz, respectively. The Power 755, which used to come only with 3.3 GHz chips, is now being outfitted with 3.6 GHz Power7s.

Why they didn’t offer an option for the faster 3.7 GHz Power7s on the Power 755 is a little mysterious. It seems like there would be some interest by HPC users that needed faster threads and a higher memory-to-compute ratio on certain applications.

OSU Lab Creates First Hybrid Spintronic Computer Chips

Ohio State University researchers have taken significant steps toward the creation of viable hybrid spintronic computer chips. The team developed the “first electronic circuit to merge traditional inorganic semiconductors with organic ‘spintronics’ — devices that utilize the spin of electrons to read, write and manipulate data.”

The group worked to combine an inorganic semiconductor with a unique plastic material being developed in the lab of Arthur J. Epstein at Ohio State University. Epstein, a distinguished university professor of physics and chemistry and director of the Institute for Magnetic and Electronic Polymers at Ohio State, was the first to successfully store and retrieve data using a plastic spintronic device.

A paper published in the journal Physical Review Letters describes how the researchers were able to transmit “a spin-polarized electrical current from the plastic material, through the gallium arsenide, and into a light-emitting diode (LED) as proof that the organic and inorganic parts were working together.”

If scientists could expand spintronic technology beyond memory applications into logic and computing applications, major advances in information processing could follow. Spintronic logic would theoretically require much less power, and produce much less heat, than current electronics, while enabling computers to turn on instantly without “booting up.” Hybrid and organic devices further promise computers that are lighter and more flexible, much as organic LEDs are now replacing inorganic LEDs in the production of flexible displays.

More work will need to be done before hybrid spintronics devices are ready for mass-production, but this hybrid circuit presents a good first step, one that lays the groundwork for future advances.

Bull, Whamcloud Extend Lustre Collaboration

A strengthened partnership with Whamcloud is enabling Bull to increase support and professional services for Lustre customers everywhere. Under the enhanced agreement, which builds on the duo’s existing technology partnership, Lustre users will “have access to Bull’s complete range of services starting from building scalable and highly available architectures, up to effective deployment and service level agreement (SLA) driven operations and support.”

Eric Monchalin, HPC software director at Bull, commented on the importance of parallel file systems for high performance computing (HPC) applications. Lustre is a high-performance, distributed open source file system used for large-scale cluster computing.

According to the release, the collaboration “enables Bull to leverage its long experience and deep knowledge in Lustre technology to provide validation and optimization of Lustre on Bull’s Extreme Computing bullx systems, integration with the bullx supercomputer suite HPC software stack, plus further development of Lustre’s administration and high availability functionality.”

European IT company Bull and venture-backed Whamcloud first announced a joint agreement for Lustre development in February. The team’s ultimate goal is to create a file system worthy of exaflop-class machines.

Tilera Tools Simplify Manycore Development Efforts

This week manycore chip specialist Tilera announced the release of its Multicore Development Environment (MDE) version 3.0, with enhancements aimed at simplifying manycore processor development.

From the release:

The new MDE is based on the recently released Linux 2.6.36 kernel, which integrates Tilera’s TILE architecture into the main Linux tree. The MDE includes cross compiling and native tool chains GCC 4.4, GDB 7.1, and GLIBC 2.11.2. The 3.0 MDE provides a full Linux distribution with over 1,000 Linux packages based on RHEL6 sources.

Support for Tilera’s architecture in the main Linux kernel creates many opportunities for open source developers to run their application on Tilera processors, the first manycore architecture to be supported by Linux. Tilera offers 64 cores today and up to 100 cores with the Tilera TILE-Gx family, coming later this year.

Linus Torvalds, founder and chief architect of the Linux kernel, was pleased with the news. “I am happy to have the TILE architecture in the kernel,” he said. “Tilera provides innovative approaches for manycore processors.”

Tilera’s new software release includes both standard Linux and a GNU tool chain, helping users shorten development times. Tilera customers are able to use the same build infrastructure and make files, leverage the community’s resources and available software, and reduce the learning curve with standard tools and a standard software environment.

The Week in Review
October 14, 2010

BLADE Network Technologies unveils a single-chip 40 Gigabit Ethernet switch capable of one terabit of throughput to the datacenter; and UC Riverside physicists make breakthroughs using graphene as a spin computing substrate. We recap those stories and more in our weekly wrapup.

BLADE Breaks the Terabit Barrier with a Single-Chip Switch

Switch maker BLADE Network Technologies (BLADE) today unveiled the RackSwitch G8264, a single-chip 40 Gigabit Ethernet (GbE) top-of-rack switch. The switch delivers more than one terabit of low-latency throughput to the datacenter. This is the first time a single-chip switch has been available for terabit-scale deployment of 10GbE.

The new switch touts 64 10GbE ports, up to four 40GbE ports, and 1.28 terabits of non-blocking throughput. Designed to handle I/O-intensive and highly virtualized workloads, the switch is well suited for HPC clusters, cloud computing, and algorithmic trading.
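One way to read the throughput figure (my own arithmetic, and only one plausible accounting; the vendor's exact method isn't stated): counting the 64 10GbE ports in both directions yields the quoted 1.28 terabits per second.

```python
# One plausible reading of the 1.28 Tb/s figure: 64 x 10GbE ports, full duplex.
ports_10gbe = 64
gbps_per_port = 10
duplex_factor = 2  # traffic counted in both directions

total_tbps = ports_10gbe * gbps_per_port * duplex_factor / 1000
print(f"{total_tbps:.2f} Tb/s aggregate")  # 1.28 Tb/s
```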

BLADE is aiming to fulfill the needs of mainstream enterprise datacenters, which are responding to increased data demands by increasingly deploying servers equipped with 10GbE. BLADE is going forward with the belief that 40GbE is the next logical step. Higher speed uplinks, such as 10/40 Gigabit Ethernet switches, will be required to handle the increased network bandwidth of the next-generation of datacenters.

According to Vikram Mehta, president and CEO, BLADE Network Technologies:

“BLADE is proud to break the terabit barrier in a single-chip design with the RackSwitch G8264. Our new switch is designed for today’s most demanding requirements at the datacenter edge to interconnect highly utilized servers equipped with 10 Gigabit Ethernet and provide seamless migration to 40 Gigabit upstream networks.”

The RackSwitch G8264 will be available in November at a cost of $22,500 USD. Interested parties can view the product at the upcoming Supercomputing Conference (SC10).

UC Riverside Physicists Advance Spin Computing

“Spin computing” — aka “spintronics” — offers great potential for the future of computing: think superfast computers that can overcome present Moore’s Law limitations while using less energy and generating less heat than the current batch of number crunchers.

Here’s how it works: electrons can be polarized so that they have a particular directional orientation, called spin. An electron can be polarized into one of two states, called “spin up” or “spin down.” Storing data with spin would effectively double the amount of data a computer could store, since it allows two pieces of data to be stored on an electron instead of just one, as is currently the case.

While researchers have been working on the technology for about four decades, it’s not quite ready for primetime. This week, however, physicists at the University of California, Riverside have taken spintronics to the next level by successfully achieving “tunneling spin injection” into graphene. Their study results appear this week in Physical Review Letters.

Tunneling spin injection is a term used to describe conductivity through an insulator. Graphene, brought into the limelight by this year’s Nobel Prize in physics, is a single-atom-thick sheet of carbon atoms arrayed in a honeycomb pattern. Extremely strong and flexible, it is a good conductor of electricity and capable of resisting heat.

While graphene has characteristics that make it a very promising candidate for use in spin computers, the electrical spin injection from a ferromagnetic electrode into graphene is inefficient. Additionally, and even more troubling to the research team, observed spin lifetimes are thousands of times shorter than expected theoretically. Longer spin lifetimes are important because they allow for more computational operations.

The research team, led by Roland Kawakami, an associate professor of physics and astronomy, was able to dramatically increase the spin injection efficiency by inserting an insulating layer, known as a “tunnel barrier,” between the electrode and the graphene layer. The team thus achieved the first demonstration of tunneling spin injection into graphene, and the 30-fold increase in spin injection efficiency set a world record.

The Kawakami lab was also able to reconcile the short spin lifetimes of electrons in graphene. They discovered that using the tunnel barrier increased the spin lifetime. According to Kawakami, graphene has the potential for extremely long spin lifetimes.

The next step for the Kawakami lab is to demonstrate a working spin logic device. Ultimately, a chip capable of manipulating the spin of a single electron could pave the way for futuristic quantum computers.