Helix pattern observed by prolonging the life of electron spins to roughly the period of a mobile processor's clock cycle

So-called "zinc-blende" III-V semiconductors (named for sharing the crystal structure of the mineral zincblende -- zinc sulfide -- rather than for containing elemental zinc) have seen growing use in recent years. Materials like indium arsenide (InAs) and gallium arsenide (GaAs) have been used in everything from lasers to thin-film solar cells due to their unique electrical properties.

I. In Search of a Spin

Now researchers at International Business Machines Corp. (IBM), working at a company-sponsored research center at the Eidgenössische Technische Hochschule Zürich -- or ETH Zürich, as the university's name is typically shortened -- have discovered a new property of this special kind of semiconductor. That property has allowed the team to achieve a major advance in spintronics, one that could eventually take the storage/processing technology out of the lab and onto the market.

In a previously unobserved piece of physics, the scientists watched electron spins travel tens of micrometers through a semiconductor with their orientations rotating synchronously along the path -- much like couples dancing the waltz, the famous Viennese ballroom dance in which pairs rotate as they move.

Dr. Gian Salis of the Physics of Nanoscale Systems research group at IBM Research – Zurich explains, "If all couples start with the women facing north, after a while the rotating pairs are oriented in different directions. We can now lock the rotation speed of the dancers to the direction they move. This results in a perfect choreography where all the women in a certain area face the same direction. This control and ability to manipulate and observe the spin is an important step in the development of spin-based transistors that are electrically programmable."

Electrons have two key traits -- motion (typically, orbiting an atom's nucleus) and spin. In this regard they're a bit like tiny planets orbiting their equivalent of the sun.

Typically electron spins randomize in a stochastic fashion, but researchers predicted in 2003 that some semiconductors' electrons could "spin lock" when exposed locally to a magnetic field or massaged with laser pulses. The theorized phenomenon, however, had never been observed until now.

IBM's researchers managed to prolong the lifetime of the spins more than 30-fold using a purified GaAs semiconductor and carefully regulated interactions. That was enough to let the spin synchronization last 1.1 nanoseconds -- about one clock period of a modern smartphone CPU (1 GHz).
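To see why 1.1 nanoseconds is compared to a 1 GHz processor, it helps to invert the lifetime into a frequency. A quick sanity-check sketch (the specific numbers are the ones quoted in the article):

```python
# A spin lifetime of 1.1 ns, expressed as a frequency, lands at roughly
# 0.9 GHz -- the same order as a ~1 GHz smartphone CPU, whose clock
# period is exactly 1 ns.
spin_lifetime_s = 1.1e-9      # 1.1 nanoseconds, from the IBM experiment
cpu_clock_hz = 1e9            # ~1 GHz smartphone CPU

equivalent_freq_ghz = 1 / spin_lifetime_s / 1e9   # lifetime as a frequency
clock_period_ns = (1 / cpu_clock_hz) * 1e9        # one clock tick in ns

print(f"Spin lifetime as a frequency: {equivalent_freq_ghz:.2f} GHz")
print(f"1 GHz clock period: {clock_period_ns:.1f} ns")
```

In other words, the spins survive for roughly one clock tick of a contemporary processor, which is what makes the comparison meaningful.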

Taking advantage of the longer-lived spins, the researchers observed the "persistent spin helix", a striped pattern of spin orientations, using a scanning microscope technique. Spins were seen "waltzing" 10 µm along the semiconductor.

Spintronics could eventually offer subatomic-scale replacements for both memory storage and processors.

But despite the breakthrough and recent progress in the field, many hurdles remain to marketization. One challenge is squeezing the lasers or micro-magnetics needed to control the spin onto tiny semiconductor devices.

Another key hurdle is temperature. The IBM experiment was performed at a frigid 40 Kelvin (-233° C, -387° F) -- well below the 77 K boiling point of liquid nitrogen.


Squeezing spintronics into the mobile devices of the future could give Moore's Law new life by catapulting computer chips past the fundamental limits of atomic physics, into the realm of subatomics. But figuring out how to chill a cell phone CPU to 40 K -- or alternatively, how to coax the finicky electronics to behave at more terrestrial temperatures -- is a daunting task.

The paper on the work was published in the prestigious peer-reviewed journal Nature Physics. The senior author was Gian Salis (IBM) and the first author Matthias Walser (IBM). The paper's two other authors are Professor Werner Wegscheider (ETH Zürich Physics) and Christian Reichl (an ETH Zürich Physics Ph.D. candidate), who contributed by growing the semiconductor specimens for IBM.

I'm interested in an explanation as well. So they can make all the electrons maintain the same spin for a while. Neat. Now what good does that do?

I also don't quite understand the frequency reference - the article seems to imply that keeping the spin "locked" for as long as possible is a good thing. Longer duration means lower "frequency"... so what exactly does this discovery have to do with the frequency of today's processors? Is it in some way beneficial to keep the spin thingy synchronized for as many CPU clock cycles as possible, or something?

The explanation is pretty simple, really. Right now we have to use large groups of atoms magnetically aligned up or down (HDDs), or layers of oxides that hold certain voltages (flash/SSDs), to act as representations of 1's and 0's. Flash is currently working at the 22 nm scale, which is roughly 220 atoms across -- the size needed to construct the system for storing a single bit of information.
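The commenter's "roughly 220 atoms" figure follows directly from dividing the feature size by an assumed atomic diameter. A sketch of that back-of-envelope estimate (the ~0.1 nm atom size is the figure used in this thread; real interatomic spacings in silicon are closer to 0.2-0.3 nm, which would lower the count):

```python
# Back-of-envelope: how many atoms span a 22 nm flash feature,
# assuming ~0.1 nm per atom as the commenter does.
feature_size_nm = 22.0
atom_size_nm = 0.1            # assumed atomic diameter (thread's figure)

atoms_across = feature_size_nm / atom_size_nm
print(f"~{atoms_across:.0f} atoms across one 22 nm feature")  # ~220
```

The point of the estimate is scale: today's storage devotes hundreds of atoms per dimension to one bit, whereas spintronics aims for one electron per bit.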

Spintronics would let us use individual electrons and their spin direction as a 1 or a 0. Electrons are spin-1/2 particles, meaning their spin is either positive or negative -- perfect for binary storage or transistors. In theory, a single atom or a pair of atoms could then act as a storage bit or transistor, letting us push our electronics down to the single-digit-nanometer or even sub-nanometer scale (an atom is ~0.1 nm in size).

This breakthrough doesn't help much for storage per se, as it doesn't keep coherence for very long. However, the ability to flow coupled-spin electrons through a semiconductor opens up whole new realms of computing potential. Having coherence last long enough to reach the gigahertz range means our current silicon technology is fast enough to write and read these coupled spins before they dissipate. That means they can be used for computing.
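The "fast enough" claim can be checked by counting how many clock cycles of a 1 GHz part fit inside the coherence window. A minimal sketch, using the article's numbers:

```python
# At 1 GHz, one clock cycle is 1 ns, so a 1.1 ns coherence window
# spans just over one cycle -- enough for a single read or write
# before the coupled spins dissipate, which is the commenter's point.
coherence_s = 1.1e-9          # spin coherence window from the article
clock_hz = 1e9                # ~1 GHz silicon interface

cycles_in_window = coherence_s * clock_hz
print(f"{cycles_in_window:.1f} clock cycles fit in the coherence window")
```

It's a tight margin: slower interface silicon would miss the window entirely, which is why the gigahertz comparison matters.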

Think of it like this: a CPU filled with spintronic transistors working by this method would need a front-side bus or equivalent running at 1 gigahertz to be interfaced with. Completely doable.

Only problem is this is happening at 40 kelvin! So... we've got a way to go.

But in the end, we've almost reached the limit of semiconductor physics; soon we will not be able to shrink our electronics anymore. Spintronics, phase-change memory, quantum bits (qubits), photonics -- all represent attempts to push us past the size limit we're about to hit in the next five years. Each has different abilities superior to our current technologies, and so one or more of them are the keys to our electronic future.

Comes across as one of those "would be nice if it worked" technologies, like superconductors. We've known about superconductors for a while now, but we don't have any superconducting materials that operate at temperatures practical for our applications.

I don't think we've hit the limitations of semiconductor technology. More appropriate to say that we're hitting the limitations of current semiconductor paradigms that largely rely on transistors to function.

We do not have any computer systems that can accurately emulate the function of even a semi-intelligent creature like a monkey; and we're not even close to emulating the human brain with current system design methodology.

Before you say that the human brain is fallible - true; but subconsciously it is quite a processing powerhouse that puts even the best processors we have today to shame.

Take the apparently simple act of throwing a crumpled piece of paper into a waste bin 10 feet away. Seems easy to us, but if you consider what the brain has to calculate (mathematically) in order for us to toss the paper and get it into the bin, it's a series of ballistics, trajectory, force, mass - all that good stuff. Most people take a second or less to "aim" and toss.

So yeah...going smaller allows us to use our current outdated transistor tech, but it's going to take a drastic new direction in our approach to computing to really make the kind of technological leaps forward like we did back in the 60s.

Only, your brain doesn't work like that at all. We aren't calculating ballistics and trajectories - the human brain actually isn't computationally fast enough for that! The human brain is low speed but massively parallel. Instead, making a shot relies on pattern recognition - you aren't throwing based on the math of where the basket is, but aligning your throw with memories of how previous throws went. That's why repeatedly aiming for the same hoop gets better results - if it were pure ballistics calculation, you couldn't retrain that large a part of your brain with new solutions that quickly. While your brain can't operate fast enough to solve the trigonometry for ballistics, it can operate fast enough to align "pixels" (yes, we don't really see in pixels) with where it remembers previous shots lining up with those "pixels".

False - it has been shown to be at least that fast, if not more so, in certain people labeled as savants. Consciously, the average person cannot do complex math in their head instantaneously, but our brain is more than capable of doing so.

quote: Instead, shooting a ball actually relies on pattern recognition - you aren't shooting based on the math of where the basket is, but aligning your shot with memories of how other shots coordinated

No, that is our interpretation of how it works, and the same principle we apply when attempting to program a robot to emulate us. By your logic, every first attempt should "miss", since we would require an example to draw corrections from.

Depth perception allows us to estimate the distance to the target, and our brain computes the appropriate muscle force to apply to the object we are tossing based on our feeling of its mass in our hand.

We are reaching the limits of semiconductor production, both in terms of physical photolithography (the physical shrinking of lines) and in being able to store a detectable charge in those ever-decreasing lines. The 193 nm wavelength of light used in scanners can only print lines so small, even with double and triple patterning, and EUV technology is still years away from mass production. When lines get close, there's more crosstalk and shorter paths for leakage. As NAND continues to scale below 20 nm, the ECC required is growing exponentially. Of course, all this applies to the CMOS transistor; until something else is found -- phase change, spintronics, RRAM, etc. -- semiconductors have just a few more die shrinks ahead of them.

quote: Before you say that the human brain is fallible - true; but subconsciously it is quite a processing powerhouse that puts even the best processors we have today to shame.

Our brains, unlike the traditional computer or transistor, are masters of distributed computing to the extreme. Each neuron is an individual "computer," and every one of us is born with a similar number of them.

What separates a genius from someone who is below average is how these neurons are networked with each other. It is these interconnects that make the difference between a super-brain and an average brain (at least in theory). Unfortunately, most of these interconnects are made in the first five to seven years of life (hence the rapid expansion of cranial matter - it is these interconnects that are being built).

So, a neuron by itself is far worse than a typical computer. But when you take billions of them and network them in such an elaborate, criss-cross way that they can draw upon each other's computations and spread work among them all rapidly, you have something immensely powerful - made only more robust by its amazing ability to "learn" even in later years.

And while these interconnects are for the most part built early on, they continue to be shaped and rebuilt later in life - hence the brain's ability to "rewire" itself. This is much like an IT team reworking a server farm, except in this case it happens internally, on its own.

Adaptive, distributed computing is what makes our brains so powerful and different from silicon. Once silicon is set, it is there and cannot change. The closest we've come to it so far is quantum computing which can exist in both states simultaneously, being adaptive in concept. The interconnects though, not so much.

We have a long, long way to go before robotics catches up with our minds. I'd say maybe a hundred years or so - maybe less; it all depends on where we head. The profound thing, though, is that we still don't exactly know how memory operates. In a way, I think there is far more to this than neurons and interconnects - something that can only be answered by delving deeper into the subatomic (and smaller) levels.

quote: Adaptive, distributed computing is what makes our brains so powerful and different from silicon. Once silicon is set, it is there and cannot change. The closest we've come to it so far is quantum computing which can exist in both states simultaneously, being adaptive in concept. The interconnects though, not so much.

The adaptive nature of our brain is an asset and also a drawback. We retain information by our brain actually changing its physical geometry as it learns; however, each section of the brain deals with various functions of our body, including our senses.

My point in bringing this up was to say that we can utilize current, and to a degree, legacy semiconductor technology if it is modeled after the way biological computers (brains) work. Our neurons are quite large compared to the nano-scale etching being done on CPUs these days.

quote: The profound thing is, though, we still don't exactly know how memory operates. In a way, I think there is far more to this than neurons and interconnects that can only be answered by delving deeper into the subatomic (and smaller) levels.

I think the best analogy for how our memory works is key frames in an animation. Rather than drawing every frame between two timestamps, you have key frames that show the current position and the "after" position, with the frames in between being an interpolation of the movement from one to the other.

We store key fragments of information and to recall this information, our brain actually reconstructs the event...not necessarily exactly as it happened. The level of detail we can recall varies between people, but I believe that the more senses that are stimulated during an event the more accurate the recall since our brain is essentially using a combination of data from our senses to perceive the world and form memories in the first place.

The biggest benefit of a new technology is something you haven't thought of. Thus, the real world applications where it has the biggest benefit are probably wrong. While the scientists suggest it has uses in computer memory, and it may well do so, the biggest use is probably in some field we haven't thought of, e.g. pesticides or corrosion resistance.

Superconductivity has been stuck at low temperatures for decades. If that technology couldn't make the leap to room temperatures, there's a good chance this won't either. Don't open the champagne just yet.

quote: One challenge is squeezing the lasers or micro-magnetics needed to control the spin onto tiny semiconductor devices.

Maybe the answer is to use an arrangement similar to that of a hard drive or CD/DVD, where you have a medium that spins and have a "head" that moves across the medium to read or write data.
