Posted by timothy on Monday July 29, 2013 @05:11AM
from the paywalls-suck-worse-than-interstitials dept.

First time accepted submitter Consistent1 writes "A paywalled article in the journal Nature Materials describes the use of magnetite to achieve ultrafast electronic switching, albeit, at the moment, only at extremely low temperatures. According to a story on Quartz, the team, led by Dr. Hermann Dürr from the Stanford Institute for Materials and Energy Sciences, hopes 'to continue the experiment with materials that can operate at room temperature. One possibility is vanadium dioxide.' Chips using this technology might operate at clock speeds thousands of times faster than the silicon-based chips used today."

I thought we already had gallium-arsenide transistors. The problem is cost, as they are reserved for applications where the power envelope is very thin (hearing aids) or switching speed is critical (telecom equipment).

Another problem with GaAs and other III-V semiconductors is that they do not scale well, so you cannot pack as many transistors on a chip, and they just cannot compete with silicon in logic. They are quite useful for other applications, but not in your computer. Besides the low-temperature hurdle, it's not clear whether these new materials will face the same cost and scalability problems as the III-Vs.

Wasn't it also hard to make decent P-type MOSFETs? I seem to remember that GaAs electron mobility is much higher than its hole mobility, but it's been a long time since that one semiconductors class in college.

One of the reasons silicon is great for mass-produced anything: silicon simply happens to be one of the most common and easily refined elements on Earth. For electronics, on top of being much cheaper than exotic materials, silicon's chemical properties (it is generally inert) make it much easier to work with at high temperatures and around caustic chemicals than most other materials.

One of the reasons silicon is great for mass-produced anything: silicon simply happens to be one of the most common and easily refined elements on Earth.

The fact that pure silicon is an intrinsic semiconductor doesn't hurt, either. Just try making intrinsic GaAs... the amount of precision required to avoid making p-type or n-type material is ridiculous.

There would not be a lot of sense in a clocked design. If we are talking about picosecond switching speeds, a signal would only travel about 0.3 mm in that time. That really calls for clockless operation.

Clockless operation will likely converge faster at lower temperatures due to lower thermal noise, so the overclockers would focus their attention on undercooling.

I thought one of the main issues with increasing clock speeds on processors, besides heat, is latency. At 3 GHz a signal can only travel about 10 cm per cycle, and processors already have stages in their pipelines just to get the signals around. So going 1000× faster would have to mean some major changes in how processors work, I guess? Having your signal travel only 0.1 mm per clock pulse makes it rather hard to get the data around...

That's true. There are also chips meant for purposes other than computing; what bottlenecks currently exist that current chip speeds can't handle? You give the example of WLAN chipsets; what would a faster chip improve for them?

Many WLAN chipsets today use SDR (software-defined radio), so most of the design is just a big DSP, and more clock speed = more complex algorithms. Alternatively, since you'd likely have multiple channels in operation, each of which probably has its own DSP, by going faster you could put multiple channels onto a single DSP and save silicon area. Or, if you had hardened parts of the algorithms into custom logic, you could ease the memory latency requirements, or move the hardened parts into the DSP to save area. Or move parts of the de...

For the current generation/standards, at most some power efficiency. More processing power might allow better coding schemes, better beamforming (= less interference), and smaller circuits (since less has to be done in parallel). So in the end it'll mostly come down to power efficiency.
http://slashdot.org/comments.pl?sid=4025309&cid=44410675

Latency is a problem, certainly, but there's still some headroom. With a pipelined processor the signal doesn't have to propagate further than the next stage (OK, that simplifies it a bit). At the moment, a top-end processor is on the order of 1 cm across (and most of that is now cache and graphics), and even quite substantial ARM cores are down to a fairly small number of mm.

I suspect that, unlike in the good old days, clock speed will stop scaling performance linearly once the processor is around one wavelength across, much as increasing transistor count no longer increases performance linearly.

One hypothetical way would be to have lots of really tiny, simple processors which are 0.01mm across, and then juice them up to 3THz.

True that, but cache latencies measured in clock cycles will have to go up vastly. If we say 3 GHz = 10 cm, then 3 THz = 0.1 mm, and an SO-DIMM module is 6.76 cm across: you go from <1 cycle to 676 cycles of latency just crossing the module. At those rates, keeping the CPU fed with data might be the biggest challenge.
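
A quick sanity check of those numbers as a Python sketch. It treats the signal as travelling at the vacuum speed of light, which is optimistic for real interconnect; the 6.76 cm SO-DIMM width is taken from the comment above:

```python
C = 3.0e8  # speed of light in vacuum, m/s (real on-chip signals are slower)

def distance_per_cycle(clock_hz):
    """How far a light-speed signal travels in one clock period, in metres."""
    return C / clock_hz

def cycles_to_cross(distance_m, clock_hz):
    """Clock cycles for a light-speed signal to traverse a given distance."""
    return distance_m / distance_per_cycle(clock_hz)

for f in (3.0e9, 3.0e12):  # today's ~3 GHz vs. the hypothetical 3 THz part
    print(f"{f / 1e9:6.0f} GHz: {distance_per_cycle(f) * 1e3:7.3f} mm/cycle, "
          f"{cycles_to_cross(0.0676, f):6.1f} cycles across a 6.76 cm SO-DIMM")
```

This reproduces the <1 cycle vs. ~676 cycle figures above.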

I think only distributed, transputer-style processing will be able to tackle that efficiently. Big networks of small CPUs with local memories will be "it". Assuming a 0.2 mm × 0.2 mm compute+memory element, we'd have about 4,000 such elements fit on a Haswell die.
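
Checking that tile count with the same back-of-the-envelope approach; the ~177 mm² die area is an assumption (roughly the commonly quoted quad-core Haswell GT2 figure):

```python
DIE_AREA_MM2 = 177.0  # assumed Haswell-class die area, mm^2
TILE_MM = 0.2         # side of one compute+memory element, per the comment above

tiles = DIE_AREA_MM2 / (TILE_MM * TILE_MM)
print(f"~{tiles:.0f} tiles per die")  # ~4425, in line with the ~4,000 estimate
```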

I don't think that's such a big problem: you can still have large numbers of small processors that are extremely fast on local data but take a bit more to communicate with each other. There have been plenty of parallel machines like that already. Think Beowulf cluster, just on a much smaller scale.

So going 1000× faster would have to mean some major changes in how processors work, I guess? Having your signal travel only 0.1 mm per clock pulse makes it rather hard to get the data around...

It seems like it would just change the design optimization criteria, making spatial distance between components dramatically more important than it is now. 3D chip design would become crucial, since it enables shorter paths. Of course, moving from flat or shallowly-layered designs to spherical construction would make heat dissipation an even bigger challenge than it is now, and would require completely new fabrication approaches.

How much energy does it take to switch between 0/1 states? At what voltage? As I am not in the field, it would take me too much time to extract this information from the article (what is "trimeron annihilation", and how, if at all, does it relate to classical electron-hole recombination?).

I assume that being 1000× faster is only possible if it takes considerably less energy to switch states. That means even if latency constrains the speed, it would still produce less heat and allow simpler clock/power lines.

Nope. The signal can travel as far as you wish, as evidenced by the DSN (deep space network) using the 8.5 and 32GHz bands at pretty significant distances within our Solar System. Voyager comms are in the 8.5GHz band IIRC.

The fact that the length of a clock pulse is physically small (on the order of 1mm) only makes it interesting from the engineering side of things, not impossible.

It's only an engineering problem in the sense that pipeline stages are no longer just logical concepts; they have a physical extent, and you can't skip it. Those are, of course, solvable problems; it's just that current CPU architectures aren't amenable to such treatment. That's not the end of the world, though; even now MS is pushing for parallelizing compilation.

I seem to remember about 10 or so years ago a bit of talk about diamond semiconductors.

IIRC, making p-type material was easy by doping with boron, and someone had finally come up with a way to make n-type material.

In addition, around that time there were two or three startups looking to manufacture diamonds using various cheaper processes. The combination of these things was supposed to give us diamond-based chips that, due to the incredible heat resistance of diamond, could tolerate much more heat and hence higher clock speeds.

The thing that immediately occurs to me is that this won't replace silicon. Silicon is massively available; it works, and it is well used and understood. Vanadium, in comparison, is not. Plus, isn't it toxic? I know the semiconductor industry isn't what you would call green, but introducing an even more toxic element into the mix might not go down too well. I suspect this might, at the very best, have limited use in specialist applications. Making your computer thousands of times faster simply isn't going to happen...

At 3.5 GHz light travels 8.6 cm per clock cycle. A thousand-fold performance improvement would mean ~86 micrometers, i.e. roughly 400 transistor widths at current feature sizes. Since there are about a billion transistors in a chip, assuming a square configuration you'd have ~31,600 transistors on a side, i.e. your 1000× chip would take ~75 cycles just to cross from one side of the CPU to the other. And that is assuming the speed of light, which electrons definitely don't achieve. You still have to get electrons from RAM, di...
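
The same arithmetic as a sketch; note the 215 nm "transistor width" is a back-solved assumption chosen so the "~400 widths per cycle" figure above comes out, not a real process dimension:

```python
import math

C = 3.0e8                # m/s; electrons in real interconnect are slower
CLOCK_HZ = 3.5e9 * 1000  # the hypothetical 1000x part from the comment above
TRANSISTORS = 1e9        # per chip, square layout assumed
WIDTH_M = 215e-9         # assumed width implied by "~400 widths per cycle"

per_cycle = C / CLOCK_HZ           # ~86 micrometres per cycle
per_side = math.sqrt(TRANSISTORS)  # ~31,600 transistors on a side
cycles = per_side * WIDTH_M / per_cycle

print(f"{per_cycle * 1e6:.0f} um/cycle, ~{per_side:,.0f} transistors/side, "
      f"~{cycles:.0f} cycles to cross the die")  # ~79, near the ~75 above
```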

And have losses of more than 20 dB, depending on the materials used for interconnect. Which means that the signal would either have to be rebuffered every few microns or recovered at the receiver with something comparable to PCI Express, but a thousand times faster.

Oh man, that's a fun question. Some googling suggests that wafer-grade silicon ingot production alone is in the tens of thousands of tonnes per year per factory. So it might be reaching a megatonne.

There is a reason we use different materials for high-end optical and electrical switches. In materials science we unfortunately see this all the time, where an optics group measures some interaction in a highly controlled environment and then projects that result onto a very complex electrical circuit. Generally, optics groups that get published in places like Nature don't consider that they're measuring properties that are not actually relevant to a practical electrical circuit, and not the only properties...

Of course, most of the delay that limits clock speeds now is in the interconnect and not the switching devices. We're already using copper conductors and low-K dielectrics, so the next step is going to have to be superconducting interconnects.

I strongly suspect that people are already suffering from future shock but have not put a finger on what is going on. Technology is a huge cause of job and social displacement at this time. It is not just the economy that is causing such chaos but the fact that fewer people can do a lot more work due to technology. Very fast and very smart computers will accelerate this pending upheaval. I am all for it, but we need to be paying attention and doing triage on the wounded and displaced, and even learn...

No, only a slow economy causes job displacement. There is plenty of work to do that technology has created. In the US the biggest increase in hiring is in "leisure and entertainment", followed by professional and business services, then retail... This, and the housing market coming alive, will soon ripple into manufacturing and construction.

it's silly to be of the mindset, OMG, the lamp lighters and buggy whip makers and horseshoe smiths and chimney sweeps will starve!

Isn't magnetite that natural form of iron they make trinkets from to sell in Jamaican bazaars, typically in the form of animatable copulating humans, for placement as a dongle on a mechanical security device unlocking a portable storage ring?

We can already make silicon faster than we do, electromigration [wikipedia.org] is why we don't. Switching to a different wafer material doesn't change the fact that we still have to interconnect the transistors somehow.

You do understand that somebody has to do the groundwork before anything can be made at large scale. Even the first silicon transistors were originally just proofs of concept, until engineers were able to build a manufacturing process around them.

There's a huge spectrum between "the sample worked in the lab" and "we can ship complex CPUs to customers in million-sized batches". Sometimes it just turns out that a process is impractical. BiCMOS was dropped after Pentium Pro. Thermal output is becoming the bottleneck for Si these days, not switching speed. Also, whatever needs cryogenics simply won't end up in your desktop or cell phone.

BiCMOS is alive and well, thank you very much. It's just silly to use it for CPUs. Was it even used for any Intel chips at all? What for? It's pretty pointless unless you need bipolar-specific analog stuff on the same die.

French suggested that the company could sell 50,000 computers, but more skeptical executives disagreed and suggested 1,000 to 3,000 per year at the target $199 price. Roach persuaded Tandy to agree to build 3,500, the number of Radio Shack stores.

Times have changed though, and you have to make a big case for smaller batch sizes. Otherwise, a lot of the chip producers have already worked out exactly how many they need to make, in what amount of time, to have a reasonable chance of making a profit. Some friends who left academia for chip-producing companies have complained of how often the tech they worked on got dropped from designs, all because it slowed down the process too much. This isn't a factor-of-ten issue; more like they we...

The first working silicon transistor was made in 1954, and it worked at room temperature. The first microprocessors arrived in the early '70s. It's great that people are working on other materials for transistors, but it's a very long road from 'works in the lab' to 'ships in a mobile phone'. 20 years is not unusual.

20-30 years seems to be a good rule of thumb. So if you want to know what the promising technologies of the next decade will be, you should look at what was being done in the lab in the late '80s and early '90s. (FDM 3D printing seems to be right on the mark, and if the Oculus Rift thing pans out, VR will be too. Looking at stuff from the late '90s, electric cars will have to wait another decade to get mass adoption. LED lighting is ahead of schedule: decent adoption rates a mere 20 years after the first superbright blue LED was demonstrated by Shuji Nakamura.)

It wasn't cheap oil that killed those electric cars, it was their range. Early on in the automotive world, steam (external combustion) [wikipedia.org], electric [wikipedia.org], and gasoline vehicles were all competitive, but the technology for internal combustion engines progressed faster than its competitors', allowing greater benefits from using an internal combustion engine. Electric vehicles started falling out of fashion due to recharge times and limited range (sounds familiar). Steam vehicles had the problem where they needed to...

Yep, 20-30 years is a good estimate, especially since you also need to factor in the cost of the factory that will build something.

Regular silicon fabs using current feature sizes (and new toolsets) cost billions, whereas older fabs with larger feature sizes (and older toolsets) that will still do the job for 90% of applications can be picked up or built relatively cheaply, in the hundreds of millions or even tens of millions.

Totally offtopic, but this immediately inspired the contrapositive: "Just because it can't be done doesn't mean that it won't be done." I'm sure this applies to something, somewhere! :D Politics comes to mind... :P

The speed of light is actually a very important consideration, a signal can only move so far in a single cycle, if you operate at 1000 times faster you exponentially reduce the distance the signal can travel in that time and at a thousand times smaller distance you actually come into some very real physical limitations for the chip size and usefulness of the signal. It isn't that this has no uses, but it does have significant limitations on what this can be useful for.

No, it's not exponential. If you want to have 1000 times shorter cycles, you need a 1000 times smaller chip.

So in practice what you are saying is clearly within an acceptable margin of true, but is perhaps not clearly stated (you need a 1000 times smaller process, not a 1000 times smaller chip!).

Well of course, it's a ballpark figure, not taking into account what exactly the pipeline does. However, it's not only the size of a transistor that matters. I don't know much about how a processor works (I am a physicist), but as far as I know data in the form of electricity comes from somewhere (a register), goes through some transistors, and then back into a register. If you want a cycle to last around 0.3 nanoseconds (which corresponds to 3.3 GHz, close to modern i7's), the entire roundtrip can be at most about 9 cm, even at the speed of light...

No, the clock signal only needs to be timed between two connected flip-flops, nothing more. It's extremely common (i.e., it's about 5% of my job) to have to change the design in order to achieve this local clocking requirement. That's without having multiple asynchronous clocks on a single chip. Or asynchronous logic.

Even when you need to drive very long paths, it's called a clock tree for a reason: you can have a 1 GHz clock that takes several ns to get from its source PLL to its destination flop, because the delay through the tree to all the leaf nodes is matched. That is, a 1 ns-period clock can take 4 ns to get from the source to the destination, and that's all fine as long as it's the same 4 ns everywhere...
Now things get harder when different bits of the chip have silicon that runs at different speeds, so you can't balance the tree like you'd like to, but that's what makes this job interesting ;-)

Also, a clock signal is a single-bit signal. You can use a wide interconnect for distributing it over large distances in the higher levels of the tree, making it much faster compared to the local interconnects. That makes it somewhat less of an issue than is the case with long-range data interconnects, which are parallel (or did they switch to serial lines even on-chip?), therefore have to use narrower interconnects, therefore are slower.

That's not really how it works... Propagation delay is not related to clocks (at least not in the way you seem to imply). With a stable and monotonous clock you can easily propagate the clock to every point of the chip with a controllable delay (see the comments from an actual designer of this stuff above). A simple clock tree, for example, can be implemented using a fractal-like structure. Basically, imagine a capital H: you have an equal wire length from the centre to each of the "ends" of the lines. To e...
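
A toy sketch of that fractal (H-tree) idea in Python; it only checks the geometry (buffers, RC delay, and process variation are all ignored):

```python
def h_tree(x, y, half, depth, wire_len, leaves):
    """Recursively place an H: from (x, y), run `half` units horizontally,
    then `half` units vertically, reaching each of the four corners."""
    if depth == 0:
        leaves.append(((x, y), wire_len))
        return
    for ex in (x - half, x + half):      # ends of the horizontal bar
        for ey in (y - half, y + half):  # ends of each vertical bar
            h_tree(ex, ey, half / 2, depth - 1, wire_len + 2 * half, leaves)

leaves = []
h_tree(0.0, 0.0, 8.0, 3, 0.0, leaves)    # three levels, arbitrary units
lengths = {length for _, length in leaves}
print(len(leaves), "leaves, root-to-leaf wire lengths:", lengths)
# -> 64 leaves, all at wire length 28.0: equal delay, hence (ideally) no skew
```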

The "1000 times faster than" current technology is blatantly false. They're claiming 1 ps. I couldn't find propagation delay data for the best current silicon processes, but 3 ps is a reasonable estimate, at room temperature.
They may have made a nice discovery, and it may be amenable to significant improvement, but so far they haven't demonstrated that they're going to replace silicon.

That's a bit of an obvious troll coming from someone with a seven-digit UID... :p

I've been reading Slashdot for over a dozen years, and I don't even have a UID because I never bothered signing up for an account. If I signed up now it'd be a very large number, and so would have a low perceived "seniority", and yet I remember when the Columbine and Hellmouth stories were posted here.

See, now we know you're faking it. If you could actually remember when the Columbine and Hellmouth stories were posted here, your nostalgia would be tainted by the memories of JonKatz articles.

Well, let's see. The Solar System weighs on the order of 10^30 kg; that's about 2^100 kg. There are roughly 2^86 atoms in a kilogram of hydrogen. That's only 2^186 hydrogen atoms in our Solar System, if its whole mass were hydrogen. You seem to be right: iterating through 2^256 is quite unfeasible.

Assuming an iteration speed of 2^32/second, 2^24 seconds per year, and a billion PCs worldwide (2^30), we could "crunch" only a space of 2^86 per year. Our current resources are about a factor of 2^170 too small :)
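
The exponent bookkeeping from the two comments above, as a Python sketch; the 2^86 atoms/kg and 2^24 seconds/year figures are the parents' rough estimates, not precise values:

```python
# Solar System as hydrogen: mass times atoms per kilogram.
solar_mass_kg = 2**100   # ~1e30 kg, the parent's round figure
atoms_per_kg_h = 2**86   # the parent's rough estimate for hydrogen
assert solar_mass_kg * atoms_per_kg_h == 2**186

# Brute-force throughput: per-PC rate times seconds/year times PC count.
iters_per_sec = 2**32    # assumed per-machine iteration rate
secs_per_year = 2**24    # ~16.8M s; a real year is closer to 2^24.9 s
pcs_worldwide = 2**30    # ~a billion machines
searched_per_year = iters_per_sec * secs_per_year * pcs_worldwide
assert searched_per_year == 2**86

shortfall = 2**256 // searched_per_year
print(f"shortfall: 2^{shortfall.bit_length() - 1}")  # 2^170, as above
```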