Posted by ScuttleMonkey on Monday March 17, 2008 @03:11PM
from the proof-of-concept dept.

WirePosted writes to mention that a new highly efficient microchip has been announced by researchers from MIT and Texas Instruments. The new chip touts up to 10 times more energy efficiency than current generation chips. "One key to the new chip design, Chandrakasan says, was to build a high-efficiency DC-to-DC converter--which reduces the voltage to the lower level--right on the same chip, reducing the number of separate components. The redesigned memory and logic, along with the DC-to-DC converter, are all integrated to realize a complete system-on-a-chip solution."

Chances are these processors are not very useful for tasks that require real horsepower. The article mentions uses in things that could be powered by minuscule power input (e.g., body heat). I think the chip was invented for exactly that purpose, so that the power bill for such devices will be minuscule at worst and nonexistent at best. So don't expect much use in portable devices for now, except maybe a simple PDA that requires no batteries, only heat.

Some of us have a better heat-transfer coefficient. (Am I saying that right? Some of us "give off" more of our body heat than others.)

Like for instance. If I touch the girls here at work (...I'll likely lose my job *rim shot* [you know someone was thinking it]), they are usually "colder" than my body temp. Would this mean that the "warmer feeling" bodies would power these easier? Would those with "cold bodies" not have enough heat to power their PDA at all? Does it matter?

Well, power doesn't come from heat; it comes from moving heat from a hot place to a cold place. So if you are in 98-degree weather, your body can't power a thing. Likewise, if you are in sub-zero weather, you have a nice thermal gradient to exploit. I think the variance in environmental temperature far outweighs any contribution from unusually warm or cool bodies.
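That "moving heat" point can be made concrete with the Carnot limit, which bounds any heat engine (thermoelectric harvesters included) by the temperature difference alone. A toy calculation; the skin and ambient temperatures below are illustrative assumptions, not figures from TFA:

```python
# The Carnot limit: the ceiling on converting heat flow to work depends
# only on the temperature *difference*, not on how warm a body feels.
# Temperatures below are illustrative, not measurements from TFA.

def carnot_efficiency(t_hot_c, t_cold_c):
    """Maximum fraction of heat flow convertible to work (uses Kelvin)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Skin at ~33 C against a 20 C room: a few-percent ceiling at best.
room = carnot_efficiency(33.0, 20.0)

# Skin against 33 C air (a hot day): no gradient, no power.
hot_day = carnot_efficiency(33.0, 33.0)

print(f"vs 20 C room: {room:.1%}")    # ~4%
print(f"vs 33 C air:  {hot_day:.1%}")  # 0.0%
```

Real thermoelectric generators achieve only a small fraction of this bound, which is why body-heat harvesting budgets are measured in microwatts.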

The thing about this is that body heat by itself is not a source of power. For a thermocouple to work you need a temperature differential, so you would need part of the thermocouple sticking out of your body.
Another thing: no matter how low the current drawn from batteries is, they still need changing; batteries have only a limited shelf life even with no current drawn. Lower power means smaller batteries, but I don't think thermocouples are a viable source of power for implants, perhaps they c

The requirement for "A LOT OF ELECTRICITY" to drive a processor is a serious fallacy. 180 W Intel 1.2 GHz processors produce blazing amounts of waste heat; more modern, higher-performance processors use a third of the power. A smaller circuit (a RISC processor, for example) can perform simple tasks faster with less power too (indeed, some RISC processors can take 1.5 times the instructions to do a logical task and still blow through it in half the clock time).

Chances are these things will not be used for PDAs of any sort. The most likely use would be sensor-type applications where you want it running for a long time but it isn't crunching a lot of data. When you consider that the lowest-power micro is a Texas Instruments part running at 8 MHz, and they just released their new line running at a max of 16 MHz, can you imagine what the clock speed of a micro running at 0.3 V would be?

Like any other research, it is only wasted as long as it doesn't work. Too bad you can't tell whether it's eventually going to work until you try hard enough (or until you know that your competitors are no longer trying, so that you can safely give up too).


Yes and no. If it cuts the cost of the battery by 50%, then the chip could be considerably more expensive without affecting the overall product cost. Batteries and their related circuitry are expensive!

The first thing that came to mind when I saw this article was the Transmeta [wikipedia.org] Crusoe [wikipedia.org] processor, which unfortunately never achieved any significant market penetration. Indeed, it seems you really have to have something more than just an incredibly efficient chip in order to compete against the Intel/AMD behemoth.

Personally, I would love to see a chip that requires very low power make it into the mainstream market. I think it would be great to have something like that in the Mini-ITX form factor, or something of that nature that hobbyists could tinker with and find fun applications for. Transmeta, unfortunately, never realized that as far as I ever saw.

I don't think they're demonstrating a particular CPU, but a technology or design strategy that can be built into *any* chip. So Intel or AMD could pick up this research with their own chips. (subject to patents and licensing of course)

Also, from the article: "So far the new chip is at the proof of concept stage. Commercial applications could become available 'in five years, maybe even sooner, in a number of exciting areas,' Chandrakasan says."

Since the research was sponsored by TI, I am sure this technology will find its way into all sorts of embedded devices. Think everything from 32-bit uCs to op-amps. If it really does increase power efficiency 10-fold, it wouldn't surprise me to see AMD and/or Intel license the technology from them for high-speed uPs.

The reason is that they needed to build up an industry to accept them. There were already other chip fabs that had a name. So what did Transmeta do? Nothing. They should have spent a few bucks and looked for new ideas that used their chip. They only needed a few interesting ideas to make it. How much money? What, a million per idea? Even had it cost them 3 million, it would have been NOTHING in the long (or even short) run.

Well, if you read the fine article you will see that the applications they talk about are things like medical implants, where you'd like to avoid surgery every few years to replace the batteries. The article makes no claims that these chips will appear any time soon in your desktop computer. Since they save power in the usual way (by reducing voltage) they're probably slower than stock chips. This doesn't matter in a lot of embedded applications but it won't attract the gamer crowd.

Since they save power in the usual way (by reducing voltage) they're probably slower than stock chips.

Yes, I do recall that the Transmeta chips were a fair amount slower than the Intel / AMD chips that were out at the same time, though in some regards one could say they made up for it with far better battery life in laptops.

This doesn't matter in a lot of embedded applications but it won't attract the gamer crowd.

I can't speak for everyone, but I wasn't planning to run Duke Nukem Forever on a low-power system... But I can think of plenty of typical household applications that would be well suited to a CPU that consumes less power.

That depends on your definition of 'the mainstream market'. This technology may never appear in desktop/laptop PCs but could become popular in handheld devices, where power consumption is a major issue. Only a limited amount of power saving is economically feasible in PCs as long as the displays and other peripherals continue to be major power hogs.

Another interesting market might be in server farms. But I wouldn't count on this driving the market. CPU architectures specific to servers haven't sold well, so this isn't an economically viable niche.

Microcontrollers are a large enough market segment to justify the R&D. I forget where I read this, but if you take the percentage of all uPs and uCs that are installed in PCs and round it to the nearest whole percent, that number is zero.

Transmetta had radically better power consumption for a while and might have some day come to dominate the portables market, had they retained an advantage like the one they had at their debut. Transemetta's problem was underestimating how rapidly Intel could improve the power efficiency of their chips. In response to Transmetta, Intel suddenly got serious about power consumption and got competitive so fast it left Transmetta with little to differentiate their chips from the competition.

Like anything, the commercial viability of this doesn't just depend on how much better it is than what's already out there, but on how long it'll take their competitors to catch up.

Transmetta didn't do so well, but the real winner of Transmetta's actions was the consumer. Transmetta drove Intel and AMD to improve efficiency much more rapidly than they had been. Let's hope this new technology makes it into production and does the same.

Ugh. Transmeta. Two t's, not three. Sony and Intel are licensees of Transmeta's technology (the latter being the terms of a legal settlement, but no doubt Intel makes use of it). They were always a flop as a chipmaker, but seem to be doing all right as a smaller fabless concern.

Their ancestor is already a commercial success.
The processor used to demonstrate this technology is the MSP430 line of low-power microcontrollers from TI. It is a 16-bit von Neumann RISC with an address space stretched to 1 MB that, with present technology, runs at 3 V using about 500 uA/MHz. It will run at up to 14 MHz at 3 V, does register-to-register operations in one clock tick, and to and/or from memory in 2 to 6 depending. It idles at 0.1 uA waiting for an interrupt, or at 0.6 uA when keeping time with a 32 kHz o
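A back-of-envelope check of the figures quoted above (roughly 500 uA/MHz active, 0.1 uA idle) shows why parts like this suit long-lived, mostly-sleeping applications. The cell capacity and duty cycle here are assumptions for illustration, not MSP430 specs:

```python
# Battery-life sketch using the figures quoted above: ~500 uA/MHz
# active, 0.1 uA idling on an interrupt. The 1000 mAh cell capacity
# and the 0.1% duty cycle are assumptions for illustration.

ACTIVE_UA_PER_MHZ = 500.0
IDLE_UA = 0.1

def avg_current_ua(mhz, duty_cycle):
    """Average draw for a duty-cycled, interrupt-driven workload."""
    active_ua = ACTIVE_UA_PER_MHZ * mhz
    return duty_cycle * active_ua + (1.0 - duty_cycle) * IDLE_UA

def battery_life_years(capacity_mah, mhz, duty_cycle):
    hours = capacity_mah * 1000.0 / avg_current_ua(mhz, duty_cycle)
    return hours / (24 * 365)

# Waking at 8 MHz for 0.1% of the time on a 1000 mAh cell:
print(f"{battery_life_years(1000.0, 8.0, 0.001):.0f} years")  # decades
```

At numbers like these, battery shelf life (as noted elsewhere in the thread) dominates long before the cell actually drains.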

So as usual, something ten times better than we have now is going to be available in five years. Since these breakthroughs happen all the time, we continue our remarkably linear trend by continually filling in the gaps.

So as usual, something ten times better than we have now is going to be available in five years.

Sure, in five years the available chips will be a lot better than the stuff that's here now. But when this technique has matured enough, it could be applied to the chips in 5 years and we'll still get a 10 fold improvement! (Or something like that:-P)

This seems to be a completely different kind of advancement from the regular chip evolution we've seen so far.

This seems to be a completely different kind of advancement from the regular chip evolution we've seen so far.

There's not enough in TFA to say for sure, but I'd guess rather the opposite. The main thing they mention is a lower power supply voltage. Power supply voltages have been dropping steadily for a long time. Once upon a time, the most common logic family was the 7400 series, which all used 5 volt power supplies. Somewhat later 3.3 volt CMOS logic was introduced. Most CPUs, memory, etc., now use somewhere between 1 and 2 volts.

For the most part, you get a trade-off between voltage and speed -- with a higher voltage, you can charge up a given reactive load more quickly, giving faster rise and fall times. That translates directly to higher bus speed.

At the same time, the power you use is the product of the voltage and the current, so as you raise the voltage you raise the power usage. Worse, the current you drive through a given impedance also rises linearly with the voltage -- so the power usage is proportional to the square of the voltage.

That (probably) explains to a large degree how/why they've reduced the power usage by a ratio of 10:1 by reducing the voltage by a ratio of something like 4:1 (in theory, a 10:1 power reduction should imply a voltage reduction by the square root of 10, roughly 3.16:1).
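The square-root arithmetic above is easy to check directly; a sketch using the ~1.0 V and 0.3 V figures that appear elsewhere in the thread:

```python
import math

# Dynamic CMOS power goes as V^2 (P = V*I, with I itself roughly
# proportional to V), so a power ratio implies the square root in voltage.

def power_ratio(v_old, v_new):
    return (v_old / v_new) ** 2

# A 10:1 power cut "should" only need a voltage drop of sqrt(10):
print(f"{math.sqrt(10):.2f}:1")  # 3.16:1

# The ~1.0 V -> 0.3 V drop quoted in the thread is good for ~11x:
print(f"{power_ratio(1.0, 0.3):.1f}x")  # 11.1x
```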

In any case, however, nothing in the article really suggests that they've departed a great deal from the path everybody's been following for quite a while. Of course, they may have done something truly radical here -- but based on what they've said, that isn't necessarily the case.

They seem to have departed from standard practice in two ways. First, they use an on-chip DC-DC converter to dynamically scale the voltage down to the minimum required to meet whatever performance metric is specified. Second, they intend to power it using some kind of energy-harvesting technique (namely human body heat). I agree the reduction of power is just first-order physics, but how they do it is quite interesting.
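The dynamic-scaling idea described above can be sketched as a tiny selection loop: pick the lowest rail the on-chip converter offers that still meets the required clock. The voltage steps and the linear frequency model below are purely hypothetical placeholders, not measured behavior of the MIT/TI chip:

```python
# Sketch of dynamic voltage scaling: choose the lowest rail the on-chip
# converter offers that still meets the required clock. The voltage
# steps and the linear frequency model are hypothetical placeholders.

SUPPLY_STEPS_V = [0.3, 0.4, 0.5, 0.7, 1.0]  # lowest first

def max_freq_mhz(v, v_th=0.25, k=40.0):
    """Toy model: usable clock roughly linear in (V - Vth) above threshold."""
    return max(0.0, k * (v - v_th))

def pick_supply(required_mhz):
    for v in SUPPLY_STEPS_V:  # lowest voltage minimizes V^2 power
        if max_freq_mhz(v) >= required_mhz:
            return v
    return None  # workload cannot be met at any available rail

print(pick_supply(1.0))   # light duty -> 0.3
print(pick_supply(25.0))  # heavier load -> 1.0
```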

They seem to have departed from standard practice in two ways. First, they use an on-chip DC-DC converter to dynamically scale the voltage down to the minimum required to meet whatever performance metric is specified.

That's not really new by itself. Just for an obvious example, most Flash ROM chips need a relatively high voltage for writing. Early Flash ROM chips used dual power supplies to support that, but most current Flash ROM chips use a single external power supply and an on-board DC-DC convert

For the most part, you get a trade-off between voltage and speed -- with a higher voltage, you can charge up a given reactive load more quickly, giving faster rise and fall times. That translates directly to higher bus speed.

I've been out of it for a while, but yeah, that's what I was thinking. Unless they've really pulled out some whizbangery, they've just made a really slow processor that doesn't take much power. Meh. How different is this, really, from making a CMOS processor with a low-voltage ext

It wasn't clear whether the chip's circuits actually operate at 0.3 volts, or whether that's what's fed to the on-board DC-DC converter, which steps it up or down as needed. If they really do have transistors that reliably work at 0.3 volts, that's a significant breakthrough that I'm sure the heavy-duty CPU guys would love to know about. I'm thinking this might simply be a matter of better battery use by integrating a high-quality DC-DC converter on the chip. It seems like most consumer grade electronics

You're right, it's relatively easy to scale the chip's power supply input voltage to reduce the power by a factor of V^2. That's not what this article discusses, though. They built a DC-DC converter ON-CHIP to step the input voltage down to something much lower (0.3 V) that the internal logic circuitry runs on. That is the really interesting part, and unfortunately where the article is woefully short on details. If they really built a complete buck converter on a microprocessor die, and got it to run at a re

Although the study quoted by the OP got a lot of media attention because of MIT involvement, what is more interesting is this actual product, released last month:
"One AAA battery! The boss must be kidding..." [embedded-computing.com]

This company (Silicon Labs) has managed to put a DC-DC converter in a microcontroller, in an actual product that you can buy now (not just a research project!). They claim to be able to run for years (even >15 years) on typical low-power applications such a

While MS is expanding, the question is how much of their research the United States government (or any other Western nation) funded. Well, Gates has stolen a lot of tech, but he steals from all over. In the end, the answer is that little of it is paid for by America or the EU nations. But on the hardware side, it is a very different matter. We fund a great deal of HP's, Sun's, Intel's, and AMD's work. Then we allow the real money to go overseas. It is bad enough that boards are built over there. But now chips are going too.

It seems like all of American know-how goes into designing things like this, then companies move the jobs overseas...

The researchers are: "graduate students Yogesh Ramadass, Naveen Verma, and Joyce Kwong, along with Professor Anantha Chandrakasan". While they may very well all be U.S. citizens, it makes me want to ask for a precise definition of "American know-how".

Even when I was in engineering school, the majority of graduate students were foreign. I forget where, but I once read a quote that went something like this: "American universities are the best in the world. In fact, they are so good that American high school graduates can't compete in them".

Who paid for it? Who exactly is Anantha Chandrakasan? Trained at Berkeley and MIT. All the current work took place at MIT in association with a DARPA grant. Yeah, I would say that is American know-how as well as American funding.

The researchers are: "graduate students Yogesh Ramadass, Naveen Verma, and Joyce Kwong, along with Professor Anantha Chandrakasan". While they may very well all be U.S. citizens, it makes me want to ask for a precise definition of "American know-how".

You were expecting to see "graduate students Geronimo, Running Bear, and Pocahontas, along with Professor Hithawea"?

I am very interested to see how they managed to reliably demonstrate on/off states of individual transistors at the 0.3 V level, given that standard logic families use between 1 and 5 V. Of course the article wouldn't have these details, considering it was entitled "Ten times more energy-efficient microchip recharges itself". I suppose whoever wrote the article drew that conclusion from the CONJECTURE posed in MIT's press release.

As the MIT report states, the key was to make the chip operate at 0.3 V instead of ca. 1.0 V.

Since power usage is (roughly!) proportional to voltage squared, getting the chip to run at less than one third the usual voltage will indeed give an order of magnitude reduction in power usage.

From the report:

One of the biggest problems the team had to overcome was the variability that occurs in typical chip manufacturing. At lower voltage levels, variations and imperfections in the silicon chip become more problematic. "Designing the chip to minimize its vulnerability to such variations is a big part of our strategy," Chandrakasan says.

I.e., current state-of-the-art transistors do not work reliably at such voltage levels; I'm guessing that they have to give up a significant part of the theoretical power reduction in order to make it work at all.

Without RTFA, I would guess that you could run every voltage as needed on a CPU instead of a single voltage. The MMX unit might need a higher voltage than the pipeline units (just making an example for illustration).

Perhaps memory chips may hold data at a much lower voltage and only need a boost during a write operation.

So it sounds like "10 times the efficiency" means 1/10 the power. I read the article specifically to see how they defined a tenfold increase in efficiency. I imagined that an increase from, say, 9% to 90% was not reasonable to expect. Maybe it was energy converted to waste heat that was reduced.

Anyway, I didn't find an explanation in the article. So what is a theoretical 100% efficiency with respect to logic circuits? Every electron turned into a bit of information? Every pair of electrons? It seems

Simple: You assume that every bit of power going in is wasted. Therefore, there is no such thing as absolute "conversion efficiency", but using 10 times less power does make you 10 times as efficient, since you get the same bits coming out (assuming the same program and input) for 1/10 the watts going in.

Actually, max clock speed also goes up and down roughly in proportion to voltage. If we make the reasonable assumption that you clock the chip as high as it allows at a given voltage, then power grows as the voltage cubed.
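That cube law follows from the usual dynamic-power model P = C·V²·f once f is scaled proportionally to V. A minimal sketch with arbitrary units:

```python
# Dynamic power: P = C * V^2 * f. Scaling the clock with the voltage
# (f proportional to V) makes power go as V^3. Units are arbitrary.

def dynamic_power(c, v, f):
    return c * v * v * f

C = 1.0    # lumped switched capacitance
K = 100.0  # clock per volt (the f-proportional-to-V assumption)

p_full = dynamic_power(C, 1.0, K * 1.0)
p_half = dynamic_power(C, 0.5, K * 0.5)

print(p_full / p_half)  # 8.0: halving V cuts power by 2^3
```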

Another thing to note here is that having multiple power rails tends to add cost and eat up real estate on a PCB. By embedding the DC-to-DC converter on the SoC you can possibly reduce the power supply to a simple regulator that manages the battery power. TI already uses embedded DC-to-DC converters in their FireWire ICs. This is a big deal for embedded system design.

What kills me is that people talk about "body heat powered" and "implantable". You need a temperature differential to generate power; I suspect there is about as much differential inside the body as there is in a glass of room-temperature water. You'd need an external radiator (like implants in your ears), which isn't nearly as attractive as something that just goes in your body.

I have actually wondered whether an implanted version of a Bluetooth headset would be possible if one could provide power from the heat transfer of the head. Tap the skin behind your ear to turn the unit on, a thin hollow cable to the front of the ear for listening, and a jawbone transducer for talking.

It would look a lot better than existing headsets (since everything is subdermal/cranial), but you would look even crazier talking in public without anything on your head. Also, it couldn't be stolen or t

If memory serves, power dissipation follows a formula on the order of I^2*R, I being current in amps and R being resistance in ohms. So, if you had a chip that ran on 1,000 volts at 30 mA instead of the usual 1 volt and ~30 amps, wouldn't it be just as efficient as a 0.3-volt chip running at (and I'm guessing here, because TFA doesn't mention current) 1 amp or maybe even less?

Yes, if the current and voltage scale in a way governed by Ohm's Law. However, since power is also equivalent to V^2/R, if you can build a chip that can operate at lower voltage AND the same current, then you win by the square of the voltage.
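For a purely ohmic load the three textbook power expressions are the same number, which is why trading volts for amps at a fixed resistance buys nothing by itself. A quick sanity check:

```python
# For an ohmic load the three textbook power expressions agree exactly,
# so swapping volts for amps at a fixed resistance changes nothing.

def powers(v, r):
    i = v / r  # Ohm's law
    return (v * i, i * i * r, v * v / r)  # P = VI = I^2*R = V^2/R

p_vi, p_i2r, p_v2r = powers(3.0, 2.0)
print(p_vi, p_i2r, p_v2r)  # 4.5 4.5 4.5
```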

I believe current is constant in this case, so the power decreases with the square of V. By reducing the voltage to 1/3.3 of typical usage, a theoretical power decrease to (1/3.3)^2, or roughly 1/10, is achieved. Again, in theory, and provided that they can get the transistors to work at that voltage. Then again, IANAEE, so I wouldn't take that as genuine truth.

You're thinking about a physics-land purely resistive circuit, where we can arbitrarily control the resistance. Unfortunately, that's not quite the case with very-much-minified microprocessors. We can't arbitrarily force a couple hundred million transistors to use less current. And at the same time, transistors can be designed so that they don't require 1000V to operate.

Every transistor leaks current to some extent. And as those transistors get smaller, that amount of leakage likes to get bigger, becau

You're assuming R is independent of applied voltage, which is not true for any transistor. Resistance is a derived quantity that can be (for example) formulated in terms of the ratio of resulting current from an applied voltage. Ultimately, electronic devices require a certain current to operate, so it's not as simple as minimizing power by arbitrarily scaling down current. If you cannot supply enough current to a system, transistors may not have enough juice to produce those 1's and 0's quickly enough,

Well, not if you wanted any kind of performance. A very crude model of electronics is to think of voltage as the difference in the amount of charge between two points and the current as the rate at which units of charge are being shovelled through the system. Think of it as lumps of coal and the size of shovel being used. Ten times the voltage with one tenth the current would mean it would take a hundred times as long to do the shovelling. Smaller voltages work with smaller currents, if the two are comparab

Their work is definitely interesting, but I think some important questions remain unanswered, chief among them the tradeoff between correctness of operation and performance in the face of variability. There is a paper in ISLPED 2006 which shows that for a 65 nm circuit to operate at 0.3 V, the clock period must be scaled up by a factor of at least 230% to compensate for variability-related issues.

Additionally, there is a huge problem as far as tool support goes. This is not just mix-and-match style design. In order for this to have widespread use, it needs to work well in the EDA tool workflow. This means that libraries (and to some extent transistors) need to be characterized well at subthreshold operating voltages. This causes a catch-22 situation: in order to design something using this subthreshold voltage technology, you need good transistor models, but the fabs have no interest in providing these models unless there is large customer demand. It is pretty expensive to get good models.

The way this works is that most fabs actually create transistors/gates at the given feature size, characterize them (including parameters for variation/process variability), and give these to their customers, who design their chips based on these simulations. The reason these are so important is that for synchronous circuits, you have to base the design of the clock scheme on the worst/average case delay, and this you can get only by doing complete (usually Monte Carlo based) simulation of the chip using the transistor models the fab gives you. If you base the parameters solely on simulation-based tools, you ignore all sorts of effects in the real world, causing a massive drop in yield (i.e., working chips made by the fab).
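Combining the two numbers in this thread (roughly (1.0/0.3)² ≈ 11× lower dynamic power, and a clock period stretched by the variability result cited above) suggests energy per operation still improves even though throughput suffers. Reading "230%" as a 2.3× period multiplier is an assumption:

```python
# Energy per operation = power * clock period. Combining the ~11x power
# saving at 0.3 V with a ~2.3x longer period (reading the cited "230%"
# as a 2.3x multiplier -- an assumption) still nets a win per operation.

V_NOMINAL, V_SUB = 1.0, 0.3
PERIOD_FACTOR = 2.3

power_saving = (V_NOMINAL / V_SUB) ** 2            # ~11.1x lower power
energy_per_op_gain = power_saving / PERIOD_FACTOR  # ~4.8x less energy/op

print(f"power: {power_saving:.1f}x lower")
print(f"energy/op: {energy_per_op_gain:.1f}x lower")
```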

Well, some things are "expected" to take 3 or 7 years. So how about just tagging this 2013. If in 2012, there's an article predicting that Duke Nukem Forever will come out next year, then we can tag that 2013, too. Then at the end of 2013, we can check that tag, and see what was supposed to happen that year, and compare that to the reality.