Intel predicts ubiquitous, almost-zero-energy computing by 2020


Intel often uses the Intel Developer Forum (IDF) as a platform to discuss its long-term vision for computing as well as more practical business initiatives. This year, the company discussed the shrinking energy cost of computation and the point at which it believes the energy required for “meaningful compute” will approach zero, making such computing ubiquitous by the year 2020. The company didn’t precisely define “meaningful compute,” but I think in this case we can assign a solid working definition. Adding two integers together is computing, but it isn’t particularly meaningful. Accurately measuring geospatial location via GPS, making a phone call, or playing a game is meaningful.

The idea that we could push the energy cost of computing down to nearly immeasurable levels is exciting. It’s the type of innovation that’s needed to drive products like Google Glass or VR headsets like the Oculus Rift. Unfortunately, Intel’s slide neatly sidesteps the greatest problem facing such innovations: the cost of computing already accounts for less than half the total energy expenditure of a smartphone or other handheld device. Some of the recent trends in smartphones, like the push for high-quality Retina displays and LTE connectivity, have significantly increased device power consumption. Smaller CPUs and more power-efficient components have been offset by higher storage capacities and additional RAM.

Intel has previously acknowledged these challenges in last year’s IDF presentation and in a separate whitepaper on the growth of More-than-Moore scaling. The relationship between radio power consumption, available bandwidth, and signal strength is a classic “pick any two” trade-off. Future all-digital radios and metal-oxide-based displays may reduce the power consumption of these components, but they aren’t going to shrink it to zero.

Intel justifies slides like the first with slides like this:

It looks great, but it ignores the fact that transistors don’t scale like they used to. Remember, near-threshold voltage work and the research into replacing silicon are meant to move the bar forward bit by bit, not to re-enable the classic Dennard scaling of the 1980s and 1990s. That era is gone, and nothing short of a miracle material that fulfills all the roles of silicon will ever bring it back.

Intel’s decision to present on the zero cost future of computing is disappointing because it flies in the face of everything the company has said in the past year and ignores the previously-acknowledged difficulty of scaling all the various components that go into a modern smartphone. The idea that 2020 will bring magical improvements or suddenly sweep neural interfaces to the forefront of technology is, in a word, folly.

In the late 90s and early 2000s, IT professionals often quipped that “What Intel has given, Microsoft will take away.” This pithy statement referred to the fact that advances in compute performance were soaked up by new software editions virtually as fast as they appeared. That’s changed dramatically in recent years as battery life, not CPU cycles, has become the scarce resource in question.

Can Intel build small compute engines with a near-zero cost of calculation by 2020? Maybe it can. But the real question is whether Intel, or other manufacturers, can manufacture the touch screens, displays, radios, speakers, cameras, and audio processors that would go into such devices to drive the ubiquitous computing revolution. Lithium-air batteries may eventually be capable of replacing today’s lithium-ion designs, but commercial Li-air is thought to be at least 10 years away.

This doesn’t mean technology won’t advance, but it suggests a more deliberate, incremental pace as opposed to an upcoming revolution. Smartphones of 2018-2020 may be superior to top-end devices of the present day in much the same way that modern computers are more powerful than desktops from the 2006 era. Modern rigs have significant advantages — but 2006 hardware is still quite serviceable in a variety of environments. The early years of the smartphone revolution were marked by enormous leaps forward from year to year, but we may already be reaching the end of that quick advance phase.


Comments

warcaster

Yes, it’s called “ARM chips”.

http://fennecweb.net/ Alexander ypema

ARM is neat but unfortunately not very powerful. And I don’t think Intel would present a non-x86 instruction set any time soon ;P
..Which really isn’t all that bad. x86 is very powerful compared to ARM; in fact, x86 is so much more powerful that the performance-per-watt is actually higher with x86 than ARM, especially when you look at Intel Atom or AMD Geode chips.

I think what Intel is trying to say is that they found some stuff in their R&D that enables them to do more flops per watt, but they haven’t really found a practical way to do it, so instead they’re just generalizing and letting people dream about it until they do have a practical implementation.

warcaster

I’ve seen no proof of that anywhere. But I’ve seen plenty of proof showing ARM chips and ARM chip stacks outperforming Intel chips or Intel chip stacks for servers in power consumption and performance per watt.

The only situation where I’ve seen Intel outperform ARM is in the SunSpider test, but that was probably mostly due to software optimization, because I’ve seen Motorola handsets scoring around 2000 ms back then, and now scoring 1400, which is just a few ms above Intel Atom’s score. Plus that Medfield chip had mediocre battery life, and its overall performance was at 2011 levels – Tegra 2/OMAP 4430. The GPU performance was half as good as newer ARM chips.

And that chip is rated at 1.3 GHz. So when Intel gives its TDP, it’s for 1.3 GHz, to make it seem lower. But either Intel is being misleading about its true TDP when it runs at 1.6 GHz, or Turbo Boost is not used much in real situations other than benchmarks to beat the scores, in which case you don’t get the kind of performance you get in benchmarks anyway.

http://fennecweb.net/ Alexander ypema

1 GHz on x86 is a lot faster than 1 GHz on ARM; add to that the specific instruction sets for media purposes such as SSE, and tweaks in the processor layout that make it harder to predict.

Instead of comparing the (old) Atom to the fastest ARMs, compare an i7-3770K ( http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3770K+@+3.50GHz ) at 80 watts to a Tegra 3, and see what the flops-per-watt comes out to. And with the flops performance, don’t take the Tegra’s GPU performance and compare it to the CPU’s flops performance; even though x86 still outperforms that, the CPU is much slower at floats than GPUs are by design. If you do, you might as well compare them to the GPUs’ flops performance on the desktop (e.g. the Tegra 3’s 12 gflops compared to as much as over 6 teraflops, over 6000 gflops, for some high-end desktop GPUs).
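For anyone who wants to plug in numbers, here is a rough back-of-the-envelope sketch of that flops-per-watt comparison in Python. The 12 gflops Tegra 3 figure and the 80 W i7 figure come from this thread; the i7’s peak-GFLOPS estimate and the Tegra 3’s power draw are assumptions added purely for illustration, not measured values.

# Toy FLOPS-per-watt comparison. All figures are approximate; the i7's peak
# GFLOPS (~4 cores x 3.5 GHz x 8 FLOPs/cycle) and the Tegra 3's ~2 W power
# draw are assumptions for illustration only.
chips = {
    "Core i7-3770K": (112.0, 80.0),  # (assumed peak GFLOPS, TDP in watts)
    "Tegra 3 (CPU)": (12.0, 2.0),    # (GFLOPS cited above, assumed watts)
}

for name, (gflops, watts) in chips.items():
    print(f"{name}: {gflops / watts:.1f} GFLOPS per watt")

What the ratio says depends entirely on which peak and power figures you plug in, which is part of why these comparisons get so contentious.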

The fact stands that the x86 instruction set is much better at high-load processing per cycle than any RISC instruction set such as ARM.

Amar

I’m not an expert, but I can give you an idea of CPU performance.

The GHz of a processor dictates the speed at which the processor executes a set of instructions. And different processors, according to their architectures, process different amounts of data at a time.

For example: if a processor processes 8 bits of data at the rate of 1 clock cycle per second, its raw power is 8*1=8. This gives a rough idea of its performance, though other factors also matter. Now another processor processes 4 bits of data at 2 cycles per second, so its raw power is 4*2=8. These CPUs should hence be almost equal in performance.

Again, this is not 100% accurate as other factors like bandwidth and instruction sets matter.

Also: the HD 7970 (a high-end GPU) has 3.8 teraflops of processing power.
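A minimal sketch of that “bits times cycles” arithmetic in Python, just to make the toy model above concrete. This only restates the illustration from the comment; it is not a real performance metric.

# Toy "raw power" model from the comment above: bits handled per cycle
# multiplied by clock rate. Purely illustrative; real CPU performance
# depends on far more than this.
def raw_power(bits_per_cycle, cycles_per_second):
    return bits_per_cycle * cycles_per_second

cpu_a = raw_power(8, 1)  # 8 bits at 1 cycle/s  -> 8
cpu_b = raw_power(4, 2)  # 4 bits at 2 cycles/s -> 8
print(cpu_a, cpu_b)      # both come out equal in this simplified model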

https://twitter.com/xarinatan Alexander ypema

It’s a lot more complicated than that, even at the basics; modern CPUs have so many tweaks and hacks and things that they are almost impossible to compare. For example, many CPUs these days are ‘quad-pumped,’ meaning that where older processors would have one execution per tick (Hz), these processors do four, one every quarter tick. Also, more bits per cycle doesn’t necessarily mean more performance; if your application can only feed the CPU 32 bits per cycle while the CPU has 64 bits of bandwidth, half will go unused. Furthermore, Intel has been doing Hyper-Threading to regain lost clock cycles and unused parts of the processor, while AMD has started to make their pipelines for a single thread much longer in their latest processors; the two approaches are pretty much opposites, yet they both have advantages.
…And then I haven’t even started on the instruction sets.

In the end, finding the fastest processor is much less of an issue than finding a processor that is good at your specific task load. For example, the average GPU will be really, really bad at anything but crunching numbers, whereas the average x86 CPU has built-in memory controllers, tons of L1/L2/L3 cache, and GPIO interfaces that allow it to run an operating system and interface with the rest of the motherboard, a thing a GPU generally can’t even dream of. Or if you’re comparing within the same type, the average SoC in a smartphone/tablet has much more than just an ARM processing core; they usually also have a small GPU, some streaming codecs, and other hardware such as GPS/3G/etc., plus they only take minimal bits of power, which is what they’re ultimately designed for: being super power efficient. You really don’t want to do high-performance tasks on one, though; in a desktop you’d end up hating the hell out of it.

(By the way, the HD 7990, the dual-GPU variety of the HD 7970, has well over 6 TFLOPS.)

Amar

Totally Agree! Thanks for the info.

LittleUK

Got me thinking. Thanks

http://www.quora.com/Ho-Sheng-Hsiao Ho-Sheng Hsiao

This article is funny because the author is projecting what can be done with present technology against what can be done with zero-cost compute. Zero-cost compute means we can embed computing power in things that you normally don’t think about. For example, Thunderbolt connectors have active chips that compensate for the noise in their cables. That’s why it can run so fast — and why it is so expensive. That doesn’t require lower energy costs, but it does point to new capabilities from embedding “meaningful computing.”

So some examples off the top of my head:

* Previously “passive”, dumb sensors that have computational units. In day-to-day life, we might start seeing better sensors for the car, to go with self-driving car technologies.
* Super-computer-in-a-pocket, which translates to better speech recognition and synthesizing.
* Mesh routing networks. Instead of using a few, power-hungry nodes with good radio strength, use many, low-energy nodes with poor radio strength.
* Active paper. Finally put to rest the dead-tree-book vs. ebook nonsense, and pave the way for digitizing the most conservative paper users (the court system).
* Far better robot technologies, with intelligent motors.

VirtualMark

Another unnecessarily negative article from Joel. On one hand we have Intel, which is possibly the best chip maker we’ve ever had. On the other hand we have Joel, who has written several negative articles lately saying how hard progress is and how some things won’t ever happen, etc. I choose to believe Intel, as they’ve consistently delivered in recent years.

Remember – 3nm and 4nm transistors were made a few years back; one was made from just 7 atoms!

This site is called ExtremeTech; I’d like to see a bit more positivity on it! One of my favourite scientists is Michio Kaku, who basically takes current research and explains what will be possible in the future. I love watching his programs, as he connects the dots and paints amazing pictures of how life will be hundreds of years from now.

http://www.facebook.com/alex.maurin.908 Alex Maurin

I can’t wait for diamond semiconductors to become cheaper and more ubiquitous.

http://www.facebook.com/profile.php?id=1223563048 Angel Ham

I predict that future mobile phones are going to be nothing but a rectangular battery pack with a touch screen glued on one side.

ZungDoo

Looks like 2020 is gonna be cool!


Erik Erikson

For display power consumption reduction and resolution advancement: retinal projection.

http://singularity-2045.org/ Singularity Utopia

Actually, instead of zero energy, the statement was regarding zero size: “As we approach the year 2020 the size of meaningful computational power, right, the size of the chip, begins to approach zero.” http://www.youtube.com/watch?v=3SA-IrhEQ8s#t=3m54s

The “power” refers to “processing power” as in speed (FLOPS), not energy consumption, so you will have zero-sized processors capable of meaningful computational power, although maybe energy consumption will also be close to zero.

I think the main point of this Intel presentation was to discuss the fact that decreasing SIZE of computers will allow them to be used in new applications that just weren’t possible before. They can now be incorporated in clothing for example and new medical applications are possible.

The power savings associated with shrinking architectures also have enormous positive ramifications in the worlds of data centers and supercomputers, where power consumption can run into the megawatt range and cooling challenges are substantial.

Even within the niche of the mobile computing industry, there is a lot of cause for optimism over the next decade. Process sizes still have quite a ways to go before hitting fundamental limits; memristors and other technologies may replace RAM and yield power savings there; displays show lots of pathways for further improvement, including nanotube and quantum dot emitters; and the ongoing digitization of radio components offers lots of benefits, including significant power savings.

Jeffrey Byers

Intel’s largest contribution to eliminating power consumption is in the area of TURNING OFF unneeded CPU cores. In idle mode especially there is very little happening, so we should not only have all SECONDARY cores completely off, but the main core should also be quickly turning off and on.

Dedicated video and audio decoding chips help as well and they should also feature the rapid on/off cycling.

However, as said, the DISPLAY portion will then continue to be the proportionately largest user of power, and I think reducing that power consumption will be a very slow process.
