Moore's observation was already invalidated almost a decade ago. The battle since then has been the cost/benefit of shifting to the next process node while contending with the laws of physics at those scales.

Intel's PR department is trying to keep "Moore's Law" alive by shifting the focus to power consumption reduction, but that has its own limits. We are closing in on the hard limits of what semiconductor-based computing can do. That's why the focus is shifting to a new computing model. Quantum computers are getting most of the buzz.

I don't think quantum computers will be useful outside of certain specialized niches. IMO the looming battle is teaching more developers how to write reliable multi-threaded code. Since clock speed isn't scaling any more, the only way to squeeze more performance out of our systems is more parallelism (more cores).
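The "more cores" point has a well-known ceiling, though: Amdahl's law. A quick back-of-envelope sketch (the 10% serial fraction is just an illustrative number, not from any real workload):

```python
# Back-of-envelope Amdahl's law: the speedup from N cores is limited
# by the serial fraction of the program, which is why writing code
# that parallelizes well matters more as core counts grow.

def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a modest 10% serial portion caps the speedup below 10x,
# no matter how many cores you throw at the problem.
for n in (2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.10, n), 2))
```

Which is another way of saying the hardware can only hand developers the cores; the reliable multi-threaded code still has to be written.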

just brew it! wrote:IMO the looming battle is teaching more developers how to write reliable multi-threaded code. Since clock speed isn't scaling any more, the only way to squeeze more performance out of our systems is more parallelism (more cores).

This.

It will be interesting to see how this evolves (how many cores can fit on a die), but it certainly is the obvious path to improvement, and it would definitely push how powerful components effectively are.

"A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP"

JBI wrote:Sounds like we are overdue for a new metric to replace the traditional "process node" designation. Something that factors in number of transistors per square mm, clock speed, and power usage, perhaps?

And then there are all the design rules...

Krogoth wrote:Moore's observation has already been invalidated almost a decade ago.

Not exactly.

Moore wrote:The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.

Technically, it was invalidated within a decade of the prediction being made, because Moore himself revised it to every two years. But that's not really the point of what he was saying, or how you (and most everyone else) have misconstrued what he said.

Krogoth wrote:Intel's PR department is trying to keep "Moore's Law" alive by shifting the focus to power consumption reduction, however this has its own limits. We are closing onto the hardware limits of what semiconductor-based computing can do. That's why the new focus is shifting gears onto a new computing model. Quantum computers are getting most of the buzz.

But the power consumption problem is primarily related to how fast the transistors are driven, not really how many transistors there actually are. Hence, what you are saying doesn't really have anything to do with Moore's original observation, but rather all the corollaries about performance.

But that's not what he said. Moore's observation holds true if chipmakers (for example) just keep making lower-clocked ICs with more and more cores for the same price.

We might very well be getting to the point where even that's not possible, but we certainly didn't hit it a decade ago.

JBI wrote:Since clock speed isn't scaling any more, the only way to squeeze more performance out of our systems is more parallelism (more cores).

Yup, and that's exactly why IC makers are putting out all those multi-core chips in the first place.

It's a neat article at the IEEE about how the naming conventions are all junk, but I really wonder whether any of it addresses manufacturing costs and defect rates, the original point of Moore's Law. I assume you can keep the same process node but reduce defects even further to get the same effect, no?

Power consumption is driven both by "clockspeed" and active transistor count. Adding more cores/ICs into the design will still run into the problem of power consumption and thermal output if they are all being utilized. The current power saving schemes on modern chips involve reducing voltage and clockspeed, and putting idle parts of the silicon into a "sleep state".
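The standard first-order CMOS dynamic-power model makes the voltage/frequency relationship concrete. The numbers below are purely illustrative, not from any real chip:

```python
# Rough CMOS dynamic-power model: P ~ alpha * C * V^2 * f
# (activity factor, switched capacitance, supply voltage, clock).
# All the numbers here are made up for illustration.

def dynamic_power(alpha, cap_farads, volts, freq_hz):
    return alpha * cap_farads * volts**2 * freq_hz

base = dynamic_power(alpha=0.2, cap_farads=1e-9, volts=1.2, freq_hz=3e9)

# Voltage enters squared, which is why DVFS drops V along with f:
# 20% less voltage alone gives roughly 36% less dynamic power.
lower_v = dynamic_power(alpha=0.2, cap_farads=1e-9, volts=0.96, freq_hz=3e9)
print(lower_v / base)  # ~0.64
```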

Moore's observation has been invalidated by economies of scale on the production end. It is becoming more and more expensive to move to the next process node while the returns are diminishing. That's why more and more semiconductor companies are going "fabless" or sticking with current and older manufacturing technology.

Krogoth wrote:Power consumption is driven both by "clockspeed" and active transistor count.

I said the power consumption "problem", in which I was alluding to the same phenomenon that JBI's article did: in the early 2000s gate leakage suddenly became a huge concern in ways it never had been before. You might even remember some of this; I think you're probably old enough to remember Prescott and Intel's 90nm process. It was around that time that Dennard scaling broke, not Moore's Law.

That's what changed: power consumption was no longer driven in the same way by those factors. That's what I was saying. The fact that more transistors consume more power is obvious, entirely non-controversial and clearly not in contention.

krogoth wrote:Adding more cores/ICs into the design will still run into the problem of power consumption and thermal output if they are all being utilized.

This, again, is obvious. You've added nothing to the discussion by adding it. Absent violating several bedrock laws of physics, more switching transistors = more power used. That's just as true in 1965 as it is today.

krogoth wrote:The current power saving schemes on modern chips involve reducing voltage and clockspeed, and putting idle parts of the silicon into a "sleep state".

As opposed to the non-current power-saving schemes?

Those techniques aren't ultra-modern; it's just that we need to apply them today in ways we didn't have to 15 years ago. Why? Not because Moore's Law broke, but because Dennard scaling did.

i.e., "the power consumption problem is primarily related to how fast the transistors are driven, not really how many transistors there actually are."
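The Dennard-era bookkeeping can be sketched numerically. Under classic scaling, a linear shrink by k took C to C/k, V to V/k, and f to f*k, so power per transistor fell by 1/k^2, exactly canceling the k^2 density gain. Once voltage stopped scaling, that cancellation broke (the shrink factor below is illustrative):

```python
# Why Dennard scaling kept power density flat, and what broke:
# per-transistor dynamic power scales as (C/k) * (V/k)^2 * (f*k).

k = 1.4  # one "full node" linear shrink (illustrative)

# per-transistor dynamic-power scaling factor for one shrink step
dennard = (1 / k) * (1 / k) ** 2 * k   # C/k * (V/k)^2 * f*k = 1/k^2
post_dennard = (1 / k) * 1.0**2 * k    # voltage stuck -> factor of 1

density_gain = k**2  # transistors per unit area after the shrink

print(dennard * density_gain)       # ~1.0: power density stays constant
print(post_dennard * density_gain)  # ~k^2: power density grows each node
```

That k^2 growth per node is exactly the "power consumption problem" that forced clock speeds to stall and pushed designers toward more cores instead.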

Krogoth wrote:Moore's observation has been invalidated by economies of scale on the production end.

Demonstrate how.

Krogoth wrote:It is becoming more and more expensive to move to the next process node while the returns are diminishing.

Perhaps, and I don't dispute that Moore's observation may stop holding true very soon, but you said it stopped holding true 10 years ago.

I ask you again, please demonstrate how.

Because that's not what the IEEE article states, or is even really about, and because you've shown that you don't properly understand Moore's law in the first place.

Moore's observation has always been about economies of scale on the manufacturing end. He simply noticed an early trend: roughly every 24 months, the transistor budget and density doubled across the industry. This was made possible by shrinking the process node on the manufacturing end. Increasing clock speed and reducing power consumption were synergistic side effects.
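That doubling compounds faster than intuition suggests. A quick sketch, starting from the roughly 2,300 transistors of Intel's 4004 in 1971:

```python
# Compounding the "doubling every ~24 months" trend: four decades of
# two-year doublings turns thousands of transistors into billions.

def transistors(start, years, doubling_period_years=2):
    return start * 2 ** (years / doubling_period_years)

# ~2,300 transistors (Intel 4004, 1971) after 40 years of doubling
print(f"{transistors(2300, 40):.2e}")  # ~2.4e9 -- billions, as observed
```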

This allowed IC designers to make more and more complex designs without them becoming cost-prohibitive. It also allowed existing designs to become smaller, which meant more chips per wafer. It allowed semiconductor manufacturers to capture more markets to help offset equipment and R&D costs. This was the source of the explosion known as the "information age". The amount of growth since the first successful IC has been phenomenal.

Computers were no longer exclusively the realm of pure research projects and number crunching (the old big irons). ICs, and by extension computers, became ubiquitous over the course of a few decades. Larger IC designs, made economically feasible by smaller process nodes, enabled new applications, and the industry was able to capture new markets (automotive, utilities, aerospace, multimedia, etc.).

My personal 3570K yields almost as much overall performance as the higher-end big-iron systems of the late 1980s and early 1990s, at a fraction of the power consumption and manufacturing cost. Portable computing devices such as current smartphones rival the power of performance desktops from the late 1990s and early 2000s.

All good things must come to an end, and Moore's observation is no exception. The end came roughly around the mid-2000s, when the entire semiconductor industry started hitting walls on the manufacturing and yield end. The cost of upgrading manufacturing infrastructure to the next process node was outpacing the returns. The chips themselves were becoming power hungry, and it was harder to obtain "golden samples". This is roughly when the big players started adopting more creative binning strategies, which allowed them to sell dies that were less than ideal.

In order for the big players (Intel, Samsung, Texas Instruments, GF, TSMC) to stay in the game, they had to shift their focus to more power-efficient designs and expand into unsaturated markets (ultraportables, smartphones and embedded systems). The smaller players who couldn't afford the growing costs gave up the race or went fabless and outsourced manufacturing to the big players. You can see this in historical charts of the percentage of IC-related products shipped by each vendor: the big players take up a larger and larger share, while the smaller players diminish or drop off the charts entirely.

The industry is coming up against known hard physical limits on how small a working transistor can be made and how large/dense an IC can be. At the current rate, it is expected to hit them as early as the end of this decade or as late as the next. There are countless research papers on this subject, and engineers and researchers are working around the clock to delay this as long as possible.

A new measurement is definitely needed, because right now every foundry just names its next process as the next node or half-node with no real regard to how it has actually scaled. That is the main crux of the article, and it's spot-on. Perhaps something semi-standardized like transistors/area would be better, or (and this seems unlikely, because the fab companies would have to come to an agreement) a standardized "test chip" theoretically made on a given process, with the resulting transistors/area making comparisons between foundries possible.

The reason real chips won't work for this, and we can't just compare a TSMC GPU to an Intel CPU, is the variation in what makes up a given chip: certain parts are denser than others, so having more or less of a given part affects the overall density. I say chips, plural, because there should be different ones for simpler processes optimized for, say, flash or DRAM, versus bulk silicon for GPUs, versus the complicated processes used for CPUs.
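The test-chip idea boils down to simple arithmetic: fix one reference design, measure the area it occupies on each process, and report transistors per mm². Every foundry name and area figure below is fictional, just to show the mechanics:

```python
# Hypothetical sketch of the "standardized test chip" metric: score
# each process by the transistor count of one agreed-upon design
# divided by the area it occupies. All figures below are made up.

test_chip_transistors = 100e6  # agreed reference design, 100M transistors

# area (mm^2) the reference design occupies on each fictional process
areas_mm2 = {
    "FoundryA 7nm-class": 1.10,
    "FoundryB 7nm-class": 1.45,
    "FoundryC 10nm-class": 1.05,
}

# density in millions of transistors per mm^2 -- comparable across
# foundries regardless of what each one calls the node
for process, area in sorted(areas_mm2.items(), key=lambda kv: kv[1]):
    print(f"{process}: {test_chip_transistors / area / 1e6:.1f} MTr/mm^2")
```

Note how the fictional "10nm-class" process scores highest here: the whole point of the metric is that the marketing name tells you nothing the density number doesn't.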

I don't see how naming matters to end-users. Sure, marketing would have one less number to flaunt, but what matters in the end is performance, noise, power efficiency, price, etc. Actually, I don't see how it matters to engineers either. There are a thousand things to worry about when designing a chip; there's no practical reason for the name of the process to be one of them. It's just like the MHz/GHz wars of yore.

We need a hard-coded AI in the processor; that way the new metric can be how smart the AI is (AIIQ). The AI would then be responsible for managing and delegating tasks to processors, and for optimizing code on the fly in ways developers are too dumb to do themselves. A bug in the AI might be when it becomes smart-assed and starts insulting you. I can imagine an AI much like Bob in the Harry Dresden series, which would keep things entertaining, but perhaps not work-safe.

Ok, just kidding, but I think processors are going to require some higher level of sophistication to keep jumping up levels of performance and USEFULNESS (usefulness being the key word). An AI may be "the thing" to help that happen.