Posted by timothy on Thursday October 31, 2013 @06:05PM
from the we-call-this-one-petite dept.

szotz writes "Keeping up the pace of Moore's Law is hard, but you wouldn't know it from the way chipmakers name their technology. The semiconductor industry's names for chip generations (Intel's 22nm, TSMC's 28nm, etc.) have very little to do with actual physical sizes, says IEEE Spectrum. And the disconnect is only getting bigger. For the first time, the "pay us to make your chip" foundries are offering a new process (with a smaller-sounding name) that will produce chips that are no denser than their forebears. The move is not a popular one."

It's one thousand (mille) paces of a Roman soldier, as modified through history. That seems to be as reasonable a basis for a unit of length as the meter, which is 1/10,000,000th of the distance from the pole to the equator, as modified through history. Mileposts were markers placed by Roman roadbuilders as reference points.

Why do you ask - do you live in some backwards nation without a good educational system?

If you've ever actually had to do precision pacing and measured it out, you'd know why a pace is 2 steps. It equalizes the difference between left and right. 1% accuracy in pace length over a moderately long distance (50-500 m) isn't unusual.
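For anyone who hasn't done it, the arithmetic is trivial: calibrate your double-step pace over a known course, then multiply. A few lines of Python make the point (course length and pace counts here are made-up examples, not field data):

    # Calibrate a double-step pace over a known course, then use the
    # count to estimate an unknown distance. Numbers are illustrative.
    CALIBRATION_DISTANCE_M = 100.0   # measured calibration course
    CALIBRATION_PACES = 62           # double steps counted over it
    pace_length_m = CALIBRATION_DISTANCE_M / CALIBRATION_PACES

    paces_counted = 310              # count over the unknown distance
    estimated_m = paces_counted * pace_length_m
    print(f"pace: {pace_length_m:.3f} m, estimated distance: {estimated_m:.1f} m")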

Oh, the Romans used all kinds of impractical systems, and I don't think anybody can really disagree with that... Just look at their numerals.

After them, our tech advanced a bit: we started using those things called Arabic numerals, positional notation, and decimal-base numbers. And in the context of a civilization that uses that tech, yes, the meter is a more reasonable unit of measurement than the mile.

No, there's no inherent advantage to the meter as opposed to the mile. There is an inherent advantage in using kilometers, meters, and centimeters in daily use as opposed to miles, feet, and inches. There is an inherent advantage in having scalable units of measure rather than either making units up (the Angstrom), or using fractions of the smallest available unit (the inch). There is a scientific advantage in having a unit of mass (the gram) rather than a unit of force (the pound). The virtues of the

There was once an inherent advantage of the meter (a defined fraction of the Earth's curve) over the mile (a badly defined multiple of step sizes). That advantage went away once we defined the mile as a multiple of the meter, but it lasted long enough to ensure that anybody who cared about it used the metric system.

Really, the people who actually live in England look at the system of units in use in the USA and wonder why Americans are still using a system of units that was deprecated while the USA was still a colony. Further, they wonder why they are called English units, because they are not.

the people who actually live in England look at the system of units in use in the USA and wonder why Americans are still using a system of units that was deprecated while the USA was still a colony.

That's not hard to figure out - they lost not only to the colonies, but to the French. But I'm being facetious. They're "imperial units," not "English units." They're based on the British Weights and Measures Act of 1824, which postdates your claim that they deprecated such measures in colonial times. Britain didn't begin metrication until 1965.

I don't know about that. It's been a damn useful prediction, in that it gave engineers a pretty ambitious roadmap to follow. They've been quite successful at meeting the challenge up until quite recently.

A wise proverb that is apropos: If you don't know where you're going, you'll never get there.

I do. It's been a fairly accurate prediction, yet a prediction nonetheless. The "law" part is just an anecdotal, off-the-cuff addendum. If you want to support your theory that it's in fact an actual law, here's room for your proof right here:

If you want to support your theory that it's in fact an actual law, here's room for your proof right here:

I didn't claim it was a law. No one even slightly knowledgeable about semiconductors thinks it's a law. Did you read my post? The original post asked, "Didn't we already agree that predictions are only useful to talking heads, pundits and hucksters?"

My response was that this was in fact a very useful prediction for engineers and scientists actually doing the work.

So when I said "I don't know about that", it was pretty clear I meant that I didn't agree that predictions are only useful to talking heads, pundits and hucksters.

Moore's law is an observation, assumed to be true until observations contradict it, which is exactly what a scientific law is.
Also, correct me if I'm wrong, but wasn't Moore's law about the number of transistors in an integrated circuit, rather than the (closely related) size of features?

Most scientific laws are orders of magnitude more precise than Moore's "law", and are quite stable over time. Moore himself varied the period for doubling from 12 to 24 months over the course of a decade. That's better than a meteorologist but not as good as an economist, and economic "laws" are mostly poor approximations even on good days.
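Since the thread keeps arguing over what the "law" even claims: it's just compound doubling, and it's extremely sensitive to the doubling period, which is the point above. A quick Python sketch (the 4004 transistor count is real; the 20-year horizon is just for illustration):

    # Moore's law as compound doubling: N(t) = N0 * 2**(t / T).
    # Moore used T = 12 months in 1965 and 24 months in 1975.
    def transistors(n0, years, doubling_years):
        return n0 * 2 ** (years / doubling_years)

    n0 = 2300  # Intel 4004 (1971), roughly
    for T in (1.0, 1.5, 2.0):  # doubling period in years
        print(f"T = {T} y -> after 20 years: {transistors(n0, 20, T):,.0f}")

Twenty years out, the 12-month and 24-month versions differ by a factor of about a thousand, which is why nobody should treat it as a precision instrument.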

No, it's not even a prediction... it's an empirical observation on historical data. It tends to be self-fulfilling, but there's no reason that it must continue to any arbitrary horizon. Historical trends can be useful tools for predicting the future, except when they're not [xkcd.com].

Anyone who actually works in the semiconductor industry could've told you this. (Ever notice how the GHz stopped growing a while ago? The move to multi-core happened around the same time and even that's stopped growing.) Yes, it's still possible to shrink transistors further but the speed and power reduction gains are diminishing and the costs of further shrinking are moving from merely eye-popping to astronomical.

Intel can afford to stay ahead of everyone else a bit (this is one of the primary reasons A

Clock maxed out. Multicore will take you only so far before you run out of space and hit problems with coherency.

I expect the future is going to involve a lot more specialised silicon. Scientific number-crunching will move onto GPUs or things like the Phi designed just for that type of workload. Mobile processors will start featuring even more single-task accelerators like those already used for video decoding, while the general-purpose processor becomes the thing that ties all the other parts together.

Heat is a big CPU problem. Have they tried rotating the active core on the fly? When core 0 gets too hot, they switch it off (under a single-core workload, of course) and move activity to core 1. When that gets too hot, move on to core 2... you get the picture.
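For what it's worth, the scheme is easy to sketch; real thermal governors do something similar (plus frequency throttling). The thresholds, core count, and cooling model below are all invented for illustration:

    # Toy model of rotating a single-threaded workload off a hot core.
    HOT_C, HEAT_RATE, COOL_RATE, AMBIENT = 90.0, 5.0, 2.0, 40.0
    temps = [AMBIENT] * 4   # per-core temperature, deg C
    active = 0
    for tick in range(30):
        temps[active] += HEAT_RATE                    # active core heats up
        for i in range(len(temps)):
            if i != active:
                temps[i] = max(AMBIENT, temps[i] - COOL_RATE)  # idle cores cool
        if temps[active] >= HOT_C:                    # too hot: migrate
            active = min(range(len(temps)), key=temps.__getitem__)
            print(f"tick {tick}: migrated to core {active}")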

Reconfiguring involves a lot of extra hardware and delay involved in the switching between function sets. I doubt that it will give significant improvements for any but a small number of unusual applications.

I work quite closely with various parts of the semiconductor industry, but I've not heard anyone say that Moore's law is dead. Transistors are still shrinking; the problem is not the number of transistors you can stick on an IC, but the number you can have powered at once. Each new generation of process technology reduces the size (or, at least, increases yields or reduces costs), but it only has a small impact on the power consumption per transistor. That i

These line widths of 22 nm or 28 nm etc. are roughly 20 times narrower than the wavelength of visible light. Making the lines thinner is difficult, and it is approaching quantum-mechanical limits. Unless people start immersing the entire etching machines in water or some such medium, we can't make the lines thinner.

Even if we did, there are not enough electrons in these lines to make the "law of large numbers" work. So this time we are bumping against a real barrier.

All sub-65nm and most 65nm processes are already exposed through water (immersion lithography), for exactly the reason you state. The next step is extreme UV or even e-beam lithography, but it's expensive and very, very difficult.

You're quite right that this is an economic/mass-market issue more than a pure technical issue.
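For the curious, the reason immersion helps falls out of the Rayleigh criterion, half-pitch ~ k1*lambda/NA: water raises the numerical aperture. Back-of-the-envelope with typical published values (193nm ArF light, NA of about 0.93 dry vs 1.35 immersed, k1 floor around 0.25 for single exposure):

    # Rayleigh resolution criterion for optical lithography.
    def half_pitch_nm(k1, wavelength_nm, na):
        return k1 * wavelength_nm / na

    dry = half_pitch_nm(0.25, 193, 0.93)   # dry ArF scanner
    wet = half_pitch_nm(0.25, 193, 1.35)   # water-immersion ArF
    print(f"dry: ~{dry:.0f} nm/exposure, immersion: ~{wet:.0f} nm/exposure")

That works out to roughly 52nm dry versus 36nm immersed per exposure; anything finer needs multiple patterning or a shorter wavelength (EUV).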

As somebody who works in lithography, I can let you know that they have not been using visible light for a long time. All fine-resolution lithography is designed around as close to a monochromatic light source as possible. A significant spread in the light spectrum just wasn't workable much below the 1.0 um feature size, because the diffraction spread is very dependent on the wavelength, and photons of different energies react with the resist differently (or not at all).

Already been done between boards, for sure. The limit for copper connections on a PCB is roughly 20GB/s - although there are arguments for figures above or below that, that is what I have been able to get up to with some heroic measures. http://en.wikipedia.org/wiki/Titan_(supercomputer) [wikipedia.org]

Optical connections across boards have been done some, but they're generally not seriously explored due to the overhead of getting into and out of the optical medium; people tend to just use copper and put in more parallel paths.

Moore's law has been superseded by Koomey's law [wikipedia.org]: the number of computations per joule of energy dissipated has been doubling approximately every 1.6 years. Koomey's law seems to hold well.
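To put numbers on that doubling period (the 1.6 years is Koomey's published figure; the horizons are arbitrary):

    # Koomey's law: computations per joule double every ~1.6 years.
    DOUBLING_YEARS = 1.6
    def efficiency_multiplier(years):
        return 2 ** (years / DOUBLING_YEARS)

    for years in (5, 10, 20):
        print(f"{years:2d} years -> {efficiency_multiplier(years):,.0f}x per joule")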

If TSMC isn't keeping up with Moore's Law, that's not a problem with Moore's Law. It's a problem with TSMC.


Waaaay towards the end of TFA, it mentions that it's GlobalFoundries who inserted finFETs into the same BEOL (wiring) as their 22nm node and called 22nm+finFET "14nm." It's buried at the end, but it's what supports the whole argument that nodes are "just marketing."

To my knowledge, the node's name was based on the DRAM half pitch. But yeah, it's not that any longer. And in defense of GlobalFoundries, finFET does literally add an extra dimension to the calculation of FET geometries.

The problem with the transition to finFETs is that now we have an apples-to-oranges comparison between finFET (or 3D gate or whatever you want to call it) processes and planar FET processes. GlobalFoundries feels they need to stretch the truth to get the point across that the process really is objectively better, even if the minimum feature size hasn't shrunk.

It reminds me of 10 years ago, when the microprocessor companies finally stopped the GHz war. For several years, clock speed was a poor proxy for microprocessor performance, and Mac fans used to scream loudly (and rightly) about how the IBM chips beat Intel on real-world benchmarks while Intel touted their higher clock speeds.

Hopefully, this "node as minimum gate width" naming will go away and we'll move to more meaningful process figures of merit such as power density, power-delay product, gm/I, transit frequency, Ioff, and the like.
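To make one of those figures of merit concrete: power-delay product is just the energy burned per switching event, roughly E = C*Vdd^2 for dynamic CMOS switching. A quick comparison in Python; both "processes" below are hypothetical, with invented capacitance and supply-voltage numbers:

    # Power-delay product as switching energy: E = C * Vdd**2.
    def switching_energy_j(c_farads, vdd_volts):
        return c_farads * vdd_volts ** 2

    planar = switching_energy_j(1.0e-15, 1.0)   # hypothetical: 1.0 fF, 1.0 V
    finfet = switching_energy_j(0.7e-15, 0.8)   # hypothetical: 0.7 fF, 0.8 V
    print(f"planar: {planar:.2e} J, finFET-ish: {finfet:.2e} J per switch")

Note the quadratic dependence on supply voltage: even a modest Vdd reduction buys more than a comparable capacitance shrink does.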

Mac fans used to scream loudly about anything that made Macs look good...and still do. It's called tribalism and it isn't about being "right", it's about being on the winning team.

Apple only used an IBM "chip" once. It's clear you don't know that, so it's no surprise you don't know how "rightly" Mac fans were screaming either. G5s were, on balance, not faster than their Intel contemporaries. Better at some things and worse at others. One thing was clear with the G5, and it was that Apple was switching to Intel afterward.

If you asked any "Mac fan" back in the day, you'd have it explained to you just how superior every generation of PowerPC Mac was to any PC ever. It's surprising, then, just how much better Macs got once they switched to a real processor. Macs today ARE PCs in every way, yet those Mac fans still have that feeling of smug superiority. They are inherently right always, Steve told them so; they just aren't well informed.

Apple used PPC chips (IBM-derived) for quite a few years, and at least initially they were at least as good as Intel's counterparts, often better. Before that, they used Motorola M680x0 chips, same comparison.

Originally, Intel was hobbled by their architecture. It was quirky and had a lot of historical baggage, dating back to the 8008, one of the first commercially available microprocessors. The cruft took chip real estate and made it complicated to decode instructions. That was the era of the RISC chip, wh

I got my first Mac just over a year ago. Because of comments like yours, I was expecting quality hardware.

It is the absolute worst computer I have ever owned. And all of my other computers were built from low-cost parts on the Internet. I've had the thousand-dollar monitor die twice (luckily under warranty) and now the video chip is flaking out whenever it displays videos. My other computers would develop issues over time, but I've never before had such serious problems in so short a time after purchase. I

Whoever modded this post up should be flogged. I don't care if the subject is Apple or anyone else: whenever an Anonymous Coward gets on a high horse and complains about somebody's product, you do not, do not mod it up. Even if they are speaking truthfully about their own experience, if the author can't be bothered to put their name behind it (and, really, since it can be a pseudonymous throwaway account, how hard is that?), it is equivalent to bullshit, FUD, and reverse-astroturfing.

Shut the fuck up you pretentious idiot. Is your name really `necro81'? Is that on your birth certificate? Is that on your drivers license? Stop making useless, hypocritical, offtopic posts and kill yourself already you retarded waste of flesh.

Alright, let me rephrase in that context: an Anonymous Coward post that gripes about product quality, even if the AC is speaking truthfully, does not even rise to the level of anecdotal evidence. In that light, I view it as having zero or negative merit, and should therefore never be modded up.

if the author can't be bothered to put their name behind it (and, really, since it can be a pseudonymous throwaway account, how hard is that?)

You acknowledge that AC posts can be just as anonymous as named accounts. It seems your objection is primarily to the content of this particular post.

There is a distinction between a pseudonymous account that is used once and never again, and an Anonymous Coward; they are not equivalent. I would point to this classic Penny Arcade comic [penny-arcade.com]. My objection is that, by being an Anonymous Coward rather than a long-established pseudonymous account, there is no way that anyone can judge what is being said: is it FUD, a bot, personal anecdote, or someone with some authority on the matter? My objection isn't with the content of the post, but rather that it got modded up.

While I'm not a fan of Steve or Apple, Apple PCs are still superior PCs. It's just not in "geeky" stuff like processor speed or 3D performance which Apple has no control over, it's in some tangibles like quality and some other things that I personally don't give a shit about ("Design", "Form Factor").

Bull. It's no one else's problem that Mac fans insist on comparing $1200 Macs with $400 Dells. Compare a Mac with a laptop in the same price bracket, and you start to realize that there actually is competition out there. Check out the Samsung Ativ 9, or last year's Samsung Series 7s, or the Asus Zenbook Prime.

Meanwhile, the Apple doesn't have a touchscreen and has a crappier resolution.

it's not a very balanced system.

Utter bull. Intel graphics have been able to drive 1366x768 for YEARS, and somehow the latest Intel HD 5000 driving that resolution is "balanced"? The Zenbook should have no trouble whatsoever driving 1920x1080; this is the first time I've ever seen a better screen (with touch, no less) touted as a flaw.

Yes, Haswell is exclusive. No, it doesn't matter terribly much compared to other stats; 90% of users-- particularly those looking

You just missed out on the previous generation workstation that experienced the coolant leak debacle, where your Powermac G5 would suddenly leak the coolant they were using down over the motherboard and power supply and then out the bottom of the chassis.

I have one of those liquid cooled G5s. It's been an audio recording workhorse. It's going to be retired soon, but it's still going strong. I look for leaks every so often, but I've never found evidence of one.
I hear that there are faster machines available these days:-)

The G3 and G4 were Macs using (largely) IBM-designed processors. Motorola and IBM jointly produced PowerPC chips that Apple used in the mid/late 90s (G3 and G4), but Motorola eventually dropped out and IBM wasn't interested in keeping up with Intel. For a few years, the PowerPC chips were better than the Intel chips (I didn't own an Apple computer at the time, so I was evaluating this as an engineer). By the time Intel had closed the gap, Apple wisely went over to the Intel architecture.

Motorola kept making PowerPC chips long after IBM threw in the towel, AFAIK they are still making them as embedded processors. POWER is just massive, it doesn't make sense to keep trying to make its little brother every generation.

In the middle of the article, it points out that Intel's 22nm chips use gates that are 35nm long, with channel lengths of 30nm, so it seems odd that people are worried about what GlobalFoundries will be misnaming their not-yet-in-production 3D chips when Intel is already misnaming its already-in-production 3D chips.

And yet the channel width is ~8 nm, which is ~64 atomic layers. How many times do you think they can cut that in half? And does it really matter when the source and drain contacts are 10x the size of the channel itself?
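The arithmetic on that point fits in two lines:

    # ~8 nm channel ~ 64 atomic layers: halving only works
    # log2(64) = 6 more times before you hit a single layer.
    import math
    print(int(math.log2(64)))  # -> 6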

To my knowledge, the node's name was based on the DRAM half pitch. But yeah, it's not that any longer. And in defense of GlobalFoundries, finFET does literally add an extra dimension to the calculation of FET geometries.

The node names are indeed based on DRAM half pitch, but CPUs haven't been made with the same process as DRAM since pitch was measured in microns (e.g., 0.13u).

The reality is that marketing CDs (critical dimensions), including 32 and 22nm, are only achieved through multiple patterning, and that won't change unless the industry adopts EUV or moves to a maskless process, neither of which is an economical proposition, given that the current best light source for EUV is a laser-driven tin plasma, with a less than 1% dose/total-energy efficiency.

If TSMC isn't keeping up with Moore's Law, that's not a problem with Moore's Law. It's a problem with TSMC.

See: when the data does not support the hypothesis, you **change the hypothesis**, not how you interpret the data.

Moore's Law has never been a 'law'... it was a cool statistical novelty that seemed to predict processor advancements... it is NOT and HAS NEVER BEEN fit to predict anything involving money or resources... it's 'for fun'

I've seen Singularity/Kurzweil types in TED talks show some dumb graph of 'Moore's Law' and claim that, according to the law, humans will have the processor speed to do XYZ by 2050... it's all bunk...

Using Moore's Law to make important decisions is about like using a Slashdot Poll to do the same...I don't trust people professionally who take a concept like Moore's Law and build their understanding of an industry around it. It's a common mistake of perception.

Maybe there is some sort of pattern to processor speed, but it's not helping us understand anything to be so reductive and irresponsible with how we use scientific concepts.

Moore's Law has never been a 'law'... it was a cool statistical novelty that seemed to predict processor advancements... it is NOT and HAS NEVER BEEN fit to predict anything involving money or resources... it's 'for fun'

I disagree with you a bit here. Moore's Law is an observation, sure, but to engineers that understand the assumptions that go into Moore's Law it has been extremely useful for making predictions involving money and resources.

At my last job I worked in an advanced development/product group working on CMOS wireless transceivers for basestations and handsets. We used Moore's Law explicitly in our planning. The IC business is brutal and you have very little room to miss your market windows. With multi-year development cycles this is tough. Therefore, like a duck hunter, you have to shoot where the technology is going to be, not where it is.

Basically, we started the design using a CMOS process that wasn't on the market yet. We were confident that it *would be* by the time we were ready to go to market. We were confident because the availability of that process was predicted by Moore's Law and any number of foundries were spending billions to make it happen.

If we hadn't used Moore's Law in our planning, we would have come out with products using two-year old technology, and our competition would have eaten our lunch.
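The planning math itself is nothing exotic: extrapolate the cadence and target the node expected at your ship date. A sketch of the idea in Python (the cadence, shrink factor, nodes, and dates are all illustrative, not our actual numbers):

    # "Shoot where the technology will be": project the node expected
    # at a future market window from the historical cadence.
    CADENCE_YEARS = 2.0       # one process generation every ~2 years
    SCALE_PER_NODE = 0.7      # ~0.7x linear shrink per generation

    def expected_node_nm(start_nm, start_year, target_year):
        generations = int((target_year - start_year) / CADENCE_YEARS)
        return start_nm * SCALE_PER_NODE ** generations

    print(f"~{expected_node_nm(45, 2010, 2014):.0f} nm")  # planning in 2010 for 2014: ~22 nm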

We were confident because the availability of that process was predicted by Moore's Law and any number of foundries were spending billions to make it happen.

Right, so did you just use Moore's Law or did you look at other factors as well?

What I mean by other factors:

> Trends in the capacity of other recent products? Did you look at the speeds of CMOS processes from that company over the last 10 years and extrapolate?

> Did you talk to a sales rep or engineer or product development manager at the CMOS process company and **ASK THEM** how fast their upcoming models would be (approximately)?

> Did you do a literature review of what academic research groups and possibly FOSS projects (idk if that applies for you) were doing in that CMOS wireless transceiver tech? My former university, Ball State University, did research on WiMAX coverage and speed for Cisco (before WiMAX was ditched)... did you look at anything like that to predict the CMOS process capability you needed?

I'm trying to be polite, but I call BS.

If you claim your company made that decision based **solely** on math from Moore's Law... well, I have a hard time believing that claim's veracity. You are either fabricating, or that company is not very wise. And if your company **did** use other factors, then that kind of invalidates your point and parenthetically supports my point... I won't deny that using it **might** have added value, but only IF you also followed common practices like those I mentioned above...

Seriously...did you use other factors besides Moore's Law?

Like asking the vendor? (or any of the others mentioned above)

Of course we used all kinds of inputs into our planning process. We would have been fools not to.

I feel like you're doing a bit of "moving the goal posts" here. First you very emphatically state that "[Moore's Law] is NOT and HAS NEVER BEEN fit to predict anything involving money or resources"

I gave you a reply from experience that that is not true, and that in fact companies do (or at least used to) use Moore's Law in their planning process (where money and resources are involved).

That argument is invalid because it applies universally. For instance, if you said "the price of raw materials is not a factor in product development", and somebody else said "yes it is, consider a recent time when I had to choose between a $200 material and a $10 material for a product we planned to sell at $250 per unit", you can't counter with the fact that they didn't mention what they were getting for the extra $190, or the increased market share they might have had at that price, etc.

What you're describing is not so much a "prediction" as a "goal", which is precisely how Moore's "Law" has been used by the industry. They design each new generation with the goal of doubling the transistor density by some means. The only prediction being made is that they'll meet their goal.

"the availability of that process was predicted by Moore's Law" The Moore's Law apply only retroactively. This being said, while transistor density might not improve according to Moore's Law, usually you have increases in one or more of the main metrics (power density, switching speed, leaked power, works on lower voltage,...) and can compensate for some others in hardware (multiple power planes, wider buses, more execution units, higher speed on the same basic architecture, lower powe

Bollocks. You had me until you got here, this is complete bullshit. They are called data centers, and we very much care about their power usage. They are filled with high-end PCs connected to the mains.

I'm not sure where you heard that "20nm is the first process node which will be more expensive than the preceding one". I'm curious how that could be argued: nearly every tool has grown near-exponentially in price. Perhaps by scaling up production they've been able to keep reducing cost, though.