Posted
by
samzenpus on Monday June 04, 2012 @10:22AM
from the coming-soon dept.

angry tapir writes "ARM chips made with an advanced, 20-nanometer manufacturing process could appear in smartphones and tablets by as soon as the end of next year, the head of ARM's processor division said Monday. The more advanced chips should allow device makers to improve the performance of their products without reducing battery life, or offer the same performance with longer battery life."

I suspect that the answer is a combination of 'don't hold your breath on that' and 'at least a year ago, did you miss it?'

In terms of sheer screaming power (and, for the moment, even support for 64-bit memory spaces), ARM is a toy, and shows no terribly strong signs of making strides in that area that Intel would really be worried about.

On the other hand, it would appear that an awful lot of netbooks and laptops were never sold, possibly never even built, because of tablets and smartphones... If things like this [dell.com] turn out to be a good fit for some 'cloud' niche or other, sales of select Xeons could see similar hits.

At least so far, you don't go up against Chipzilla benchmark-for-benchmark. The world evolves around you such that your virtues are now more desirable than his...

What I'd like to know is... why does this question keep coming up? It's as stupid as saying "When will mopeds replace Mack trucks?" so the correct answer should be "Well, that's just a stupid fucking question!" followed by hitting the moron who asked it with a fish, maybe a nice trout or red snapper.

While there is a LITTLE overlap, there's not much, and mostly the two are as far apart as mopeds and trucks. ARM is made to go in little things and is designed above all to use little power; Intel needs more space, but the IPC is just insane, so you can get a hell of a lot more work done. You wouldn't try to run Photoshop on your cell, would you? So why would you think the chip in your cell would replace the one running Photoshop? ARM simply can't ramp up to the IPC of a P4, much less a Core CPU, and if it tried it'd be just as big a power hog as an Intel chip, so it'd be pointless.

So can't we all just accept that some things are good for some jobs and other things are good for others and leave it at that? Neither chip is gonna "beat" the other because except for a couple of tiny overlaps they really aren't in competition.

Oh, and a final note: the reason laptops and netbooks aren't selling as much is the same reason desktops aren't selling as much, which is that we went past "good enough" and into "insanely overpowered." Frankly, once we got rid of space-heater chips like the P4 Mobile, which killed laptops through thermal cycling, with just a tiny bit of TLC they can last a looong time. I know people still using early Core and Turion laptops simply because there wasn't a point in replacing them. After all, a new battery was only $30, and there haven't been any "killer apps" other than games for years, and most folks don't game on a laptop.

So again, I wouldn't say one "killed" the other, as everyone I know that has a smartphone or tablet ALSO has a netbook or laptop AND a desktop; it's just that they don't replace the x86 units until those die, because there really isn't a point. While ARM is undergoing its own MHz war, frankly, for most users even a bottom-of-the-line Pentium or E350 netbook or laptop is more than enough power for what they're doing with it, so they stick with what works. I predict we'll see the same thing with ARM once quads reach mainstream: they'll run into a power wall just as x86 ran into a thermal wall, and we'll see the same thing we do now with x86, nobody tossing until the previous one dies.

Just today [theregister.co.uk] we hear that Windows is the #1 server OS.

We heard no such thing. We heard that Microsoft has over 50% of server revenue, and that from a famously dodgy source (IDC reports whatever its paymasters tell it to). So we're supposed to overlook the fact that the average Linux OS costs $0.00? That essentially all data centers in internet companies worthy of the name run Linux, with the exception of Microsoft, and look at how their data center business is doing? That the entire financial industry runs on Linux, having recently booted Windows

The sweet spot for most Wintel users now is $300 to $500 for a laptop. Too bad Dell and HP can't make any money at this price point.

Chrome computers are crap at this point. I have a CR-48 and hate it. I'm not paying close to $500 for a gimped computer that's a paperweight without internet access. My iPhones and iPad have more functionality.

A $600 MacBook with a cheap ARM CPU will do almost everything a NORMAL user will want in a year or two. Internet, basic photo editing an

It won't do Photoshop, the midrange photo editing apps like Lightroom, development, and lots of other things...

With GIMP 2.8 [gimp.org] there isn't really a reason to use Photoshop. GIMP can now handle high-precision color with GPU acceleration and has a much-improved interface. If you're too lazy to learn something better, or have orders from your boss, or have so much money that you need to send some to Adobe, then feel free, but you don't have to.

Does anybody know who the 20nm fabs ARM is expecting to provide these chips are? It was my understanding that Samsung, Globalfoundries and TSMC were still working on a larger process (28nm?), and Intel has been very cagey about fabbing any 3rd-party stuff except for a handful of FPGAs and other high-margin oddballs that don't compete in Intel's area of business in any meaningful way.

That's what TSMC is for, and why ARM Inc. has them and IBM etc. as core partners for their tape-out implementation program, as in http://www.eetimes.com/electronics-news/4229820/ARM-TSMC-design-20-nm-A15-processor from way back on 10/18/2011:

ARM said it would now optimize its physical IP to the TSMC 20-nm process for power, performance and area and produce a specification for a Cortex-A15 processor optimization pack (POP). It did not say how soon this would be completed.

"This first 20-nm ARM Cortex-A15 tape out paves the way for the next generation of SoC integration and performance," said Mike Inglis, general manager of ARM's processor division, in a statement. These SoCs will be suitable for smartphones, tablet computers, digital home systems, servers and wireless infrastructure, ARM said.

It paves the way, but it's still way behind the curve since it's still in the distant future. Intel has been shipping 22nm chips for a while now, as in you can actually go out and buy them. But on the TSMC front, I'm not sure if they're even shipping 28nm, let alone 22 or 20nm...

I'm rooting for ARM, but they're not going to do much damage to Intel if they're a full process node behind. Intel will hit 16nm in 2013, putting them still at least a half-node ahead of TSMC... That might sound minor, but 100 mm^2

TSMC is shipping 28nm; the latest GeForce (GTX 680 and 670) and Radeon (HD 7970 and 7950) cards are TSMC 28nm. They do not have anything smaller yet.

In terms of being behind the curve, yes and no. It is behind Intel, but everyone is, always. Intel is generally almost a complete node ahead. Nobody else is doing 22nm, nor will they be for some time. The timeframe ARM is talking about is right along when most companies will start doing 22nm, or its 20nm half node (a number of companies do node and half-node processes; some, like TSMC, are going straight for half nodes).

So they aren't behind the curve except relative to Intel, but then everyone is behind Intel. Only Intel is willing to spend the billions in R&D to forge ahead like that and build the fabs at the pace required (their 14nm fab, since they're going straight to the half node next time, is going up in Chandler, AZ right now).

I do imagine this is actually directed at Intel, though. ARM is getting worried. Intel keeps producing lower- and lower-end chips, and they are encroaching on ARM's market. Right now it isn't a huge problem, particularly since Intel is reserving its latest node for desktop and laptop CPUs. However, if Intel starts making 22nm parts to compete with ARM, that could be a problem for ARM. Even if the Intel parts were less efficient, the size advantage can make up a lot of that.

So they are probably trying to convince partners "stick with us, we'll be there soon!"

TSMC skips 22-nm, rolls 20-nm process.... "(TSMC) is putting a new spin on its strategy: after the 28-nm node, it plans to skip the 22-nm ''full node'' and will move directly to the 20-nm ''half node.''

At its technology conference here, the world's largest silicon foundry also provided details about its 20-nm CMOS process, which will be the company's main technology platform after the 28-nm node. TSMC will also not offer an 18-nm process.

TSMC's 20-nm process is a 10-level metal technology based on a planar technology. It will feature a high-k/metal gate scheme, strained silicon and newfangled ''low-resistance'' copper ultra-low-k interconnects--or what it calls ''low-r.'' For the 20-nm node, it will only offer a high-k/metal-gate scheme for the gate stack--and not a silicon dioxide option. TSMC (Hsinchu) will continue to use 193-nm immersion lithography at 20-nm, but it will also deploy double-patterning and source-mask optimization schemes.

Unlike its previous processes in recent times--which focused on low power first--TSMC's initial 20-nm process will be a high-performance technology. Following that process, it will roll out a low-power technology.

With the announcement, TSMC is seeking to gain an edge over its leading-edge rivals, such as GlobalFoundries, Samsung and UMC....."

ARM is right to be worried; Intel's first production smartphone, despite being single core, was able to produce similar performance and battery life to comparable ARM phones. If Intel pushes out their smartphone SoC on their smallest process, that could spell serious trouble for ARM.

ARM is right to be worried; Intel's first production smartphone, despite being single core, was able to produce similar performance and battery life to comparable ARM phones.

Even if true, it's not enough. Intel also has to ship at a similar price, and given that ARM just takes a few cents per chip in royalties, that could present a problem for Intel's margins. They could always try dumping, of course, but that would be a Sherman Act violation.

As you say, Tough Love, I too can't see Intel matching, dollar for dollar, the likes of Freescale and their £22-ish per 1.2GHz i.MX6 quad-core SoC in bulk with any of their Intel offerings today or any time soon, and that's just one single ARM white-box vendor, never mind any of the other vendors with faster ARM A15 and better SoCs on the table this year.

Your theory would be fine except for the slight problem that without ARM Inc. the massive global low-power smartphone market wouldn't exist today. In fact, it's only since the ARM Cortex-A8 that most average users finally realized ARM existed, but they have been around a very long time, and all the many ARM vendor licensees sewed up the mass low-power world markets a very long time ago now. Sure, there always was a small contingent of SuperH, MIPS etc. vendors in phones to also contribute a small % and p

Your theory would be fine except for the slight problem that without ARM Inc. the massive global low-power smartphone market wouldn't exist today,

Huh? No, they would be using MIPS or something else, just like they were doing in the days of PalmOS (originally 68k-based) and Windows CE. There is nothing magical about ARM and power/performance. In fact, MIPS had a power/performance lead for a long time. Apple could probably have used a PPC instead of an ARM, and they would be in the exact same place as today. The

Incidentally, do you know what happened to MIPS that put it behind ARM in the trendy widgets market?

You still see MIPS cores in some routers (Broadcom, if I remember correctly, uses MIPS SoCs in their small router chipsets), and there are a few el-cheapo poor-cousin Android devices that run (not terribly well) on MIPS SoCs; but not much action. What happened that everybody is talking about ARM now, MIPS is a lower-profile player, PPC has retreated to a few niches, and things like SuperH barely come up for a

SuperH [renesas.com]: they imply that it's just hiding in navigation systems and such... and in motor control. Heh, heh heh. How the mighty have fallen.

No ideer what happened to MIPS, though. There are still routers coming out with MIPS cores...

If I had to guess, I'd say ARM just got better faster. Wikipedia implies the last major advance in SuperH was about a decade ago. MIPS could be the same. There are some cheap-ass MIPS-powered Android devices; they're all pretty slow, which supports the idea that they just couldn't keep up.

That's exactly my point. Intel got their power/performance on par with ARM, and the customer-facing experience is seamless, so if they have a compelling phone, people will buy it anyhow.

That said, I'd point out how successful the Centrino branding was. Consumers did care that they got a Centrino laptop. They didn't know what Centrino was, or what it implied, but they knew they wanted it. Something similar with "ultrabook" is being attempted now. You don't need consumers to care about the technical stuff to

Why would Samsung need to switch? HTC owned the Android market before, and now Samsung does. There is nothing to say that somebody else won't do a good phone with an Intel chip and get a good chunk of the market.

Apple is unlikely to switch away only because they value the control ARM licensing gets them. However, Intel has expressed interest in ARM-style licensing, and in fact made a deal with TSMC to manufacture Atom chips in an ARM-style arrangement. They dissolved the agreement due to lack of interest, b

That led some to see ARM's target for 20 nanometer parts as overly optimistic. But the problem with the 28-nanometer chips is a short-term capacity issue, not a technology issue, so ARM's target is a realistic one, said Dan Nystedt, vice president and head of research at TriOrient Investments in Taiwan.

They have problems delivering 28nm right now so take the 20nm predictions with a pinch of salt.
"However, the transition to 28nm does not appear to be going smoothly. ARM heavyweight Qualcomm was the first to introduce a 28nm design, the stunning Snapdragon S4 based on its Krait core. But the outfit is now struggling to meet demand for S4 chips and it is basically becoming a victim of its own success. Other ARM players, such as TI, Nvidia, Samsung and Apple, have yet to introduce a single 28nm part." -- http://www.fudzilla.com/home/item/27414-arm-hopes-to-see-20nm-processors-next-year [fudzilla.com]

Yep, Nvidia was scheduled to introduce a bunch of new GK108-based parts last month, but due to TSMC production issues at 28nm they aren't even able to make enough of the much, much more profitable GK104-based parts to meet demand.

I often wonder, with traces being made smaller all the time, how does this affect radiation resistance? Are we going to hit the point soon where just laying the chip open in the sunlight creates enough random electron/hole generation that the device becomes useless?
We already know that chips must be hardened to work in space, how long until this is true for Earth-tied ones?
If someone has an answer, it would be interesting to know.

Smaller is more vulnerable (all else being equal, which it isn't necessarily). Sunlight isn't really an issue in practice (in addition to being alarmingly indestructible, that black epoxy stuff is about as opaque as it looks); but even your big-chunky-classic-single-transistor-in-a-metal-can will show quite readily measurable photosensitivity effects if you chop the can open.

The radiation problem for chips is primarily due to highly penetrating cosmic rays and the particles they produce. The last time I went looking for literature on this, it sounded like the main concern on the ground was high energy neutrons produced by cosmic rays (see http://en.wikipedia.org/wiki/Cosmic_ray_spallation), which can deposit much more energy in a small space than the muons from the cosmic rays.

We already employ one "radiation hardening" technique in many ground-based systems: error-correcting
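The error-correcting memory being alluded to is essentially a Hamming code in hardware. As a minimal software sketch of the same idea (the textbook Hamming(7,4) code, which corrects any single flipped bit; real ECC DIMMs use a wider SECDED variant over 64-bit words, not this exact layout):

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword; any single flipped bit
# can be located and corrected. ECC DIMMs apply the same principle
# (64 data bits + 8 check bits, SECDED) in hardware on every word.

def encode(nibble):
    """Pack 4 data bits into a 7-bit codeword with 3 parity bits."""
    d = [(nibble >> i) & 1 for i in range(4)]          # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                            # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                            # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                            # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]        # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def decode(codeword):
    """Correct up to one flipped bit, return the 4 data bits."""
    bits = [(codeword >> i) & 1 for i in range(7)]
    # Recompute parity; the syndrome is the 1-based position of the error.
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        bits[syndrome - 1] ^= 1                        # flip the bad bit back
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

# Every single-bit flip in every codeword decodes back to the original data.
for value in range(16):
    cw = encode(value)
    assert decode(cw) == value
    for bit in range(7):
        assert decode(cw ^ (1 << bit)) == value
```

Note the overhead: 3 check bits per 4 data bits here, versus 8 per 64 for a typical ECC DIMM, which is why wider words make ECC cheap.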

Google found in their data centers a bit error rate of roughly 300 per gigabyte per month. That was in 2009, so who knows how that number has scaled in the last 3 years.
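For anyone wanting to compare that figure against the FIT-per-megabit units DRAM studies usually report, the unit conversion is simple arithmetic (taking the 300/GB/month number at face value, and assuming 1 GB = 10^9 bytes):

```python
# Convert "300 correctable errors per GB per month" into FIT per Mbit
# (FIT = failures per 10^9 device-hours), the unit DRAM studies use.
errors_per_gb_month = 300
bits_per_gb = 8e9            # 1 GB = 10^9 bytes = 8e9 bits (assumption)
hours_per_month = 730        # ~365.25 * 24 / 12

errors_per_bit_hour = errors_per_gb_month / (bits_per_gb * hours_per_month)
fit_per_mbit = errors_per_bit_hour * 1e6 * 1e9   # per Mbit, per 10^9 hours

print(f"{fit_per_mbit:,.0f} FIT/Mbit")           # ~51,000 FIT/Mbit
```

That lands in the 25,000-75,000 FIT/Mbit range the 2009 Google DRAM study reported, so the commenter's shorthand is at least internally consistent.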

My home file server has 16GB of non-ECC RAM so I find that hard to believe; it gets rebooted every couple of months for kernel upgrades so if it was true I'd expect it to crash or send bad data from the disk cache more often than it's rebooted.

I saw an interesting study on a mailing list a year or two back where their tests showed that individual DIMMs either had very few errors or lots over the testing period; if you saw more than one or two errors on a DIMM you might as well toss it, because you were goi

That happened with EPROMs in the '80s. Even industrial lighting outputs enough UV to erase an EPROM. That's why EPROMs after a certain era in the '80s always had a paper tape over the clear window when not being actively erased.

Simply laying a late-generation EPROM out in the sun for a week or so works pretty well. I've done it when my eraser bulb burned out. Obviously this is not economical for a commercial dev, but for a dude fooling around in his basement it works pretty well.

Also, this stretches your definition of useless, but there was at least one low-current op-amp I used that had to be used in the dark, because the plastic case was transparent enough that it made the part enough of a photodiode to screw up some small-signal figure (I think it was the input offset current, obviously not the gain or slew rate). Shining a 60 Hz fluorescent light at it created 120 Hz noise in the output, not from EMI but from its being a poor photodiode. I don't remember if this was a thermocouple amp or a pseudo-wanna-be-electrometer.

First of all, these funny numbers come from the ITRS. They're not just random numbers; process stages are not a "preferred number system" like resistor values, where statistics determines the weird values. Process stage size steps are indirectly determined by physics. The ITRS is an industry association of companies who actually make this stuff. Wikipedia has a page for each stage; yes, there is a wiki page called "22nm" or something like that. This "20 nm" process is actually a "half step" from the 22nm process. The next "real" step is 16 nm.
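The step sizes the ITRS picks can be sanity-checked: each full node scales linear dimensions by roughly 1/sqrt(2), halving the area per transistor, which is exactly where the 45 -> 32 -> 22 -> 16 sequence comes from. A quick sketch:

```python
import math

# Full process nodes scale linear feature size by ~1/sqrt(2) per step,
# so area (and ideally cost per transistor) halves each generation.
full_nodes = [90, 65, 45, 32, 22, 16]

for prev, nxt in zip(full_nodes, full_nodes[1:]):
    shrink = nxt / prev
    print(f"{prev}nm -> {nxt}nm: linear x{shrink:.2f}, area x{shrink**2:.2f}")

# Every step is within a few percent of 1/sqrt(2) ~= 0.707.
assert all(abs(n / p - 1 / math.sqrt(2)) < 0.03
           for p, n in zip(full_nodes, full_nodes[1:]))
```

By that yardstick, 20nm is indeed only a "half step" from 22nm: a linear shrink of about 0.91 rather than 0.71.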

(Opinion alert!) Now, this is a half step from 22nm to 16nm, and I consider it a failure: put your efforts into a cheaper, higher-yield, more economic 22, or advance the field to 16; don't screw around halfway at 20. Another interpretation is that oxide thicknesses are getting too small at the 22nm process to make anything smaller like 16nm; essentially they're giving up on 16nm because it's economically impossible. (End opinion alert!) The rest of my post is pretty much factual, as far as I know.

Another interesting thing about process sizes is that the number is a half-pitch (essentially a radius) of an array cell. It's dumb and/or marketing to spec half-pitch instead of pitch if you're talking memory: one cell of memory using a "20nm process" is actually 40 nm across. You'll read all kinds of foolishness about how the interconnects are 20 nm across, or a unit memory cell is a square 20nm on a side, or the oxide layer is 20 nm thick (which would actually be F'ing huge by current standards). Basically almost all size comparisons will just be random crap; no journalist or marketing PR guy ever makes a correct analogy using half-pitch. They'll say absolutely anything other than the correct answer, which has made me laugh for decades now.
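To put numbers on the half-pitch point (a back-of-envelope sketch; the 6F^2 DRAM cell area is a common rule of thumb, not a claim about any specific vendor's process):

```python
# A "20nm process" names the half-pitch F, not the size of anything you
# could point at on the die: the full pitch between adjacent memory
# lines is 2F.
half_pitch_nm = 20             # the marketing number, F
pitch_nm = 2 * half_pitch_nm   # actual center-to-center spacing: 40 nm

# A DRAM bit cell is conventionally ~6 F^2 in area, far bigger than a
# naive "20nm square" reading of the process name would suggest.
cell_area_nm2 = 6 * half_pitch_nm ** 2

print(pitch_nm, cell_area_nm2)  # 40 nm pitch, 2400 nm^2 per cell
```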

Everyone knows everything comes from China, including semiconductors. Well, actually, no. There's a nice list of plants at Wikipedia; you'll see a lot of US addresses. Yes, you can probably buy a knock-off 555 or 741 from China, but they have almost no small-scale plants at all. Pretty much, processors come from the USA and a scattering of small-time players around the globe. That's interesting: we (USA) make really tiny processors and really giant industrial machinery, and not a whole heck of a lot in between. You want a 500,000-ton mining dragline? We got it. You want a 22nm processor? We got it. You want a shoe? No, we don't make those in this country anymore. http://en.wikipedia.org/wiki/List_of_Semiconductor_Fabrication_Plants [wikipedia.org]

Finally, processes are a moving target, and different mfgrs and different products are at each stage. There are a couple of plants being built for the 16 nm process, and there are prototypes of "real 16 nm chips" floating around. 22nm-process memory was shipping two years ago; 22nm-process CPUs are much harder to design. Intel is already shipping 22nm-process-family CPUs, so AMD gets a golf clap for promising to catch up later this year with something microscopically better. To the best of my knowledge, 11nm is not out of the lab yet, even for fooling around with.

And that's about all I know from making some investments in mfgrs since the 80s, some of which worked, some not so good. Not currently investing in this market, but I still keep up with the times and I do a lot of electronics in my basement.

Everyone knows everything comes from China, including semiconductors. Well, actually, no. There's a nice list of plants at Wikipedia; you'll see a lot of US addresses. Yes, you can probably buy a knock-off 555 or 741 from China, but they have almost no small-scale plants at all. Pretty much, processors come from the USA and a scattering of small-time players around the globe.

You're pretty much exactly wrong. The US was, as of 2009, in fourth place for semiconductor manufacturing with a 14% share. Ahead were Japan (25%), Taiwan (18%), and Korea (17%). The largest independent semiconductor manufacturer in the world, Taiwan Semiconductor Manufacturing Company, is based in (appropriately enough) Taiwan.

When I look at all the different chips in my home and where they were made, pretty much only my Intel processor/chipset was made in the US. The rest? Asia.

There has been a flow for many decades now where the new and tiny stuff comes solely from the US (and a couple of Europeans), and as that process ages it flows to the Asian tigers, then eventually China. I kinda mentioned that in the original post. I dunno if you can buy domestic-made ancient chips like the 555, 741, 7805, anything TTL-series, etc.

It has interesting Moore's Law implications: when the shrinking stops, we'll eventually (5 years or so later) completely stop manufacturing semiconductors.

Ya, I think the half-node obsession came from companies trying to "beat" Intel. For quite some time Intel has been almost a node ahead of everyone: everyone else is ramping up one node, and not long after, Intel has the next online. So they started doing the half-node stuff. TSMC first did this for 55nm, though they were still behind Intel at the time, but then for 40nm. So they could claim, on paper at least, to be ahead of Intel: Intel had a 45nm process, TSMC had a 40nm process. Of course not too long after, Intel went to 32nm, but for a while TSMC could claim to be ahead (as you noted, it isn't quite so simple).

Apparently Intel is sick of this, and they are going to 14nm next with their new plant in Chandler. How much of that is pure marketing (I could see a hybrid process where most of it is done at 16nm but something, maybe the cache, uses a 14nm half-node process so that you can technically call it 14nm) I don't know, but there you go.
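Whatever the naming games, the arithmetic behind a half-node lead is easy to sketch (idealized quadratic scaling; real design rules never shrink this cleanly):

```python
# Idealized area comparison between process nodes: die area scales with
# the square of the linear feature size, so the same design at 14nm
# needs only (14/16)^2 ~= 77% of its 16nm area.
def relative_area(node_a_nm, node_b_nm):
    """Area of a design at node_a relative to the same design at node_b."""
    return (node_a_nm / node_b_nm) ** 2

print(f"14nm vs 16nm: {relative_area(14, 16):.2f}")  # 0.77
print(f"20nm vs 22nm: {relative_area(20, 22):.2f}")  # 0.83
print(f"22nm vs 28nm: {relative_area(22, 28):.2f}")  # 0.62
```

Smaller die area means more chips per wafer and lower cost per chip, which is why even a "marketing" half-node label can carry real economics behind it.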

I'm not sure you understand how ARM's business model works. They don't manufacture chips themselves, and they don't even hire somebody else to manufacture chips for them. They also don't design chips for a specific process node. They just produce a design and leave it up to a company like Texas Instruments to figure out how to build them at a certain process node (or hire some fab company to do it).

The 20nm statement is just a prediction. They're saying they expect their customers to get 20nm parts out in 2013.