TSMC is finally making 20nm parts for Apple’s next-gen iPhone, iPad


For years, analysts have reported on the shadowy negotiations between the world’s largest foundry, TSMC, and Apple as the two companies haggled and discussed the shape of future collaboration. Now, the fruits of that collaboration are finally moving towards the light of day — TSMC has reportedly begun volume shipments of 20nm silicon earmarked for Apple’s next-gen iPhone (and possibly iPad). The new chip, likely codenamed the A8, will be the first flagship part built at TSMC instead of at Samsung, and it’s a major coup for the Taiwanese company to have stolen the business from its Korean rival.

Exactly how much of Apple’s business is shifting to TSMC is still unknown. The A8 will be the first 20nm SoC available on the market; companies like Qualcomm aren’t expected to introduce their own 20nm hardware until 2015. That gap gives Apple first-mover momentum and it’s undoubtedly part of what the company paid for in its agreements with TSMC. It’s possible that this shift could spark other companies to move production to other facilities — companies that compete with Apple at TSMC could conceivably move business to Samsung or GlobalFoundries if they think the Taiwanese foundry won’t be able to keep up with demand.

Product shipments always lag foundry production by a significant degree.

Such shifts, however, require that the foundries themselves are able to provide the necessary capabilities — and that’s not exactly a given. Rumors from earlier this year have suggested that Qualcomm will shift some 20nm production to GF, but without casting aspersions on the Saratoga-based company, we’ve heard such statements before. Back when 28nm was the new hotness, GF was expected to seize a great deal of volume from TSMC and establish itself as a competitive alternative. That didn’t happen — GlobalFoundries’ ramp has been much slower — and the company has partnered with Samsung for 14nm deployments rather than continue with its own 14nm-XM plans.

We’re guessing that Apple will have a second source for its 20nm hardware, if only to ensure it has adequate supply in the event of a supply chain issue. Samsung is the logical alternative — the two companies may have fought each other tooth and nail in the courts and markets, but they’ve also collaborated for the better part of a decade.

As for the quality of the 20nm production itself, that’s going to be very interesting. Apple could take the A8 in a number of directions, from a relatively straightforward die shrink to a new core to its first quad-core in an iPhone. We expect that the device will combine a 20nm Qualcomm modem with a 20nm processor, which should yield significant performance improvements, and then there are the rumors of sapphire glass use. If true, the iPhone 6 could be a significant step forward from the iPhone 5S of last year.

But will 20nm actually be better than 28nm?

I can see this going either way. The problem is that while investors and shareholders absolutely want Apple to deliver a home run every single year, no one expects 20nm silicon to be that much better than the 28nm it replaces. TSMC and other foundries have predicted a roughly 20% improvement in power and performance, and while that’s big enough to notice, it’s not going to deliver the astronomical gains that Apple’s biggest fans might want.

The WSJ claims that Apple and TSMC will continue exploring next-gen plans with work on a 16nm chip, though the publication implies such a processor might ship as early as next year. While that might technically be true, it’s also technically true that TSMC has been sampling Apple on 20nm parts since Q1, yet we won’t see a launch until the middle of Q3. Even if TSMC starts building 16nm parts in 2015, we don’t expect to see that hardware shipping in volume until the tail end of the year or the first part of 2016.



NoldorElf

We seem to be reaching a point where process node shrinks have rapidly diminishing returns, while costs are rapidly soaring. I think that along with economics, the fact that smaller nodes may simply stop giving much benefit at all may end Moore’s Law for good.

Will this 20% advantage (assuming there are no more “growing pains” that new nodes typically do have, which could lead to further delays) even translate into much? Eventually, I do expect Samsung and GF to catch up, although TSMC may hold a short-term lead.

The only thing we can do is wait and see.

Joel Hruska

The last gasp of Moore’s law died earlier this year when it became clear that 20nm chips will be slightly more expensive than 28nm on a cost-per-transistor basis. At that point, the last advantage of higher transistor densities went out the window.
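A minimal sketch of that arithmetic, with made-up numbers purely for illustration (the normalized 20nm wafer cost and density below are assumptions, not published foundry figures):

```python
# Hypothetical illustration: cost per transistor = wafer cost per mm^2
# divided by transistor density. If per-area cost rises faster than
# density improves, shrinking stops saving money.
def cost_per_transistor(cost_per_mm2, density):
    return cost_per_mm2 / density

# Normalized so 28nm = 1.0 on both axes. The 20nm figures are
# assumptions for illustration only: ~2x the per-area cost once
# multi-patterning is factored in, against ~1.9x the density.
c28 = cost_per_transistor(cost_per_mm2=1.0, density=1.0)
c20 = cost_per_transistor(cost_per_mm2=2.0, density=1.9)

print(f"20nm vs 28nm cost per transistor: {c20 / c28:.2f}x")  # ~1.05x
```

Under those assumed numbers, the denser node ends up slightly *more* expensive per transistor, which is the scenario described above.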

NoldorElf

Yes, in terms of price per transistor, I think you are right. It is over. FinFET is quite a costly solution. There are claims from ST that a 20nm FD-SOI process could buy the equivalent of one or two node transitions in cost per transistor, but I am skeptical. 450mm wafers and EUV both seem problem-plagued, so neither is happening in the near future.

Let’s just assume that cost scaling has pretty much stopped and in some cases has reversed. What are they going to have to do past 14nm? Quadruple patterning, or more?

So what is left? I am thinking about performance here. The low-power process is claimed to be 20% faster; the high-power process (e.g. for GPUs operating at ~300 watts) was believed to be only 10% faster compared to 28nm.

I would guess that the jump to 14nm for Intel and the jump to 16nm for high power will offer even smaller performance gains than the 32-to-22nm and 28-to-20nm transitions did, respectively. Eventually, even the performance gain will be negligible.

I guess at that point it is over? Will we see GPUs where we get maybe ~5% improvement per year and that is it, like CPUs? Owing to the parallel nature of GPUs, maybe a bit more, but that is it? And CPUs? Very few gains in single-threaded performance?

Joel Hruska

Hah! You’re not wrong about the denialism.

Here’s what I think about FinFETs, FD-SOI, III-V’s, graphene, and all the other proposed changes: All of our technologies deployed over the past ten years — FinFET, high-k metal gate, strained silicon, PD-SOI in 2003 — all of them — have only managed to keep things improving at an ever-decreasing rate.

There’s no such thing as a free lunch.

Does that mean we won’t see any advances anymore? Absolutely not. Keep in mind that a chip that improves performance at 6% a year still doubles its performance roughly every 12 years. Fifty years of steady improvements at 6% a year means we end up with computers that are roughly 18x faster than today’s.
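The compounding arithmetic is easy to sanity-check in Python:

```python
# Back-of-the-envelope check on compound performance growth.
# At rate r per year, performance doubles every ln(2)/ln(1+r) years.
import math

rate = 0.06  # 6% annual improvement

doubling_years = math.log(2) / math.log(1 + rate)
fifty_year_factor = (1 + rate) ** 50

print(f"Doubling time at 6%/yr: {doubling_years:.1f} years")   # ~11.9 years
print(f"Multiplier after 50 years: {fifty_year_factor:.1f}x")  # ~18.4x
```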

Granted, I’ll be pushing 90 in 50 years, but it doesn’t mean performance *stops.*

The only way to bring back old-school scaling is to reinvent the fundamental way we compute. Lots of people want to do that. Will it ever work? I have no idea.

NoldorElf

We seem to be seeing a trade-off between performance and cost now. To an extent there always was one (die size, and thereby cost, versus performance), but today it is much bigger.

I think the way things are going:
– Nodes below 20nm will always carry exponentially more complexity, and therefore cost
– Even as it matures, the cost may never fall to 28nm levels

This in turn drives the products:
– 28nm will be refined and improved upon until no further gains can be made
– 28nm will be THE mainstream process going forward
– Only a handful of applications will have 20nm or smaller nodes (high end stuff like top of the line phones or high end CPUs)

In the long run:
– None of the “sexy new stuff” like carbon nanotube or graphene will be able to lower the massive capital costs, which is the real barrier
– R&D will of course continue so let’s hope for a breakthrough because society could really use the extra computing

I mean, unless something like quantum computing proves workable, we are looking at the end of massive gains and the start of costly one-offs that give modest gains. We may not even get 5-6% a year forever; it may level off from 5% to 4% to 3% and so on.

It was a good run, and I think it was inevitable in a way. Just as an engine can never exceed 100% efficiency, we may be approaching that asymptote, so to speak, for computing. Semiconductors scaled faster than anything else we have ever seen in human history.

Joel Hruska

I agree with almost everything you’ve said here. A few caveats:

Transitioning to 3D structures is expected to offer increased densities and improvements in cost structure for flash memory and DRAM. It will also eventually offer improvements to SoCs and other components. This will help provide some cost scaling in the long term where planar die shrinks alone do not.

I agree with you that 28nm is going to be a very long-lived node with many designs sitting there. Keep in mind, however, that there’s real precedent for this. As of a year ago, TSMC still derived something like 40% of its revenue from process nodes at 65nm and above. It’s very, very normal for a foundry to carry a “long tail,” and that’s going to continue to be the case.

It’s entirely possible that we will see foundries work to bring more modern technologies across to older nodes as a way of mitigating costs. Samsung is building its first 3D NAND at 40nm. This gets confusing because process nodes really aren’t related at all to any given gate length or half pitch — a node is literally “The collection of technologies that give us enough additional performance that we need to refer to it as something new.”

So what does this mean?

I think the path is cleared down to 10nm. We will use multi-patterning if we have to, possibly with new semiconductor materials or with FD-SOI in conjunction with FinFET. Below 10nm, the industry will have to take a very hard look at EUV.

But let’s say EUV doesn’t happen. I don’t think that means the semiconductor industry just collectively quits. Instead, I think we start seeing more interest in alternate substrates or circuit building methods as well as a focus on blue sky ideas.

I don’t know if we will *find* those solutions, but I don’t think it’s time to throw the towel in on advances, either. I think that come 2020, we will still have CPUs and GPUs significantly more powerful than the hardware available today.

NoldorElf

It is not really the technical problems that are the real bottleneck here. It is money for the massive capital costs. These new nodes (perhaps we should simply say gate/half pitch instead of a single “nm”) have exponentially rising costs.

Fair enough on the towel not being thrown. The question will be, when is it “not worth pursuing” anymore? As in, we have exponentially rising costs with reducing benefits with each smaller node.

I agree that the older nodes are going to continue to generate the bulk of the business. We have to remember that when we look at the latest and greatest desktop CPUs, mobile CPUs, and GPUs, they are some of the bleeding edge technology.

I mean, under 10nm, what will be the cost of the fab? The cost per wafer? Will the performance benefits justify those massive capital costs? It’ll be something like 4x patterning (or crazier). The problems get much harder once we reach 5nm. EUV at this point remains a big question mark. So do 450mm wafers.

I agree that the GPUs of 2020 will be a lot more powerful. Maxwell, for example, shows some pretty impressive gains, and HBM seems to have some way to go yet. Simply owing to the parallel nature of the GPU, there’s more life left.

The CPUs I am not as certain about, at least not for single-threaded performance. Consider the progress from Sandy Bridge (2011) to today’s Haswell-based Devil’s Canyon (2014). Comparing a 5GHz Sandy Bridge to a 4.8GHz Devil’s Canyon (top-end air cooling with decent silicon-lottery luck), you’re looking at maybe a 10-15% benefit over those three years, more for specific apps and for workloads that take advantage of the AVX2 instruction set.

Remembering that performance gains are levelling off on future nodes, it may very well be that the Skylake jump is comparable to Haswell over Ivy Bridge, or even less. There are rumors that the FIVR is being removed for Skylake, so that may help enthusiasts somewhat.

So by 2020? Maybe 30% more powerful? That allows for, say, three tocks (new architectures) and maybe two or three ticks as well, and it assumes scaling itself does not diminish further. Lately it seems the IGP has been the most exciting part of the CPU. What’s remarkable is that 30% was once achieved in a single generation (as with Conroe).

Joel Hruska

You’re right when you talk about the headwinds and difficulties of scaling single-threaded performance and I don’t disagree with you on any particular point.

How much more power you can divert to the CPU depends entirely on which chip we’re talking about. I believe Intel dedicates something like a third of the die to GPU, while AMD is using 47% for GPU. But since Intel’s chips are much smaller than AMD’s, you’ve got 1/3 of 177mm sq. vs. 47% of 245mm sq.
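For a rough sense of scale, here are the GPU areas implied by those figures, taking the quoted die sizes and fractions at face value:

```python
# Rough GPU-area arithmetic for the dies mentioned above.
intel_die, intel_gpu_frac = 177, 1 / 3   # mm^2, ~1/3 of die is GPU
amd_die, amd_gpu_frac = 245, 0.47        # mm^2, 47% of die is GPU

print(f"Intel GPU area: ~{intel_die * intel_gpu_frac:.0f} mm^2")  # ~59 mm^2
print(f"AMD GPU area:  ~{amd_die * amd_gpu_frac:.0f} mm^2")       # ~115 mm^2
```

So AMD is devoting roughly twice the absolute silicon area to graphics, despite the percentages looking only moderately different.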

Without silicon changes, that 300W TDP doesn’t get you very much. The GPU silicon powers down when not in use.

AMD’s FX-8350 is a 125W TDP chip at 4.2GHz. The FX-9590 is a 5GHz chip at 225W TDP. A roughly 19% clock speed increase requires 1.8x the power.
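Those ratios, plus a hypothetical voltage bump to show why power outruns clock (the 23% voltage figure below is an assumption for illustration, not a measured value):

```python
# Check the FX-8350 -> FX-9590 numbers quoted above.
clock_ratio = 5.0 / 4.2      # ~1.19x clock
power_ratio = 225 / 125      # 1.80x power

print(f"Clock increase: {clock_ratio:.2f}x")
print(f"Power increase: {power_ratio:.2f}x")

# Rough reason: dynamic power scales roughly with f * V^2, and higher
# clocks usually demand higher voltage. If (hypothetically) a 19% clock
# bump required ~23% more voltage, the estimate lands near the real ratio:
estimate = clock_ratio * 1.23 ** 2
print(f"f * V^2 estimate: {estimate:.2f}x")
```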

Right now the AMD architecture can compete well in multi-threaded benchmarks, but for single-threaded work, yes, Intel has a massive advantage in both performance and performance-per-watt. Hopefully AMD will catch up eventually.

NoldorElf

But yeah, for the sake of argument, let’s say you did have 550mm^2 of die space with an unlimited TDP (let’s say we are watercooling our chip). How much faster would it be if you had 4 cores? Assume no integrated VR too.

Would there be any way to get a really big die with a higher performance per clock, and then clock it at a lower speed to get the power consumption down? Would it be much faster with that extra die space than existing designs?

The only chip I have ever heard of that was anything close to it was Intel’s Tukwila at close to 700mm^2 on a 65nm process.

Joel Hruska

Noldor,

Well, you can look to overclocking results for a good idea of this. Chips cooled with LN2 can hit 6.5-7GHz. AMD’s Piledriver, I believe, hit 8GHz+.

That’s essentially the physical limit of silicon.

Zunalter

” it’s a major coup for the Taiwanese company to have stolen the business from its Korean rival.”

Is it really that TSMC has stolen Apple’s business from Samsung, or that Apple is looking for any port in the storm to move production away from Samsung due to their legal battles?

Timtaper

I can’t believe there are still suckers like this that CASHFIG can take advantage of with money promises. Everyone knows it is nothing but an internet scam site.

Nex

A8 at 20nm in the iPhone 6 would be a major embarrassment to Intel if it releases earlier than their 14nm consumer parts. THE historical process node leader, beaten by a second-rate TSMC and a consumer devices company without any fabs at all?

The narrative from Intel and their diehards would probably be something like “their 20nm is actually bigger than our 22nm,” as if Intel doesn’t play the same tricks too.

Prodromos Regalides

I am possibly wrong, but I don’t think the delay in the smaller nodes has anything to do with technical difficulties. Intel was boasting back in 2002 on their own site about having working prototypes at 15nm and maybe 10nm; I don’t recall perfectly from so long ago. If they had already transitioned to 14nm this year, they would be forced to make a big jump to 10nm and maybe 7nm before 2017-2018, and then what? Nothing?
These companies work on a new paradigm well before Core 2 Duo, or before Windows Vista if you like. The paradigm may simply not be ready for prime time in 2-3 years, so they may want to buy time by faking technical shortcomings. That, and the competition is dull, so there’s nothing wrong with maximizing profits.

Use of this site is governed by our Terms of Use and Privacy Policy. Copyright 1996-2015 Ziff Davis, LLC. PCMag Digital Group. All Rights Reserved. ExtremeTech is a registered trademark of Ziff Davis, LLC. Reproduction in whole or in part in any form or medium without express written permission of Ziff Davis, LLC is prohibited.