Intel’s next-generation Broadwell CPUs delayed due to yield problems

14nm chips will now begin production in Q1 of 2014.

Intel's next-generation CPUs will arrive slightly later than expected.

During the company's third quarter earnings call yesterday, CEO Brian Krzanich announced that production of Intel's next-generation Broadwell CPUs would be delayed slightly due to manufacturing issues. CNET reports that a "defect density issue" in the new 14nm manufacturing process was causing lower-than-expected yields and that Intel's first round of fixes didn't improve the yields by the expected amount. Krzanich expressed "confidence" that the issue had been fixed, that it was just a "small blip in the schedule," and that the CPUs would begin mass production in the first quarter of 2014 rather than the fourth quarter of 2013 as expected. Broadwell's successor, codenamed Skylake and due in 2015, will apparently not be affected by the delay.

Broadwell is a "tick" on Intel's CPU roadmap, a refined version of the current Haswell architecture built on a new manufacturing process. Intel typically doesn't introduce a new architecture and a new manufacturing process simultaneously to reduce the likelihood and severity of manufacturing issues like these. Even with the delay, Intel will still be producing 14nm chips while most of its chipmaking competitors (including TSMC and Samsung) are rolling out their 20nm processes.

Intel hasn't gone into much detail on what Broadwell will bring to the table, but smart money says that it will further reduce power usage over Haswell while also increasing CPU and integrated GPU performance incrementally. The company announced at its Intel Developer Forum this year that it was seeing a "30 percent power improvement" over Haswell in early production samples, a number which may stand to improve as the process matures and yields get better.

Intel's 14nm process will also be used to build next-generation Atom chips based on the "Airmont" architecture—like Broadwell, Airmont is a shrink of the Silvermont CPU architecture rather than an all-new chip. Silvermont is just beginning to come to market in the new Bay Trail Atom SoCs, so we wouldn't expect to see its successor show up until later in 2014.

Considering the physics involved in every shrink at this scale, a delay doesn't surprise me given Intel's aggressive tick-tock schedule. Long term, I'd expect Intel's foundry lead to decrease, not because the other companies are advancing faster but because further node advances will slow down as we approach the atomic scale. Getting to 7 nm is going to be brutal over the next 5-6 years, and mass production at ≤5 nm may not be possible with current lithography techniques. The physics here is simply hard.

Broadwell is still expected to be available in a socketed format, but the expectation among most pundits is that Broadwell will be mobile-only in 2014. On the desktop in 2014, there will be a Haswell refresh, still at 22nm.

This was going to be a "tick"/die-shrink year anyway, but then again Ivy Bridge was a tick too, and its performance gains over its predecessor were similar to Haswell's over Ivy Bridge, despite being "just" a shrink.

Even more so on the GPU side: Intel's GPU architectures are actually getting pretty good. If they simply used the shrink to double execution resources from 40 EUs to 80 and had the eDRAM bandwidth to feed them, that would be one hell of an integrated GPU for gaming. If that could make it into 13" laptops (especially the rMBP), hot damn.

The only thing I'm worried about is that they had best start looking into some better post-production processes with regard to the heat spreader's thermal interface. As small as the chips are getting, the heat is being focused into tighter and tighter areas, which is actually limiting their performance potential.

My question though relates to this: what's next? Yes, they're the ones to beat when it comes to making chips, but they can only go down so far before they hit some pretty hard physical limits. Once they hit that limit, they had best come up with some novel ways of doing things if they want to keep their top-dog status.

Each shrink tends to cost more than the last and these costs are rising more rapidly than the semiconductor market is growing. Clearly that trend is going to hit a limit where further advances aren't worth the cost.
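
The squeeze described above can be sketched with a toy compounding model. Both growth rates below are purely illustrative assumptions, not industry figures:

```python
# Toy model only: both growth rates are illustrative assumptions.
node_cost = 1.0       # relative cost of bringing up the current node
market_size = 10.0    # relative size of the semiconductor market
COST_GROWTH = 0.30    # assume each node costs 30% more than the last
MARKET_GROWTH = 0.05  # assume the market grows 5% per node cycle

generation = 0
while node_cost < market_size:
    node_cost *= 1 + COST_GROWTH
    market_size *= 1 + MARKET_GROWTH
    generation += 1

print(f"Node cost overtakes the market after generation {generation}")
```

Under these made-up rates, per-node cost overtakes the entire market after about a dozen generations; the exact crossover depends entirely on the assumed rates, but any cost growth faster than market growth eventually hits the wall.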

They're already doing work beyond shrinks, see FinFET for one example.

I wouldn't be that bullish. As far as I've understood it, Intel GPUs' gaming performance went from slideshow to barely playable, and that was in current games. From what we've seen of the requirements for so-called next-gen games (that is, games running on the PS4 and XB1), even doubling GPU performance likely won't be enough to escape new slideshows in big AAA titles. Then again, a lot of games aren't AAA anymore and should run just fine on future Intel chips.

They just shipped Haswell in June and they are worried about pushing one quarter to release a new process node less than a year later? Sights on mobile much?

Ivy Bridge from last year is what brought the 22 nm process to market. It wasn't clear whether Haswell was delayed or Intel simply withheld its launch to clear out Ivy Bridge inventory, but it shipped several months late.

Aren't these the chips that are supposed to be soldered to the board itself?

That was Intel's initial plan, but they've been flip-flopping a bit on what will be released. There will likely be a desktop Haswell refresh before Broadwell desktop chips arrive. On a more positive note, the Haswell refresh may bring L4 eDRAM to a socketed chip.

Intel's current top-model GPU gives frame rates similar to an Nvidia GT 640. That's good enough to play most current-generation games at 720p/medium settings.

Most next-gen games are also going to be launched on the PS3/360 in addition to the new consoles, and with IGP-only laptops an increasingly large share of the PC market, game devs are going to want to make sure their titles are playable on them, even if only at minimum settings. Assuming Broadwell's IGP is as much of an upgrade as Ivy Bridge's was, it should be able to handle most games for the next 2 or 3 years.

So if I buy a computer before Broadwell then I'd need a new motherboard if I wanted to upgrade the CPU?

We don't know. Rumors are all over the map. You have your choice of:

1) Broadwell is BGA (soldered) only, and there will be no desktop 9-series chipset
2) Broadwell is BGA only; a 9-series chipset will exist, but for a Haswell refresh
3) Broadwell will be available as a socketed chip and work with current 8-series mobos with a BIOS/UEFI upgrade
4) Broadwell will be available as a socketed chip but only work with a new 9-series mobo

Can't complain, though. I'm all for a Razer Edge refresh that can game for six hours on battery.

I think you're confusing dates a bit here. Haswell was launched in June, but production started on it far sooner than that. If production on Broadwell starts in Q1 2014, the launch date will be significantly later than that. Also, since there are yield problems, it may take a while to build a sizable number of them.

But ain't it grand we've got such an aggressive vehicle to push it forward?

True, but the cost/benefit will remain favorable for future shrinks down to 7 nm. Higher transistor density will be advantageous as SoCs become more complex. For example, in the embedded market I'd imagine some devices experimenting with on-die eDRAM (512 MB?) and forgoing external DRAM. There will also be demand for dense transistor counts and large dies from the HPC world.

Beyond 5 nm, the costs may hit a financial limitation as you describe, though I see physics being the bigger problem than the bank.

You're right about the top-end GPUs but so far in actual shipping systems the HD 4400 has been the most common. That one's not a bad performer for an IGP (a little faster than HD 4000 but not much) but the top-end IGPs are pretty rare relative to the midrange ones.

Iris Pro has passable performance (it's still not as good as AMD's offering from last year, the A10-6800) but it's not available in desktop configurations.

And really, it's not impressive that the latest and greatest from Intel can barely compete with 8 year old consoles.

Hopefully the next gen AMD chips will be more impressive in that regard.

It's available in BGA desktop parts (4770R/4670R/4570R); it's just socketed versions that don't offer it. Not surprising, since when used as an L4 cache it barely bumped performance, and anyone who buys a desktop and cares about gaming is going to use a discrete GPU anyway.

You could snark equally about current budget discrete cards, if you're willing to ignore that the consoles have 8 years of low-level optimization behind them, that they run at significantly lower eye-candy levels than minimum desktop settings (even crap GPUs are good enough to take away the wow factor compared with the state of the art, for many people), and that they take shameless advantage of your generally not being able to tell the difference between native resolution and upscaled 480p from the couch.

Having worked once long ago in the lithography industry when feature sizes were 0.25-0.5 microns, I am simply astounded at 14 nm feature sizes (especially using light at 193 nm -- a wavelength more than 10X larger than the features themselves).
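
That astonishment is well founded. The classic Rayleigh criterion, half-pitch ≈ k1·λ/NA, shows roughly how far below the wavelength modern tools can print. A rough sketch, where the NA and k1 values are typical textbook numbers for a 193 nm immersion scanner, not any particular fab's actual parameters:

```python
# Rayleigh criterion for optical lithography: half_pitch = k1 * wavelength / NA.
# NA and k1 below are typical textbook values, not any fab's actual parameters.
def min_half_pitch(wavelength_nm: float, na: float, k1: float) -> float:
    """Smallest printable half-pitch, in nanometers."""
    return k1 * wavelength_nm / na

# 193 nm ArF immersion scanner: NA ~ 1.35, practical k1 ~ 0.28
single_exposure = min_half_pitch(193, 1.35, 0.28)
double_patterned = single_exposure / 2  # pitch-splitting roughly halves the pitch

print(f"single exposure:   ~{single_exposure:.0f} nm half-pitch")
print(f"double patterning: ~{double_patterned:.0f} nm half-pitch")
```

A single 193 nm exposure bottoms out around 40 nm half-pitch under these assumptions, which is why the finest layers of sub-20 nm processes require pitch-splitting tricks like double patterning.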

I actually only upgraded a CPU once (going from a 2 GHz to 2.4 GHz Turion) so I don't mind having BGA processors in consumer platforms.

For the 14 nm node everyone is doing double-patterning lithography, but Intel is doing 14 nm-class interconnects while everyone else is doing 20 nm-class interconnects. The current rumor is that Samsung will have 14 nm SoCs ready in the first half of 2014. If Samsung also delays its SoCs, then it looks like double patterning will be very tricky for high-volume manufacturing.

One other thing, though: won't the power-savings benefit get smaller and smaller in absolute terms, even if each shrink cuts power by 30%? There comes a point where the shrink doesn't help as much, given the already significantly better power usage, and the screen is still the single largest consumer of power on a laptop (especially with high-DPI screens).
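
The worry checks out arithmetically: a constant 30 percent cut saves fewer absolute watts each generation. A quick sketch, where the 10 W starting figure is an arbitrary assumption and the 30 percent comes from Intel's IDF claim quoted in the article:

```python
# Assumed 10 W starting package power; 30% cut per generation (Intel's claimed figure).
power = 10.0
for gen in range(1, 5):
    saved = power * 0.30   # absolute watts saved this generation
    power -= saved
    print(f"gen {gen}: saves {saved:.2f} W, now draws {power:.2f} W")
```

The first shrink saves 3 W; the fourth saves only about 1 W, while the display's draw stays constant.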

All that being said, it certainly seems like this is an overall non-issue for Intel. They are already so far out in the lead that even a one-quarter delay won't cost them much of anything. Since these seem to be process issues and not architecture issues (or at least heavily weighted that way), it shouldn't mean much for the follow-up on the same process, so while it's a speed bump now, they will be back on schedule pretty quickly. All in all, it's pretty amazing how small it's all gotten, and how much they've put into getting there first.

It has long been postulated that the increasing difficulty of developing a new process would bring the other fabs closer to Intel, with a shorter lag between Intel's transition and the competition's transition to the same node.

That theory has not held up. Since 90nm, Intel has reliably executed its process development close to its targets; the rest of the industry has seen whole nodes cancelled, year-long delays, and even horrific yields and production issues on long-delayed rollouts.

If anything, the gap is getting larger. And with each new node being more complex and requiring more investment, Intel (rich, full of the industry's best, and working closely between design and process) will continue to best the industry.

Biggest issue is the motherboard dying and taking out the CPU with it. It's hard to troubleshoot which part died, and if you're out of warranty, you can't reuse your CPU with a new board.

I've moved cpus from P4 to Pentium D to Core 2 on the same mb and an i3 to i5 as well.

I thought less than 6 nm is effectively impossible since electron tunnelling makes the logic gates fail.
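
That intuition can be illustrated with the standard rectangular-barrier tunnelling approximation, T ≈ exp(−2κd) with κ = √(2mE)/ħ. The 1 eV barrier height below is an arbitrary illustrative choice; real gate stacks and channel geometries are far more complicated:

```python
import math

# Rectangular-barrier tunnelling approximation: T ~ exp(-2 * kappa * d),
# kappa = sqrt(2 * m * E_barrier) / hbar. The 1 eV barrier is an assumption.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def tunnel_probability(thickness_nm: float, barrier_ev: float = 1.0) -> float:
    """Approximate probability an electron tunnels through the barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

for d in (5.0, 2.0, 1.0):
    print(f"{d:.0f} nm barrier: T ~ {tunnel_probability(d):.2e}")
```

Shrinking the barrier from 5 nm to 1 nm raises the leakage probability by nearly 18 orders of magnitude in this toy model, which is the qualitative reason gates eventually stop gating.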

Honestly, I've gotten to the point where the high end bores me. A 10% increase of a huge number is still just a huge number. I'm more interested in how the Pentium and/or Atom lines evolve with each generation. My desktop is already fast enough, but my NAS could always do with more to better handle transcoding, while keeping the power usage as low as possible. And right now the Pentiums are being purposely kept behind the i series, so it will be more interesting when they have AVX and AES instructions built in as well. And better tablets/phones are always awesome.

Andrew Cunningham / Andrew has a B.A. in Classics from Kenyon College and has over five years of experience in IT. His work has appeared on Charge Shot!!! and AnandTech, and he records a weekly book podcast called Overdue.