Junior Member

But before I get to why I think so, there's another modem-related story.

On October 19 Intel updated their XMM 7660 web page with info on the process technology: 14nm. The news was broken not by Intel, who did it quietly, but on the RWT board [7660-14nm]. As far as I know, Intel had disclosed the process only once before: on November 16, 2017, answering a question Charlie Demerjian asked after the briefing had ended (which makes the answer semi-official at best), and the author himself doesn't rule out the possibility of a mistake [CDem].

That explains why Charlie's article appears to be the only original source for the 10nm process figure for the 7660; every other source I'm aware of refers back to his article. But this bit is not the reason for this preamble; if anything, kudos to Charlie for asking and for sharing the Intel spokesperson's answer.
What I want to note here is that Intel's quiet update of their web page does not necessarily mean that Intel wanted to share with the world that the 7660 baseband chip will be fabbed on a 14nm process, much less that they will manufacture the part themselves. (Transceivers are a different story process-wise: Intel could either fab them in-house on, say, 22FFL, or outsource them to TSMC, who already manufactures 58G transceiver tiles for Intel's Stratix 10 FPGA, among other products.) If you compare the 7560 and 7660 pages, one thing about the process technology that catches the eye is "Intel's 14nm process" on the 7560 page versus plain "14nm process" on the 7660 page. Is that nothing more than an insignificant omission, or careful wording meaning the decision as to who will manufacture it has not been made yet?

Anyway, assuming there are smart people left at Intel (I know, it sounds like a stretch, some very notable recent hires excepted, but still), the added process technology detail may serve no other purpose than strengthening Intel's bargaining position in negotiations with TSMC over the terms of a potential contract for manufacturing the chip: "Look at our page here, the 7660 is 14nm, not 10nm, so we can easily fab it ourselves."

"Easily".

With the production capacity shortage as bad as it is [shortage] -- skyrocketing prices for Intel's consumer CPUs, expectations that the crisis will continue for another half a year [short6months], real damage to partners' business [shortageAsus], Intel planning to outsource not only chipsets [chipsets] but also CPUs to TSMC [CPUs], and the H310 chipset reverted to the 22nm process [22nm] -- their total mishandling of production capacity planning is well recognized [planning].

Worse for Intel, they are unable to significantly expand their production facilities in the near term: their notorious $7B Fab 42 in Chandler, AZ is 2-3 years away from completion [Fab42], and the $5B expansion or upgrade of Fab 28 in Kiryat Gat is 1-2 years away [Fab28]. Their response to the apparent lack of capacity is ballooning their already huge $14B 2018 CapEx budget by another $1.5B, in $0.5B increments every single quarter [CapEx] -- but will that be sufficient? Especially considering two things at once: a) their commitment to 10nm products on shelves for the 2019 holidays [10nm2019], and b) the need to allocate already scarce production resources to the ramp of 10nm processors, since Intel has "repositioned" their 10nm capacity to 14nm in order to meet demand [10nm2019].

To make matters worse, the 7660, with its 1.6x higher downlink bandwidth, will push die size beyond the 7560's 7.15 x 7.98 = 57 mm² [7560die]. To what extent? Keeping the part's power efficiency roughly the same means the computational resources need to take up roughly 60% more die area, which, depending on the share of total die area dedicated to them, would result in a ~74 mm² (for a ~50% share) or ~88 mm² (for a 90% share) chip -- just a couple of examples for illustration.
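
The back-of-the-envelope arithmetic behind those two example figures looks like this (the 60%-growth and compute-share numbers are the same illustrative assumptions as above, not measurements):

```python
# Project the 7660 die area from the 7560's 7.15 mm x 7.98 mm (~57 mm^2)
# die, assuming only the compute portion must grow ~60% to match the
# 1.6x downlink rate while the rest of the die stays fixed.
# A rough illustration, not a layout estimate.

def projected_area(base_area_mm2: float, compute_share: float,
                   speedup: float = 1.6) -> float:
    """Scale only the compute share of the die by `speedup`."""
    compute = base_area_mm2 * compute_share
    fixed = base_area_mm2 * (1.0 - compute_share)
    return fixed + compute * speedup

base = 7.15 * 7.98  # ~57.06 mm^2, per [7560die]

for share in (0.5, 0.9):
    print(f"compute share {share:.0%}: {projected_area(base, share):.0f} mm^2")
# -> compute share 50%: 74 mm^2
# -> compute share 90%: 88 mm^2
```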

As a matter of fact, one could argue here that not only may Intel not have to make the 7660 baseband a bigger chip than the 7560 used in this year's iPhones -- they may not have to design a new chip at all, instead reusing the existing 7560 silicon.

How is that possible? Intel's basebands are not "hardwired" -- they use DSPs licensed from CEVA [7360CEVA] [7480CEVA] [7560CEVA], and at least the 7560 has x86 cores [x86] -- probably used for nothing but high-level control and ILP-poor, low-overhead protocol processing functions, where a DSP or HW accelerators offer little advantage, if any at all. That means features introduced in newer releases of the 3GPP specs may not necessarily require Intel to design a new baseband to comply with them; all that's needed is a software update and sufficient performance and power headroom in the silicon.
If that headroom is there, then 1.6x higher downlink speed compared to the 7560 can come at the cost of anywhere from 60% to 4x higher power consumption at the peak rate of 1.6 Gbps, depending on how much higher a Vdd it would take to run the chip at 1.6x the clock rate to process the data.
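
Where the "60% to 4x" bracket comes from: dynamic CMOS power scales roughly as P ~ C * V² * f. A quick sketch of the two endpoints (idealized scaling, ignoring leakage -- real silicon lands somewhere in between):

```python
# Best case: the 1.6x clock needs no Vdd bump, so power scales with
# frequency alone (1.6x, i.e. "60% higher"). Worst case: Vdd has to
# scale ~linearly with frequency, so power scales as f^3 (~4x).

def power_scale(f_ratio: float, v_ratio: float) -> float:
    """Relative dynamic power: P ~ C * V^2 * f, with C held constant."""
    return f_ratio * v_ratio ** 2

best = power_scale(1.6, 1.0)   # 1.6x   -> "60% higher"
worst = power_scale(1.6, 1.6)  # 4.096x -> "~4x higher"
print(f"best case: {best:.2f}x, worst case: {worst:.2f}x")
```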

But how realistic is this opportunity? It depends on a lot of things, and one big and really crucial unknown in this regard is how strict Apple's power consumption requirements are versus what Intel actually makes -- in other words, the yield of the 7560. Not the functional yield, which may well be 100%, but the percentage of parts meeting Apple's power specs, which can be anything but. If reports of a 7560 yield of a little over 50% [7560yield] are true, then this opportunity is, well, not an opportunity at all.

And yet, despite its apparent success at winning the modem spot on the iPhone's board, Intel's years as Apple's modem supplier may be numbered, and that includes 2020, per recent speculation about Apple's use of Intel's 8161 5G modem [apple5G2020]. The same goes for any other modem vendor, for that matter: Apple has long been suspected of designing its own modem [AppleMdm], but the job offers Apple has posted over the last few weeks leave no doubt that the company not only hasn't given up on those plans but is accelerating the process -- which may well be spurred both by the ongoing litigation between Apple and Qualcomm [lawsuit] and by the alleged shortcomings of Intel's modems in the power efficiency department [apple5G2020].

After taking a closer look at the job offers I spotted a while ago, it occurred to me that their strategy for building their own modem consists of two steps: first they will design a somewhat simplified version for the Watch, which currently uses Qualcomm's baseband and transceiver, and then they will build on that foundation to design a full-blown modem for the iPhone -- one that may end up occupying not space on the board, but a spot on the Ax die.

Placing the modem on the SoC has several advantages: lower manufacturing cost and higher overall power efficiency for the phone. For an on-SoC modem, the die-size cost is less than for a discrete IC, since LTE/5G computational kernels can take advantage of existing resources -- GPU, NPU, DSP etc. -- which can be made further "modem-friendly" by extending them with special execution units to aid tasks such as Viterbi decoding, Turbo/convolutional (LTE) and LDPC/Polar (5G) encoding/decoding, and FFT/iFFT and DFT/iDFT transforms, for ultimate performance and power efficiency of the smartphone. Nice side effects of such an approach are the availability of these resources to applications and much lower latency compared to a discrete IC. After all, the Ax may already have CEVA cores [A11Ceva], so why not take some real good advantage of that when it comes to modem design?
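
To make the "computational kernel" point concrete, here is a toy illustration of one of the kernels named above -- a hard-decision Viterbi decoder for a rate-1/2, K=3 convolutional code with the textbook (7, 5) octal generators. This is purely didactic and has nothing to do with Intel's or Apple's actual silicon; it just shows the kind of trellis arithmetic a "modem-friendly" execution unit would accelerate:

```python
G = (0b111, 0b101)  # generator polynomials, octal 7 and 5
N_STATES = 4        # 2^(K-1) encoder states


def encode(bits):
    """Convolutionally encode a bit list; two output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                      # [newest, prev, oldest]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = (reg >> 1) & 0b11                   # advance shift register
    return out


def viterbi_decode(received):
    """Maximum-likelihood (Hamming-metric) decode of hard-decision bits."""
    INF = float("inf")
    pm = [0.0] + [INF] * (N_STATES - 1)             # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_pm = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):                        # try both input bits
                reg = (b << 2) | s
                ns = (reg >> 1) & 0b11
                expect = [bin(reg & g).count("1") & 1 for g in G]
                metric = pm[s] + sum(x != y for x, y in zip(expect, r))
                if metric < new_pm[ns]:             # keep the survivor path
                    new_pm[ns] = metric
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[min(range(N_STATES), key=pm.__getitem__)]


msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg + [0, 0])     # two tail bits flush the encoder to state 0
coded[5] ^= 1                    # inject a single channel bit error
print(viterbi_decode(coded)[:len(msg)] == msg)  # -> True (error corrected)
```

The add-compare-select loop in the middle is exactly the operation that dedicated execution units implement in hardware; doing it bit-serially in software, as here, is what burns the power.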

So without further ado, here is a selection of job offers Apple has been posting recently that caught my attention. Not just for the sake of supporting evidence for my opinion, but also as a handy bonus for those of you in search of a job or simply wanting to join Apple -- take a chance, as long as you happen to fit Apple's requirements, which span a very wide range: from internship, to an MS (PhD preferred) with 10+ years of experience, to, say, an MS in a specified field and 15+ years of experience with a proven track record of zero-respin tapeout success.

But why would Apple do this? Intel's modems appear to fit Apple's bill in terms of both tech specs and price, so why embark on such a huge undertaking as in-house design of an LTE/5G modem? After all, Intel is incredibly bullish about 5G, making a hell of a PR buzz out of it on their web site [Intel5G], with nearly three years of using their FPGA-based Mobile Trial Platform under their belt and demonstrated downlink speeds hovering in the 4-5 Gbit/s range on average [IntelMWC4-5], with a 5.3 Gbps peak [IntelMWC5], in a 2-in-1 device -- making Qualcomm's own 4 Gbps average and 4.5 Gbps peak [QCM4Gbps], demonstrated using a board in a glass enclosure, look pale in comparison. Going by that, Intel must be ahead of the pack in the 5G race, right?
Wrong. And I think the exuberance of the vast majority of news reports in the wake of Intel's 8160 press release three days ago is pretty wrongheaded. There are several reasons why I think the state of Intel's commercial 5G silicon (whether called XMM 8060 or 8160) is far from the message that Intel's numerous 5G-related press releases send -- put another way, several reasons why Apple is starting a 5G modem design project.

First is the recent 8160 news itself [Intel8160]: quite frankly, I don't see any reason for pulling in the time to market by over half a year other than the 8060 being a complete dodo. Seriously, read the announcement closely: the features of the 8160 that Intel boasts about in the news -- multi-mode operation and support for both sub-6 GHz and mmWave bands -- are exactly the features of the 8060 slated for a mid-2019 launch [Intel5G2019], except for the 6 Gbps DL speed, which hardly counts as an argument here, since the 8060 announcement didn't contain any speed figures. The 8060 was the modem to be used by Spreadtrum in its first Android-based high-end 5G smartphone due in 2019 [Spreadtrum], and there could well be other partners with 2019 5G deployment targets -- what happens to those plans now?

So in all probability, this "acceleration" is nothing but a delay of the 8060 due to the need for a redesign, possibly adding a few features in the process, with the 8060 going by the name 8160 henceforth. Looking at the news from another angle: if Intel is indeed capable of accelerating a 5G modem design by half a year through the power of its executives and the word of a public press release, why did they never apply the same power to accelerate development of their 10nm process?

And the actual status of their 5G silicon brings me to the second reason: Intel's 2018 MWC demo -- do you remember it? One thing that really struck me back then was the complete lack of technical detail about the demonstration [Intel5Gmwc]: no exposed internals (as opposed to Qualcomm), not even photos of the board or the chip, even blurred ones, no official power consumption figures, not a word on whether they were showing Gold Ridge (aka Goldridge) or early 8060 silicon, etc. -- nothing except boasting the truly impressive downlink speed of 4-5 Gbps sustained average. Such a data rate is indeed plausible for 4K streaming: e.g. a 4K 24bpp 24fps video in raw format requires 4.8 Gbps for the pixels alone, and a 60fps one with modest compression fits the range as well.
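
To sanity-check those figures, the raw-pixel arithmetic (3840x2160 resolution assumed for "4K"; no container, FEC or protocol overhead counted):

```python
# Raw (uncompressed) video bitrate: pixels * bits-per-pixel * frames/s.
# 4K24 lands right at the article's ~4.8 Gbps; 4K60 raw is ~11.9 Gbps,
# so ~2.5:1 compression brings it back into the 4-5 Gbps window.

def raw_gbps(width: int, height: int, bpp: int, fps: int) -> float:
    return width * height * bpp * fps / 1e9

print(f"4K24 raw: {raw_gbps(3840, 2160, 24, 24):.2f} Gbps")  # -> 4.78
print(f"4K60 raw: {raw_gbps(3840, 2160, 24, 60):.2f} Gbps")  # -> 11.94
```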

Next up is power consumption: according to Engadget, the convertible Intel showed at MWC lasts just 3 to 4 hours playing 4K video over 5G [3-4hours]. We don't know the capacity of the battery, but presuming it's within the usual 50 to 60 Wh range, average power consumption works out to 12.5-20 W, hence the modem plausibly adds somewhere around 10-15 W of power!
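
The bounds implied by Engadget's figure, spelled out (the 50-60 Wh battery is my assumption, as is the rough platform baseline subtracted to attribute power to the modem):

```python
# Average draw = battery energy / runtime. The extremes of the assumed
# 50-60 Wh battery and the reported 3-4 h runtime bracket the draw;
# subtracting a ballpark few watts for display + SoC leaves the rough
# 10-15 W attributed to the modem in the text.

def avg_draw_watts(battery_wh: float, hours: float) -> float:
    return battery_wh / hours

low = avg_draw_watts(50, 4)   # best case: big-ish battery, long runtime
high = avg_draw_watts(60, 3)  # worst case
print(f"{low} W to {high} W average draw")  # -> 12.5 W to 20.0 W
```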

For the conspiracy theorists out there, I have a good question: what if Intel's MWC demo was arranged using a recipe similar to their Computex show of a 28-core 5 GHz processor [Intel 5GHz]? In other words, an attempt to steal Qualcomm's fame -- but, unlike at Computex, not called out by anyone?
Consider this: Intel's 2-in-1 with the 4-5 Gbit/s video streaming demo was all about the 8000-series 5G modems due to debut in the second half of 2019 in hybrids as well as smartphones -- but what was in that 2-in-1 doing the 5G processing, even assuming the reported 4-5 Gbps average and 5.3 Gbps peak speeds were real? Was it really early 5G silicon, as the demonstration clearly suggested -- at least the Goldridge prototype, or possibly early 8060, since mid-2019 availability of such an innovative product demands first silicon well in advance of launch? A good conspiracy theorist shouldn't default to this natural assumption after Intel's Computex 5 GHz 28-core processor demo last summer.

5.3 Gbit/s, Intel said? I suggest listening to this post-Computex interview with an Intel engineer -- https://www.youtube.com/watch?v=ozcEel1rNKM -- and as you watch it, mentally replace Computex with MWC, AMD with Qualcomm, 5 GHz with 5 Gbit/s, and the Xeon 8180 with CEVA's FPGA dev board or a board taken straight out of Intel's own Mobile Trial Platform (provided it would fit in that 2-in-1 with low-height heatsinks on the FPGAs) -- and that's how they could have pulled it off. I briefly entertained the idea of contacting the creative person who wrote the captions for that Youtube video to produce a vid for Intel's MWC demo for this writeup, as I'm not that creative, but he was out of reach via his Youtube account.

But jokes aside, I'm not really convinced the demo was rigged in such a fashion; it's just that, given how suspiciously little information Intel provided, it's not totally groundless to suspect another Computex-like demo from them. After all, just a couple of months earlier, at CES, Intel demonstrated 4K video streaming as well, but there they made it clear they used the Mobile Trial Platform for the demo [IntelCES] -- that's the third reason why Intel's own 5G demo doesn't look like convincing proof of their success with 5G silicon.

And anyway, Intel's MWC demo doesn't even need to have been rigged in an attempt to steal Qualcomm's fame: such hideous power consumption renders Intel's 5G modem unsuitable even for 2-in-1s, much less smartphones, and Apple's recent 5G hiring spree becomes fully understandable -- that's the fourth reason.

Besides, Intel's 5G modem shown at MWC is not 5G NR compliant, requiring an FPGA for compatibility [Intel5Gnonstandard] -- count that as the fifth.
The sixth is that Intel has been suspiciously quiet about showing their 5G modem silicon -- whether Gold Ridge, which has been functional since 2017 according to Intel and reports citing Intel's representatives, or early 8060 -- to the point of not even mentioning it in press releases reporting various 5G milestones, including the most recent ones, citing use of their Mobile Trial Platform instead [no5Gchip].

The seventh is the recent leak of Apple's 5G debut coming in 2020 and using Intel's 2nd-gen commercial 5G chip, the 8161 or 8160 [apple5G2020], while reports of Apple and Intel's joint 5G work date back to 2017 [AppleIntel5G] -- which doesn't rule out that Apple may, back then, have planned to introduce 5G in the 2019 iPhones with the 8060 modem slated for mid-2019 availability.

And finally, combining the 2020 5G iPhone target with Intel's plan to have 10nm end products before Christmas 2019 -- a plan which, for the first time in my memory, didn't slip from one earnings call to the next over the last two calls -- suggests that the 8160 will be fabbed on 10nm, and Fastcompany's source states the same [Apple5G2020]. My own take on Intel's 10nm process is that it doesn't merely suffer from low yields like any other process Intel designed before -- that neverending "just a little bit more time, and we'll get the 10nm yields right" story Intel has been telling for years -- but is broken irreparably by bad design decisions [Intel10nm].

Lack of trust in Intel's ability to deliver a process technology with good PPA, on time, may be the eighth plausible reason why Apple set out to design its own 5G modem in earnest.

It's obvious Apple wants to keep moving forward, keep cultivating the image of the world's best smartphone, and not being the world's first and best when it comes to cellular tech has been the iPhone's proverbial spoonful of tar in a barrel of honey for years. And if Apple understandably regards Intel as incapable of delivering a 5G modem meeting Apple's power, performance and time targets -- and, again, what Intel announced was, in fact, mid-2019 availability in "commercial customer devices" [Intel5G2019], which would be in time even for the 2019 iPhone launch -- and further doubts their ability to deliver on schedule and on spec in 2020 with the 8161/8160 on Intel's 10nm, then it shouldn't come as a surprise that Apple finally bit the bullet and took their 5G modem destiny into their own hands.

Indeed, it's about time. Apple has everything it takes to succeed: (1) the excellent silicon engineering talent they've collected over the years, (2) reliance on cutting-edge process technology, and (3) amazing, flawless execution at a yearly cadence. Intel clearly lacks (2) and (3), and even (1) is starting to look somewhat questionable to me, to be honest.

On a side note, looking at how impressive Apple's Ax looks with each new release, I can only shake my head in disbelief at the opportunities Apple keeps losing by not introducing Ax-based notebooks. As long as there's demand for this classical form factor from Apple, they could have had a blast selling them for a few years now -- not constrained by the same SoC power limit as in the iPad, especially when running off a wall socket, and not limited to a single-die design either -- giving some very real differentiation from the iPad, and a serious tool whose power-performance combo would make Intel-based notebooks look obsolete in comparison.

And not just for the sake of selling another cool product: it would allow them to do tighter binning on power efficiency without throwing dies away, with the top crop going into iPhones, the medium into iPads, and the worst ones into notebooks.

A few remarks
Just in case anyone is curious: I'm not affiliated in any manner with any company mentioned herein, or with their reps or affiliates, nor do I have any relevant financial interests. This writeup was originally meant to be just another post on the RWT forum; then, as it kept growing almost against my will, it became an article, whose draft I pitched to a few sites and people, only to hear back either nothing, something along the lines of "Thanks, but unfortunately we don't publish articles pitched to us by outsiders. Good luck seeking a publisher," or interest in publishing it -- but alas on terms and conditions I found rather uninspiring.

After some pondering, I decided to self-publish it rather than keep it to myself. After all, I enjoyed writing it up, and if you appreciate the contents and quality, or benefit from it somehow -- be it by joining Apple's 5G+LTE modem development project while they're still evaluating candidates and most of these offers are still open, or in any other way -- I'll consider my time well spent.

Diamond Member

You know, I see all that and I can summarize it quite simply: Apple likes to control the full stack whenever it thinks it can do a better job. And when it's facing both Qualcomm's steep royalties and Intel's general struggles... well, it believes it can do a better job.

Given that it's improving its SoCs so quickly that it's now the clear performance champion in mobile, I'm wondering if Apple might be right. There would be a certain amount of poetic justice if Apple ended up outperforming Qualcomm on both modems and processors.