Intel held its first earnings call since the resignation of former CEO Brian Krzanich and reported record second-quarter revenue of $17 billion, a 15% year-over-year gain. Intel also raised its full-year guidance by $2 billion, which would mark a 10.6% annual growth rate.

Despite the record revenue, a 78% gain in profits to $5 billion, and increased guidance, the company's stock slipped 6% in after-hours trading due to slower-than-anticipated growth in its data center portfolio.

Intel's interim CEO Bob Swan conducted the call and was joined by Murthy Renduchintala, the company's chief technology officer, and Navin Shenoy, the head of the data center group. Both Murthy and Shenoy are considered top candidates for Intel's open CEO position, but the company stated that it is still evaluating both internal and external candidates.

Highlights included Intel's announcement of increased desktop PC revenue, along with growth in every other segment of the company's portfolio, and confirmation that 10nm-based systems would come to market in the second half of 2019. Intel's continued revenue growth in the PC market is important considering AMD's stellar growth rate in the same market, but Intel's figures do come with caveats.

Intel contends that its 10nm process is on schedule for the 2019 holiday season but did not elaborate on production volume. Intel's Coffee Lake launch last year was marred by shortages for several months, meaning the company was unable to meet demand, which resulted in price hikes. It's possible the 10nm rollout will follow a similar trajectory as the company brings OEM systems to market, likely with Y-series chips for laptops; the company also did not share a timeline for boxed desktop processors. Intel also said that it would continue to deliver "leadership 14nm products" over the course of next year, which gives it time to tweak the 10nm process. As such, we can expect new 14nm products to continue to come to market.

Intel also announced that it would face challenges meeting increased demand for its 14nm products in the second half of this year, and while the company didn't elaborate, this is likely a byproduct of the delayed 10nm ramp. Planning silicon production capacity is a multi-year process that involves getting production facilities and tooling in place for mass production, and Intel had planned to be in high-volume 10nm production by this point. As a result, the 10nm delay has pushed unanticipated demand for 14nm products back onto fabs that likely haven't been expanded. Intel says it will work with its customers and fabs to address the issue.

Intel's CCG (Client Computing Group) touted its increased revenue for the PC market, which comes on the back of a 10% increase in average selling prices for gaming and enthusiast chips. Intel's product mix has shifted to more expensive models, which helps offset lower unit sales. Desktop PC revenue grew 6%, but the company delivered 9% fewer processors than in the same quarter of 2017, and unit sales are down 8% for the year. The desktop PC market has stabilized, and Intel says the PC market is poised for its first growth since 2011, so it is easy to assume that some of Intel's lost processor sales come at the hands of AMD. Intel continued to grow its notebook sales, which are up 3% for the year, but AMD reported that sales of its Ryzen Mobile processors have doubled, setting the stage for more competition in the future.
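As a back-of-envelope check (illustrative arithmetic only, using just the two desktop figures quoted above), revenue growing while unit shipments fall implies a sizeable jump in blended average selling price, consistent with the mix shift toward pricier models:

```python
# Illustrative sketch: how desktop revenue can rise while unit shipments fall.
# Only the two quoted percentages are from the article; the rest is arithmetic.
revenue_growth = 0.06   # desktop revenue up 6% year over year
unit_growth = -0.09     # desktop processor shipments down 9% year over year

# revenue = units * average selling price, so the blended ASP change is:
implied_asp_change = (1 + revenue_growth) / (1 + unit_growth) - 1
print(f"Implied blended desktop ASP change: {implied_asp_change:+.1%}")
```

The implied blended figure comes out well above the 10% ASP gain Intel quoted for gaming and enthusiast chips specifically, which is what a shift toward the high end of the stack would look like.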

Intel's data center business revenue grew 27% to $5.5 billion during the quarter but missed expectations by 2%. That's especially concerning given that Intel had already revised its projection lower for the year due to "tougher competition in the second half," which is likely a reference to AMD's EPYC. The pricey Purley Xeon generation is gathering steam as data centers deploy new systems and refresh older ones, so average selling prices rose 14% year over year.

Intel has spent the last year transitioning to "data-centric" businesses, which are largely composed of data center products. This comes as the company reduces its reliance on the bread-and-butter PC segment. The data center group contributed 49% of Intel's revenue in Q2, a three-point increase. Intel is working to shrink the traditionally long gap between new process nodes arriving for the client segment and for data center processors. As such, Intel will quickly follow the 10nm desktop models with new 10nm Xeons, though no firm date was given. Intel's goal is to ramp both segments simultaneously in the future.

Intel cited recent tax cuts as one of the key factors in increased data center spending by its customers. Intel also benefited from tax cuts and is now paying a much lower rate, which helped boost profit. Margins remained flat at an impressive 63%.

Intel's faster ramp of 10nm data center products is going to be a critical component in fending off AMD's 7nm EPYC Rome processors that arrive early next year. Even with a shorter gap between Intel's desktop and data center processors, AMD has a relatively large window it can exploit with 7nm processors. Recently leaked roadmaps imply that Intel won't have 10nm server products on the market until mid-2020, but company representatives sounded more bullish on the roadmap during the call. Intel is already focusing on applying the lessons it learned with the 10nm delays to its future 7nm node. Intel's Murthy said "we're focusing on an optimum balance point between density, power and performance, and schedule predictability. So I think what you'll see is a more balanced approach across those three vectors."

rhysiam

Quote:

Intel's faster ramp of 10nm data center products is going to be a critical component in fending off AMD's 7nm EPYC Rome processors that arrive early next year. Even with a shorter gap between Intel's desktop and data center processors, AMD has a relatively large window it can exploit with 7nm processors.

This is all assuming, too, that Intel's first generation 10nm is actually tangibly better than the now extremely mature 14nm process. We saw a few years ago that Intel's first commercially available 14nm CPUs were inferior in raw clock speeds and power efficiency to products on the mature 22nm node. Even a year later the 14nm Broadwell-E chips couldn't clock as high as the 22nm Haswell-E CPUs with matching core counts. That latter example likely has more to do with heat dissipation and density than raw silicon performance/efficiency. Nevertheless, the point remains that it wasn't until Skylake that we actually saw meaningful node-related efficiency and clock speed improvements in consumer products.

There seems to be this assumption (not necessarily from Toms - more referring to the comment-sphere here) that when Intel finally releases 10nm we'll see meaningful performance improvements from Intel and they'll be ready to "compete" again. That's far from a given though! The 14nm process is so mature now and significantly better than it was at release. If 10nm follows a similar trajectory we shouldn't be at all surprised if we have to wait until 2nd generation 10nm before we see parts that have equivalent (let alone better!) performance characteristics to the now mature 14nm products.

Of course, the 7nm process AMD is relying on is also unknown at this point, but the fabs are spruiking sizeable performance gains with 7nm. Interesting times for sure.

AgentLozen

Good analysis rhysiam.

rhysiam said:

If 10nm follows a similar trajectory we shouldn't be at all surprised if we have to wait until 2nd generation 10nm before we see parts that have equivalent (let alone better!) performance characteristics to the now mature 14nm products.

Broadwell wasn't super duper, but it was immediately followed by Skylake a few months later. You may be right that we'll need to wait for a 2nd generation of 10nm chips to see a real benefit, but it may not be far away after the first generation launches.

InvalidError

496490 said:

Broadwell wasn't super duper

Broadwell wasn't even meant to be a desktop part. It wasn't until after outrage broke out in enthusiast circles about Broadwell being a portable- and embedded-only product (a middle finger to 90-series board owners who were expecting something more than Haswell Refresh to put in there) that Intel announced a very limited selection of socketed variants with nearly nonexistent availability through most of its market life.

Cannonlake appears to be in a very similar situation: delayed multiple times, starts shipping to portable and embedded device manufacturers over a year ahead of any probable consumer launch and a hypothetical launch date on a collision course with the next-gen products beyond it.

If AMD's foundry partners meet performance targets with 7nm and Ryzen 3000, things are going to get real awkward for Intel.

jeremyj_83

"Recently leaked roadmaps imply that Intel won't have 10nm server products on the market until mid-2020"

The biggest issue for Intel is that by then AMD is supposed to be on the Zen 3 core. AMD has stated that the Rome processor was designed to compete with Intel's Ice Lake (new architecture with higher IPC), not Coffee Lake or Cannon Lake (current architecture). That means that the successor to Rome will be going against Ice Lake, and AMD is shooting for 10-15% increases in performance each generation. We have already seen that with Zen+ AMD was able to increase performance by 10% via a 3% IPC boost and other enhancements to clocks. With the 3% IPC boost, that means Zen+ only has a 2-7% lower IPC than Intel, and Zen 2 is looking at a 10-15% IPC boost over Zen+. Needless to say, this is going to be an interesting couple of years for the consumer.

Patrick_1966

As with the 14nm cores, it will take more than a year for this to be resolved into products you can buy off the shelf at Best Buy. OEMs are just getting samples now; they still need to design, test, and manufacture at scale. So expect 10nm to become commonly available around late 2020 or spring 2021.

jimmysmitty

125865 said:

496490 said:

Broadwell wasn't super duper

Broadwell wasn't even meant to be a desktop part. It wasn't until after outrage broke out in enthusiast circles about Broadwell being a portable- and embedded-only product (a middle finger to 90-series board owners who were expecting something more than Haswell Refresh to put in there) that Intel announced a very limited selection of socketed variants with nearly nonexistent availability through most of its market life.
Cannonlake appears to be in a very similar situation: delayed multiple times, starts shipping to portable and embedded device manufacturers over a year ahead of any probable consumer launch and a hypothetical launch date on a collision course with the next-gen products beyond it.
If AMD's foundry partners meet performance targets with 7nm and Ryzen 3000, things are going to get real awkward for Intel.

That's the real question though. Many times companies have promised performance gains only for it to be nothing really, or possibly worse. We will have to wait for both to see the true performance gains.

Normally die shrinks don't increase performance in and of themselves. Normally it's clock speeds that can go up.

Guess we will see how the next year unfolds. I am more interested to see if Intel continues on the same path or plans to push a new uArch out, since Core seems to be hitting a pretty thick performance wall. It's been a good run but maybe it's time to move past it.

InvalidError

149725 said:

Normally die shrinks don't increase performance in and of themselves. Normally it's clock speeds that can go up.

The main reason we aren't seeing substantial frequency bumps with die shrinks anymore is that CPU designers are choosing to use the faster transistors to cram more logic between synchronous latches instead of ratcheting clock frequencies at any cost. Doing more work per clock cycle is more power-efficient, clock frequencies get whatever bump leftover timing margins can afford. No generation transition shows this better than Prescott to Core where Core made on a more mature 65nm process than Prescott clocking ~800MHz lower but still destroying Prescott on performance while using significantly less power.

More work per clock is where everyone's focus is, because bumping clock frequencies at any cost has been a monumental failure for everyone who's tried it.
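The tradeoff described above can be sketched with the classic dynamic-power relation P ≈ C·V²·f. All the numbers below are made up for illustration; they are not measurements of any real CPU:

```python
# Dynamic power scales as P ~ C * V^2 * f. A "wider" core does more work
# per clock (higher effective switched capacitance C) at a lower clock
# and voltage. Values are purely illustrative.
def dynamic_power(c_eff, volts, freq_ghz):
    return c_eff * volts**2 * freq_ghz

# High-clock design: needs extra voltage to close timing at 4.5 GHz.
p_fast = dynamic_power(c_eff=1.0, volts=1.3, freq_ghz=4.5)
# Wider design: ~40% more logic per cycle, but only 3.2 GHz at 1.0 V.
p_wide = dynamic_power(c_eff=1.4, volts=1.0, freq_ghz=3.2)

# Throughput ~ (work per clock) * frequency:
throughput_fast = 1.0 * 4.5
throughput_wide = 1.4 * 3.2
print(f"power: {p_fast:.2f} vs {p_wide:.2f}")
print(f"throughput: {throughput_fast:.2f} vs {throughput_wide:.2f}")
```

With these invented figures, the wider design delivers essentially the same throughput at roughly 40% less power, which is the Prescott-to-Core story in miniature.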

mlee 2500

That's an excellent point I hadn't considered, but you may absolutely be right.

Interestingly, even today, cores found on the very mature 14nm process you mention are only ~30% faster than those found on 22nm silicon from 2012. Sure, there are a couple more of those cores squeezed onto the chip, but for most desktop users that doesn't translate into noticeable value.

I'd hoped Cannon Lake would finally represent the *truly* generational leap in per-core performance that would make upgrading even Ivy Bridge era products worthwhile, but first-gen 10nm may still not be it.

bit_user

125865 said:

149725 said:

Normally die shrinks don't increase performance in and of themselves. Normally it's clock speeds that can go up.

And what makes those higher clock speeds possible? Shorter traces that reduce wire propagation delays, smaller transistors with smaller gate charge for faster switching, lower dynamic power draw from reduced parasitic capacitance and switching losses, etc. All affected quite significantly by process shrinks.
The main reason we aren't seeing substantial frequency bumps with die shrinks anymore is that CPU designers are choosing to use the faster transistors to cram more logic between synchronous latches instead of ratcheting clock frequencies at any cost. Doing more work per clock cycle is more power-efficient, clock frequencies get whatever bump leftover timing margins can afford.

Isn't leakage now supposed to be getting worse with each new generation? Is it conceivable that smaller nodes could even lose ground on power efficiency?

InvalidError

328798 said:

Isn't leakage now supposed to be getting worse with each new generation? Is it conceivable that smaller nodes could even lose ground on power efficiency?

Conventional leakage has been a concern for a long time already and much of it gets offset by lower voltages.

What is new is quantum physics becoming a concern. For example, enough insulation to limit leakage through insulation (conventional electrical leakage from imperfect insulation) isn't good enough when the probability function of electrons "teleporting" through the insulation (quantum tunneling) increases from distances getting smaller. Chip makers will either need to find a way to use materials that are less susceptible to tunneling or a way to exploit tunneling and other quantum effects that are undesirable in conventional circuit design.

While leakage and tunneling may be similar in that they both increase static power by letting some current pass without doing any useful work, I suspect quantum tunneling is going to be much more difficult to solve, if at all possible.
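The scaling problem described above can be made concrete with a textbook WKB estimate, T ≈ exp(−2κd) with κ = √(2mΦ)/ħ, for a rectangular barrier. The barrier height and widths below are illustrative round numbers, not data for any actual gate oxide:

```python
import math

# Rough WKB estimate of electron tunneling probability T ~ exp(-2*k*d)
# through a rectangular barrier. Numbers are illustrative, not a device model.
HBAR = 1.055e-34      # reduced Planck constant, J*s
M_E = 9.11e-31        # electron mass, kg
EV = 1.602e-19        # joules per electron-volt

def tunneling_probability(barrier_ev, width_nm):
    k = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * k * width_nm * 1e-9)

# Halving an oxide-like 3 eV barrier from 1 nm to 0.5 nm:
t_thick = tunneling_probability(3.0, 1.0)
t_thin = tunneling_probability(3.0, 0.5)
print(f"~{t_thin / t_thick:.0f}x more tunneling through the thinner barrier")
```

The exponential dependence on thickness is the point: shaving fractions of a nanometer off an insulating layer multiplies the tunneling current by thousands, which is why "just add more insulation" stops being an option as features shrink.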

bit_user

125865 said:

328798 said:

Isn't leakage now supposed to be getting worse with each new generation? Is it conceivable that smaller nodes could even lose ground on power efficiency?

Conventional leakage has been a concern for a long time already and much of it gets offset by lower voltages.
What is new is quantum physics becoming a concern. ...

I read some knowledgeable-sounding speculation that chips would soon reach a point where it'd get so bad that much of the chip would have to sit idle (powered off?). While that might not be too bad for CPUs, which are often IPC-limited, it could be a bigger issue for chips like GPUs, which are architected to minimize blocking.

InvalidError

328798 said:

I read some knowledgeable-sounding speculation that chips would soon reach a point where it'd get so bad that much of the chip would have to sit idle (powered off?). While that might not be too bad for CPUs, which are often IPC-limited, it could be a bigger issue for chips like GPUs, which are architected to minimize blocking.

If you reach a point where you have to power down most of the chip to keep leakage+tunneling power manageable, then you may as well either pare down the chip or back off on the die shrinking - there is no point in making bigger chips on a smaller process if you can't actually use most of it.

As for GPUs being "designed to minimize blocking", this is a silly statement: GPUs have EMBRACED the fact that their individual threads will inevitably get starved for data a significant portion of the time. Unlike desktop CPUs which devote a large amount of resources to extracting all possible ILP out of one or two instruction streams per core because typical desktop code has limited threadability, GPUs leverage the fact that they are intended for embarrassingly parallel tasks to cover individual thread stalls by increasing the thread count per core so shader units almost always have other threads with eligible instructions to work on. If typical desktop software was heavily threaded by nature, it would be much simpler and more power-efficient for AMD and Intel to do 8-SMT (ex.: 4C32T) than deep out-of-order speculative execution to keep most of each core's execution ports busy as they do in some HPC and server CPUs. (Ex.: Knight's Landing, UltraSparc Tx and POWER7.)

Software is the main thing CPUs are 'blocking' on. A huge chunk of everyday desktop stuff simply doesn't multi-thread well.
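The point above about covering stalls with more resident threads can be sketched with a toy occupancy model. The cycle counts are arbitrary illustrative values, not real GPU latencies:

```python
# Toy model of GPU-style latency hiding: each thread alternates a short
# compute burst (work_cycles) with a long memory stall (stall_cycles).
# With enough resident threads, the execution unit always has a runnable
# thread. All cycle counts are invented for illustration.
def utilization(threads, work_cycles, stall_cycles):
    # Fraction of each stall period covered by other threads' work.
    return min(1.0, threads * work_cycles / (work_cycles + stall_cycles))

stall, work = 400, 20   # long memory latency vs short compute burst
for t in (1, 4, 16, 32):
    print(f"{t:2d} threads -> {utilization(t, work, stall):.0%} busy")
```

A single thread leaves the unit idle almost all the time, while a few dozen resident threads saturate it, which is exactly the design tradeoff described above: GPUs buy throughput with thread count instead of per-thread ILP extraction.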

SkyBill40

Based on things like leakage and tunneling, and unless there's a discovery made to mitigate or minimize those effects through materials or design, I'm beginning to think we're approaching the limit of die shrinks. I'm guessing it's somewhere around 5nm. Since that's just a guess and based on current tech, how low can they effectively go? While I know AMD has already moved onto their 7nm node and Intel will likely do the same at some point, where's the end? How much smaller can they get?

bit_user

125865 said:

As for GPUs being "designed to minimize blocking", this is a silly statement:

You're just trying to find some way of disagreeing with me, again.

I stand by what I said, but I'll reword it for you: GPUs are designed to minimize stalling the hardware, by queuing up loads of work to do.

jimmysmitty

1442759 said:

Based on things like leakage and tunneling, and unless there's a discovery made to mitigate or minimize those effects through materials or design, I'm beginning to think we're approaching the limit of die shrinks. I'm guessing it's somewhere around 5nm. Since that's just a guess and based on current tech, how low can they effectively go? While I know AMD has already moved onto their 7nm node and Intel will likely do the same at some point, where's the end? How much smaller can they get?

5nm is what most industry experts state Silicon will be limited to and Intel and others have already, for probably 10 years now or more, been researching new ideas such as Graphene, carbon nanotubes and other materials to eventually replace silicon to delve beyond 5nm.

SkyBill40

149725 said:

5nm is what most industry experts state Silicon will be limited to and Intel and others have already, for probably 10 years now or more, been researching new ideas such as Graphene, carbon nanotubes and other materials to eventually replace silicon to delve beyond 5nm.

I figured as much. There's going to be a point where the law of diminishing returns sets in, be that based on silicon, graphene, or what have you as a medium. I don't personally think going below 5nm is going to continue producing the kind of results we've seen with each shrink in process. The current limit may be 5nm based on the tech of today or the immediate future, but even with new materials, I'm thinking 3nm is the relatively fixed end point. I suppose we'll see soon enough and likely within the next 10 years.

jimmysmitty

1442759 said:

149725 said:

5nm is what most industry experts state Silicon will be limited to and Intel and others have already, for probably 10 years now or more, been researching new ideas such as Graphene, carbon nanotubes and other materials to eventually replace silicon to delve beyond 5nm.

I figured as much. There's going to be a point where the law of diminishing returns sets in, be that based on silicon, graphene, or what have you as a medium. I don't personally think going below 5nm is going to continue producing the kind of results we've seen with each shrink in process. The current limit may be 5nm based on the tech of today or the immediate future, but even with new materials, I'm thinking 3nm is the relatively fixed end point. I suppose we'll see soon enough and likely within the next 10 years.

At one point I assumed what might actually happen was transistor stacking, that we would stop shrinking and move up. Intel also has the idea of doing multiple nodes on a single die, such as the cores at say 7nm, the IMC at 14nm, and the cache/IGP at 22nm/32nm, as it would save on costs and improve efficiency since some parts see diminishing returns at smaller nodes, but I have not seen much more on that as I assume it would take a lot of work.

I think we will hit a wall that won't be broken unless they find a way to implement graphene, which looks to be vastly more promising than even carbon nanotubes.

bit_user

149725 said:

At one point I assumed what might actually happen was transistor stacking, that we would stop shrinking and move up.

Don't they already have on the order of 100 layers of material, per die? I know there's 3D wiring, to some extent, but are all the transistors still in the same plane?

Anyway, you can't stack very deep before heat dissipation becomes a problem (i.e. in terms of the W per mm^2 you're trying to dissipate). Another issue would seem to be the amount of fabrication time per wafer. Also, I wonder if nonuniformities in lower layers would probably compound to perturb higher layers, increasing the defect rate.
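To put rough numbers on the W/mm² point above (the die area and per-layer power are invented round figures, not any real product):

```python
# Back-of-envelope: stacking N logic layers in the same footprint multiplies
# the power that must escape through (roughly) the same top surface.
# Both figures below are hypothetical, chosen only for illustration.
die_area_mm2 = 150.0      # hypothetical die footprint
layer_power_w = 60.0      # hypothetical power per active logic layer

for layers in (1, 2, 4):
    density = layers * layer_power_w / die_area_mm2
    print(f"{layers} layer(s): {density:.2f} W/mm^2")
```

Doubling the layer count doubles the areal power density the cooler has to handle, which is why stacking tends to show up first in low-power structures like cache and DRAM rather than hot logic.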

jimmysmitty

328798 said:

149725 said:

At one point I assumed what might actually happen was transistor stacking, that we would stop shrinking and move up.

Don't they already have on the order of 100 layers of material, per die? I know there's 3D wiring, to some extent, but are all the transistors still in the same plane?
Anyway, you can't stack very deep before heat dissipation becomes a problem (i.e. in terms of the W per mm^2 you're trying to dissipate). Another issue would seem to be the amount of fabrication time per wafer. Also, I wonder if nonuniformities in lower layers would probably compound to perturb higher layers, increasing the defect rate.

I am sure heat is the biggest issue and I am sure plenty of companies are still "looking" into it though as there may be some level of viability to it, possibly have two layers of cores etc.

I think the only viability is really in cache or memory stacking.

mlee 2500

328798 said:

149725 said:

At one point I assumed what might actually happen was transistor stacking, that we would stop shrinking and move up.

Don't they already have on the order of 100 layers of material, per die? I know there's 3D wiring, to some extent, but are all the transistors still in the same plane?
Anyway, you can't stack very deep before heat dissipation becomes a problem (i.e. in terms of the W per mm^2 you're trying to dissipate). Another issue would seem to be the amount of fabrication time per wafer. Also, I wonder if nonuniformities in lower layers would probably compound to perturb higher layers, increasing the defect rate.

Of course none of these dimensional approaches actually solve the limitations associated with the actual substrate...they just delay the inevitable. I too wonder when we will finally see different material used (whatever happened to Gallium Arsenide?), but I imagine it's fabulously expensive to retool a fab at that level, assuming you even had a viable alternative that doesn't involve exceedingly rare, expensive, or poisonous elements.

bit_user

1791309 said:

Of course none of these dimensional approaches actually solve the limitations associated with the actual substrate...they just delay the inevitable. I too wonder when we will finally see different material used (whatever happened to Gallium Arsenide?), but I imagine it's fabulously expensive to retool a fab at that level, assuming you even had a viable alternative that doesn't involve exceedingly rare, expensive, or poisonous elements.

I imagine chips built from precisely-arranged atomic structures. To build such a chip, the first thing to do is to crank out the set of nano-machinery that will assemble it. Each chip will need an army of such nanobots.

jimmysmitty

1791309 said:

328798 said:

149725 said:

At one point I assumed what might actually happen was transistor stacking, that we would stop shrinking and move up.

Don't they already have on the order of 100 layers of material, per die? I know there's 3D wiring, to some extent, but are all the transistors still in the same plane?
Anyway, you can't stack very deep before heat dissipation becomes a problem (i.e. in terms of the W per mm^2 you're trying to dissipate). Another issue would seem to be the amount of fabrication time per wafer. Also, I wonder if nonuniformities in lower layers would probably compound to perturb higher layers, increasing the defect rate.

Of course none of these dimensional approaches actually solve the limitations associated with the actual substrate...they just delay the inevitable. I too wonder when we will finally see different material used (whatever happened to Gallium Arsenide?), but I imagine it's fabulously expensive to retool a fab at that level, assuming you even had a viable alternative that doesn't involve exceedingly rare, expensive, or poisonous elements.

Gallium is rare, and arsenide, well, it's poisonous.

mlee 2500

149725 said:

1791309 said:

328798 said:

149725 said:

At one point I assumed what might actually happen was transistor stacking, that we would stop shrinking and move up.

Don't they already have on the order of 100 layers of material, per die? I know there's 3D wiring, to some extent, but are all the transistors still in the same plane?
Anyway, you can't stack very deep before heat dissipation becomes a problem (i.e. in terms of the W per mm^2 you're trying to dissipate). Another issue would seem to be the amount of fabrication time per wafer. Also, I wonder if nonuniformities in lower layers would probably compound to perturb higher layers, increasing the defect rate.

Of course none of these dimensional approaches actually solve the limitations associated with the actual substrate...they just delay the inevitable. I too wonder when we will finally see different material used (whatever happened to Gallium Arsenide?), but I imagine it's fabulously expensive to retool a fab at that level, assuming you even had a viable alternative that doesn't involve exceedingly rare, expensive, or poisonous elements.

Gallium is rare, and arsenide, well, it's poisonous.

Yeah, a quick Google shows that it's also expensive, on the order of thousands of dollars more per wafer than sand. Looks like that makes it the purview of niche or military applications.

InvalidError

1791309 said:

Yeah, a quick Google shows that it's also expensive, on the order of thousands of dollars more per wafer than sand. Looks like that makes it the purview of niche or military applications.

GaAs has been used in many consumer goods for things like microwave RF amplifiers. We're talking tiny chips with die areas in the single digit sqmm here, not the 100+sqmm of a typical consumer CPU.

Another reason why GaAs isn't particularly popular in consumer electronics is that GaAs tends to have higher leakage current than silicon, which would be bad for power efficiency and cooling.

Also, where mass-manufacturing is concerned, silicon wafers are available in sizes up to 450mm diameter while GaAs only goes up to something like 200mm. You'd end up with much higher wafer handling overheads and wafer boundary losses.

At any rate, whatever might come after silicon will still be butting heads against quantum mechanics at sub-10nm and I suspect that's what ultimately will decide whether silicon will ever get replaced by anything else as the default substrate of choice for bulk logic circuitry. If new materials can't mitigate undesirable quantum effects better and/or cheaper than what can be achieved on silicon, then silicon won't be going anywhere.