Regular

Personally I think at some point it will make more sense to go multi-layer CPU instead of going for finer processes, just like they did for flash.
How challenging (economical) this would be from a production point of view I'm not quite sure.


The issue isn't economics, it's that the performance of the upper layer of transistors is utter crap, roughly on the level of what was common 30 years ago. The reason for this is that deposited-and-implanted silicon has terrible performance compared to the perfect crystal of the substrate. For flash, this isn't much of a problem -- no one cares that much about transistor performance in their flash memory. For logic it's just a complete non-starter.

In order for 3D-integrated features (as opposed to single features with 3D elements, or multiple planar dies stacked together) to become feasible, we need to move from silicon to some other material that can be implanted better.
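To put a rough number on "30 years ago": MOSFET drive current scales roughly linearly with carrier mobility at fixed geometry and voltage, and silicon deposited on top of an existing layer (poly/recrystallized) has far lower mobility than the substrate crystal. A toy comparison, using rough textbook mobility figures (not vendor data, and not from this thread):

```python
# Toy comparison of MOSFET drive current, which scales roughly linearly with
# carrier mobility at fixed geometry and voltage (simple long-channel model).
# Mobility values are rough textbook figures, purely for illustration.

MU_CRYSTAL = 1400.0   # cm^2/(V*s), electron mobility in bulk crystalline Si
MU_POLY    = 50.0     # cm^2/(V*s), typical poly/recrystallized-Si thin-film device

def relative_drive(mu, mu_ref=MU_CRYSTAL):
    """Drive current relative to a transistor built on the pristine substrate."""
    return mu / mu_ref

print(f"Upper-layer device delivers ~{relative_drive(MU_POLY):.0%} "
      f"of the substrate device's drive current")
# A ~28x mobility penalty sets switching speed back decades, which is why
# flash (density-bound) tolerates it but logic (speed-bound) cannot.
```

The exact ratio depends heavily on the deposition/anneal process, but the order-of-magnitude gap is the point being made above.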

Legend

Personally I think at some point it will make more sense to go multi-layer CPU instead of going for finer processes, just like they did for flash.
How challenging (economical) this would be from a production point of view I'm not quite sure.


Not a good idea, considering that the power consumption (and thus heat) of CPU designs is orders of magnitude higher than that of flash or even DRAM -- NAND transistors are accessed orders of magnitude less often per second than CPU transistors.

How are you going to dissipate all of that heat when each layer of the CPU is insulated by other layers of CPU, which not only trap the heat but also contribute their own heat to the neighboring layers?
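The compounding effect can be sketched with a minimal 1-D thermal-resistance model. All numbers below are hypothetical, chosen only to show the shape of the problem: heat from a buried layer must cross every layer between it and the heatsink, so the deepest layer's temperature rise grows faster than linearly with layer count.

```python
# Minimal 1-D thermal model of a stacked-logic die (illustrative numbers only).
# Layer 1 sits under the heatsink; deeper layers' heat must cross the
# inter-layer bonds above them on its way out.

R_LAYER = 0.05   # K/W per bonded inter-layer interface (hypothetical)
R_SINK  = 0.20   # K/W from the top layer to ambient (hypothetical)
P_LAYER = 50.0   # W dissipated by each logic layer (hypothetical)

def hottest_layer_rise(n):
    """Temperature rise above ambient of the deepest layer in an n-layer stack."""
    rise = n * P_LAYER * R_SINK                  # all n layers' heat exits the sink
    # The interface below layer j carries the heat of the (n - j) layers under it,
    # so the deepest layer sees sum_{j=1..n-1} (n - j) * P * R = P*R*n*(n-1)/2:
    rise += P_LAYER * R_LAYER * n * (n - 1) / 2
    return rise

for n in (1, 2, 4):
    print(f"{n} layer(s): deepest layer runs {hottest_layer_rise(n):.1f} K above ambient")
```

Even with an optimistic inter-layer resistance, the quadratic term means the bottom layer of a tall stack runs far hotter than a planar die dissipating the same total power.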

Yeah, it's not coming for the first gen at GF, that's for sure, but the 2nd gen should be like your first link suggests.
AFAIK the same goes for TSMC and Samsung, and as you said, Samsung is already on their first gen (LPE and LPP are counted as one gen though, I think?)

Regular

Not a good idea, considering that the power consumption (and thus heat) of CPU designs is orders of magnitude higher than that of flash or even DRAM -- NAND transistors are accessed orders of magnitude less often per second than CPU transistors.

How are you going to dissipate all of that heat when each layer of the CPU is insulated by other layers of CPU, which not only trap the heat but also contribute their own heat to the neighboring layers?

As tunafish mentions, we need some radical changes.

Regards,
SB


What about 3D microfluidic channels for cooling?

Microfluidic cooling has existed for years; tiny microchannels etched into a metal block were used to cool the SuperMUC supercomputer. Now, a new research paper on the topic has described a method of cooling modern FPGAs by etching cooling channels directly into the silicon itself. Previous systems, like Aquasar, still relied on a metal transfer plate between the coolant flow and the CPU itself.

Here’s why that’s so significant. Modern microprocessors generate tremendous amounts of heat, but they don’t generate it evenly across the entire die. If you’re performing floating-point calculations using AVX2, it’ll be the FPU that heats up. If you’re performing integer calculations, or thrashing the cache subsystems, it generates more heat in the ALUs and L2/L3 caches, respectively. This creates localized hot spots on the die, and CPUs aren’t very good at spreading that heat out across the entire surface area of the chip. This is why Intel specifies lower turbo clocks if you’re performing AVX2-heavy calculations.
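The hot-spot argument is really about local heat flux, not total package power. A back-of-envelope sketch (all numbers hypothetical, just to show the arithmetic): concentrate a large share of the power in a small FPU region and the local flux dwarfs the die average, which is what direct-contact microchannel cooling (flux = h * dT) would have to handle.

```python
# Back-of-envelope hot-spot arithmetic. All figures are hypothetical,
# chosen only to illustrate why local flux, not total power, drives throttling.

DIE_AREA_MM2 = 150.0   # whole die area
FPU_AREA_MM2 = 8.0     # AVX2 FPU region (hypothetical)
TOTAL_W      = 95.0    # package power (hypothetical)
FPU_W        = 40.0    # power concentrated in the FPU under AVX2 load (hypothetical)

avg_flux = TOTAL_W / DIE_AREA_MM2   # W/mm^2 averaged over the die
fpu_flux = FPU_W / FPU_AREA_MM2     # W/mm^2 in the hot spot

print(f"average flux: {avg_flux:.2f} W/mm^2, FPU hot spot: {fpu_flux:.2f} W/mm^2")
print(f"hot spot runs {fpu_flux / avg_flux:.1f}x the die average")

# For coolant in direct contact with the silicon (q = h * dT), the convection
# coefficient needed to hold the hot spot to a 30 K local rise:
DT = 30.0                            # K, allowed local rise (hypothetical)
h_needed = (fpu_flux * 1e6) / DT     # W/mm^2 -> W/m^2, then divide by dT
print(f"required h over the hot spot: {h_needed:.0f} W/(m^2*K)")
```

Coefficients in the 10^5 W/(m^2*K) range are exactly the regime where etched-in-silicon microchannels are claimed to beat a metal transfer plate, which is why removing that plate matters.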

Legend

Microfluidic cooling has existed for years; tiny microchannels etched into a metal block were used to cool the SuperMUC supercomputer. Now, a new research paper on the topic has described a method of cooling modern FPGAs by etching cooling channels directly into the silicon itself. Previous systems, like Aquasar, still relied on a metal transfer plate between the coolant flow and the CPU itself.

Here’s why that’s so significant. Modern microprocessors generate tremendous amounts of heat, but they don’t generate it evenly across the entire die. If you’re performing floating-point calculations using AVX2, it’ll be the FPU that heats up. If you’re performing integer calculations, or thrashing the cache subsystems, it generates more heat in the ALUs and L2/L3 caches, respectively. This creates localized hot spots on the die, and CPUs aren’t very good at spreading that heat out across the entire surface area of the chip. This is why Intel specifies lower turbo clocks if you’re performing AVX2-heavy calculations.


Sure, but all of that increases cost. That's fine when we're talking about server implementations and even specialized professional applications but isn't nearly as applicable for consumer applications.

Not only would the cost of the CPU increase due to the added manufacturing complexity, but the cost of integrating it into a system goes up considerably as well.

VeteranSubscriber

Meh, in 2017 everyone would've said Intel was still ahead in manufacturing.
To even be talking about them being slightly behind, that's quite the change.


Change in reality? Or the change in the other thing?

A hugely complex multi-dimensional optimisation problem gets reduced to a single number (mostly by the marketing dept). People judge reality based on that single number and come to conclusions. Well, maybe that hasn't changed.

Newcomer

Of course it's just a marketing term; it's been said enough times that Intel's 10nm roughly equals TSMC's 7nm. If Zen 2 is released on 7nm in 2019 and is clearly superior to Ryzen or Coffee Lake on 14nm, and Intel is still trying to sort out its 10nm process, which CPU are customers going to buy?

If true, we should expect further delays and even more process rebrandings.

My opinion is that it would fit much of the available evidence, given current Intel PR (reorganization of the manufacturing unit, anaemic confidence in 10nm parts) and execution (14nm++ is less dense than their previous 14nm-class process; current manufacturing shortages of 14nm parts).

Veteran

When someone else has higher-performing chips, then I'll start to think there's a point to this narrative. Till then, meh... I'm actually very happy AMD is challenging Intel a bit, but they still have nothing for the top end. They're focusing more on value, cores/$ and performance per $, instead of absolute performance. Still, it has already been a great boon to consumers now that high-core-count chips are out at a reasonable price.

VeteranRegularSubscriber

Not sure why you tie this to the performance of chips. Performance is heavily influenced by architecture too.

I think we will know if we see more delays. In H1 2020 we were supposed to see consumer Cannonlake in significant volume, right?
If we do get that, then either Charlie was flat-out wrong or he grossly overestimated some incremental (planned or not) changes to the process.

About Us

Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!