AMD has announced via its Twitter feed that the Vega die shrink from the current 14 nm process down to 7 nm has materialized as an actual hardware product that can be tested and vetted in its labs. Via a teaser image, the company said that a "7nm @RadeonInstinct product for machine learning is running in our labs."

Of course, working silicon is only half the battle - considerations such as yields, leakage, and others are all demons that must be worked out before actual production silicon, which may thus be some months off. Only AMD and TSMC themselves know how the actual production run went - and the performance and power efficiency that can be expected from this design (remember that AMD CEO Lisa Su herself said the company would partner with both TSMC and GlobalFoundries for the 7 nm push, though it seems TSMC may be pulling ahead in that field). Considering AMD's timeline for the die-shrunk 7 nm Vega - with a predicted product launch in 2H 2018 - the fact that working silicon is being sampled right now is definitely good news.

They would make more money from Nvidia, so of course they have 7 nm. Just like how they had all the HBM2 even though they were late to the HBM party; I'm pretty sure they even backed Micron, yet Samsung gave them the HBM supply they needed.

Say what? AMD has its console chips made at TSMC, and Nvidia also has a few products made at Samsung.
The idea that TSMC makes more money from Nvidia than from AMD is highly suspect. Apple is the only one with priority access, for many different reasons.
Yeah right, so there were no Intel, Google, or Xilinx gobbling up tonnes of HBM2 either? Not to mention the volume of consumer Vega is anywhere between 2x and 5x that of the Tesla cards with HBM2 - and miners, too!

That's a dual-slot card with a long cover. Bearing in mind that the die will be smaller and the memory sits on the same package, that's a lot of space under the hood. It could be water-cooled, or there could be a fan with an intake at the rear. It will not be passive.

EDIT: it's also a render. The real card in their lab probably looks like Frankenstein.

For this HPC/AI market, all AMD has said is that they have something running - and, correct, it at best looks like the "Bride of Frankenstein". AMD is on track for engineering samples - a half-ready product - sometime later, but supposedly within this year. That kind of says to me that the throughput and power of the TSMC parts are in line with what they hoped to get. At this point, "hearing" anything about 7 nm consumer parts being in the works should probably be tempered until a year from now.

It depends how many cards per rack. It's also machine learning, which is more nuanced than sheer number-crunching power. Good chance that when it's in a rack it will have cooling on board. My brother makes HPC FPGA cards that use active (not passive) cooling.

You couldn't fit a fan on the card with more cooling capacity than you'll find in your average rack, I'd wager. You can employ various strategies to cool a rack, but you'd most likely end up with worse results if you tried to cool each card individually.

For large-scale compute ASICs it really doesn't depend on the count per rack. AMD, nVidia, and Intel all use much larger heatsinks under a plain shroud rather than a smaller heatsink with an integrated fan. Fin size and density matter most, all day, every day. None of the current-generation Radeon Instinct cards have integral fans - that includes the Polaris, Fiji, and Vega cards. Same goes for nVidia's Tesla series; they did away with integrated fans nearly four years ago.