Sure you have GigaRays, but can you power 32 VMs per card? Meet the Radeon Pro V340

The new Radeon Pro V340 is designed in a way we haven't seen from AMD in a while, with dual Vega 56 GPUs sharing the same PCB along with 32GB of HBM2. The card is not aimed at the same market segment as NVIDIA's new RTX cards; instead, AMD is talking up its virtualisation abilities. One card can support 32 VMs, which means AMD could perhaps have picked a better name, but that does not detract from this impressive ability. This could position AMD to compete effectively against NVIDIA's GeForce Now game streaming service and offer their own service to let you play games over the net, independent of your own hardware. You can check out the announcement video over at The Inquirer to see how AMD is planning on positioning themselves.

"Essentially squashing two Vega 56 graphics cards together, the Radeon Pro V340 sports 112 compute units and 7,168 stream processors. It also makes use of high-bandwidth memory totalling in 32GB of HBM2, which touts a bandwidth of 512GB/s"
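The quoted figures check out, assuming the usual GCN layout of 64 stream processors per compute unit (an assumption about the architecture, not something stated in the quote):

```python
# Sanity-check the quoted V340 specs.
# Assumes GCN's 64 stream processors per compute unit.
dies = 2            # two Vega 56 GPUs on one board
cus_per_die = 56    # Vega 56 = 56 compute units per die
sps_per_cu = 64     # standard GCN compute unit width

total_cus = dies * cus_per_die
stream_processors = total_cus * sps_per_cu

print(total_cus)          # 112, matching the quote
print(stream_processors)  # 7168, matching the quote
```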

"This could position AMD to compete effectively against NVIDIA's GeForce"

GeForce is a gaming brand; these Radeon Pro V340 cards are there to compete against the Nvidia Grid/Tesla variants, not GeForce.

Look at that photo: there are no fans to be seen, and probably not much in the way of video outputs either. It's going to be cooled by server rack fans, and it's certainly not going to be clocked with gaming in mind using that passive/indirect cooling solution. It's more suited to CAD/pro rendering tasks, with the GPU cores/nCUs able to be hardware-virtualised and shared among up to 32 VMs/clients, with that specialised hardware ASIC (MxGPU) on the GPU doing the SR-IOV work.

And the big selling point of this card is SR-IOV, an open standard that's an extension of the PCIe specification, with no expensive licensing required for Radeon Pro V340 SKUs compared to Nvidia's costly proprietary solutions.

Jeremy, why do you continue to plug the Inquirer's articles when they have no actual quality remaining in their content?

The "Now game streaming service" bit was not there when the post was made, AFAIK. Also, AMD does not currently offer such a cloud-based gaming service itself like Nvidia does, to my knowledge. So make of that what you will!

AMD did have, at around the 2013-14 date, an AMD Radeon™ Sky Series of server/gaming-oriented GPU offerings(2), but that's not in use currently as far as I can tell, and it was created under a previous AMD management team. There was an announced AMD partnership in 2017 with the LiquidSky folks, but that's a third-party service making use of Vega, and it's not widely adopted currently.

GeForce Now is a service of Nvidia, yes, but AMD is not really offering such a service currently, and AMD is more than likely not going to offer one itself. Lisa Su wants to focus on the professional markets, as that's where the hardware markups are the highest. Offering a cloud-based game streaming service is a costly undertaking from both a server hardware standpoint and a software ecosystem standpoint.

I'd think that Vega 20 in some dual-die configuration is what will more likely be made use of for cloud gaming from AMD, via some third-party provider(s). And Amazon does have its own gaming engine/SDK called Lumberyard, which has some form of Amazon Web Services hosting requirement for that gaming engine/SDK's continued free usage(1).

"As always, Amazon Lumberyard is free. Completely free. The catch is that you’ll need to use Amazon Web Services for your servers (unless you roll you own servers) if you have any online element, such as multiplayer, online leaderboards, and so forth." (1)

I'd think that it will have to wait for Vega 20 at 7nm, as Vega 20/7nm will be a very attractive price/performance and TFlops/watt leader for AMD that will probably attract enough interest from any potential cloud gaming customers.

One of the issues with Nvidia's Grid solution is that they charge a license fee for each VM (even though you already purchased the card and paid the license to use Grid on the host OS).

AMD's solution is a little different in how it works: it basically makes the GPU appear to be multiple separate PCIe devices, so you can use classic PCIe passthrough and the guest OS has no idea it is running on the same GPU as another guest.
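A minimal sketch of what that looks like on Linux, assuming an SR-IOV-capable driver: each virtual function is enabled through the device's `sriov_numvfs` knob in sysfs and then shows up as its own PCIe function that can be passed through to a guest. The PCI address below is made up for illustration; a real V340 would expose its own.

```python
# Sketch of enabling SR-IOV virtual functions (VFs) on Linux.
# Hypothetical example -- the PCI address is invented, and actually
# writing the knob needs root plus SR-IOV-capable hardware/driver.
from pathlib import Path

def sriov_numvfs_path(pci_addr: str) -> Path:
    """Sysfs knob controlling how many VFs the device exposes."""
    return Path("/sys/bus/pci/devices") / pci_addr / "sriov_numvfs"

def enable_vfs(pci_addr: str, count: int, dry_run: bool = True) -> str:
    """Request `count` VFs; each then appears in lspci as a separate
    PCIe function usable with ordinary PCIe passthrough."""
    knob = sriov_numvfs_path(pci_addr)
    if not dry_run:
        knob.write_text(str(count))  # real write: needs root
    return f"echo {count} > {knob}"

# Dry run -- just shows the equivalent shell command, touches no hardware:
print(enable_vfs("0000:3b:00.0", 16))
```

From the guest's point of view, each VF is just another PCIe device handed to it by the hypervisor, which is why the guest OS can't tell it is sharing silicon with other guests.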

If you are kitting out a load of servers to run VMs (something that is getting more and more common in the corporate space for security reasons), these cards could be a really good option (well, we need to wait for prices, of course, but I don't expect them to cost more than a V100).

Vega 20 at 7nm, with all the power efficiency and price/performance gains in addition to no expensive licensing fees, is what's really going to compete. That hardware ASIC (MxGPU)-managed SR-IOV with added security will make a big difference. Pairing these offerings with AMD's Epyc CPUs, which are relatively safer (no Meltdown, and an easily patched Spectre issue without much performance loss), will add to that level of trust among any third-party game streaming providers looking to use AMD.

It's not currently known whether AMD will increase the shader core count on Vega 20's die relative to Vega 10. But at 7nm, even if AMD does not go beyond Vega 10's 4096 shader cores per die, AMD could at least offer a dual Vega 20 card in a passive/indirect cooling design that makes use of all of each die's shader cores while still meeting the thermal headroom requirements of server rack usage.

This current Radeon Pro V340 (Vega 56 shader complement) is not making use of the full Vega 10 base die's complement of shader cores, for obvious thermal reasons. There could also be some liquid-cooled full Vega 10 shader core offerings, but that's more costly and will require more power as well. AMD is too close to Vega 20's introduction, and it's very likely that AMD is already testing some dual Vega 20 die variants for introduction on that TSMC 7nm node.

The big question is use-case: if you're going to go to the trouble of adding hardware passthrough to a physical GPU for your VMs, is a small slice of an OK-I-guess GPU going to be of any value? i.e. are there actually any virtualised GPGPU use-cases where you don't need a dedicated high-performance compute card, but a shared GPU somehow does not cut it?