Developers

We often talk about hybrid cloud business models, but virtually always in the context of traditional CPU-bound applications. What if deep learning developers and service operators could run their GPU-accelerated model training or inference services anywhere they wanted? What if they could do so without having to worry about which Nvidia graphics processing unit they were using?