Nvidia: This Could Work Out Great, Says Bernstein

Bernstein analyst Stacy Rasgon started coverage of GPU maker Nvidia with the equivalent of a Buy rating and a $165 price target, arguing that the company’s self-driving-car market is just getting started and that its A.I. market could be worth $8.74 billion.

By Tiernan Ray

May 19, 2017 2:02 a.m. ET


Bernstein’s Stacy Rasgon late Thursday initiated coverage of graphics-chip maker Nvidia (NVDA) with an Outperform rating and a $165 price target, writing that after a sixfold rise in the stock price in the past two years, “We believe the story is not yet over.”

While the company’s traditional video game market is “sustainable,” Rasgon is enthusiastic about the company’s roles in machine learning and self-driving cars.

The data-center opportunity for machine learning “is likely somewhere between ‘big’ and ‘huge,’” writes Rasgon, “driven by the mainstreaming of AI and the rise of accelerated computing.”

The automotive market “has not even gotten ‘good’ yet (still dominated by infotainment),” he writes.

Rasgon calls a comment from CEO Jen-Hsun Huang at the company’s analyst day a week ago “one of the boldest, and honest, statements we’ve ever heard senior management of a company make in public”: “This is all going to work out great, or terribly, for us because we’re all-in.”

"We're voting for great,” concludes Rasgon.

"While the stock has run, and could be volatile,” concedes Rasgon, "we believe market dynamics in the long term are still more likely to inflect upward rather than down."

Rasgon’s estimates for this fiscal year ending in January are slightly higher than consensus: $8.25 billion in revenue and $3.54 in net income per share, versus consensus of $8.23 billion and $3.10.

For 2019, he’s modeling $9.67 billion and $4.30 per share, versus consensus $9.24 billion and $3.51. And for 2020, Rasgon’s numbers go all the way to $11.43 billion and $5.26 versus $10.87 billion and $4.64.
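For readers who want to see what those estimates imply, a quick back-of-the-envelope sketch of the year-over-year growth rates (using only the revenue and per-share figures quoted above; the fiscal-year labels are for illustration):

```python
# Year-over-year growth implied by Rasgon's estimates versus consensus.
# Revenue in $B, net income per share in $, as quoted in the article.
rasgon = {"FY2018": (8.25, 3.54), "FY2019": (9.67, 4.30), "FY2020": (11.43, 5.26)}
consensus = {"FY2018": (8.23, 3.10), "FY2019": (9.24, 3.51), "FY2020": (10.87, 4.64)}

def growth(series):
    """Year-over-year growth rates, in percent, for (revenue, eps) tuples."""
    years = sorted(series)
    out = {}
    for prev, cur in zip(years, years[1:]):
        rev_g = series[cur][0] / series[prev][0] - 1
        eps_g = series[cur][1] / series[prev][1] - 1
        out[cur] = (round(rev_g * 100, 1), round(eps_g * 100, 1))
    return out

print("Rasgon:   ", growth(rasgon))     # e.g. FY2019 revenue up ~17%, EPS up ~21%
print("Consensus:", growth(consensus))
```

The comparison makes the gap concrete: Rasgon’s model assumes earnings compound faster than revenue in both out-years, while consensus back-loads most of its earnings growth into 2020.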

Rasgon sizes the GPU data-center market by gleaning clues from Nvidia’s statements, coming up with an addressable market of $3 billion to $4 billion by 2020:

At least we have a good idea of today's starting point. For example, NVIDIA has indicated that ~50-60% of their 2016 datacenter revenue went toward deep learning / AI, which puts it somewhere in the ballpark of ~$500M (up from virtually nothing a couple of years ago). Additionally, INTC offered a perspective recently, stating that they believe ~7% of 2016 servers shipped were used for AI workloads (which would be ~700K servers out of a total of ~9.9M shipped), with ~3.4% of that 7% containing a GPU (~24K systems, presumably used for training, leaving ~670K systems without a GPU, therefore likely more used for inference). Assuming the bulk of these AI-based servers are in the cloud rather than enterprise (probably not exactly true but close enough for our purposes), this suggests that perhaps 16% of cloud servers deployed in 2016 were used for AI-type workloads (which doesn't seem unreasonable to us) […] Playing with some numbers that we believe are reasonable (say, 20% of cloud servers used for AI, 10% of those for training, half that penetration for enterprise, and just a bit of inference moving to GPUs), it is not hard to get GPU datacenter TAMs on the order of $3-4B+, or even more, in a few years.
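The server arithmetic in that passage can be checked directly. A small sketch, using only the figures Intel and Nvidia are quoted as giving (the ~$500M deep-learning figure also implies total 2016 data-center revenue at the stated 50-60% mix):

```python
# Sanity-check the server math quoted from Intel's perspective.
total_servers_2016 = 9.9e6   # total servers shipped in 2016 (per INTC, as quoted)
ai_share = 0.07              # ~7% used for AI workloads
gpu_share_of_ai = 0.034      # ~3.4% of those contained a GPU

ai_servers = total_servers_2016 * ai_share       # ~693K, "~700K" in the note
gpu_systems = ai_servers * gpu_share_of_ai       # ~24K, presumably used for training
non_gpu_systems = ai_servers - gpu_systems       # ~670K, likely used for inference

# Nvidia's side: ~50-60% of 2016 datacenter revenue went to deep learning / AI,
# pegged at roughly $500M -- which implies total datacenter revenue of:
dl_revenue = 500e6
implied_dc_revenue = (dl_revenue / 0.60, dl_revenue / 0.50)   # roughly $833M-$1.0B

print(f"AI servers: ~{ai_servers/1e3:.0f}K, with GPU: ~{gpu_systems/1e3:.0f}K, "
      f"without: ~{non_gpu_systems/1e3:.0f}K")
```

The numbers line up with the quoted ballparks, which is the point of the exercise: the $3-4B+ TAM comes from scaling these 2016 shares up (20% of cloud servers on AI, 10% of those on training, and so on), not from any single disclosed figure.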

What if it’s bigger? Rasgon also assembles a more ambitious model, one that arrives at a market of $8.74 billion by 2020.

Among interesting tidbits, Rasgon likes the company’s programming environment for its GPUs, “CUDA”:

NVIDIA's CUDA is a hugely important differentiator for NVIDIA in the datacenter. In addition to using proprietary CUDA software to make GPUs more programmable, perhaps the greatest source of sustainability for NVIDIA is the libraries they have built on top of the language – NVIDIA has worked closely with end users to further build out its library and tool sets across many different verticals and needs. Thus, on the deep learning side, NVIDIA appears to have a substantial advantage over the only other GPU provider, AMD (who by their own admission is late to the party). It is generally viewed as faster, better supported via extensive libraries / tools, and generally appreciated as a much more mature platform with a wider user base (vs OpenCL), though a disadvantage is its lack of portability given it only works on NVIDIA's chips. We note that CUDA appears to have much more traction within the deep learning software community and is a more attractive skillset in job seekers overall. It is the only standard supported by Google's TensorFlow and Microsoft's CNTK, and the primary one for most other deep learning frameworks.

As far as valuation, it’s not so high if one believes things will work out:

It is obviously difficult to buy a stock with a chart that looks like NVIDIA's, and the shares are scarily expensive compared to semiconductor peers. However, growth profiles like this are rare in semis too. Compared to a basket of high-tech growth stocks (possibly a better compare?) the shares are in fact not egregiously valued. While we are cognizant of high expectations (and fully admit there could be better entry points to come) we are positive on the company's drivers, are above consensus, and believe long-term upside still exists.
