Strong selling during June hit NVDA and the FANG stocks after strong performance gave these stocks outsize gains during the first half of 2017. Will follow performance to see whether investors still believe the growth story. If so, upside performance will return.

Intel's dominance of the data center is at risk as the computing industry moves to a "fourth wave," according to Jefferies & Co.'s Mark Lipacis – a wave whose poster child is Intel's rival, Nvidia.

By Tiernan Ray, Barron's, July 10, 2017 9:19 a.m. ET

Shares of Intel (INTC) are down 68 cents, or 2%, at $33.20, in early trading, after Jefferies & Co.’s Mark Lipacis cut his rating on the stock to Underperform from Hold and cut his price target to $29 from $38, warning that the company is the most endangered by a new wave of computing: parallel processing.

Lipacis makes the case that parallel processing computing, best exemplified by Nvidia (NVDA), on whose shares he has a Buy rating, is the fourth big change in computing, following mini computers, PCs, and smartphones:

There Have Been 3 Tectonic Shifts in Computing. Every 15 years, an accumulation of technical innovations translates to tectonic shifts in the computing model. In the 60s the industry shifted from Mainframes to Mini-Computers, in the early 80s it shifted to PCs, and in the late 90s it shifted to a cell phone / datacenter model. Each computing model shift brought a shift in the beneficiaries: IBM in Mainframes to DEC in MiniComputers, to INTC/ MSFT in PCs, to AAPL/Samsung/INTC/MSFT in the cell phone / datacenter model. We Believe We are at the Start of the 4th Tectonic Shift Now, to a parallel processing / IoT model, driven by lower memory costs, free data storage, improvements in parallel processing hardware and software, and improvements in AI technologies like neural networking, that make it easy to monetize all the data that is being stored.

He writes that Nvidia’s chips are gaining on Intel’s in the data center:

NVDA's datacenter business has grown 200% YY over the past 3 quarters and is now 10% the size of Intel's DCG. At the same time, INTC's server business has decelerated to 6% in 1Q17 from double digit 3-yr and 5-yr CAGRs. We model DCG revenues to decelerate to 5% in 2018 from 7% in 2017. Near term, we expect INTC to benefit from the official ramp of its Purley server platform--we note that it has already been shipping with the largest hyperscale cloud providers (Google et al).

There’s also competition from Advanced Micro Devices (AMD), Cavium (CAVM), and Xilinx (XLNX), any of whose shares he would recommend over Intel’s:

AMD has reentered the x86 server market for the first time in years, and MSFT's announcement for Windows support on ARM is a watershed, making CAVM's ARM-based ThunderX MPU a scalable alternative to Xeon in the datacenter. In FPGA, we think XLNX's SDAccel adds a capability that INTC's Programmable Solutions Group (Altera) has yet to build. Finally, we think AMD's Zen desktop MPUs chip away at INTC's core solutions.

Lipacis’s note comes a day before Intel holds a media event in Brooklyn, New York, for the unveiling of its latest server processor chip, “Purley.”

Audi announced Tuesday that its forthcoming A8 will be the first production vehicle to ship with a Level 3 self-driving feature onboard when it goes on sale next year, and now we know that Nvidia’s technology will help power the vehicle’s ‘traffic jam pilot’ autonomous capabilities. Nvidia is going to be powering a lot in the new A8, in fact – the car has six Nvidia processors driving not only traffic jam pilot, but also its infotainment system, virtual cockpit instrumentation and headrest tablets for backseat passengers on fully equipped models.

The introduction of Level 3 autonomy on the A8 means that drivers won’t have to pay attention to the road in certain conditions – specifically, when the car is traveling at 37 mph or under on a highway with a physical divider. If the vehicle meets those conditions, and local laws allow, drivers can do whatever else is legally permitted behind the wheel, and the system will let them know when it’s time to resume manual control.
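
The engagement logic described above reduces to a simple conjunction of conditions. Here is a toy sketch of that logic; the function name, inputs, and wiring are our own illustration, not Audi's actual software:

```python
# Illustrative only: a toy check of the stated engagement conditions for
# traffic jam pilot (speed at or below 37 mph, a physically divided
# highway, and permissive local law). Names here are hypothetical.

TRAFFIC_JAM_PILOT_MAX_MPH = 37

def traffic_jam_pilot_available(speed_mph: float,
                                on_divided_highway: bool,
                                local_laws_allow: bool) -> bool:
    """Return True only when every stated condition for Level 3
    hands-off operation is met; otherwise the driver stays in control."""
    return (speed_mph <= TRAFFIC_JAM_PILOT_MAX_MPH
            and on_divided_highway
            and local_laws_allow)

print(traffic_jam_pilot_available(25, True, True))   # slow traffic on a divided highway -> True
print(traffic_jam_pilot_available(55, True, True))   # too fast: driver must stay engaged -> False
```

The key design point is that any single failed condition hands control back to the driver, which is exactly the boundary that separates Level 3 from Level 4 systems.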

That’s a step beyond current highway driving assistance features like Tesla’s Autopilot, which is classified as a Level 2 system and requires the driver to pay attention and be ready to resume control at all times. But traffic jam pilot is also designed primarily for sitting in traffic, whereas Autopilot is designed for a range of speeds in highway driving scenarios.

Nvidia’s processor is the “brain” of Audi’s zFAS system, the computer that handles driver assistance onboard the A8. It takes sensor data gathered from the vehicle’s radar, camera, laser scanning and ultrasound sensors to create a fused picture of the road combining a range of different types of data. The zFAS decides how the car behaves when traffic jam pilot is engaged, processing data at a rate of 2.5 billion inputs per second.

Level 3 autonomy is somewhat controversial in the self-driving world because it allows a driver to relax their attention yet cannot handle the car’s driving operations entirely, as a Level 4 vehicle could. Audi must be very confident in the A8’s abilities with traffic jam pilot to bring this to market, and Nvidia’s tech has a lot riding on a smooth deployment once it does.

In a properly working capitalist economy, innovative companies make big bets, help create new markets, vanquish competition or at least hold it at bay, and profit from all of the hard work, cleverness, luck, and deal making that comes with supplying a good or service to demanding customers.

There is no question that Nvidia has become a textbook example of this, as it helped create and is now benefitting from the wave of accelerated computing that is crashing into the datacenters of the world. The company is on a roll, is at the very cutting edge of its technology and, as best as we can figure, has the might and the means to stay there for the foreseeable future.

Nvidia has every prospect of capturing a very large portion of the $30 billion in addressable markets that it is chasing with the Tesla accelerators in its datacenter product line – mainly traditional HPC simulation and modeling plus machine learning training and inference. It is also well on its way to fomenting new markets – virtual workstations with its GRID line, and instantaneous video transcoding and GPU-accelerated databases with its Tesla line – all of which sit outside of that $30 billion total addressable market and represent other very large opportunities.

It is no wonder that a bunch of executives from Cisco Systems have come to Nvidia – John Chambers, Cisco’s former CEO, is legendary for taking a router company that was at the beginning of the commercial Internet revolution to new heights by finding adjacencies and expanding into them. Cisco did it through a combination of internal investment and acquisition, while Nvidia is doing it mostly by finding new markets and creating them. Cisco essentially outsourced the job – sometimes quite literally with its UCS servers and Nexus switches – and it is interesting to contemplate whether Nvidia might do some acquisitions of its own to help build out its datacenter business. To have a true platform, Nvidia needs a processor and networking, but it has only $5.9 billion in cash as of the end of its second quarter of fiscal 2018, which ended on July 1. We have suggested that if IBM had any sense, it would have bought Nvidia, Mellanox Technologies, and Xilinx already. But maybe these four vendors should create a brand new company, each with proportional investment, that has all of their datacenter elements in it and call it International Business Machines, leaving all the rest of the IBM Company to be called something else.

It’s a funny thought, but the world needs a strong alternative to Intel, and even though AMD is back in the game, it is not yet making money even if it is making waves.

Before we get into the detailed numbers for the second quarter, we wanted to outline the addressable markets in the datacenter that Nvidia has identified and quantified. It all started with gaming for Nvidia, and this is the revenue stream that allowed the company to branch out into adjacencies in the datacenter, much as Microsoft jumped from the Windows desktop to Windows Server and Cisco jumped from routers to switches, collaboration software, video conferencing, and converged server-network hybrids. The gaming market was just north of $80 billion, with 2 billion gamers worldwide and 400 million PC gamers who buy rocketsled PCs with fast CPUs and GPUs; this market is expected to be around $106 billion by 2020, growing at 6 percent annually. That is not huge growth, but it is a very large business and it has the virtue of not sinking. Nvidia’s GeForce GPU graphics card business is far outgrowing the overall market, with revenues expected to grow 25 percent per year between 2016 and 2020, with average selling prices rising 12 percent and units rising 11 percent.
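
As a quick sanity check on the growth math above, an ~$80 billion market compounding at 6 percent a year lands in the neighborhood of the ~$106 billion 2020 figure after roughly five years (the five-year horizon is our assumption):

```python
# Back-of-the-envelope compound growth check on the gaming TAM figures
# cited above. The five-year compounding window is an assumption.

base = 80.0           # starting market size, in billions of dollars
rate = 0.06           # 6 percent annual growth
years = 5

projected = base * (1 + rate) ** years
print(round(projected, 1))   # roughly in line with the ~$106B 2020 estimate
```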

So far, about 14 percent of the installed base of gamer PCs is on the “Pascal” generation of GPUs that came out in 2016, another 42 percent is on the earlier “Maxwell” GPUs, and the remaining 44 percent is on older “legacy” GPUs, as Nvidia calls them. This is a huge installed base that will gradually upgrade to new iron, and with the Pascal GPUs stacking up very well against the AMD alternatives, there are good reasons why Nvidia is not rushing GeForce cards based on the current “Volta” GPUs to market. For one thing, as Nvidia co-founder and CEO Jensen Huang said during his call with Wall Street analysts, it costs about $1,000 to make the hardware in a Volta GPU card, give or take, and this is very expensive compared to prior GPU cards – all due to the many innovations that are encapsulated in the Volta GPU, from HBM2 memory from Samsung to 12 nanometer FinFET processes from Taiwan Semiconductor Manufacturing Co to the sheer beastliness of the Volta GPU itself. This Pascal upgrade cycle on GPUs for gamers and workstations, plus the regular if somewhat muted hum of the normal PC business, is an excellent foundation from which Nvidia can invest in new markets, as it has in the past.

Here is the customer mix for Nvidia’s datacenter business – which sells the Tesla and GRID accelerators as well as the DGX-1 server line and now workstations – for fiscal 2017:

It is important not to confuse customer share with revenue share or aggregate computing share. We think, based on statements that Nvidia has made in the past, that the hyperscalers are spending a lot more dough on Nvidia iron than the supercomputing centers of the world, so the pie chart above shifts a bit when you start talking about money. The important thing is that over 450 applications have been tweaked to run in the CUDA parallel computing environment and can be accelerated by GPUs, and myriad homegrown applications have been ported to CUDA as well. The Tesla-CUDA combination is a real platform in its own right, and it is at the forefront of high performance computing in its many guises.

An aside: Nvidia is now shipping DGX systems that employ the Volta GPUs, which we profiled here, and has shipped DGX iron using either Pascal or Volta GPUs to over 300 customers and has another 1,000 in the pipeline.

Here is how Nvidia sizes up the three core markets for its Tesla compute efforts:

Nvidia thinks that the amount of compute aimed at traditional HPC workloads will grow by more than 13X between 2013 and 2020, reaching 8 exaflops in aggregate by the end of that period and representing a $4 billion opportunity. One could argue that it is very tough to create exascale machines without some kind of massively parallel, energy efficient processor, because CPUs that are good at serial work burn too much energy per unit of work, so you have to use them sparingly. It takes a lot less money to build a fat server node with lots of flops, and provided the applications can be parallelized in CUDA, you can see more than an order of magnitude of node count contraction and an order of magnitude lower hardware spending by moving to a hybrid CPU-GPU cluster from a straight CPU cluster.
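
The node-count argument can be sketched with back-of-the-envelope arithmetic. The per-node throughput figures below are purely illustrative assumptions, not vendor numbers:

```python
# Illustrative node-count contraction from moving CUDA-parallelizable
# work onto fat hybrid nodes. Assumed throughputs: a CPU-only node at
# ~1 teraflops; a hybrid node at ~21 teraflops (host CPU + four GPUs
# at ~5 teraflops each). These are made-up round numbers.
import math

TARGET_TFLOPS = 10_000.0      # aggregate compute the cluster must deliver (assumed)
CPU_NODE_TFLOPS = 1.0
HYBRID_NODE_TFLOPS = 21.0

cpu_nodes = math.ceil(TARGET_TFLOPS / CPU_NODE_TFLOPS)
hybrid_nodes = math.ceil(TARGET_TFLOPS / HYBRID_NODE_TFLOPS)

# With these assumptions the hybrid cluster needs over 20X fewer nodes,
# comfortably more than the order-of-magnitude contraction cited above.
print(cpu_nodes, hybrid_nodes)
```

The catch, as the text notes, is the "provided" clause: the contraction only materializes for applications that actually parallelize well in CUDA.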

Intel knows this, and that is why the Knights family of Xeon Phi processors exist.

As you can see, the TAM for deep learning – for both training and inference – is expected to be a lot larger than the TAM for traditional HPC by 2020, according to Nvidia. And the amount of computing deployed for these areas is also expected to grow a lot faster than for HPC. By Nvidia’s reckoning, the aggregate computing sold for deep learning training will hit 55 exaflops in 2020 and will drive $11 billion in revenues, up from 1.4 exaflops in 2013 and an untold amount of revenue (but probably well under $100 million) that year. Deep learning inference, which is utterly dominated by Intel Xeon processors these days, will be an even larger market by 2020, with around 450 exaiops (that’s 450 quintillion integer operations per second of capacity, gauged using the 8-bit INT8 instructions commonly used for inference these days) and racking up $15 billion in sales. Nvidia is now demonstrating an order of magnitude in savings over CPU-only solutions using its Pascal GPUs, but it remains to be seen if it can knock the CPU out of deep learning inference.
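
For a sense of what INT8 inference means in practice, here is a minimal sketch of symmetric 8-bit quantization – the general technique of scaling floats into the signed 8-bit range and back, not any particular vendor's implementation:

```python
# Minimal sketch of symmetric INT8 quantization: floats are mapped to
# the signed range [-127, 127] with a shared scale, the heavy math runs
# on integers, and results are scaled back with bounded precision loss.

def quantize(values, scale):
    """Map floats to the int8 range [-127, 127] using a shared scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(q_values, scale):
    """Map int8 codes back to approximate float values."""
    return [q * scale for q in q_values]

weights = [0.42, -0.17, 0.88, -0.63]          # illustrative values
scale = max(abs(w) for w in weights) / 127    # symmetric scale from max magnitude

q = quantize(weights, scale)
restored = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step per value.
errors = [abs(w - r) for w, r in zip(weights, restored)]
print(max(errors) < scale)   # True
```

This is why inference, unlike training, tolerates 8-bit arithmetic so well: the per-weight error is tiny and networks are robust to it, while the integer units deliver far more operations per second per watt.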

We think that virtual workstation clusters and database acceleration represent billions of dollars more TAM for the Nvidia datacenter business, and we also think that IoT will drive some more, both at the edge of the network and in the physical datacenter itself.

“The number of applications where GPUs are valuable, from training to high-performance computing to virtual PCs to new applications like inferencing and transcoding and AI, are starting to emerge,” Huang explained on the call. “Our belief is that, number one, a GPU has to be versatile to handle the vast array of big data and data-intensive applications that are happening in the cloud because the cloud is a computer. It is not an appliance. It is not a toaster. It is not a lightbulb. It is not a microphone. The cloud has a large number of applications that are data intensive. And second, we have to be world-class at deep learning, and our GPUs have evolved into something that can be absolutely world-class like the TPU, but it has to do all of the things that a datacenter needs to do. After four generations of evolution of our GPUs, the Nvidia GPU is basically a TPU that does a lot more. We could perform deep learning applications, whether it’s in training or in inferencing now, starting with the Pascal P4 and the Volta generation. We can inference better than any known ASIC on the market that I have ever seen. And so the new generation of our GPUs is essentially a TPU that does a lot more. And we can do all the things that I just mentioned and the vast number of applications that are emerging in the cloud.”

Having said all of that, Nvidia’s second quarter was hampered a little bit by the transition to the Volta GPUs and by Intel’s delayed rollout of its “Skylake” Xeon EP processors and their “Purley” server platform. We and Wall Street alike had been thinking the datacenter business would do better than it did – we expected around $450 million based on past trends and the explosive uptake of Pascal accelerators – but the datacenter business only brought in $416 million in the second fiscal quarter.

That said, the Datacenter division saw 175 percent growth year on year and managed to grow 1.7 percent sequentially, which is no mean feat with two product transitions under way. (Three if you count the Power9 processor from IBM that Nvidia is tied to in some important ways, namely NVLink.) Nvidia does not break out Tesla and GRID sales separately, but we reckon GRID revenues were flat sequentially at around $83 million, and that the rest was Tesla units, up a smidgen sequentially at $333 million. If these estimates are close to reality, then the Tesla line alone now accounts for 15 percent of Nvidia’s revenues, and certainly a much higher proportion of its profits.
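
The split we estimate above is simple arithmetic; the flat GRID figure is our assumption, since Nvidia does not break out the numbers:

```python
# Checking the estimated Tesla/GRID split: $416M of datacenter revenue,
# less an assumed flat ~$83M of GRID, leaves ~$333M of Tesla, which
# against the quarter's $2.23B in total sales is roughly 15 percent.

datacenter = 416       # reported datacenter revenue, $M
grid = 83              # assumed flat GRID revenue, $M (our estimate)
total_revenue = 2230   # total quarterly sales, $M

tesla = datacenter - grid
share_pct = round(100 * tesla / total_revenue, 1)
print(tesla, share_pct)   # 333 14.9
```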

For the quarter, Nvidia sold just under $1.9 billion in GPU products and $333 million in Tegra CPU-GPU hybrid products, which are used in handheld gaming devices, drones, autonomous driving platforms, and other things we either don’t care about or hate here at The Next Platform. The Professional Visualization division of Nvidia, which we do care about and which sells Quadro workstation GPUs, posted $235 million in sales, up 10 percent year on year, and the core gaming business, driven by the GeForce GPU cards for PCs that make Nvidia’s ever-expanding business possible, had $1.19 billion in sales, up 52 percent. The OEM and IP business was a mixed bag: the quarterly $66 million in royalty payments from Intel is gone now, but specialized GPU sales aimed at blockchain and cryptocurrency applications came in at around $150 million (of the $251 million of OEM and IP revenue in the period). All told, the overall Nvidia business made up for what has to be called a minor slowdown in the Datacenter division (despite its growth) and the absence of Intel cash.

A lot of people think that blockchain and cryptocurrency are a fad, but they aren’t, and frankly they will be part of the platform mix; we will be looking into them in some detail without getting all gee-whiz about it.

“Cryptocurrency and blockchain is here to stay,” Huang said on the call. “The market need for it is going to grow, and over time it will become quite large. It is very clear that new currencies will come to market, and it is very clear that the GPU is just fantastic at cryptography. And as these new algorithms are being developed, the GPU is really quite ideal for it. And so this is a market that is not likely to go away anytime soon, and the only thing that we can probably expect is that there will be more currencies to come. It will come in a whole lot of different nations. It will emerge from time to time, and the GPU is really quite great for it. Our strategy is to stay very, very close to the market. We understand its dynamics really well. And we offer the coin miners a special coin-mining SKU. We know this market’s every single move and we know its dynamics.”

It is not clear if these mining SKUs are based on Pascal or Volta GPUs. Huang said that the Volta GPUs were fully ramped, but what that really means is that the Volta-based Tesla V100 accelerators used in very high end HPC and AI gear are shipping; you can’t really say that the Voltas are fully ramped until there are desktop, workstation, and Tegra units in the market. That could take some time, and it is significant that at the GPU Technology Conference back in May, Nvidia did not provide even a hint of a product roadmap. The transition to Volta could take a long time, and a lot depends on what the competition – particularly AMD and Intel – does.

We know one thing for sure. Nvidia is on track to be a $10 billion company this fiscal year, with maybe 25 percent of that coming in as net income, and that is quite a feat. Nvidia’s Datacenter division is one of the reasons it can grow that much and be a respectably profitable company. Raking in $2.23 billion in sales and bringing $638 million of that to the bottom line is the second of four quarterly steps toward getting there.

Nvidia to Play Big Role in ‘Huge’ Wal-Mart Cloud Push, Says Global Equities

Wal-Mart is building out its cloud computing network, OneOps, to one-tenth the size of Amazon's AWS, writes Trip Chowdhry of Global Equities, and Nvidia GPU chips will play a major role, he believes.

By Tiernan Ray, Barron's, Aug. 29, 2017 3:26 p.m. ET

Trip Chowdhry of the boutique firm Global Equities Research today reiterates his upbeat view of Nvidia (NVDA), after gathering details about what he expects is Wal-Mart’s (WMT) increasing usage of the company’s graphics chips, or GPUs, for machine learning.

"Within [the] next 6 months or so, Walmart is going full steam with DNN (Deep Neural Networks) and will be creating its own NVDA GPU Clusters on Walmart Cloud,” writes Chowdhry, referring to a cloud computing network the company acquired in 2013 called “OneOps.”

Walmart’s NVDA GPU Farm will be about 1/10th the size of AMZN-AWS GPU Cloud, which is huge!! Walmart NVDA GPU Clusters will run Ubuntu Linux and not Red Hat Linux. Walmart is working with Anaconda to create custom DNN (Deep Neural Network), and Jupyter Notebook. Walmart will be running a hybrid of CNN (Convolution Neural Network) and RNN (Recurrent Neural Network).

Chowdhry advises that “investors should not underestimate the Technology acumen of Walmart Software Developers,” adding, "we have seen them present at various conferences, they are as smart as Google or Facebook engineers."