Nvidia rushes to ARMs

Nvidia’s Analyst Day at its Santa Clara, CA headquarters on March 8 gave me new insight into the company and how it sees the markets.

Jen-Hsun Huang, co-founder, president and CEO, kicked off the conference by baldly stating: “Creativity matters. Productivity matters.” This shouldn’t provoke much argument these days.

Huang then outlined Nvidia’s three businesses: personal/mobile computing; design/visualization; and cloud/high performance computing (HPC). In turn there are four product lines: GeForce and Tegra cover the PC/mobile space, Quadro the design segment, and Tesla takes on HPC. The rest of his talk described three seismic shifts in the industry that shape the company's strategy.

The first is mobile computing and its demand for much more efficient devices that can still deliver a great user experience. Huang latched onto a guy in the crowd who had a full-size laptop, and used his system as an example of power consumption: his MacBook consumed 20 watts at full bore, 10 watts in normal use, and probably 2-3 watts at idle.

Huang thinks that we will see fully functional systems with milliwatt idle draws and single-digit maximum power consumption rates. His point about mobile computing is well taken, but perhaps a bit optimistic. Of course, he didn’t specify a time scale, and no one asked …

For Huang, parallel processing is the key to the mobile market - and this plays to Nvidia's strengths. Only by going parallel, he argued, can the industry meet both energy-efficiency and performance goals. Nvidia’s CPU strategy in mobile is based on the ARM processor and not the traditional Intel/AMD x86 standard.

This is an easy decision to make: billions of devices run ARM processors now and billions more are on the way. In mobile computing, ARM is the center of gravity for developers.

Seismic shift two is that Microsoft has bought into ARM. According to Huang, Microsoft doesn’t just want to be on ARM – it has to be on ARM. He’s right.

Microsoft is not in the business of building software that runs on x86 desktops or laptops: it is in the business of providing software solutions to consumers and businesses. It has to be on ARM because this is the fastest growing computing platform in the world.

64-bit elephant

This brought up a question from our buddy Nathan Brookwood, full-time chip guru and part-time curmudgeon, who asked about the 64-bit elephant in the room. ARM is 32 bits and can address only a few GB of RAM. The server and PC world has moved on to 64 bits and the ability to address billions of times more memory.
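A quick back-of-the-envelope sketch of the gap Brookwood was pointing at (my own arithmetic, not anything presented at the event): a flat 32-bit address space tops out at 4 GiB, while 64 bits raises the ceiling by a factor of 2^32.

```python
# Addressable memory of a flat 32-bit vs 64-bit address space.
# (Real 64-bit chips typically implement fewer virtual address bits,
# but the order of magnitude is the point here.)
addr_32 = 2 ** 32   # bytes addressable with 32 bits
addr_64 = 2 ** 64   # bytes addressable with 64 bits

print(addr_32 // 2 ** 30)    # 4 -- i.e. 4 GiB, the 32-bit ceiling
print(addr_64 // addr_32)    # 4294967296 -- 64 bits buys ~4.3 billion times more
```

That factor-of-four-billion is why “a few GB” stops being a quibble and starts being a wall for servers.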

Huang addressed the question head-on and slipped it at the same time. He acknowledged that ARM will have to go to 64-bit to make it in the server, PC, and even mobile worlds, long-term. He continued by saying that Nvidia isn’t announcing or saying anything about this move … but Nvidia and Microsoft both betting on ARM should give others the motivation and courage to push ARM to 64 bits.

I’m not privy to any details, and I don’t have any inside info on this topic, and will deny knowing anything about anything until my last breath … but I’d bet that Nvidia is highly interested and involved in extending ARM into 64 bit-dom. It’s important to the success of its HPC effort – critical, even. And I don’t see Nvidia as the sort of company that would just stand on the sidelines and root for someone else to do the work.

Call to ARMs

The third and last seismic shift also concerns ARM and its status as an open standard. If you want to make your own ARM variant, it’s as easy as getting a license and having TSMC or someone else fab them up. Nvidia now has a license and will be churning out its own server-optimized ARM chips in the next two years.

This is the ‘Project Denver’ initiative announced earlier this year. A chart in the Nvidia announcement presentation shows the scale of ARM production vs. x86.

The difference is stark, as are the potential implications. In 2005, there were about 1.75 billion ARM chips shipped compared to maybe 250 million x86 CPUs. In 2009? The scoreboard on the x86 side reads maybe 400 million, but ARM has grown to four billion.

That’s some serious growth and volume. Volume plus growth equals pervasiveness. (Actually, it equals ‘volume growth,’ but you get my point.) The x86 platform has volume, but not on nearly the scale that we see with ARM, and volume drives production costs and ecosystem evolution.
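Using the shipment figures above (my reading of the chart, not official numbers from either camp), the growth gap works out roughly like this:

```python
# Rough growth math from the 2005 -> 2009 shipment figures cited above.
arm_2005, arm_2009 = 1.75e9, 4.0e9   # ARM chips shipped per year
x86_2005, x86_2009 = 0.25e9, 0.40e9  # x86 CPUs shipped per year

arm_growth = arm_2009 / arm_2005     # ~2.3x over four years
x86_growth = x86_2009 / x86_2005     # ~1.6x over four years

# Compound annual growth rate across the four-year span
arm_cagr = arm_growth ** 0.25 - 1    # roughly 23% per year
x86_cagr = x86_growth ** 0.25 - 1    # roughly 12% per year
print(f"ARM: {arm_cagr:.0%}/yr, x86: {x86_cagr:.0%}/yr")
```

ARM isn't just ten times the volume - it's compounding at roughly twice the rate.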

Huang didn’t talk much about ‘Server ARM’; he mainly talked about mobile and consumer applications. But this is an interesting topic. Is Server ARM the future of computing? Is it inevitable?

I could argue that it is, and point to the way high volumes of low-cost RISC chips and workstations led to RISC-based servers, which took away a huge chunk of the industry from mainframes and minicomputers. Just a few years later, we saw servers based on low-cost x86 processors do the same thing to the RISC-based systems. Will the same thing happen with ARM? Let me know what you think in the comments section …

On with the show

The contrast between Huang and other high tech chieftains is interesting.

Most CEOs of established tech companies seem to be business people first and technologists second. Even if their formal education and experience is highly technical, they seem to change when they reach the lofty heights.

They are enthusiastic about their companies, their products, and what they do for their customers. But they don’t seem to give off that “This is soooo cool!” vibe. This doesn’t make them bad leaders or managers, and I’m not sure that it even means all that much in the long run.

But it is disconcerting when you see a chief executive give a presentation and you think: “He/she could be leading a company in a completely different industry and still be using the same words. You’d just swap out the pictures and charts.”

Watching Huang, I never get that impression. He’s a technologist first and foremost. His love of the technology and what it can do comes through loud and clear. He can speak authoritatively and accurately on technical topics ranging from the chip level all the way up to the current state of the art in seismic processing or molecular biology.

Huang went way over his allotted time at Analyst Day, fueled by his enthusiasm for the topics and questions from the gathered query of analysts. (A tech analyst runs in queries, just as wolves run in packs and lions gather in prides.)

His off-the-cuff presentation did not require much in the way of slides; just a few as a graphical backdrop or to drive home a point.

It wasn’t polished and pat; the presentation wandered and veered a bit. But I enjoyed every minute. It isn’t often that I get to see someone like Huang who is knowledgeable and enthusiastic about the technology that his company is bringing to the market, and more concerned about the substance of his presentation than form and appearance. ®