Blurred lines

The distinction between driving a display and general-purpose programming is blurring. As game visuals become more advanced, more of the code is devoted to simulating real-world physics. "The combination of simulation and visualisation is going to transform how people enjoy games," Huang says.

In the same way, designers and engineers with workstations can use GPU accelerators to render accurate simulations of their designs. NVIDIA Maximus uses two GPUs: one from the Tesla line for general-purpose computation, the other a Quadro for the display. "Now the workstation is completely changed, because it can combine the workflow of two parts of the design: the design part and the simulation part," claims Huang.

Huang is looking forward to Windows on ARM. He talks about the Asus Transformer tablet and its long battery life, and then says: "Imagine Windows on ARM on that device, and next-generation versions of that device. It's a foregone conclusion that the PC industry will be revolutionised. I'm anxious to see Windows on ARM come to market and I think Microsoft is going to be very successful with it."

There are a few clouds on NVIDIA's horizon. One is that ARM, which dominates the world of mobile CPUs, is now also designing mobile GPUs under the Mali brand. That could undermine NVIDIA's Tegra business: Tegra is an SoC (system on a chip) that combines an ARM CPU with an NVIDIA GPU. Huang does his best to dismiss Mali as having only "basic capabilities". He adds: "We have to continue to find our value-add. If we don't, then we don't have a role in the world."

Huang will not be drawn on the subject of Kepler, his company's next-generation GPU family, which appears to be delayed, though only in a notional sense, since no release date has been announced.

"We don't have the extra power-sucking silicon wasted on graphics functionality when all we want to do is compute in a power-efficient manner. Second, we can dedicate our design to being highly programmable, because we aren't a GPU: we're an x86 core, a Pentium-like core, for 'in order' power efficiency. Every algorithm that can run on GPGPUs will certainly be able to run on a MIC co-processor."

"MIC used to be a GPU," says Huang when asked about Intel's co-processor. "MIC is Larrabee 3, and Larrabee 1 was a GPU. So there is no difference, except of course that we care very much about GPU computing, and we believe this is going to be the way that high performance computing is performed."