To Jen-Hsun Huang, the smooth-talking silicon scion who presides over NVIDIA, his company shouldn't be constrained to building progressively faster GPUs for progressively better-looking videogames. With great GPU computing power come great possibilities; a hyper-efficient, transistor-packed chip shouldn't be limited to making games look better through dynamic shaders and more complex ray tracing.

One of the big ideas emphasized at this year's GTC 2012 keynote was the power of Kepler in cloud applications, and Mr. Huang demonstrated some of these applications in ways that are sure to be the iconic takeaways of the conference: Windows 7 running on an iPad; 100 nodes running on a single server; a MacBook Air running a high-end edition of the 3D animation software Maya; a shooter with current-gen graphics being played on both a television and a tablet, delivered to both platforms via the cloud.

For NVIDIA, the future of computing exists in the cloud, a place where computing will be heterogeneous and applications hardware-agnostic. Of course, parts of this concept aren't entirely new: the first instances of enterprise computing were based on the same paradigm of powerful servers and low-powered terminals. The obvious change since the early days of terminals and mainframes is the incredible advancement in processing speed that led to the modern PC era, and now, more recently, the "post-PC" era that some, most notably Apple, have been prophesying.

In an enterprise environment, the logical conclusion of the "post-PC" era is the beginning of the "bring your own device" era. The company PC, much like the company car, as Mr. Huang pointed out during his keynote, is quickly fading into obsolescence: a technological relic of the past.

The solution to the obsolescence of the PC is a powerful, scalable cloud computing system. As was demonstrated during last Tuesday's keynote, this system is the so-called VGX: a GPU for cloud computing based on NVIDIA's Kepler architecture.

"Kepler is a big deal for computer graphics, but an even bigger deal for high-performance computing; we're going to put the GPU in the cloud," Mr. Huang remarked about the VGX. "[NVIDIA VGX] is a technology that virtualizes the computing environment such that irrespective of your computing device, we can provide access to the corporate technologies and data that you need."

As explained on stage, the core of the NVIDIA VGX technology is Kepler's ability to create a virtualized GPU. Previously, GPUs had to be tied to a specific application. Now, in a Kepler world, GPUs don't have to be tied to a display or dedicated to a particular application.

The obvious enterprise application of such a GPU, life on cloud Kepler, is virtual desktop infrastructure (VDI): a change to the way that enterprise networks embrace the "post-PC" world by making personal computing nothing more than a software layer. Citrix has long been a provider of VDI solutions, but these have suffered from the ills of high latency and relatively low processing power.

With Citrix's next-generation virtual network computing application, or "receiver," running against a Kepler-based virtual GPU datacenter, it will be difficult to distinguish between using a virtual and a local computer, save for the local hardware. In a demo onstage, NVIDIA demonstrated how an iPad could easily run Windows 7 (an extraordinarily bizarre sight), and how a low-powered MacBook Air could run a high-powered animation application used on Industrial Light & Magic render farms to make changes to scenes in real time.

As NVIDIA is more commonly known for video cards that drive PC games, an obvious application of the VGX technology is gaming. NVIDIA is implementing this with the GeForce GRID, a cloud-based game-delivery system with an average latency dramatically lower than that of the competing service OnLive.

"It will now be possible for game-service operators to offer bundles of games for approximately $10 a month, similar to movie-streaming services," Mr. Huang said onstage.

NVIDIA claims a total latency in the range of 50ms: 10ms for capturing and encoding a frame, 30ms of network latency, and up to 10ms for decoding the video stream.
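The claimed budget is simple arithmetic, and it can be sanity-checked in a few lines. This is only an illustrative sketch: the stage names and millisecond figures are taken from NVIDIA's claim, while the dictionary layout and variable names are my own.

```python
# Hypothetical breakdown of NVIDIA's claimed GeForce GRID latency budget.
# Figures are the ones quoted on stage; the structure here is illustrative.
latency_budget_ms = {
    "capture_and_encode": 10,  # GPU captures and encodes the frame
    "network_round_trip": 30,  # claimed network latency
    "client_decode": 10,       # up to 10 ms to decode the video stream
}

total_ms = sum(latency_budget_ms.values())
print(total_ms)  # 50 -- matches the ~50 ms figure NVIDIA quoted
```

At roughly 50ms end to end, the pipeline sits within the input lag many console games already exhibit, which is presumably why NVIDIA compares the experience to console play.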

The only hardware gamers will require is a device capable of decoding an H.264 video stream, as NVIDIA executives explained at a post-keynote press conference.

"Where we used to render from frame buffer and copy to the CPU for compression and streaming, here it's already streaming right out of the GPU, saving encode time, not to mention copy time," Mr. Huang explained. "By compressing and streaming in parallel we've taken maybe a couple hundred milliseconds of lag, and reduced it to something that's the same performance and snappiness as a game console."

"Anything that can play YouTube can run GeForce Grid-streamed games," Mr. Huang told the crowd.

On stage, two gamers competed against each other in the MechWarrior-esque shooter Hawken: one was using an ASUS Transformer tablet, while the other had a controller connected via USB to a smart TV. NVIDIA's executives made a big deal about this, emphasizing that "the cord was the console" when the gamers took to the stage.

The advantage of the Kepler-powered GeForce GRID is that capture and compression happen on the GPU rather than the CPU, reducing latency. And on stage, latency was an apparent non-issue: the game looked as smooth as a console or PC game, save for some compression artifacts.

Of course, the downsides of life on cloud Kepler are the constraints of bandwidth and data caps. The demo on stage was powered by a datacenter 10 miles away; what happens when that datacenter is 100 miles away, on a network that constantly drops packets?

This cloud technology from NVIDIA looks promising, and Mr. Huang could certainly fashion his company into a "great power" in the technology world instead of a mere "middle power" if he executes this play properly. However, without further details on when VGX and the GeForce GRID will be available for implementation on mainstream systems, and without definite answers on how they will deal with the bottleneck of bandwidth, this is only promising technology, not a real-world solution.