Optimus does this by using monitoring software to see what you're running and, according to a pre-determined policy, switch GPUs on the fly. Checking your email? Stick with the integrated core. Fire up Far Cry, though, and the discrete GPU immediately takes over.
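In essence, the policy is a lookup: match the running application against a profile list, pick a GPU accordingly, and default to the IGP for everything else. Here's a minimal sketch of that idea in Python - the profile table and function name are ours, not Nvidia's:

```python
# Illustrative sketch of Optimus-style GPU selection by application profile.
# The profile entries and names are hypothetical, not Nvidia's actual policy data.

PROFILES = {
    "farcry.exe": "discrete",     # 3D game: hand off to the Nvidia GPU
    "outlook.exe": "integrated",  # email: stay on the power-sipping IGP
}

def pick_gpu(process_name: str) -> str:
    """Return which GPU should render this application's frames."""
    # Anything without a profile defaults to the integrated GPU,
    # which is always on and always driving the display anyway.
    return PROFILES.get(process_name, "integrated")
```

The real policy lives in driver-maintained profiles, but the shape of the decision is the same: per-application, resolved the moment the program launches.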

Without any interruption to the display.

Under Optimus, the IGP always manages the display

Here's how. Past dual-GPU solutions used a multiplexer chip to relay the feed from each GPU to the display. It was the mux chip's switch from source to source that briefly stopped the flow of data from the display engine to the screen.

Optimus pulls the mux chip right out of the picture. Instead, it uses the integrated GPU as the sole source of screen data, and since the IGP is never turned off, there's no break in what you see on the screen.

Rather than drive the display directly, Optimus uses the discrete GPU - it has to be an Nvidia part, of course - to render the image off screen on the IGP's behalf. The rendered image then travels from the discrete GPU to the IGP's frame buffer in main memory over the laptop's PCI Express bus.
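That render-then-copy path can be pictured as a two-step pipeline: the discrete GPU produces a frame in its own memory, then the finished frame is blitted over PCIe into the buffer the IGP is already scanning out. A toy model of the data flow, with all names and sizes invented for illustration:

```python
# Toy model of the Optimus render path: the discrete GPU renders a frame
# in its own memory, then the result is copied ("blitted") over PCIe into
# the IGP's main-memory frame buffer, which the display engine scans out.

FRAME_BYTES = 16  # stand-in for a real frame's size

def discrete_render() -> bytes:
    """Pretend the Nvidia GPU rendered a frame into its local VRAM."""
    return bytes(range(FRAME_BYTES))

def blit_over_pcie(src: bytes, igp_framebuffer: bytearray) -> None:
    """Copy the finished frame into the IGP's frame buffer in one transfer."""
    igp_framebuffer[:len(src)] = src

igp_framebuffer = bytearray(FRAME_BYTES)  # the IGP scans this out continuously
frame = discrete_render()
blit_over_pcie(frame, igp_framebuffer)
# The display engine never changes source, so there's nothing to see - literally.
```

The point of the model: the IGP's buffer is updated in place while the display engine keeps reading it, which is why no mux and no mode switch are needed.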

It's a trick made possible by Windows 7, which allows a GPU driver to route rendering jobs to other processors when it knows they're available. Windows 7 runs both drivers simultaneously, even though one may not be doing any work.
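In other words, both drivers stay resident and each render job is dispatched to whichever processor the policy has picked. A rough sketch of that arrangement - the classes and method names here are invented, not the Windows driver interface:

```python
# Rough sketch of two GPU drivers loaded at once, with render jobs routed
# to one or the other. Class and method names are illustrative only.

class Driver:
    def __init__(self, name: str):
        self.name = name
        self.jobs_done = 0  # how much work this driver has actually handled

    def render(self, job: str) -> str:
        self.jobs_done += 1
        return f"{job} rendered on {self.name}"

igp = Driver("integrated")
dgpu = Driver("discrete")

def submit(job: str, use_discrete: bool) -> str:
    # Both drivers are live; only the chosen one does work for this job.
    driver = dgpu if use_discrete else igp
    return driver.render(job)
```

The idle driver costs almost nothing to keep loaded - which is what lets the discrete GPU power down completely between jobs.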

The reliance on Windows 7 tech means there's no reason why Nvidia's rivals, like AMD's ATI division, can't create an Optimus of their own. Nvidia is patenting the software techniques it uses to decide which app runs on which GPU, and says it owns the technology used to blit the rendered frame out across the PCIe bus to the IGP frame buffer.

The discrete GPU is only powered up when it's needed

Haas says that the process of engaging the discrete GPU takes "just a few" CPU clock cycles. Certainly, when he demo'd the technology to Reg Hardware, the jump from IGP to GPU was entirely smooth.