There was a time when no PC could play a decent game unless it was outfitted with a discrete graphics processor. Today, most off-the-shelf desktop rigs—and nearly all notebook PCs—rely entirely on the CPU for video and graphics processing. And yet the market for discrete graphics continues to thrive. If you don’t give a flying joystick about playing AAA PC games, is a video card a worthwhile upgrade? Let’s compare the performance of integrated and discrete graphics processors to find out.

AMD and Intel have significantly improved the graphics technologies integrated into their respective CPUs. AMD’s Kaveri-class Accelerated Processing Units (APUs) incorporate the same powerful Graphics Core Next (GCN) architecture found in its best discrete Radeon-series graphics processors.

Intel has likewise updated the features and capabilities of its HD-series graphics engines, which are integrated into its fourth-generation Core processors (codenamed Haswell). They now deliver broader support for Microsoft’s DirectX 11.1 API (an application programming interface originally developed for Windows games), they can easily drive multiple displays (including 4K models), and they’re compatible with many more games.

To determine the difference discrete graphics could make, we assembled two computers. One is powered by an AMD A8-7600 (a Kaveri-class APU with an integrated Radeon R7-series graphics processor), and the other has an Intel Core i5-4670 (a Haswell-class CPU with an integrated Intel HD 4600 graphics processor). We then ran a series of benchmarks with and without a discrete video card onboard each system. You might be surprised by the results.

The argument for discrete graphics

Champions of discrete graphics usually tout one major benefit: performance. All but the lowest-end video cards have a far more powerful GPU than what you’ll find inside a CPU. What’s more, the card provides its GPU with a dedicated pool of high-speed memory. An integrated GPU, in contrast, must share both system memory and the data bus with the CPU. You can usually crank up a game’s visual quality settings with a discrete GPU, and it will still run circles around integrated graphics.

There are other benefits to using discrete graphics cards, too. With Nvidia’s current-gen graphics cards, for example, users can take advantage of proprietary features such as ShadowPlay and PhysX. ShadowPlay leverages the video-encoding engines built into Nvidia’s GPUs to record and stream live gameplay videos in real time, with negligible impact on frame rates. It’s a key feature for Nvidia’s Shield handheld gaming device.

PhysX is Nvidia’s proprietary physics-simulation technology, which can make the objects in games behave more like they would in the real world (cloth ripples and tears, liquids flow and splatter, buildings explode into small shards, and more). PhysX isn’t universally supported, but it can have tremendous visual impact in games where it is.

Video cards based on Nvidia’s GeForce GPUs support the company’s proprietary physics-simulation technology, PhysX, which lends considerable realism to games that take advantage of it.

Games aren’t the only applications that benefit from the power of a discrete GPU. AMD’s and Nvidia’s GPUs are made up of thousands of processors that can carry out multiple operations simultaneously. Any application that benefits from such parallel processing—be it an image-editing program like Photoshop, data-encryption software, or a distributed-computing project like Folding@Home or Seti@Home—will run faster with the assistance of a more powerful GPU.

Discrete GPUs can also accelerate crypto-currency mining to produce Bitcoins, Litecoins, and other virtual currencies. Miners have been gobbling up graphics cards based on AMD’s latest GPUs, because AMD’s Radeon architecture has proved more effective at the task than Intel’s Core processors and Nvidia’s GeForce technology. Where Intel’s fastest Haswell-based CPU—the Core i7-4770K—is capable of processing about 93 thousand hashes per second, AMD’s Radeon R9 290X can process about 880 thousand hashes per second.
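The size of that gap is easy to sanity-check. A minimal sketch using only the two hash rates quoted above (both are approximate figures, expressed in kilohashes per second):

```python
# Rough speedup calculation using the approximate hash rates quoted above.
cpu_khs = 93    # Intel Core i7-4770K (approximate)
gpu_khs = 880   # AMD Radeon R9 290X (approximate)

speedup = gpu_khs / cpu_khs
print(f"GPU advantage: roughly {speedup:.1f}x")
```

That works out to better than a ninefold advantage for the Radeon, which goes a long way toward explaining why miners cleaned out retailers’ stocks of AMD cards.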

The argument against discrete graphics

There are drawbacks to discrete graphics cards, with cost being the most obvious. Buying a video card at retail will set you back anywhere from $50 to $1000 or more at the extreme high end (although you can buy a very fast card for less than $400). Just yesterday, AMD announced the world's fastest video card, the Radeon R9 295X2, which packs two of the company's fastest GPUs onto a single board. It's priced at $1500.

Meanwhile, AMD and Intel essentially give away graphics with their current-gen processors (only AMD’s FX series and Intel’s Ivy Bridge-E chips don’t have integrated GPUs), and the motherboards that support those CPUs have display outputs built right in.

Nvidia’s GeForce GTX Titan Z is priced at a cool $2999, but the vast majority of gamers buy cards in the $300 to $500 range. That’s still a lot of lettuce.

A discrete graphics card also adds complexity to a system. Your motherboard must have an available PCIe x16 slot to host the card, for instance. While this isn’t usually a concern for a DIY-er, some off-the-shelf systems might not have such a slot. Or the card might not fit inside the case. Or the existing power supply might not be capable of supporting the card’s electrical requirements. All because the PC manufacturer didn’t anticipate—or just didn’t care—that the end user might want to make such an upgrade.

Installing a discrete graphics card in Intel-based systems can also complicate the use of technologies like Intel’s Quick Sync video-encoding engine. Quick Sync is linked to Intel’s integrated graphics core, and installing a discrete card might disable it. If Quick Sync is something you can’t live without, you might be able to re-enable the integrated GPU, but there’s no truly elegant way of pulling that off.

There is no free lunch, of course. Adding a discrete video card will cause your system as a whole to consume more electrical power. That card will also generate heat, which will typically need to be evacuated using a cooling fan that might add a modicum of noise to your environment (there are some passive cooling solutions for lower-end GPUs, but they tend to be more expensive than fan-based cards).

Power consumption is one of the downsides of adding a discrete graphics card to your PC.

And now for the numbers

Here’s where the rubber meets the road: We assembled two systems, the first of which had an AMD A8-7600 APU with Radeon R7-series integrated graphics in an Asus A88X-Pro motherboard. The second system featured an Intel Core i5-4670 processor with Intel HD 4600 integrated graphics in a Gigabyte Z87X-UD5 TH motherboard. Both systems were outfitted with 16GB of memory, a Samsung 840 Pro SSD for storage, and a 1000-watt Silverstone power supply. The 64-bit version of Windows 8.1 Pro was installed on both systems.

We ran a series of benchmarks—some gaming-oriented, some focused on productivity and content creation—using just the graphics processors integrated into the respective CPUs. We then installed a Radeon R9 280X video card (this particular model was from XFX) in each system and reran all the benchmarks.

As you can see from the charts (we didn’t create one for every benchmark we ran), adding a discrete graphics card improved performance nearly across the board—and it wasn’t only games that benefited. In PCMark 8, for instance, we ran the OpenCL-accelerated versions of the Home and Work suites. This API leverages all of the PC’s available compute resources, both its CPU and its GPU. Adding a discrete GPU to the equation boosted the system’s performance on this benchmark by between 3 percent and 19 percent.

Adding a discrete GPU to your PC will improve its performance with productivity apps, not just games.

Adding a GPU had very little impact on the Cinebench multi-threaded CPU benchmark scores, but it boosted the Intel-based system’s performance with the Cinebench OpenGL benchmark by a staggering 79 percent, and the AMD-based system’s performance on this benchmark by 42 percent.

With apps that are programmed to tap the compute resources a GPU can deliver, you’ll see big performance gains with a discrete video card.

People often assume that casual gamers—folks who play Farmville, Angry Birds, and other “simple” games—will see no benefit from discrete graphics. But when we added a discrete GPU to each system, we saw significant performance gains with Microsoft’s browser-based HTML5 benchmark, Fishbowl. This particular test is capped at 60 frames per second (the refresh rate of most monitors), and it hit that cap in three of the four tests we ran with the discrete graphics card installed. As casual games become more complex, so will their need for GPU horsepower.

Do you play casual games? Those based on HTML5 will benefit from having a discrete GPU in your system.

Speaking of complex games, adding a discrete GPU delivered a major shot in the arm to our test systems when it came to running BioShock Infinite (at a resolution of 1920 by 1080 pixels) and the synthetic gaming benchmark 3DMark Fire Strike.

No surprise here. Adding a video card made BioShock Infinite (at a resolution of 1920 by 1080 pixels) run considerably faster.

But there is one application where adding a discrete video card did not have a significant impact: video playback. We saw very little difference in CPU utilization while streaming YouTube videos (HTML5) and while playing video files encoded with the H.264 codec and placed inside MKV containers.

The bottom line is that nearly every desktop PC user can benefit from the addition of a discrete graphics processor. Video cards aren’t just for gamers, but the benefits for gamers far outweigh the benefits delivered to mainstream users.
