The GPU Flashback Archive: NVIDIA GeForce 200 Series and the GeForce GTX 260

This week we take a look at the NVIDIA GeForce 200 series of graphics cards. As well as rejigged product nomenclature, the 200 series represents a new and improved architectural approach from NVIDIA, who managed to come up with their largest graphics chip ever. The 200 series was the latest weapon in the fight against ATI, and one that proved fairly potent in terms of raw frame rates. Let’s take a look at the new architecture, the cards that were popular at the time with overclockers on HWBOT and, of course, some of the more notable scores that have been made since its introduction.

NVIDIA GeForce 200: Overview

We mentioned in the previous GeForce 9 series article how this period of history shows plenty of overlap in terms of GPU series. In April 2008 NVIDIA launched the 9 series and the G92 GPU (read all about the 9 series here), which was based on an improved but largely identical Tesla design. The 9 series served a purpose by bringing cheaper high-end enthusiast cards to market that could compete with ATI. It also eventually gave NVIDIA a chance to test out TSMC’s 55nm manufacturing process using a familiar architecture. The GeForce 200 series initially launched on 65nm silicon, with later revisions taking advantage of the 55nm process.

The NVIDIA GeForce 200 series arrived on shelves in June of 2008, just two months after the GeForce 9800 GTX was launched. Again we find the new series launching with two new flagship cards: the GTX 280 and the GTX 260. The GPU at the heart of these two new cards was the GT200, arguably the biggest graphics chip that had ever been produced. Let’s take a moment to recall the GT200 GPU and what it brought to the table.

The headline feature for many tech reviewers was the sheer size of the GT200, both physically and in terms of transistor count. The 65nm G92 packed a very impressive 754 million transistors into its 324 mm² die and was seen as pretty much the bleeding edge of IC design in many quarters. When the GT200 arrived packing 1.4 billion transistors into a far larger 576 mm² die it was heralded as a beast, and rightly so. By comparison, in 2008 Intel were producing dual-core server CPUs that had fewer transistors, so the GT200 really was exceptional. The pricing of the first GTX 200 cards may well also have reflected lower yields per wafer, considering the sheer size of these things.

So why was the GT200 so big? Essentially NVIDIA (and to some extent ATI also) knew that modern games were going to need a massive amount of pure computational power. The GT200 packed more compute power than any previous GPU, delivered via its 240 streaming processors. Programmable vertex and pixel shaders had allowed software developers to make everything look much more realistic, with improved transformation and lighting handled by small, dedicated floating point units in the GPU. DirectX 9 improved on this with much longer shader programs and Shader Model 2.0. With DX10 came a new level of efficiency thanks to the Unified Shader Architecture and Shader Model 4.0.

With the GT200 we find the shaders re-branded as Stream Processors and arranged in a new, much more efficient architecture. The GTX 280 arrives with 240 Stream Processors, an 87.5% increase in shader count over the 128 of the previous generation. The GTX 280 also has 80 TMUs (up from 64) and 32 ROPs (up from 16). In terms of memory we’re looking at 1GB of GDDR3, twice that of the 9800 GTX, fed by a 512-bit memory bus, again double that of the previous generation. All in all, the GeForce GTX 280 and the GT200 were a billion dollar R&D gamble from NVIDIA that aimed to create the world’s most powerful GPU. It’s fair to say that they succeeded.

In terms of pricing, the NVIDIA GeForce GTX 280 arrived with a flagship price tag of $649 USD, a hefty premium for any graphics card in 2008. Its brother, the GeForce GTX 260, retailed for $449 USD. Thanks to ATI, however, these prices would soon drop. The GTX 280 boasted a default GPU clock of 602MHz compared to 576MHz on the GTX 260, a Shader clock of 1,296MHz compared to 1,242MHz on the cheaper card, and a memory clock of 1,107MHz (2,214MHz effective) compared with 999MHz on the GTX 260. It’s easy to see why the GTX 260 remains the most used 200 series card on HWBOT: its goldilocks placement in the lineup, plus the fact that it would often overclock to levels close to those of its more expensive brother.
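Those memory figures translate directly into peak bandwidth: GDDR3 is double data rate, so the effective clock is twice the memory clock, and peak bandwidth is the bus width in bytes multiplied by the effective transfer rate. A minimal back-of-the-envelope sketch, using only the GTX 280 figures quoted above:

```python
# Back-of-the-envelope GDDR3 bandwidth estimate for the GTX 280,
# using the clock and bus figures quoted in this article.
memory_clock_mhz = 1107           # GTX 280 memory clock
bus_width_bits = 512              # GTX 280 memory bus width

effective_mhz = memory_clock_mhz * 2          # GDDR3 transfers twice per clock
bandwidth_gbs = (bus_width_bits / 8) * effective_mhz / 1000  # bytes x MT/s -> GB/s

print(f"{effective_mhz} MHz effective, {bandwidth_gbs:.1f} GB/s")  # ~141.7 GB/s
```

That ~141.7 GB/s figure is why the 512-bit bus mattered: it roughly doubled the bandwidth available to the 9800 GTX despite similar memory clocks.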

In nomenclature terms, the 200 series is the first to place the GTX, GT and other tier lettering before, rather than after, the model number. The GT200 chip itself is the first to reference the architecture in its GPU naming – the T stands for Tesla, just as with today’s GPx chips, where P stands for Pascal. So why the 200 Series? What happened to the 100 Series? NVIDIA applied the 100 Series naming scheme to its rebranded 9 Series cards that appeared in early 2009, completing the shift to a new naming scheme that still exists today with the 1000 series – perhaps the last of its kind.

The 200 series was expanded in December 2008 with the GTX 285, which used GT200 chips built on the now mature 55nm process. It arrived with a more reasonable $359 USD price tag (again thanks to a competitive ATI) and higher clocks all round. This was followed in January 2009 by the GeForce GTX 295, a dual-PCB, dual-GPU card that launched at $500 USD.

Here’s a shot of a single PCB GTX 295 card from Gainward:

The Most Popular NVIDIA GeForce 200 Card: The GeForce GTX 260

It’s time to take a look at the most popular NVIDIA 200 series cards in terms of submissions to the HWBOT database:

- GeForce GTX 260 (216 SP) – 18.56%
- GeForce GTX 285 – 15.25%
- GeForce GTX 295 – 13.77%
- GeForce GTX 280 – 12.28%
- GeForce GTX 275 – 8.55%
- GeForce GTS 250 – 8.31%
- GeForce GTX 260 (192 SP) – 6.11%
- GeForce 210 (DDR3, 64-bit, GT218) – 4.33%
- GeForce GT 220 GDDR3 – 1.96%
- GeForce 210 (DDR2, 64-bit, GT218) – 1.68%

As with several series in the past, we find that the flagship model is not the most popular in terms of HWBOT submissions. The GTX 260 certainly fits the goldilocks analogy – plenty of performance for not too many dollars. In fact the GTX 260 that sits at the top of our list is a second revision of the card, using an updated 55nm GT200 GPU and a more generous 216 Stream Processors, up from 192 on the original (hence the 216 SP notation in our database naming scheme). The card arrived in November of 2008 for around $279 – $299 USD and proved popular, offering very close to flagship performance for not quite flagship pricing.

Check out this version of the GTX 260 (216 SP) from EVGA (below). It boasted a GPU clock of 626MHz (up from the stock 576MHz), a Shader clock of 1,350MHz (up from 1,242MHz) and a memory clock of 1,053MHz (up from 999MHz). As with all GTX 260 cards it used a pair of 6-pin power connectors (unlike the GTX 280, which needed 1x 6-pin and 1x 8-pin) and featured dual-link DVI ports, with HDMI supported via a dongle.

The GTX 260 tops the table, but in fairness the GTX 285 with 15.25%, the GTX 295 with 13.77% and the GTX 280 with 12.28% all prove that high-end 200 Series cards have been popular on HWBOT. Some of these cards will doubtless have been used retrospectively, the GT200 GPU being revisited for some second-hand fun. It’s certainly valid to point out that the HWBOT database was still very young in 2008-2009.

NVIDIA GeForce 200 Series: Record Scores

We can now take a look at some of the highest scores posted on HWBOT using the NVIDIA GeForce GTX 285 card, the fastest single GPU in the 200 Series lineup.

Highest GPU Frequency

Although technically speaking GPU frequency (as with CPU frequency) is not a true benchmark, it remains an important metric for many overclockers. Looking through the database, we find that the submission with the highest GPU core frequency using a GeForce GTX 285 card comes from legendary Brazilian overclocker Rbuass. He pushed a GeForce GTX 285 to 1,701MHz, a massive +162.50% beyond stock settings. His graphics memory was configured at 1,321MHz (+6.53%). The rig also included an Intel Core i7 920 ‘Bloomfield’ processor clocked at 4,551MHz (+70.45%).
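The “+X%” figures HWBOT reports are simple percentage gains over the reference clock. As a minimal sketch (assuming the GTX 285’s 648MHz reference core clock, which is not stated in the article but is consistent with the +162.50% figure quoted above):

```python
# Percent-over-stock as shown on HWBOT frequency submissions.
def percent_over_stock(achieved_mhz: float, stock_mhz: float) -> float:
    """Return the overclock gain as a percentage above the reference clock."""
    return (achieved_mhz / stock_mhz - 1) * 100

# Assumed GTX 285 reference core clock: 648MHz; Rbuass hit 1,701MHz.
gain = percent_over_stock(1701, 648)
print(f"+{gain:.2f}%")  # +162.50%
```

The same formula reproduces the memory and CPU percentages quoted throughout this section.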

Highest 3DMark06 Score

The highest 3DMark06 score submitted to HWBOT using a single NVIDIA GeForce GTX 285 card was made by dhenzjhen from the Philippines. He pushed an ASUS Matrix GeForce GTX 285 to 1,100MHz (+69.75%) on the GPU core and 1,500MHz (+20.97%) on the graphics memory. With this configuration he managed a hardware first place score of 37,328 marks. The submission was actually fairly recent, made in October 2016, and was helped by an Intel Core i7 6700K ‘Skylake’ chip clocked at 6,268MHz (+56.7%).

Here’s a close up of the LN2 cooled card in action with plenty of frost going on.

Highest Aquamark Score

In the classic Aquamark benchmark we find that ikki from Japan has the highest score with a GeForce GTX 285 card, the GT200 GPU clocked at 985MHz (+52.01%) and memory boosted to 1,242MHz (+16%). This configuration allowed ikki to hit a score of 547,184 marks. The score was made in January of this year and will surely have benefited from the fact that the card was joined by an Intel Core i7 7700K ‘Kaby Lake’ chip clocked at 6,900MHz (+64.29%).