The first major benefit to NVIDIA comes in the form of die size. The original G70 is a 334mm^2 chip, while the new 90nm GPUs come in at 196mm^2 for G71 and 125mm^2 for G73. Compare these to the G72 (the chip used in the 7300) at 77mm^2 and the R580 at 353mm^2 for a good idea of the current range of sizes for 90nm GPUs, and you will see that NVIDIA hardware is generally much smaller than ATI hardware. The difference in die size between the high end ATI and NVIDIA hardware comes down to the design decisions each company made. ATI decided to employ full-time fp32 processing with very good loop granularity, floating point blending with anti-aliasing, a high quality anisotropic filtering option, and the ability to support more live registers at full speed in a shader program. These are certainly desirable features, but NVIDIA has flat out told us that they don't believe most of these features have a place in hardware yet, given current and near term games and the poor performance characteristics of code that makes use of them.

Of course, in graphics there are always chicken and egg problems. We would prefer it if all companies could offer all features at high performance for a low cost, but this just isn't possible. We applaud ATI's decision to stick their neck out and include some truly terrific features at the expense of die size, and we hope it inspires some developers out there to really take advantage of what SM3.0 has to offer. At the same time, a hardware feature that goes unused is useless (hardware is only as good as the software it runs allows it to be). If NVIDIA is right about the gaming landscape, a smaller die size with great performance in current and near term games does give NVIDIA a clear competitive edge. Also note that NVIDIA has been an early adopter of features in the past that went largely unused (e.g. fp32 in the FX line), so perhaps they've learned from past experience.

The smaller the die, the more chips can fit on one silicon wafer. As the wafer costs the same to manufacture regardless of the number of ICs or yield, having a small IC and high yield decrease the cost per die to NVIDIA. Lower cost per die is a huge deal in the IC industry, especially in the GPU segment. Not only does a lower cost to NVIDIA mean the opportunity for higher profit margins, it also gives them the ability to be very aggressive with pricing while still running in the black. With ATI's newest lineup offering quite a bit of performance and more features than NVIDIA hardware, this is all good news for consumers. ATI has a history of being able to pull out some pretty major victories when they need to, but with NVIDIA's increased flexibility we hope to see more bang for your buck across the board.
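To put some rough numbers on this reasoning, here is a minimal sketch using the standard dies-per-wafer approximation. The 300mm wafer diameter, the $5000 wafer cost, and the 70% yield are illustrative assumptions of ours, not figures from NVIDIA or TSMC; only the die areas come from the article.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic approximation: usable wafer area divided by die area,
    minus a correction term for partial dies lost at the wafer edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_die(wafer_cost, die_area_mm2, yield_rate):
    """The wafer costs the same regardless of what is printed on it,
    so cost per good die falls as the die shrinks and yield improves."""
    return wafer_cost / (gross_dies_per_wafer(die_area_mm2) * yield_rate)

# Hypothetical $5000 wafer and 70% yield for both chips (made-up numbers):
for name, area in [("G70 (334mm^2)", 334), ("G71 (196mm^2)", 196)]:
    print(name, gross_dies_per_wafer(area),
          round(cost_per_die(5000, area, 0.7), 2))
```

Even holding yield constant, the 196mm^2 G71 fits roughly 313 candidate dies on a 300mm wafer versus about 175 for the 334mm^2 G70, which is exactly the per-die cost advantage described above; in practice the shrink helps yield too, widening the gap further.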

We haven't had the opportunity to test an X1800 GTO for this launch. We requested a board from ATI, but they were apparently unable to ship us one before the launch. Whether ATI can sustain this product as well as it did the X800 GTO is also questionable (after all, the X800 GTO could be built from any of three different GPUs from different generations, while the X1800 GTO has significantly fewer options). However, we are hopeful that the X1800 GTO will be a major price/performance leader that will put pressure on NVIDIA to drop the price of their newest parts even lower than they already are. After all, in the end we are our readers' advocates: we want to see what is best for our community, and a successful X1800 GTO and the flexibility of NVIDIA after this die shrink would certainly be advantageous for all enthusiasts. But we digress.

The end result of this die shrink, regardless of where the prices on these parts begin to settle, is two new series in the GeForce line: the 7900 at the high end and 7600 at the midrange.

The Newest in High End Fashion

The GeForce 7900 Series is targeted squarely at the top. The 7900 GTX assumes its position at the top of the top, while the 7900 GT specs out very similarly to the original 7800 GTX. Due to the 90nm design, NVIDIA was able to target power and thermal specifications similar to the 7800 GTX 512 with the new 7900 GTX and get much higher performance. In the case of the 7900 GT, performance levels on the order of the 7800 GTX can be delivered in a much smaller, cooler package that pulls less power.

While the 7900 GTX will perform beyond anything else NVIDIA has on the table now, the 7900 GT should give NVIDIA a way to provide a more cost effective and efficient solution to those who wish to achieve 7800 GTX level performance. The specifics of the new lineup are as follows:

Image wise, the 7900 GTX takes on the same look as the 7800 GTX 512 with its massive heatsink and large PCB. In sharp contrast to the more powerful 7900 GTX and its 110nm predecessor, the 7800 GTX 512, the 7900 GT sports a rather lightweight heatsink/fan solution. Take a look at the newest high end cards to step onto the stage:

Midrange Chic

With the introduction of the 7600 GT, NVIDIA is hoping they have an X1600 XT killer on their hands. Not only is this part designed to perform better than a 6800 GS, but NVIDIA is hoping to keep it price competitive with ATI's upper midrange. Did we mention it also requires no external power?

In our conversations with NVIDIA about this launch, they really tried to drive home the efficiency message. They like to claim that their parts have fewer transistors and provide performance similar to or greater than competing ATI GPUs (ignoring the fact that the R5xx GPU actually has more features than the G70 and processes everything at full precision). When sitting in on a PR meeting, it's easy to dismiss such claims as hype and fluff, but seeing the specs and performance of the 7600 GT coupled with its lack of a power connector and its compact thermal solution opened our eyes to what efficiency can mean for the end user. This is what you get packed into this sleek midrange part:


97 Comments

Any idea when we'll see a comparo showing the 7900GT against the following cards?

X800XL
X1800XL
X1800XT
7800GT

It is important for people running cards like those right now to know how much gain they will see going with a 7900GT versus going with a 7900GTX. Clearly they can see the difference between the 7900GT and 7900GTX on this review, but no one knows what improvement the 7900GT would have today (with today's drivers and games) over the cards many people are still using such as the X800XL or 7800GT.

It's important to know if the 7900GT offers enough gain for such users over their current cards, or whether they should step all the way up to the 7900GTX.

I definitely like the review and the presentation, Derek. There are definite tradeoffs for price, power load and performance.
With all this talk of HDCP, DX10 (with Vista), and HD (1080p) PureVideo vs. AVIVO (shame on you nVidia for asking more from the consumer when ATI bundles such goodies) just around the corner, I'm still on the wait-and-see list before making the plunge to PCIe (besides waiting on AMD's M2 chipset and Intel's Core Duo refresh to spank the Pentium D).
What I don't get is how drastically ATI's ability to do AA with HDR (how many games truly support this? FarCry? but Splinter Cell can't? the Half-Life 2 engine?) shines above nVidia's lack of it. Is this the only feature where ATI has an exclusive win over nVidia?
Also, there was a preface of the 7900GT being marginally faster than the 7800 GTX-256, with a nice price advantage going to the 7900GT (as well as a lower power load), killing off the 7800 line for newcomers. So where is ATI's high-mid or low-high $300 competing part? Along with that, the 1900XTX seems a gratuitous, weak "ultra" offering, since a Crossfire setup only paces the 1900XT, "wasting" the XTX's 5% extra power except to prove the king-of-the-hill mentality. More power to ATI's customers paying a cost premium.
The 7900GT SLI route might be penny-wise & pound-foolish, as two of these cards cost substantially more than a single 7900GTX ($350x2 vs. $550, roughly $150 more) and draw more power (hence the hidden cost of a beefier PSU upgrade) for roughly ~15% gain (YMMV with oc'ing, or manufacturer tweaks).
And sure, the ATI X1800GTO squeaks out a victory over the nV 7600GT, with an est. end-of-March '06 MSRP of $249 vs. ~$180-200. For the non-graphics-fanatic, cost-conscious WoW player, the 7600GT is a nice target for a newer PCIe build. For the gung-ho FPS shooter, for a bit more, why not aim for the 7900GT instead of the X1800GTO?
For curiosity's sake, Derek, could you downclock a 7900GTX to GT core/mem clock speeds and see how much of a difference the extra 256MB makes? If the GT has good OC headroom, it can be a better bargain. With the same basic core, how far can the clock be pushed on the GT? As for memory, how much is 256MB more of GDDR3 RAM worth (over the same bus), and how far can the GT's bandwidth be pushed to help the benchmarks? Can nVidia push a GT to GTX speeds with better active cooling? This might be what tweakers look for to justify their purchase. We saw that the 7800GTX-512 had definitive victories of about ~20% over a 7800GTX-256/7900GT, which does bring nearly playable frame rates up to the 60 fps point. Maybe an enterprising third party will offer a $400 7900GT-512 with a slightly higher mem clock; there is room for opportunistic pricing there.

I know this testbed can be used to set world records for the fastest graphics cards, but who are these tests targeted at? I think they are useless for 99.99999% of readers. Athlon FXs and SLI all over the place, and almost no non-SLI setups of currently available cards. How is a typical user, whose card is 6-18 months old, supposed to evaluate the speed of new cards and the value of upgrading? I think that's the main task of such a test - convincing people to upgrade. Who has a system like your testbed, and who uses SLI in real life - 0.00001% of these readers? Maybe even less.

Some of us have been making this request for months now, but it routinely falls on deaf ears. It appears most AnandTech readers would prefer to read what is essentially technical advertising for GPU performance as tested in ubersystems 99% of us will never own.

Can we please have separate charts for SLI/Crossfire setups and the single cards that normal people will actually end up using? That way we can easily compare apples to apples. I'm sure that the 1% of you who use an SLI/Crossfire setup will like the articles, but the rest of us normal people would appreciate a direct comparison between the various single cards.