NVIDIA’s GeForce FX 5700 Ultra GPU

FOR NEARLY a year, ATI’s Radeons have owned the mid-range graphics performance crown. ATI’s dominance started with the Radeon 9500 Pro, continued with the Radeon 9600 Pro, and was most recently refreshed with the 9600 XT. NVIDIA took a stab at the mid-range with its GeForce FX 5600s, but not even a faster respin of the GeForce FX 5600 Ultra had enough punch to take on the Radeon 9600 line, especially in DirectX 9 applications.

Before the Radeon 9500 Pro came along, NVIDIA’s GeForce4 Ti 4200 was the most widely recommended graphics card for budget-conscious enthusiasts. NVIDIA knows what it takes to be a mid-range market leader. With memories of the GeForce4 Ti 4200’s glory no doubt in mind, NVIDIA is ready to put up a fight for the mid-range performance crown with the GeForce FX 5700 Ultra. The 5700 Ultra is powered by a new NV36 graphics chip and promises more shader power, higher clock speeds, and greater memory bandwidth than its predecessor. On paper, NVIDIA’s third shot at this generation’s mid-range graphics crown looks pretty good, but does it have the charm to capture the hearts and minds of budget-conscious gamers and PC enthusiasts? Will this be NVIDIA’s third strike with the budget-conscious crowd? Read on as we unleash the FX 5700 Ultra on ATI’s Radeon 9600 cards to find out.

Introducing NV36

The GeForce FX 5700 line is based on NVIDIA’s new NV36 graphics chip, which is essentially a mid-range version of the high-end NV35 chip found in NVIDIA’s GeForce FX 5900 and 5900 Ultra. The NV36-powered GeForce FX 5700 Ultra will replace the mid-range FX 5600 Ultra at a suggested retail price of $199, and is expected on retail shelves starting this Sunday.

NV36 doesn’t represent a major redesign of NVIDIA’s current GeForce FX architecture, but the chip does have a number of interesting characteristics that are worth highlighting.

4×1-pipe design – Like NV31, NV36 has four pixel pipelines with a single texture unit per pipe. The chip behaves like a four-pipe design regardless of the kind of rendering being done, which makes it a little easier to understand than something like NV30, which can act like an eight-pipe chip under certain circumstances.

128-bit memory bus – NV36’s memory bus is 128 bits wide, just like NV31’s. However, NV36 supports DDR2 and GDDR3 memory types in addition to standard DDR SDRAM. Initially, GeForce FX 5700 Ultra cards will ship with DDR2 memory chips running at 450MHz, but board manufacturers may eventually exploit the chip’s compatibility with different memory types to produce budget cards with DDR SDRAM or more exotic offerings with GDDR3.

Mo’ shader power – The GeForce FX 5600 Ultra’s shader performance never really cut it against ATI’s mid-range Radeons, so NVIDIA has beefed up the shaders for NV36. The chip boasts three vertex units that conspire to deliver triple the vertex processing power of NV31. NVIDIA also re-architected its programmable pixel shader for NV36, though the company isn’t making specific pixel shader performance claims.

Chip fabrication by IBM – Unlike the rest of NVIDIA’s NV3x graphics chips, NV36 is being manufactured using IBM’s 0.13-micron fabrication process. NV36 is the first product to emerge from NVIDIA’s recently announced partnership with IBM, and NVIDIA is quite happy with how well things have worked out so far. NV36 isn’t built using low-k dielectrics like NV38 or ATI’s RV360 GPUs, but it’s still clocked at a speedy 475MHz on the GeForce FX 5700 Ultra.

What’s particularly impressive about NV36’s fabrication is the fact that NVIDIA was able to get its very first chip sample from IBM up and running Quake just 50 minutes after the chip entered NVIDIA’s testing lab. In a testament to IBM’s mad fabrication skills, the first A01 spin of NV36 silicon is actually being used for retail versions of the chip.

Overall, NV36 doesn’t represent a radical departure from the GeForce FX architecture; the chip should share all the perks that go along with “cinematic computing,” but it will also inherit a number of quirky personality traits that have thus far had a negative impact on performance.

NVIDIA continues to reiterate the fact that its entire GeForce FX line is sensitive to instruction ordering and pixel shader precision. Optimized code paths can help NV36 and the rest of the GeForce FX line realize their full potential, but NVIDIA’s new Detonator 50 driver also has a few tricks up its sleeve to improve performance. You can read all about the Detonator 50 drivers in our GeForce FX 5950 Ultra review.

NV36: NVIDIA’s first GPU fabbed by IBM

The specs

Perhaps to illustrate just how close the GeForce FX 5700 Ultra is to store shelves, NVIDIA sent out retail cards instead of standard reference review samples. The eVGA e-GeForce FX 5700 Ultra that showed up on my doorstep came in a full retail box, shrink-wrapped and everything. Let’s have a quick look at the card’s spec sheet.

GPU: NVIDIA NV36
Core clock: 475MHz
Pixel pipelines: 4
Peak pixel fill rate: 1900 Mpixels/s
Texture units per pixel pipeline: 1
Textures per clock: 4
Peak texel fill rate: 1900 Mtexels/s
Memory clock: 906MHz*
Memory type: BGA DDR2 SDRAM
Memory bus width: 128-bit
Peak memory bandwidth: 14.5GB/s
Ports: VGA, DVI, composite and S-Video outputs
Auxiliary power connector: 4-pin Molex

* NVIDIA’s reference spec calls for an effective memory clock of 900MHz, but our sample’s memory was running at 906MHz.
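The theoretical peaks in the spec sheet fall straight out of the clock speeds. Here’s a quick sanity check of the table’s numbers, sketched using our sample’s 906MHz effective memory clock; these are theoretical maximums, not measured results:

```python
# Theoretical peaks for the GeForce FX 5700 Ultra, derived from the spec sheet.
core_clock_mhz = 475          # NV36 core clock
pixel_pipes = 4               # 4x1-pipe design
textures_per_pipe = 1

mem_clock_mhz = 906           # effective DDR2 rate (453MHz chips, double data rate)
bus_width_bits = 128

pixel_fill_mpix = core_clock_mhz * pixel_pipes                      # Mpixels/s
texel_fill_mtex = pixel_fill_mpix * textures_per_pipe               # Mtexels/s
bandwidth_gbs = mem_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9    # GB/s

print(pixel_fill_mpix, texel_fill_mtex, round(bandwidth_gbs, 1))
# → 1900 1900 14.5
```

By the same math, the Radeon 9600 XT’s 600MHz effective memory on an identical 128-bit bus works out to 9.6GB/s, which is where the 50% bandwidth gap discussed later comes from.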

The GeForce FX 5700 Ultra is all about high clock speeds and fancy memory. Quite honestly, I didn’t expect cards with 450MHz DDR2 memory chips to hit $200 price points this soon, but I’m certainly not going to complain. Profit margins on GeForce FX 5700 Ultras may be slimmer than with other cards, but that’s a good thing for consumers looking for the most bang for their buck.

Here are a few nudies of the e-GeForce FX 5700 Ultra to drool over before we get started with the benchmarks.

The GeForce FX 5700 Ultra looks imposing, and it is

475MHz with a single-slot cooler that doesn’t sound like a Dustbuster. Imagine that!

BGA DDR2 memory chips bring in the bandwidth

(Insert incessant whining about the lack of dual DVI here)

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run three times, and the results were averaged.

Our test system was configured like so:

System

Processor: Athlon XP ‘Thoroughbred’ 2600+ 2.083GHz
Front-side bus: 333MHz (166MHz DDR)
Motherboard: DFI LANParty NFII Ultra
Chipset: NVIDIA nForce2 Ultra 400
North bridge: nForce2 Ultra 400 SPP
South bridge: nForce2 MCP-T
Chipset drivers: NVIDIA 2.45
Memory size: 512MB (2 DIMMs)
Memory type: Corsair XMS3200 PC2700 DDR SDRAM (333MHz)
Graphics cards: GeForce FX 5600 Ultra 128MB, GeForce FX 5700 Ultra 128MB, Radeon 9600 Pro 128MB, Radeon 9600 XT 128MB
Graphics drivers: Detonator FX 52.16 (GeForce), CATALYST 3.8 (Radeon)
Storage: Maxtor DiamondMax Plus D740X 7200RPM ATA/100 hard drive
OS: Microsoft Windows XP Professional
OS updates: Service Pack 1, DirectX 9.0b

Today we’ll be comparing the GeForce FX 5700 Ultra to the GeForce FX 5600 Ultra (rev 2), and to a couple of Radeon 9600 flavors from ATI. NVIDIA was able to get us a set of WHQL-certified Detonator FX 52.16 drivers for testing, giving us our first peek at Microsoft-approved Detonator 50 drivers. The Detonator FX 52.16 drivers should be available for public download today.

The test system’s Windows desktop was set at 1024×768 in 32-bit color at a 75Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

When it comes to theoretical peaks, the GeForce FX 5700 Ultra is simply a monster. The card’s 475MHz core clock yields pixel and texel fill rates that rival the Radeon 9600 XT’s, but the real story is the card’s memory bandwidth. With DDR2 memory chips running at an effective 906MHz, the 5700 Ultra offers a whopping 14.5GB/s of memory bandwidth, 50% more than the Radeon 9600 XT. ATI has caught some flak for the Radeon 9600 XT’s relatively unimpressive memory bandwidth, but the card has thus far held its own quite nicely against the GeForce FX 5600 Ultra. Against the bandwidth-rich 5700 Ultra, the Radeon 9600 XT may not be so lucky.

Of course, theoretical peaks are sometimes worth little more than the paper they’re printed on. Let’s see how the FX 5700 Ultra’s fill rate specs pan out in practice.

In 3DMark03’s fill rate tests, the GeForce FX 5700 Ultra delivers the best single-textured fill rate performance, but is stuck behind both Radeons when it comes to multitexturing.

Shaders

When NVIDIA claimed that NV36’s shader power was much improved, it wasn’t kidding. In 3DMark03’s pixel shader 2.0 test, the GeForce FX 5700 Ultra is 50% faster than the 5600 Ultra, but even then it can’t catch the Radeon 9600 Pro. Vertex shaders are another story, though. The 5700 Ultra bursts to the front of the pack in 3DMark03’s vertex shader test. Most striking is the fact that the 5700 Ultra delivers more than double the vertex shader performance of the 5600 Ultra.

ShaderMark 2.0

ShaderMark 2.0 is brand new and includes some anti-cheat measures to prevent drivers from applying questionable optimizations. The Radeons run the benchmark with straight pixel shader 2.0 code, but I’ve included results for the GeForce FX cards with partial-precision and extended pixel shaders, as well.

The GeForce FX 5700 Ultra shows significantly better pixel shader performance in ShaderMark than the 5600 Ultra; in many cases the 5700 Ultra doubles the frame rate of its predecessor. However, as much of an improvement as the 5700 Ultra is over the 5600 Ultra, the card still can’t catch either Radeon 9600 in ShaderMark. Because NVIDIA’s Detonator FX drivers have yet to expose NV36’s hardware support for floating point texture formats, the GeForce FX cards aren’t able to complete a number of ShaderMark’s tests. Unfortunately, NVIDIA hasn’t yet set a timetable for including floating point texture support in its GeForce FX drivers, and it’s unclear when the functionality will be exposed to applications.

Quake III Arena

Wolfenstein: Enemy Territory

Unreal Tournament 2003

The GeForce FX 5700 Ultra offers improved performance in a number of first-person shooters, but we’re not looking at a revolution here. With the GeForce FX 5700 Ultra and Radeon 9600 XT each winning three of six tests, this one’s a wash.

Comanche 4

Codecreatures Benchmark Pro

Gun Metal benchmark

The GeForce FX 5700 Ultra comes out ahead in Comanche, Codecreatures, and Gun Metal. The card’s performance with antialiasing enabled is especially impressive, as is its huge performance advantage over the GeForce FX 5600 Ultra in Gun Metal.

Serious Sam SE

Serious Sam SE has always performed well on NVIDIA hardware, and that doesn’t change with the GeForce FX 5700 Ultra. The 5700 Ultra even comes out ahead with antialiasing and aniso enabled.

Splinter Cell

The GeForce FX 5700 Ultra is the only card to really distance itself from the pack in Splinter Cell and offer consistently better frame rates across multiple resolutions.

3DMark03

The GeForce FX 5700 Ultra and Radeon 9600 XT split 3DMark03’s game tests, but the Radeon comes out ahead by a hair if we look at the overall score. Notice how much the FX 5700 Ultra improves on its predecessor’s performance in the pixel shader-packed Mother Nature test; NVIDIA’s managed to improve performance by almost 50%.

Tomb Raider: Angel of Darkness

Tomb Raider: Angel of Darkness gets its own special little intro here because its publisher, Eidos Interactive, has released a statement claiming that the V49 patch, which includes a performance benchmark, was never intended for public release. Too late; the patch is already public. We’ve used these extreme quality settings from Beyond3D to give the GeForce FX 5700 Ultra a thorough workout in this DirectX 9 game.

The GeForce FX 5700 Ultra does a lot better in Tomb Raider than the 5600 Ultra, but the Radeons still have a healthy lead. What’s worse, both GeForce FX cards refuse to run with 4X antialiasing and 8X aniso at resolutions above 1024×768.

It’s important to note that Tomb Raider’s benchmark mode uses the default DirectX 9 code path rather than the NVIDIA-optimized path that’s used during normal gameplay. The scores you see above aren’t necessarily a reflection of Tomb Raider gameplay performance, but they do show how poorly the GeForce FX 5700 Ultra can perform when it’s forced to deal with an unoptimized DirectX 9 code path. Still, the 5700 Ultra’s performance is much improved over the 5600 Ultra’s, and NVIDIA’s latest drivers have significantly improved the performance of both cards.

AquaMark3

In AquaMark3, the GeForce FX 5700 Ultra performs well until antialiasing and aniso are enabled. The card doesn’t perform particularly poorly with 4X antialiasing and 8X aniso, but it’s well behind the Radeon 9600 XT and even a hair behind the 9600 Pro.

Halo

I used the “-use20” switch with the Halo benchmark to force the game to use version 2.0 pixel shaders.

The GeForce FX 5700 Ultra is much faster than its predecessor in Halo and even has enough horsepower to best the Radeon 9600 XT. Halo’s benchmark timedemo only runs through cut scenes, so the frame rates you see aren’t necessarily a reflection of gameplay performance. However, Halo’s cut scenes are all done using the game engine, so the benchmark mode is a valid tool for graphics performance comparisons.

The GeForce FX 5700 Ultra manages to double the 5600 Ultra’s performance in the rthdribl demo, but ATI’s Radeons still have a pretty significant lead, especially at lower resolutions. Unfortunately, the images generated by both GeForce FX cards in this demo have a few issues. Either 16-bit floating point precision really isn’t enough for high dynamic range lighting, or NVIDIA’s drivers need to expose the GeForce FX’s support for floating point texture formats in order for rthdribl to work properly.

Edge antialiasing

In Unreal Tournament 2003, the GeForce FX 5700 Ultra is really only playable with 4X antialiasing. There’s some weirdness going on with the card’s 2X antialiasing performance, which is dreadful on both the FX 5700 and 5600 Ultras. Unfortunately, NVIDIA’s antialiasing doesn’t look quite as good as ATI’s gamma-corrected SMOOTHVISION, either.

Texture antialiasing

In Serious Sam SE, the GeForce FX 5700 Ultra suffers roughly the same performance hit with each level of anisotropic filtering as its competition.

Overclocking

In testing, I was able to get my GeForce FX 5700 Ultra stable at core and memory clock speeds of 560 and 950MHz, respectively. Considering that the card’s cooler isn’t noticeably louder than what one may find on a GeForce FX 5600 Ultra or Radeon 9600 XT, I’m quite happy with an 85MHz core overclock.

Of course, just because I was able to get my sample stable and artifact-free at 560/950 doesn’t mean that every GeForce FX 5700 Ultra will be capable of those speeds. Then again, some cards may have the potential to hit even higher clock speeds. Overclocking is never guaranteed and can potentially damage hardware, so be careful.
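For perspective, the headroom above works out like so (simple arithmetic on this sample’s clocks; every card is different, and none of this is guaranteed):

```python
# Overclocking headroom observed on our sample (not a guarantee for any card).
stock_core, oc_core = 475, 560    # MHz
stock_mem, oc_mem = 906, 950      # MHz effective

core_gain_pct = (oc_core - stock_core) / stock_core * 100   # ~17.9%
mem_gain_pct = (oc_mem - stock_mem) / stock_mem * 100       # ~4.9%
oc_bandwidth_gbs = oc_mem * 1e6 * (128 / 8) / 1e9           # 128-bit bus

print(f"{core_gain_pct:.1f}% core, {mem_gain_pct:.1f}% memory, {oc_bandwidth_gbs:.1f} GB/s")
# → 17.9% core, 4.9% memory, 15.2 GB/s
```

Note that the memory gain is far smaller than the core gain, so an overclocked card is leaning even harder on the bandwidth it already has.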

Overclocking the GeForce FX 5700 Ultra gives the card a nice little performance boost, but remember that the Radeon 9600 XT appears to be quite comfortable with overclocking, too.

Conclusions

GeForce FX 5700 Ultra cards will be available in retail stores this Sunday and are expected to sell for around $199. This isn’t even close to a paper launch, folks. The card’s $200 price point puts it in direct competition with the Radeon 9600 XT, but 9600 XTs seem to be a little scarce at the moment.

Overall, the 5700 Ultra’s performance in DirectX 8-class games is impressive, and the card even handles DirectX 9 titles like Halo with aplomb. However, the 5700 Ultra’s performance with unoptimized code paths and synthetic pixel shader tests is a little discouraging looking forward. With more compiler tuning and driver tweaking, NVIDIA may improve the GeForce FX 5700 Ultra’s performance further, but the potential for future performance gains is hard to quantify and impossible to guarantee. At the very least, NVIDIA’s willingness to help developers optimize code paths should ensure that games and applications milk every last drop of performance from the GeForce FX architecture.

Since the GeForce FX 5700 Ultra beat out the Radeon 9600 XT in nearly every real-world game we tested today, it’s tempting to reinstate NVIDIA as the king of mid-range graphics. However, the Radeon 9600 XT is faster in Unreal Tournament 2003, which is a particularly important title considering how many future games will use the Unreal engine. It’s also worth noting that ATI’s mid-range Radeons offer superior gamma-corrected antialiasing and a coupon for Half-Life 2, both important considerations for enthusiasts and gamers alike.

So I’m not quite ready to crown NVIDIA as the new mid-range graphics king, but I still feel pretty good about recommending the GeForce FX 5700 Ultra over the Radeon 9600 XT for gamers who have $200 to spend and need a graphics card by the end of the weekend. The Radeon 9600 XT’s performance in synthetic DirectX 9-class benchmarks makes it feel like a safer bet, but NVIDIA’s new run-time compiler has me a little more excited about the GeForce FX’s performance potential in future titles. The fact that the GeForce FX 5700 Ultra rips through today’s games and uses some pretty swanky hardware for a $200 graphics card doesn’t hurt, either.

Of course, if you don’t need to pick up a new graphics card this weekend, I’d hold off just a little while longer. We’ll be comparing the rendering output and image quality of ATI and NVIDIA’s newest graphics chips with the latest drivers in an upcoming article; the benchmarks we saw today are just the beginning.

The next big question is, how close are we to impulse-buy (<$80) budget DX9 cards with decent performance? Does ATi have anything up its sleeve, considering how close regular 9600’s are to $100 already?

espetado

16 years ago

In these kind of tests I always hope someone would include Ti4600 results. Does anybody know where a Ti4600 would position itself against al this FX violence?

grtz
esp


Anonymous

16 years ago

Apparently the Halo development team had only limited access to nvidia programmers to help them optimize the game, while they simply didn’t need it for ATi because the R300 parts actually run DX9 properly.

Anonymous

16 years ago

ATI cards outsell Nvidia ones as low as 2-1 and as high as 8-1; Nvidia is really hurting.

Anonymous

16 years ago

Um no, actually ATI cards outsell Nvidia’s 5-1. There is so many people buying ATI that ATI has actually stepped up production on their new line of cards 10 fold!

If ATI can beat the giant on such a small market base just think what they could do since they are the market leader in sales 5 fold. It’s mind boggling. We will be seeing a card that runs at a 1ghz and 2ghz ram with a stock cooler and runs 10x’s better than any Nvidia card ever conceived. Granted they are rumored to be a tad spendy but who cares when Nvidia is getting 10fps and you’re getting 100!!!!!!!

Anonymous

16 years ago

No, overall, Nvidia still sells more cards.

Anonymous

16 years ago

Great article!

Nvidia still sucks!?!? When the heck are they ever going to get it right?

Seriously though, Nvidia is NEVER going to remake its hardware to run DX9 natively like the Radeons, are they? They will continue to force their own version for years to come…. The reason I say this is because they obviously didn’t change any of that in the new release of the 5700. So it’s to be expected that none of the cards will ever do it, right?

Forge

16 years ago

5950 and 5700 aren’t all new designs.

Even the 5900 isn’t a new design, compared to the 5800.

Don’t forget: Nvidia has two design teams…. But both of them are on one year cycles. The NV3* team also brought you the GF3/GF3 Ti. The NV40 team brought the GF4 Ti. With that in mind, the NV40 is going to be the first design from the Other Team since GF4 Ti…. It might just kick ass….

That said, the NV38 (5950) is still from the NV30 team and is still on the NV30 design. It’s just too bad that even though the NV30 team was able to do two full respins/refresh parts in one year, they still haven’t gotten any of them to run PS 2.0 worth a damn.

EDIT: Ack. Now that I think about it, even though GF4 Ti was a really nice core, it was almost the exact same feature set and performance profile as GF3 *, just clocked a lot higher. Let’s all hope real hard that NV40 isn’t going to be a clocked-up NV35 with no serious pixel shader redesign, and/or no FP24 support. That could suck hard.

Anonymous

16 years ago

I’ve read in an article or two where nvidia was said to have three design teams. I’d have to dig around for links on it tho.


Forge

16 years ago

GF3/GF3 Ti = NV20
GF4 Ti = NV25
GF4 Ti with AGP8X glued on = NV28

FX 5800 = NV30
FX 5900 = NV35
FX 5950 = NV38

Let’s just hope that NV38->NV40 is a lot more impressive than NV28->NV30 was. 🙂


droopy1592

16 years ago

Yeah, like it only takes a few weeks to redesign a 130 million tranny core and test it and test it and test it and test it and revise it and test it and revise it and test it and get engineering samples and test them and revise it and test it.

Forge

16 years ago

And then release it.

droopy1592

16 years ago

And then get good yields. Then fix errata. Then…

Anonymous

16 years ago

Alright I see your points. I guess I didn’t take the time to think exactly how long it would take to come out with a new design. Still how long have they had since the 5800 hit store shelves?

I see the point where it’s good that they tried to squeeze all they could out of the FX. But then I have to think: what if they had spent all that time, money, and energy on the next release, making it that much better?

Can someone explain the rotating design teams? Why do that? I guess it would let one side practice, develop, engineer and play with new stuff before they are put on the spot.

Then again how many design teams does ATI have? I am just assuming 1 but I’m sure I’m wrong. What would be best, people spread in a few directions or all focused on one objective?


Anonymous

16 years ago

you should have added a 4200 in the mix, especially after you mentioned it in the article.

Sumache

16 years ago

Was Overdrive enabled? If so what was the cool time anyways?

Forge

16 years ago

Overdrive == 3.9 Catalysts. Those aren’t out for another 1.5-2 weeks or so. Ask again then.

Disco

16 years ago

It’s looking like this whole mid-range gumbo is going to have to be completely reassessed in a couple weeks(?) when the 3.9 cats are released and when 9600xt hits retail with its ‘rejigged’ frequency. Looking forward to the rematch.

ding! ding!

Anonymous

16 years ago

vagina!

AmishRakeFight

16 years ago

no editors choice awards…that pretty much sums it up!

Anonymous

16 years ago

All this is moot when companies like Powercolor put out cards like the following:

I didn’t think it was possible to put out a card with 256MB memory at 64-bits. I think that card is 128-bit. I agree, though, that it’s lame that they can call their card a Pro simply because of its core clock.

Forge

16 years ago

This is Powercolor you’re talking about.

If they made NV cards (NV won’t let them, last I heard), they’d probably announce:

No cost-cutting card design is too stupid for Powercolor to make. These are the guys who made one of the two Radeon 9000 PCI designs ever… Theirs was 64bit, Hercules’s was 128bit. It was really disgusting in benchmarks. Two completely different categories of performance, yet only a few bucks difference in retail price.

Anonymous

16 years ago

Until Nvidia realizes that the FX lineup was a waste of time and money they won’t take the lead. My guess would be that their next line will be way better and possibly re-take the lead. Until then Nvidia sucks a **** like Elton John

Pete

16 years ago

Nice review, Diss. Can’t wait for the (required, IMO) IQ analysis.

My main concern is the potentially skewed picture presented by your Wolf:ET benches. I note you use an XP, and the 5700U whups the 9600XT using your demo (Oasis, IIRC). Yet Damage’s FX yielded a much more even situation with the high-end cards. That leads to a few questions:

1) Are the different CPU+memory combos contributing most to the benchmark disparity?
2) Are the drivers better/worse optimized for the different CPUs, thus contributing most to the disparity?
3) Is this mainly a memory-bandwidth-limited benchmark?
4) Is Wolf:ET better-optimized for nVidia’s DX7/8 architecture (a remnant of nV’s OGL dominance and ATi’s driver shortcomings for the years preceding the 9700P’s launch)?
5) Or, maybe your benchmark is not indicative of overall Wolf:ET performance?

I know my 9100 shows much better performance in Oasis than in Radar and the Beach. Railgun and Fuel Depot are more in the middle (I’ve never created a demo on Gold Rush, oddly). Perhaps you’d consider a new demo using a more challenging map? Or you could test Wolf:ET with two maps, a “fast” and “slow” one?

AT used an FX CPU and tested Radar, and their 5700U and 9600XT scores were close, so I can’t tell if it was the CPU or the map that changed the relative results.

Edit: Another niggle: You mentioned the 5700 is 4×1 all the time. I think it’s still 4×1 in single-texturing situations, and 2×2 in multi-texturing situations. AT and THG released that nV pipeline schematic, and AT also noted that nV said 3DCenter’s analysis of the FX pipeline was accurate, and 3DC concluded the 5600/5700 was 2×2/4×1.

AmishRakeFight

16 years ago

Stick with ATI…

PLASTIC SURGEON

16 years ago

Best mid-range card: The 9500pro. Bar none. Next in line for $10-$20 more: The 9700pro. It has reached mid-range prices. If you can find these gems…Grab one QUICK. Forget the 5700 and 9600XT altogether. 8 pipelines go a long way compared to extra clock speed.

Yep, a 256MB XT with faster-than-stock memory speed is exciting enough, but a 128MB XT with much-faster-than-stock memory speed looks to make an outstanding 5700U competitor. It all comes down to the price, though, and whether Powercolor will still include HL2 with their cards.

Forge

16 years ago

All of NV’s ‘optimizations’ take on a new humorous quality in light of recent NV comments that they’d like to slip down to annual driver releases.

I’m picturing Windows Update for drivers, where you keep running 55.02 or whatever, but the drivers download new cheats^H^H^H^H^Hoptimizations from Nvidia directly. ‘Oh boy, HL2 is out! I better hit NVUpdate so that I can ‘optimize’ the game out of 640×480!!’

“either 16-bit floating point precision really isn’t enough for high dynamic range lighting, or NVIDIA’s drivers need to expose the GeForce FX’s support for floating point texture formats in order for rthdribl to work properly.”

The former would appear to be correct – rthdribl utilizes 32-bit precision, which of course looks excellent dithered to 24-bit, but not so good when rendered in straight 16-bit. That’s why you get all of the color banding and apparent rendering flaws.
Wake me up when nvidia ships a card that’s as fast at full 24-bit or 32-bit straight DX9 as the R300.

Dposcorp

16 years ago

Way to go, Diss. I like the review, but I am surprised you didn’t toss in a 9500Pro for kicks. A few can still be found here and there, and it still may be the best video card at less than $200.
Aside from that, job well done.

dolemitecomputers

16 years ago

Is this right? Check the box photo at top of news for the Aopen 5700. Does this card have dual DVI’s or is it just a bad image?

That picture has dual DVI but everything else on the site says D-Sub, DVI, SVideo

dolemitecomputers

16 years ago

I noticed that too but the photo shows it differently.

danny e.

16 years ago

don’t forget the new Matrox card coming out next year with its 16×2 pipe… built on a .09-micron process and an estimated clock of 700MHz!!!!

(hehe)

indeego

16 years ago

I missed the funny

derFunkenstein

16 years ago

a true matrox fan, you are…;)

Disco

16 years ago

This makes my plan to upgrade to a 9600XT by Christmas a bit confusing. Of course, the coupon for HL2 will be a bonus (that immediately chops $50 off the price of the card) and will likely remain the deciding factor – all the other pro-ATI and anti-nVidia issues notwithstanding. The only problem with the coupon is that I don’t know if I want to wait 8 weeks for the game to arrive on my doorstep, and I’m not very interested in downloading the game immediately and getting involved with the whole Steamy Crap(tm). I’d like to have the game CD in my grubby hands. I’ve read rumours on the Inq that the retail 9600XT’s will actually have a higher clock than the ones that were available for testing because of the 5700’s ‘good’ performance. I guess I’ll be waiting until the last minute before making my decision (which is always the best policy anyway when dealing with tech purchases).
Diss – if the retail cards have a higher clock, will you be re-visiting these hardware reviews (you could just overclock your current card to the new frequencies)?

Anonymous

16 years ago

ATI will be increasing the clock speeds on the 9600xt, so you will see this very soon. Also, people, the best mid-range card out is the 9500pro if you can find one. And even the 9700pro! It’s $200. And smokes the 9600xt and the 5700.

Anonymous

16 years ago

you got a link to back up that 200$ claim. ive heard people say that but never found anyplace actually selling it for that price. basically i call bs.

Dissonance

16 years ago

If ATI increases the 9600 XT clocks for retail, you’ll see me do an article about it.

No, The Inq said the VPU underwent another respin, not the board. And no one confirmed whether the respin meant higher speed or just better yields at the same speed. And, as is obvious, the 9600XT really needs more memory bandwidth, not core speed, and a core respin won’t help increase memory bandwidth.

Anonymous

16 years ago

Did I miss it where you stated that all FX cards don’t do true trilinear in D3D? If I missed where that was pointed out here, forgive me. If I did not, then whoopsie, as that’s a pretty major part.

Dissonance

16 years ago

Did you miss the part where I talked about our upcoming image quality/rendering output article? Did you miss my FX 5600 Ultra review where I note the Radeon 9600 Pro’s lack of trilinear with aniso in D3D?

Heh, not to steal Diss’ thunder, but ATi and nV both take shortcuts if you enable AF via the control panel. ATi has always done tri on the first texture layer, and bi on the rest. nV has recently (with the 50 series Det’s, I believe) totally replaced trilinear with brilinear (a lesser tri) in D3D, and forced only 2x AF on all texture layers beyond the first.

I think Diss forced AF via the control panel for those ATi shots. He can get full trilinear with AF by setting the card to App. Pref. and enabling tri and AF via UT2K3’s .ini file.

WaltC

16 years ago

No, but what I missed is how you set up both of these games for 8x AF and Trilinear with the Catalysts. Did you do it through the control panel, or did you set the Cpanel to “application preference” and then use the internal settings controls in the games to specify these settings? I’m sure you’re aware that if you set anything in the Cpanel other than “application preference” then any setting you make internally in a game will be overridden by what is set up in the Cpanel.

Can’t speak about SS, but I do know that it’s been well known for awhile that if you use the Cpanel to set AF in UT2K3 with the Catalysts, then you will only get trilinear filtering on the first texture stage in the game. The texture layering used in UT2K3 is a bit different from most games, and the *global* AF and trilinear approach used by the Cpanel should not be used if you desire trilinear treatment for all visible texture stages in UT2K3. What you do in that case is to set the Cpanel for “application preference” and use UT2K3’s internal settings for trilinear and AF control, and in that case the Catalysts will honor the application’s request and provide trilinear treatment on all visible texture stages.

This is similar to how the Detonators have handled trilinear filtering in UT2K3, with one very important and notable difference. Even if you set up the Detonators’ cpanel for “application preference,” and you select full trilinear from within UT2K3’s internal controls, you will never get it. From what I’ve read, the latest nVidia drivers are hardcoded to provide trilinear filtering on only one texture stage in any game, and all other visible texture stages will be bilineared, regardless of what the end user sets up in the game itself.

Although it may not mean a thing, I looked at your above link and find it curious that you simply do not state whether the effect you observed happens at 16x AF, which of course is the setting most Catalyst users will actually use. As to whether or not what you discovered was a bug, or something else happening from non-nefarious causes, you might want to dash off an email to ATi about it, and perhaps the question you raised as to “Is it a bug or a cheat, etc.?” might be answered. Just a thought…:)

Zenith

16 years ago

…Wow, this 5700 makes nVidia look like an option for me again… but since this thing is most likely going to cost MORE than a 9600 Pro, they can forget it.

Sargent Duck

16 years ago

I think I’ll just stick with my 9600 Pro. At least I know it can run DX9 in all of its glory, without having to lower the color precision. That, and when I want my card to run trilinear filtering, it will.

PLASTIC SURGEON

16 years ago

Don’t expect DX9 performance increases beyond ATI’s offerings from the NV3x line of chips. I for one have issues with the fact that Nvidia has to rely so heavily on application-specific driver optimizations for ALL DX9 games to run properly, and then count on game developers coding for its specific paths. Nope… toooooo much of a gamble. I will pass.

slymaster

16 years ago

I think all major game developers will indeed code for Nvidia – there are still too many cards out there to ignore.

What I would worry about is that a new game might be released, and then one has to wait two months for the Nvidia performance-improvement patch. We will have to wait and see how developers treat the issue as DX9 becomes more popular.

PLASTIC SURGEON

16 years ago

That’s the issue. And a performance patch won’t correct pixel/vertex shader issues. They will have to release a whole new driver set with MORE application-specific optimizations for THAT given game. That’s the BIG gamble. ATI does not have that problem. And what’s more, even with all those driver tweaks and pixel/vertex shader swaps, DX9 games still run slower on Nvidia cards than on ATI cards. You gamble if you purchase an FX card. Period. Not for $500 bucks I won’t. Not even $200…

/me waits for nvidia to get off this chip line…….till then its going to be a 9700/9800 nonpro for christmas

maxx

droopy1592

16 years ago

q[

Dissonance

16 years ago

Look again. If you check out real-world apps, the results aren’t that even at all. Even stuff like TRAOD doesn’t count as real world because, as I stated, the benchmark mode doesn’t use the optimized code path that the actual game already has.

WaltC

16 years ago

I have no idea what you mean here. The game comes pre-configured for nV35 on the game’s DX8.1 code path. The default setting for the R9800P in the game is the game’s DX9 code path. nV35 runs the game about as fast on the DX8.1 path as the R9800P runs it on the DX9 path, which of course is why the game by default uses its DX8.1 path for nV35. The benchmark allows you to see what code paths are being used by each card, and what feature support is provided in each path, for each particular feature.

As far as I recall, when you set up the nV35 to run the game on the DX9 path, most of the precision is set at fp16, which is partial precision for the nV35, but it’s all set for fp24 for the R3x0, which is full precision. And even in this configuration the nV35 is whipped pretty decisively on the DX9 path. I can only imagine that the DX9 support with partial precision instead of full for nV35 is what you mean by “optimized path” here. At fp32, nV3x isn’t at all competitive with R3x0 at fp24; indeed, R3x0 at fp24 is often faster than nV3x at fp16, while rendering at a higher precision at the same time. I suppose if you mean that full precision (fp32) in nV3x isn’t “real world” because it’s too slow to be competitive or useful in a 3D game, I could certainly go along with that…:)

But as far as I recall, even using the DX9 code path in the game, lots of software support for DX9 features that are turned on for R3x0 is turned off for nV35. AFAIK, you can set up the TR:AoD software to use *any* of the code paths available in the game for any of the products it supports. But the features can only be accessed by the game engine if the product’s drivers support that API path feature. What was interesting to me when looking at the DX9 path feature support for both cards was how many more features the game supported on the DX9 path with R3x0 than with nV35 – obviously because nV35 didn’t support that DX9 functionality even on the game’s DX9 path.

But getting back to the default nV35 setup on the game’s DX8.1 code path – the only mode in which nV35 performs, strictly on a frame-rate basis, pretty close to the R9800P on the DX9 path – no fp precision is used at all, as DX8.1 only supports integer precision, and of course ps2.0 isn’t used either, as the DX8.1 path supports nothing higher than ps1.4. This should be very obvious to everyone at this point in time, since nVidia made several statements about its driver optimizations converting ps2.0 instructions to their rough ps1.4 equivalents when possible. Heck, back when nVidia first started its attack on 3DMark03 early this year, one of their stated complaints was that they found the use of ps2.0 code to be “unnecessary.”

I really think we should all drop this “real world” nonsense stuff…:) You know as well as I that if nVidia’s ps2.0 shader performance was on a par with ATi’s that nobody on this earth, including nVidia, would be calling it less than “real world,” or attacking it. The idea that the “real world” may only be defined by what is convenient for nVidia is, frankly, getting a bit long in the tooth.

Anonymous

16 years ago

http://www.gefen.com/kvm/product.jsp?prod_id=1403

Corrado

16 years ago

But DVI was on all cards WAY before DVI flat panels were really available at reasonable prices.

Entroper

16 years ago

Yes, but having one VGA and one DVI output means you can use your card with either a single LCD or a single CRT without adapters. A card with dual DVI would require an adapter to be used with an analog-only monitor. Expanding to VGA+DVI increased their market segment, but expanding to DVI+DVI would shrink it dramatically.

Krogoth

16 years ago

Well, this market range is far more interesting than the line-up between the Radeon 9800 XT and GeForce FX 5950 Ultra. The GeForce FX 5700 Ultra may be the only half-decent NV3x-based core that Nvidia designed. The card performs somewhat better than the Radeon 9600 Pro, and I expect it to be on par with the upcoming Radeon 9600 XT. Unfortunately, the GeForce 5700 Ultra’s biggest weakness is its lackluster DirectX 9 performance, like the rest of its siblings. With upcoming popular DirectX 9 titles like Half-Life 2, the Radeon 9600 XT will really shine, plus the 9600 XT will come with a coupon for Half-Life 2, even though Half-Life 2 will be a little late. That might be the deciding point for buyers choosing between the Radeon 9600 XT and GeForce FX 5700 Ultra.

Anonymous

16 years ago

why can’t nvidia get all the effects working in ShaderMark?! it’s annoying…

i have to buy ati again……

Corrado

16 years ago

Sure it’s faster… but it only took them HOW LONG? The 9600 has been out for almost six months, has it not? Wow! Not too impressive that it took them two ‘generations’ of chips to catch up and finally ‘retake’ the lead…

The 5700 Ultra is definitely better than the 5600 Ultra, but still not unqualifiedly better than, or equal to, the 9600 series. And what the hell is up with the Gun Metal results? Is it just one big set of vertex shaders or something?

Anonymous

16 years ago

Gun Metal is one of those Nvidia Cg, “The Way It’s Meant to be Played” games. It shouldn’t even be in the review without some kind of footnote; otherwise, it’s just misleading.

Pete

16 years ago

The only thing DX9 about Gun Metal is a VS 2.0 shader, which should obviously benefit from the tripled VS units that AT reports in the 5700. I think the benchmark itself is more aimed at nVidia, though.

just brew it!

16 years ago

Heh… well, this complicates my next video card purchase decision a bit. I had sort of written nVidia off, and decided to go with ATi. Now I’m not so sure…