
Well from the benchmarks I've seen so far it is around 30% faster at and above 1024 x 768.
That's not really much to get excited about.
The move to .18 micron silicon is the best news, as that might mean the GPU can stay below flashpoint!

Also, the Voodoo 5 is still in alpha state and the drivers are nowhere near release spec; any hardware site worth its salt that has tested them will make this clear. I expect there to be very little between the GeForce 2 and the V5 5500 in final guise. I just hope the V5 6000 is significantly faster than both.

A BAD *** new feature with the GF2GTS is pixel shading! If game programmers use this feature, it will look like a TOTALLY different game between a GF2GTS and any other card! However, I'm still very much looking forward to the Voodoo5 6000 and the retail drivers for the Voodoo5 5500. What I really want to see are UT benchmarks between a GF256, GF2GTS, Voodoo5s, and Voodoo3s. I want to see how much a GF2GTS or Voodoo5 would improve the picture quality and FPS in UT compared to my Voodoo3 3000.

But basically, I'm pretty much unimpressed. Compared to the Voodoos, yeah, it'll probably be better. But even with 3dfx's beta drivers: not by much. And compared to what ATI says their next card will do, it doesn't really sound that hot.

First of all, even on paper, it can only do 25 million triangles, while ATI claims 30 million. And that's only as long as they're attached to each other in a very well-defined way... otherwise it drops to about 8 million. Basically it doesn't do 25 million _triangles_, it does 25 million vertices per second. There's a difference.
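The vertex-versus-triangle arithmetic above can be sketched out. This is a hypothetical illustration assuming the quoted 25-million-vertices/sec figure; the strip behaviour is the standard triangle-strip property, not a measured spec:

```python
# Illustrative sketch: why "25 million triangles" really means 25 million vertices.
VERTICES_PER_SEC = 25_000_000  # the advertised figure, taken at face value

# In one long triangle strip, every vertex after the first two completes a
# triangle, so throughput approaches one triangle per vertex.
strip_triangles = VERTICES_PER_SEC - 2          # ~25 million, best case

# Unconnected (independent) triangles need 3 vertices each.
independent_triangles = VERTICES_PER_SEC // 3   # ~8.3 million, matching the post

print(strip_triangles, independent_triangles)
```

That ~8.3 million lines up with the "drops to about 8 million" claim: the marketing number only holds when geometry arrives as ideal strips.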

Second, from those benchmarks it looks like it's _very_ far from the promised 1.6 gigatexels. At a wild guess, it chokes on the memory bandwidth, like the 256 did with SDR. Simple maths on memory timings would have shown that there just wasn't room for that 4x increase. Heck, there wasn't even room for a 2x increase. Until someone makes QDR, those 1.6 gigatexels will likely exist only on paper and in marketing hype. And, no, there won't be any QDR available any time soon. (On the other hand, if ATI actually makes a memory access optimization that works, it could easily gain some serious ground there.)
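For reference, here is where the advertised 1.6 gigatexel figure comes from, assuming the commonly quoted GTS configuration (200 MHz core, 4 pixel pipelines, 2 texture units per pipeline). It is a zero-stall peak, which is exactly the assumption the memory-bandwidth argument attacks:

```python
# The advertised fill rate is just core clock x pipelines x textures per pipe,
# assuming the chip never stalls waiting on memory.
core_hz = 200_000_000      # assumed 200 MHz core clock
pipelines = 4              # pixel pipelines
textures_per_pipe = 2      # texture units per pipeline

peak_texels_per_sec = core_hz * pipelines * textures_per_pipe
print(peak_texels_per_sec)  # 1.6 billion texels/sec -- on paper
```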

Third, the features really don't look that impressive.

Briefly, yeah, they'll probably get a bunch of people to buy it based on lots of hype, and a lot less actual substance. Then again, what else is new?

All I can say is: look out 3dfx. The Q3 benches I saw at Anand's show the GeForce2 demolishing the V5500. I'm kinda skeptical about them pulling that much performance out of driver optimization.

The GTS looks impressive from where I sit, but I think I'll wait for the NV-20 since my Anni Pro is doing fine for now.

_________________________
"Give a man a fish and he will eat for a day.
Teach a man to fish and his wife will divorce
him, get the house, the kids, the boat, his rods
and reels, and he will learn to drink..."

Even with fast writes and everything, let's do some simple maths. That 1.6 gigatexel figure is based on the assumption that each of those 4 units can actually be kept going on and on and on, like the energizer bunny, and doing two textures per clock.

Now let's say we're running in 32 bit colour.

A) Let's take a worst-case scenario first, where everything is drawn back to front. Two textures per unit mean two memory reads to get that pixel. Plus one memory read from the Z-buffer, plus one memory write to draw the pixel, plus a memory write to update the Z-buffer. (If you also have transparency/translucency effects, add one read to get the old pixel. But let's assume we have no transparent textures.) That's five memory operations per clock, per texturing unit. Now multiply this by 4 units, and you get 20 memory accesses per clock. Times 32 bits, that's 640 bits moved per clock.

640 bits per clock on a 128-bit bus? Dream on. And real-life memory will incur at least a 6-cycle penalty on a page miss, which will happen a lot when reading textures. The memory writes aren't that fast, either.

B) Even in the best possible scenario, things aren't looking that much better. As in: all the drawing is done front to back, there are no transparencies, and you're looking straight at a wall, so everything gets occluded right away. (And assuming the card or game is actually smart enough to optimize drawing for this situation. That remains to be seen.) It's not quite the typical situation in a game, unless your only purpose in life is to view walls up close, but let's pretend it happens. And it only helps after the first layer of pixels has been drawn, as per the previous scenario. Even so, it's still at least one read per texturing unit for the Z-buffer. Now it's 4 operations times 32 bits, which is 128 bits moved per clock. On a 128-bit bus. It fits quite nicely, but you'd need some very ideal memory to actually get it. As in: memory that can do one operation per GPU clock, and again, that just doesn't exist.
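The two scenarios above boil down to a few multiplications. This sketch just restates the post's own assumptions (32-bit values, 4 texturing units) as code:

```python
# Per-clock memory traffic under the post's assumptions.
BITS_PER_ACCESS = 32   # 32-bit colour, 32-bit Z values
UNITS = 4              # texturing units working in parallel

# A) Back-to-front drawing, opaque textures:
#    2 texture reads + 1 Z read + 1 pixel write + 1 Z write per unit.
worst_accesses_per_unit = 2 + 1 + 1 + 1
worst_bits_per_clock = worst_accesses_per_unit * UNITS * BITS_PER_ACCESS
print(worst_bits_per_clock)   # 640 -- five full transfers on a 128-bit bus

# B) Fully occluded front-to-back drawing:
#    only the Z-buffer read survives per unit.
best_bits_per_clock = 1 * UNITS * BITS_PER_ACCESS
print(best_bits_per_clock)    # 128 -- exactly one 128-bit transfer per clock
```

So even the best case needs one ideal, zero-latency transfer every single GPU clock just to keep up.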

C) So far I've been assuming that all the textures and triangles are in the card's memory, and _nothing_ needs to be transferred over the AGP bus. I.e., not just with fast writes, but even with divine intervention on the AGP bus, it'll still fall a lot short of that advertised number.

D) Note that the above calculations have already been _very_ generous. E.g., I've been blissfully ignoring bus contention issues. To actually move that many bits per clock, the 128-bit bus would have to act like 4 independent 32-bit buses, with separate address and control lines. Furthermore, the memory would have to be quad-ported so reads from the same chip don't wait on each other. In practice, the situation would be a lot less nice.

E) I've also been ignoring the fact that the screen refresh itself needs to read that memory, too. At high resolutions and high refresh rates, this eats into the memory bandwidth as well. At, say, 1280x1024 in 32-bit colour with a 75 Hz refresh rate, that's roughly 384 megabytes per second eaten just by that. It's not much compared to DDR bandwidth, but it's there.
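The scanout figure is straightforward to reproduce (depending on whether you count megabytes as 10^6 or 2^20 bytes, you land a little either side of the quoted ~384 MB/s):

```python
# Bandwidth consumed just by refreshing the display from the framebuffer.
width, height = 1280, 1024
bytes_per_pixel = 4            # 32-bit colour
refresh_hz = 75

scanout_bytes_per_sec = width * height * bytes_per_pixel * refresh_hz
print(scanout_bytes_per_sec)            # 393,216,000 bytes/sec
print(scanout_bytes_per_sec / 2**20)    # exactly 375 MiB/sec
```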

Briefly: the GeForce 2 can't possibly achieve that advertised gigatexel figure, and that's that. It'll exist only in the marketing hype, not in your computer. So wth, go buy it. We really need to support falsehood in advertising, you know.

I would like to add something to Moraelin's letter. I just saw a pic of the Voodoo 4 and 5, and wow, that thing is huge. You will be lucky to get it in your case: it is like a foot long, has like 5 fans, and looks like a piece of crap that will melt your card if one of the fans doesn't work (also it doesn't have any heatsinks).

Everybody knows they are far too late (damn, did they get a discount on Valium or what?). They were already late with the V3. As for their reasons, I don't know, and I think most gamers don't care anyway. It's be there or be square.

It seems that customer loyalty is rotting. Their products aren't that exciting anymore (cf. the delays), and they're not that price-competitive either.

And all these acquisitions must have crushed their cash flow.
I would not like to be the CEO.

From a product point of view, I'd say they only have two aces left if they want to keep a bright image: drivers and (potentially) the V5 6000, the main issue being price.
Drivers, because from what I have heard the GTS will use the Detonator, already highly optimised, whereas 3dfx still has plenty of headroom.
As I see it, they are definitely going to take the challenger's seat, and we'll see the same story as with the V3 line: the top sellers will be the El Cheapo models, bought by those who want cost-effective upgrades. Although those buyers could just as well go for a GeForce.
And maybe their V4 and V5 PCI boards for mobos with integrated chipsets.

Damn, there are so many factors, I'm getting lost (and I am tired out, which doesn't help).
A headache-generating situation. But hard times for 3dfx anyway.
And ATI is closing in...

Where do all you GeForce lovers suddenly appear from? It beats me why you are all so fanatical about what has been (and still is) a troublesome product that should not have been released in the beta form it was.

The GTS will be nothing special over the GeForce: up to 30% above 1024 x 768, and it will still suck at Glide games. Where is the Glide support, Nvidia? It is and has been open code for a while now.

I'm not a Voodoo troll either, but how come this is the only card I can truly say I've been disappointed with?

Wait until the Voodoos are out or you may all end up eating your words.

Umm... Iixus, exactly what is your problem, anyway? I never said the Voodoo 3s are better. I mean, hell, if you want to argue with me, at least argue about something I've said, not about some fiction.

Yes, the V3 has been out-gunned for quite a while now, by just about any other card. I still think it offers better bang per buck than a GeForce, though. But then every single card out there offers better bang per buck than a GeForce, and a lot of them offer better bang than a V3. So basically I wouldn't advise anyone to buy a V3 nowadays. A TNT2 or a G400 will likely give better performance at a similar price.

But then, if they already have a V3, I wouldn't advise them to upgrade, either. At least, not right now. As I've said, for playing most games it should be more than enough. And I repeat: for _playing_ the game, not for bragging about some fps rate your eye can't even detect.

I must admit the card was very impressive. The Voodoo5 really struggles at lower resolutions (of course I never use those lower resolutions) and did not show great results in FSAA (but I would probably limit that to racing games anyway). If they can get some driver improvements out by June, I might consider it.

Otherwise it will be the GeForce2. The only problem is that they have hamstrung the card with too-slow memory. They should have tried for a) faster DDR, b) a 256-bit data path to memory, or c) dual-channel memory. If you up the colour to 32-bit you get very little performance difference on the GTS. Also, even with a 2.5 times faster geometry engine, the GTS shows little improvement in DMZG or their new GTS Evolva game.
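A rough comparison of those three options, with assumed (purely illustrative) clock rates. The ~5.3 GB/s baseline matches the commonly cited GTS figure of 166 MHz DDR on a 128-bit bus; the specific clocks for the alternatives are my guesses, not announced specs:

```python
# Peak memory bandwidth for the three fixes, in GB/s (10^9 bytes/sec).
# Clocks are assumed for illustration, not official specifications.
def peak_bandwidth_gb(bus_bits: int, clock_mhz: float, transfers_per_clock: int) -> float:
    """Bytes moved per second at the theoretical peak, in GB/s."""
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

baseline     = peak_bandwidth_gb(128, 166, 2)      # GTS-style 128-bit DDR: ~5.3 GB/s
faster_ddr   = peak_bandwidth_gb(128, 200, 2)      # a) faster DDR (assumed 200 MHz)
wide_bus     = peak_bandwidth_gb(256, 166, 2)      # b) 256-bit data path
dual_channel = 2 * peak_bandwidth_gb(128, 166, 2)  # c) two 128-bit channels

print(baseline, faster_ddr, wide_bus, dual_channel)
```

Options b) and c) reach the same peak number; the practical difference would be that two independent channels can service separate requests concurrently, while one wide bus moves more per transfer.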

Maybe in June there will be a few cards out there for people who are willing to spend a little more for faster DDR memory. The Radeon 256 is supposed to use 400MHz DDR.

The core on the GTS is VERY impressive though, with lower wattage and heat, and overclocking to almost 250MHz on pre-production chips... WOW!!! The DDR still tops out at about 350-370MHz.

I'm with Blade! The GeForce has problems playing UT and that is my FAVORITE game! I don't care how fast and how much better the other games look; if UT looks bad or doesn't play well then I don't want it! Hopefully the new drivers for the V5 5500 will help A LOT. The thing that has me worried is the fact that when 4X FSAA is turned on, performance drops like a rock. I really like the way FSAA improves the look of the game, but it might take the V5 6000 to make 4X FSAA play well.