I'm personally looking forward to seeing what this new hardware can actually do for gaming and performance.

As much as I prefer nVidia GPUs over AMD's, I can't say anything bad about ATI/AMD based on any of their previous GPUs. I think of the Xbox 360's GPU and what it can actually do; outpacing the RSX by such a wide margin is impressive, especially when you're working on a limited but standardized platform, as compared to the PC's wide-open platform and unlimited system configurations.

If you remember the nVidia rumors from last spring/summer, the GT300 chip was supposed to be a 400-500 SP scale-up of the existing architecture from the 8xxx/9xxx/2xx series cards, done on a 40nm process (possibly with the DX10.1 update that the GT 240 is sporting). I suspect that nVidia's 40nm woes resulted in this chip being canceled in favor of putting effort into bringing Fermi forward in time, although they still weren't able to beat ATI to the punch with a DX11 part.

It’s called “opportunity cost”. There’s no price on being the first to experience an ATi Radeon HD 5870 in all its glory: a single card crushing nVidia’s dual-GPU GTX 295. And we’re talking about a heavy title such as Crytek’s Crysis @ 2560×1600.

According to Tom’s Hardware, the nVidia GTX 295 simply didn’t work at that resolution. A pity again! Please see for yourself.

“Notice the missing result for Nvidia’s GeForce GTX 295 at 2560×1600 with 8xAA? That’s due to the card not having ample on-board memory to run that configuration (and the game not knowing any better than to keep it from trying). Grand Theft Auto gets around this by simply making resolutions unavailable if a graphics card doesn’t have a large enough frame buffer. Crysis crashes instead.”
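The quote above is about running out of video memory. As a rough back-of-the-envelope sketch (my own simplified numbers, not from the article), here is why 8xAA at 2560×1600 is so punishing: with no compression, a multisampled render target stores every sample of every pixel, for both color and depth.

```python
# Rough estimate of the multisampled framebuffer footprint that can
# exhaust a card's memory at 2560x1600 with 8xAA.
# Simplified sketch: ignores compression, extra render targets,
# textures, and driver overhead, so real usage is higher still.

def msaa_framebuffer_mb(width, height, samples, bytes_per_pixel=4):
    """Color + depth/stencil storage for one multisampled render target."""
    color = width * height * samples * bytes_per_pixel
    depth = width * height * samples * 4  # 24-bit depth + 8-bit stencil
    return (color + depth) / (1024 ** 2)

print(round(msaa_framebuffer_mb(2560, 1600, 8)))  # -> 250 (MB)
```

Even this simplified estimate lands at roughly 250 MB for a single render target before any textures or geometry, which is a large slice of the GTX 295's per-GPU memory.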

GTX 295 not being able to run Crysis @ 2560×1600? Pity!

Fermi has got to be better and faster than Cypress. It’s an obligation for nVidia to build it that way, since they had, at the least, more time to conceive it.

And, as always, don’t be fooled: you’re going to hurt your pocket to have Fermi installed in your rig. Be prepared to pay the price. It happened with Cypress, and it’s going to be the same with Fermi. And since nVidia cards are always much more expensive than ATi/AMD’s, one Fermi card could reach as much as 750 bucks. Wait and see.

Take this weekend and go to your favorite retail store and grab your ATi Radeon HD 5870. It’s there, real. Just take it.

ATi/AMD offers 3-monitor output. You don’t need CrossFire (CF).
nVidia: the new Fermi delivers only 2-monitor output. If you want to experience a 3-monitor gaming setup, you must buy two nVidia cards, which costs a lot more than one single ATi Radeon HD 5970, for instance. Although a 3-monitor rig is a bit expensive to afford, that’s what gamers are starting to look at.

ATi/AMD is delivering the ultimate gaming experience with its latest Dx11 card.
nVidia is moving away from the gaming industry, bringing horrible products that definitely didn’t make it.

We’re talking about gaming here, not working at the office with business solutions, CUDA, GPGPU, etc.

@those_who_said_about_dx11_titles
What the hell is CUDA, PhysX, anyway?
If we have just a couple of titles now, it’s because this is the beginning of a new gaming generation with Windows 7 and DirectX 11; it’s a trend, it’s the future.

Does anybody REALLY have any titles which benefit from PhysX? How many are available? Did you know that PhysX is proprietary? nVidia does not offer it as an open standard. Guess why there are so few PhysX titles, huh?

ATi/AMD is working on a new generation of its current line of products, to be available in the second half of the year.
nVidia: when exactly will Fermi be available? Hehehehe… pity!

ATi/AMD is completely committed to the gaming industry.
nVidia is a big company, but it’s not working for the gaming industry anymore.

ATi/AMD won last year. And it will win again this year.
nVidia was a big FIASCO last year. And it will be again in 2010! Pity!

What a pity you are just picking and choosing results to suit your argument.

There's actually a benchmark many sites use to convey what you are talking about. The benchmark covers dozens to 100 games, and each game's FPS score is added together into one total. Nvidia always wins. SLI always comes in last. Get a clue.

The sum-FPS benchmark gives 0 points for a game if the card cannot play it at that resolution. ATI cards wind up playing the fewest games when compared to Nvidia. That's all I need to know to decide what to buy. I don't have time to fiddle with settings to make *any* video card work; it had just better work, right off the bat.
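The scoring rule described above can be sketched in a few lines. This is a hypothetical illustration of the sum-FPS idea only; the function name and the numbers are made up, not taken from any real benchmark suite.

```python
# Hypothetical sketch of the "sum-FPS" scoring rule described above:
# sum each card's FPS across all tested games, counting 0 whenever a
# game fails to run at the tested settings (marked here as None).

def sum_fps(results):
    """results: list of FPS numbers, with None for games that crashed."""
    return sum(fps if fps is not None else 0 for fps in results)

card_a = [42, 61, None, 55]   # illustrative numbers, not real data
card_b = [38, 70, 25, None]

print(sum_fps(card_a), sum_fps(card_b))  # -> 158 133
```

The rule's bias is visible in the sketch: a single failed run costs a card its entire score for that game, so a card that crashes at high resolutions falls behind even if it is faster where it does run.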

Wrong: when Fermi comes out, ATI won't be obsolete. The 5870 will still be a hellishly fast DX11 card that will play any game at the highest resolutions for the next couple of years. Fermi being slightly faster doesn't make ATI obsolete. Obsolete is what ATI did to Nvidia when they released a DX11 card: Nvidia had nothing, so their GTX 2xx series became obsolete.

Nah, the douche bag just has no life, like an AMD loving, Intel hating fangirl (and I do say "girl" because little zit-faced punks like this couldn't get a girl in the real world - or a hot one anyway).

What the limpdix like "Bob" don't understand is that competition is good. One year ATi wins; another year nVidia wins. When both take turns winning we consumers all win. I've had GeForce and ATi, and have both in two current gaming rigs (HD 4870 in one and GTX 275 in another). And for the record, my GTX 275 overclocked and tuned with programs like nHancer smokes the HD 4870 with the best it can do from ATi's CCC. But you'll never hear an ATi fangirl bring up overclocking facts and ATi driver and CCC snafus.

In any event, I'll be selling one of those rigs this year and building another (probably the HD 4870 rig). If ATi still has the better card, as it does currently, then I'll buy ATi. If nVidia's new card has better performance, however (even for more bucks, like I spent on the GTX 275 over the HD 4870), then I'll probably go nVidia. Finally, I game a lot with Microsoft FSX, and nVidia cards smoke the ATi cards in that program (one reason I have two rigs).

ATi owner here; not a fanboy, but a system builder and tester of every card on the market. ATi overclocking fact: I never use CCC (it sucks), but ATI Tray Tools overclocks well. Comparing an overclocked GTX 275 with an HD 4870? Come on, look at the prices: the 4870 fares well against the GTX 260 (which is priced higher than the 4870). Bob, Fermi will crush ATI, and that's for sure; they developed something that was technically a challenge. Sure, it's going to be overpriced, but it's going to be for those who want performance over anything else, price included.

The problem nowadays is that most games are developed for consoles and PCs, so they run amazingly on something like a GTX 260/4870 at high/max details at 1920x1080, except Crysis. The best of the games are going to come out on consoles and PC. So until the XboxCube or PlayStation 4 comes out, titles won't need an amazing video card; keep your CPU and RAM at a good level and everything will be alright. Performance-wise, these video cards are for benchers, extreme overclockers, and anyone who owns a 30-inch monitor. That represents 0.01% (if that) of the video card market. Welcome!

Agreed with the portion that GTX 275 and the 4870 are in 2 different price categories. I would say the 4890 and the GTX 275 are at even parity.

Fermi is designed to push technical limits and have a halo effect on Nvidia's lineup to help sell their lower offerings. It wasn't designed to be truly cost effective, like ATI's price/performance parts are. Nvidia has a different goal in mind, and that is totally alright.

With the GPU horsepower we have now, we can continue pushing boundaries: you can run a multi-monitor setup, or have GTX 295 performance levels in a single-GPU configuration at lower power consumption.

If you want adequate performance, you can get something in the 9800 GTX+ range that plays fine at 1680x1050.

Take a look at Steam's hardware survey results: even now, 1280x1024 (an older 17-inch CRT or 17-19-inch LCD resolution) and 1680x1050 (20-22-inch LCDs) represent the majority.

These video cards were never developed with "adequate" in mind, Fermi is truly about pushing boundaries and being bold.

There is the fact that Fermi still isn't available, and AMD has had a good amount of time to improve things on the 5870. As a result, we may very well see a 5890 released by the time Fermi even hits the shelves. In addition to that, work is obviously going on toward the Radeon 6000 series while NVIDIA has to keep working to get Fermi out the door.

Fermi may end up taking the performance crown by the time it is released, but we may be looking at another Radeon 9700 vs. Geforce 5800 situation here, where nothing NVIDIA does in this generation will let them catch up due to all the "extra abilities" that their products are trying to offer.

The Radeon 5870 (and a number of other 5000-series parts) is already capable of 3-monitor output, and not just in a few titles designed to support it. Eyefinity really does let you combine multiple monitors so that applications see only one display with the higher combined resolution available. Fermi, no matter its horsepower, may not be able to offer that.
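The "one display" behavior described above can be shown with a toy sketch. This is only an illustration of the single-large-surface idea (the function and numbers are mine, not AMD's): three side-by-side monitors are presented to the game as one wide resolution.

```python
# Toy illustration of the "single large surface" idea behind Eyefinity:
# a row of monitors is presented to applications as one wide display.

def combined_resolution(monitors):
    """monitors: list of (width, height) tuples arranged in a row."""
    width = sum(w for w, _ in monitors)       # widths add up across the row
    height = min(h for _, h in monitors)      # usable height of the row
    return width, height

print(combined_resolution([(1920, 1080)] * 3))  # -> (5760, 1080)
```

Because the game only ever sees the single 5760x1080 mode, it needs no special multi-monitor support; the driver handles splitting the surface across the physical displays.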

That's really about it: Fermi just isn't out yet, and until it is, the big question for most won't be about how POWERFUL it is; it will be about DirectX 11 support and, going forward, OpenCL support and other standards. Yes, PhysX support will matter in a handful of applications/games, but how long will it be before application and game developers replace it with an open standard?

nVidia has already announced, and possibly launched the GeForce 300M series, along with the G 310 OEM desktop card. They could be skipping the rest of the 300 series so that consumers will not be confused as to which cards carry the new architecture, and which do not. Just my guess, however.

There have been 8800 rebadge jokes already, but I really think they are reserving the 300 namespace for rebadges of current products since the die size and price of Fermi will be incredibly high.

I was going to point this out... given that the various 300M parts are all currently DX10/10.1, it would really be good to see all the 300-series parts follow that feature set. Then the 400 series can be reserved for true DX11 parts. Kind of makes you wonder when we'll see a 400M, eh? If the next mobile architecture out of NVIDIA is only DX10, they're going to have a tough battle against AMD/ATI's Mobility 5000 parts!