GeForce GTX 780 Ti vs Radeon R9 290X

Intro

The GeForce GTX 780 Ti features a GPU clock speed of 875 MHz, and 3072 MB of GDDR5 memory running at 1750 MHz through a 384-bit bus. It also features 2880 Stream Processors, 240 Texture Address Units, and 48 ROPs.

Compare those specifications to the Radeon R9 290X, which features a GPU core clock speed of 800 MHz and 4096 MB of GDDR5 RAM running at 1250 MHz through a 512-bit bus. It also features 2816 Stream Processors, 176 Texture Address Units, and 64 ROPs.

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

Memory Bandwidth

Theoretically, the GeForce GTX 780 Ti should be just a bit faster than the Radeon R9 290X in general: its higher memory clock more than makes up for its narrower bus, giving it slightly higher memory bandwidth.

GeForce GTX 780 Ti

336000 MB/sec

Radeon R9 290X

320000 MB/sec

Difference: 16000 (5%)

Texel Rate

The GeForce GTX 780 Ti is quite a bit (roughly 49%) faster at texture filtering than the Radeon R9 290X, thanks to its higher core clock and greater number of texture units.

GeForce GTX 780 Ti

210000 Mtexels/sec

Radeon R9 290X

140800 Mtexels/sec

Difference: 69200 (49%)

Pixel Rate

If running with high levels of AA is important to you, then the Radeon R9 290X is the winner, by a large margin: its 64 ROPs give it a noticeably higher pixel fill rate despite its lower core clock.

Radeon R9 290X

51200 Mpixels/sec

GeForce GTX 780 Ti

42000 Mpixels/sec

Difference: 9200 (22%)

Please note that the above 'benchmarks' are all theoretical - the results were calculated from the cards' specifications, and real-world performance may (and probably will) vary at least a bit.


Memory Bandwidth: Bandwidth is the maximum amount of data (in megabytes per second) that can be moved across the external memory interface in a second. It is worked out by multiplying the card's interface width by the speed of its memory. If the card uses DDR-type RAM, the result should be multiplied by 2; if it uses GDDR5, multiply by 4 instead.
The higher the card's memory bandwidth, the better the card will be in general. It especially helps with anti-aliasing, High Dynamic Range and higher screen resolutions.
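The bandwidth formula above can be sketched in a few lines of Python (the function name is mine, not from the page; the inputs are the specs listed in the intro):

```python
def memory_bandwidth_mb_s(bus_width_bits, mem_clock_mhz, transfers_per_clock):
    """Theoretical bandwidth in MB/s: bytes per transfer * memory clock * transfers per clock."""
    return (bus_width_bits // 8) * mem_clock_mhz * transfers_per_clock

# GDDR5 moves data 4 times per clock; plain DDR would use 2.
print(memory_bandwidth_mb_s(384, 1750, 4))  # GTX 780 Ti -> 336000 MB/s
print(memory_bandwidth_mb_s(512, 1250, 4))  # R9 290X    -> 320000 MB/s
```

This reproduces the 336000 vs 320000 MB/sec figures shown above.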

Texel Rate: Texel rate is the maximum amount of texture map elements (texels) that can be processed per second. This is worked out by multiplying the total number of texture units of the card by the core speed of the chip. The better this number, the better the card will be at handling texture filtering (anisotropic filtering - AF). It is measured in millions of texels in one second.
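The texel-rate figures above can be reproduced the same way (a sketch; the function name is mine):

```python
def texel_rate_mtexels_s(texture_units, core_clock_mhz):
    """Theoretical texture fill rate in millions of texels per second."""
    return texture_units * core_clock_mhz

print(texel_rate_mtexels_s(240, 875))  # GTX 780 Ti -> 210000 Mtexels/sec
print(texel_rate_mtexels_s(176, 800))  # R9 290X    -> 140800 Mtexels/sec
```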

Pixel Rate: Pixel rate is the maximum number of pixels the video card can write to its local memory per second, measured in millions of pixels per second. Pixel rate is worked out by multiplying the number of ROPs by the card's clock speed. ROPs (Raster Operations Pipelines, sometimes also referred to as Render Output Units) are responsible for drawing the pixels (image) on the screen.
The actual pixel output rate also depends on lots of other factors, especially the memory bandwidth of the card - the lower the memory bandwidth is, the lower the ability to get to the max fill rate.
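The pixel-rate figures quoted above follow from the same kind of calculation (a sketch; the function name is mine):

```python
def pixel_rate_mpixels_s(rops, core_clock_mhz):
    """Theoretical pixel fill rate in millions of pixels per second."""
    return rops * core_clock_mhz

print(pixel_rate_mpixels_s(48, 875))  # GTX 780 Ti -> 42000 Mpixels/sec
print(pixel_rate_mpixels_s(64, 800))  # R9 290X    -> 51200 Mpixels/sec
```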

I have done long research on this, and the outcome screams 290X. Anyway, AMD and Nvidia sit together and set specifications and pricing after consulting their marketing departments, with the mutual goal of squeezing as much money out of us as possible. Don't be a fanboy; be the person who wants the best for himself. Pursue happiness without a blinding agenda!

I see a lot of people going on about memory speed; that's only part of the complete card and makes little to no difference. OK, so you want a graphics card. What resolution: 1080p/2K/4K?
If you have lots of dosh, waste it on a 290X/780 Ti; neither of them will run 4K very well, as that's dual-card-setup territory at the moment. Just look at the reviews.
If 1080p/2K, get a 780/290 at most; they will do the job for years.
Forget synthetic benchmarks; look at real-world game FPS, then choose what you need. Too many fanboys who haven't got a clue posting here. Remember, most screens run at 60 Hz, so they won't display any FPS above that. If you are lucky enough to have a higher-spec 120 Hz screen, chances are you won't be worried about the cost of a graphics card and will just get the best.
Bottom line: don't be a fanboy, just get what makes sense for YOUR OWN use.

The first processor with 64-bit technology. AMD.
The first 2 Core Processor. AMD.
The first 4-core processor. AMD.
The first 6-core processor. AMD.
The first 8-core processor. AMD.
The first Accelerator with GDDR3. ATI.
The first Accelerator with GDDR5. ATI.
The first Accelerator DX10. ATI.
The first Accelerator DX11. ATI.
The first Accelerator with 512 bit memory bus. ATI.
The first dual GPU Accelerator. ATI.
The first Accelerator with 4K technology. ATI.
History teaches us that AMD and ATI innovate all the time.
Where are Nvidia and Intel?

Memory clock isn't everything; effective bandwidth is. The 290X has a wider bus (512-bit vs 384-bit) than the 780 Ti, so 320 GB/s vs 336 GB/s isn't as massive a difference as the 2 GHz gap in memory clock would lead you to believe.

290x is better value. Custom cards will reinforce this. 780Ti is a more efficient, cooler-running card that is slightly faster. All comes down to the price gap between them, which can be as much as £100 here in the UK.

@Jack Half of this is false. Let's just look back to the days when AMD and ATI didn't even exist and only Nvidia and Intel did... GDDR3 and GDDR5 technology both came from Intel/Nvidia; ATI just made an accelerator for them, same with DX10 and DX11. Although I admit a 512-bit memory bus is pretty good, it doesn't make a big difference compared to 384-bit.
And why didn't you mention
"The first Accelerator with a 256-bit memory bus. Nvidia."?
Stop fanboying.

The fact is that the GTX 780 Ti won here, but Nvidia+Intel is more of a high-performance "heavy tank" setup for various tasks, while AMD+ATI (AMD+AMD nowadays) is aimed especially at gaming.
This doesn't mean Nvidia+Intel is not good for gaming.
Also, Intel uses its factory strength against the AMD "OC mania": AMDs are better for overclocking, but an Intel will keep functioning for far longer.
I mean, an AMD/ATI will work for like 2-3 years while an Nvidia/Intel will still work after 5-6+ years.

So keep that in mind. As for price, there is not much to it; they are in about the same range in both Intel vs AMD and Nvidia vs ATI. AMD's 8-core CPUs don't cost that much less than the i7s.

It's amazing: a graphics card that costs almost half the GTX 780 Ti and has almost the same performance. I've always had Nvidias since the Riva TNT (16-32), but the truth is that AMD is currently by far the best choice. On the processor side, though, I still think they are not at Intel's level.

For all AMD fans out there:
nVidia could crush AMD. They have probably already designed the chips for the 8xx and 9xx series, and they could release them even a month from now. Proof? The 780 Ti had been in golden status for months, but they waited for AMD to respond to the Titan. When that happened, all of a sudden they introduced it. Maxwell (the 8xx-series architecture) was announced in 2010, four years before it will have been used in GPUs. They are just waiting and watching AMD struggle to beat their 8-month-old card. AMD finally achieved it by running at 95°C constantly and sounding like a vacuum. Conclusion? AMD is a nooby tryhard, nVidia is a lazy pro. You have to decide which one of them suits you.

Watercooling the 290X shows what it is truly capable of (as with any card, really).
It really is a matter of taste these days... I have used both; and when it comes down to it I choose to save 20%, and miss out on 5-10% performance?
That is OK with me. Others will disagree, that is their opinion and they are entitled to it.
When it comes to crypto-currency mining, AMD has it all over Nvidia... if that matters.

The info provided in the majority of the posts here is a joke. People rave about cards they know very little about, or at least that's how it appears judging by the rubbish being posted. If you really want to know how the cards perform, get current, reliable info and stop spouting c**p.

Holy shit, this is hilarious. Guys, they're both good cards, and frankly this choice comes down to preference; fanboying either is ridiculous. Yes, Nvidia ridiculously overprices their products, we've known this for years, and yes, AMD doesn't hold the same benchmark power, but the thing is they'll both hold up on any game for years to come, so just pick whichever one you like.

It really depends on what setup you're going for. I have two-way SLI GTX 780 Tis, and Nvidia has been doing an amazing job with their drivers. I've run this exact SLI setup for the past 3 months and have had zero problems. If some game wouldn't work with SLI, I would disable one of the cards and run on the other, which is still plenty of power. I am not a fanboy. My main rig is Intel-Nvidia. I also have a second rig that is AMD, and I am most likely going to buy an R9 280X for it very soon. I currently have an old GTX 260 running in my AMD build, and it is still working beautifully. It's not about buying the cheapest, you know; it's about what suits your setup best. You also have to make sure your CPU doesn't bottleneck a high-end card, so don't pair a high-end ATI card with a weak CPU (that would be pointless).

Nvidia wouldn't even exist, or have been able to push into future graphics, without the purchase of 3dfx (Voodoo). For those of you who remember, that's where "SLI" came from; Nvidia is not an innovator, they simply took tech from another company (after they bought out Voodoo following its bankruptcy) and made it their own, whereas ATI always had stable technologies of its own, and it got even better with the AMD merger; it was win-win. Look it up if you don't believe me. Frankly, it's like this: if you have Intel, go Nvidia; if you have AMD, go AMD/ATI. That seems to be what works best together at the moment.

Those are the two advantages. Personally, I prefer NVIDIA, but would take AMD if I had to. What I don't like about NVIDIA is the price: too expensive for the performance, but they have the most powerful GPUs out. What I dislike about AMD is the noise and heat: they run way too hot, but can be water-cooled for awesome performance.

What I don't see as reasonable is people buying the R9 290X and water-cooling it. That takes away the whole AMD price advantage. You can buy an R9 290X for about $520; water cooling costs about $150, which equals $650. Yet the people who do that claim they bought AMD for the price, when for $650 you can buy a perfectly good 780 Ti. It doesn't make sense. And buying an R9 290X is questionable anyway, as the R9 290, which costs about $100 less, can basically be turned into an "X" version. And for that $400, the 290 is available in a Windforce edition, which has superb cooling.

But I have to give props to AMD. People complain about the memory clock, 5 GHz vs 7 GHz, yet even with that handicap the AMD only performs 5-10% below NVIDIA. If it had a 7 GHz memory clock, I'd have to say it would be on the same level and maybe even outperform the 780 Ti.

But, put it simple:
AMD-Price
NVidia-Power

And if you do go AMD, buy the r9 290 Windforce, and tweak the BIOS settings to make it a 290x basically. But either way, AMD/NVIDIA, or Intel/AMD, you'll get superb performance out of both.

And one last thing: why are AMD fanboys going against Intel? I'm fine with going against NVIDIA, but Intel? It's funny, because AMD GPUs perform best with Intel CPUs. The best AMD combo is an R9 290 with an Intel i5-4670K. Just saying...

The R9 290 is much worse than the 290X: the 290X has DOUBLE the render output processors, 15%+ more cores, and many other advantages.

Also, the memory clocks don't matter much here, because the GTX 780 Ti has a 384-bit memory bus and the R9 290X has a 512-bit memory bus... When you overclock both to the limits, the R9 290X kicks ass, especially at 4K.

Alright fanboys, let's break it down here: the AMD one wins this one, and this from a guy who has a GTX 770. Not only do they have a wider bus, more memory for higher resolutions, and a better price, they do this with a bad stock cooling system (unless you get a brand like Asus), fewer cores, fewer TMUs, and a lower memory clock, YET they can compete with your "king". If AMD built a GPU with the same specs as the 780 Ti, they would obviously win; the only differences here are efficiency, money, and total speed.
Yes, I am an Nvidia boy; my first GPU was the 7500 LE, my first CPU was the E6400 from Intel. Yet AMD does incredible things using either older or less efficient tech, and their pricing gives us the best bang for the buck and a great gaming experience. Everything I've written here, except my first hardware, can be seen in their products or on the screen.

I Have several gaming machines. My favorite is a 6 core AMD and a GTX770. I have mostly ATI/AMD GPU's but there is just something about the quality of the NVIDIA card that gives me that warm fuzzy feeling that can't be beat. I was glad I spent the money for it. I just ordered an r9 280 for my mini-itx gaming machine at my gf's house because it was reasonably priced and I know it will perform well. Bottom line they're both good...buy what suits the build/budget.

I own both, and have found one important thing: for best results, use whichever brand of card the game developers used when writing their game. For cards that run hot, advanced, somewhat easy-to-install cooling solutions are available aftermarket should you so desire.