
For the past 18 months, AMD has been playing a defensive game in the GPU market. The Radeon HD 7970, which launched nearly two years ago, was initially met with strong reviews, but the card’s $549 price tag raised eyebrows. Then, Nvidia delivered the first 600-series cards based on the GK104 GPU. Suddenly, it was AMD with the hotter, slower, more power-hungry solution, and Nvidia on the ascent. Since then, AMD has fought back with tweaked GPU speeds, the impressive “Never Settle” bundles, and more aggressive pricing. It’s focused on improving driver compatibility and solving problems related to frame latency. It’s done everything, in short, but release a new GPU.

Today, that changes.

The R9 290X supports up to six displays across its two dual-link DVI ports, one HDMI port, and one DisplayPort output

R9 290X hardware specs

The R9 290X is a GCN-based GPU with a little English and a lot of Miracle-Gro. The card’s full specs are available below:

There’s a lot to dig into here. First, as with Nvidia’s Titan, the R9 290X is a larger version of the underlying architecture, with 44 CUs and 2,816 cores, up from 32 CUs and 2,048 cores. At first glance, the difference between the R9 280X and 290X is much smaller than the gap between the GTX 680 and the GTX Titan. When Nvidia launched Titan, it packed in 75% more GPU cores, for a total of 2,688 cores, 224 texture mapping units (TMUs) and 48 ROPs (render outputs). The GTX 680, in contrast, offers 1536 cores, 128 texture mapping units, and 32 ROPs. So with Titan, cores and TMUs both jumped 75%, ROPs grew by 50%, and memory bandwidth increased by 50%.

AMD has chosen a different approach. The R9 290X has 37.5% more cores than the R9 280X, 37.5% more TMUs, but fully twice the ROPs — from 32 to 64. It can handle four primitive operations per clock rather than two, which means its tessellation performance should be higher. The memory bus has been increased to 512 bits wide, but the RAM itself is clocked more slowly, at 1250MHz, down from 1500MHz on most HD 7970/R9 280X cards. Total memory bandwidth, therefore, is only modestly increased, to 320GB/s, up from 288GB/s. Overclocking tests suggest the GPU can handle another 10% to both core clock and RAM — in line with expectations, but nothing extraordinary.
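If you want to sanity-check that scaling math, here is a minimal sketch in Python; the figures are the published specs quoted above, and the helper functions are purely illustrative:

```python
# Rough sanity check of the scaling figures quoted above.
# GDDR5 transfers data four times per memory clock, so a 1250MHz clock
# on a 512-bit bus works out to 320GB/s of peak bandwidth.

def pct_increase(new, old):
    """Percentage increase from old to new."""
    return (new - old) / old * 100.0

def gddr5_bandwidth_gbs(bus_width_bits, mem_clock_mhz):
    """Peak GDDR5 bandwidth in GB/s: bytes per clock x 4 transfers per clock."""
    return bus_width_bits / 8 * mem_clock_mhz * 4 / 1000.0

# R9 280X -> R9 290X
print(pct_increase(2816, 2048))         # cores: 37.5%
print(pct_increase(64, 32))             # ROPs: 100.0%
print(gddr5_bandwidth_gbs(384, 1500))   # 280X: 288.0 GB/s
print(gddr5_bandwidth_gbs(512, 1250))   # 290X: 320.0 GB/s

# GTX 680 -> GTX Titan
print(pct_increase(2688, 1536))         # cores: 75.0%
print(pct_increase(48, 32))             # ROPs: 50.0%
```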

The other major difference between the Radeon R9 290X and the R9 280X is the addition of new asynchronous command engines (ACEs). Asynchronous command engines are important for GPGPU computation workloads — each ACE can handle its own stream of parallel instructions. The old HD 7900 family and the R7 260X, R9 270X, and R9 280X all have two ACEs per GPU. The R9 290X has eight. This is almost certainly a direct result of customization work AMD did for the PS4, Xbox, or both, and it should allow the chip to execute far more threads in parallel in certain situations. We’ll examine what some of those might be in tests a little further on.

Along with the major changes, there's a host of minor ones. The Hawaii GPU that powers the R9 290X incorporates enough small improvements that it arguably qualifies as a GCN 1.1 chip, like the R7 260X/Radeon HD 7790. It's also the first high-end GPU from AMD or Nvidia that doesn't need an external Crossfire bridge. AMD says the GPU can handle multi-GPU configurations without one, thanks to dual DMA engines that are compatible with its new frame-pacing technology and move frame data over the PCI Express bus without a performance penalty. Unfortunately, AMD wasn't able to provide us with a second GPU for testing this new system in Crossfire mode.


Could do with a bit more info on the graphs.
Is the FPS minimum? maximum? mean? etc.
And how come you left nvidia off the Unigine test?

Joel Hruska

The Unigine test was specifically aimed at the theory of whether or not the R9 290X would show a smaller tessellation hit than the 7990. Specific test, specific question. My point was not to compare against Nvidia’s tessellation performance.


Singh1699

I get 850+ MHash/s with a 7970 at 1100-1200MHz. Could you try an OC test?

Also do you know if I’ll be able to crossfire a 7970 with the 290 or 290x?

Joel Hruska

If you’re getting 850Mhash off a 7970 at 1100MHz I’d like to know what your miner settings and kernel are, because my OC’d 7970 doesn’t hit that high — and wouldn’t, even at 1200MHz.

The highest stable core overclock was about 12%, or 1120MHz. Extrapolating, that would put this card in the 967MHash range. Software to enable voltage adjustment isn't available yet. But the rise of ASIC mining has made GPU mining pointless unless you aren't paying for your own power.
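For anyone following the arithmetic: the extrapolation simply assumes hashrate scales linearly with core clock. A rough sketch of that assumption (the 1000MHz stock clock is my stand-in for the reference figure, not measured data):

```python
# Extrapolating the quoted ~967MHash/s figure, assuming hashrate scales
# linearly with core clock. The stock clock is taken as 1000MHz here.

stock_mhz = 1000           # assumed reference core clock
oc_mhz = 1120              # highest stable overclock noted above (~12%)
extrapolated_mhash = 967   # figure quoted above

implied_stock_mhash = extrapolated_mhash / (oc_mhz / stock_mhz)
print(round(implied_stock_mhash))   # ~863 MHash/s at stock clocks
```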

The 7970 and R9 280X both require Crossfire connectors, the R9 290X does not. Presumably the R9 290 also does not.

Singh1699

I just used the most popular miner; it has a command line interface. Set load to 6, and mine's under water; temperature is usually 32-36 on load. My card on air wouldn't even hit 1100. I thought I had a dud since I got it on eBay.
It's not diablominer. If you don't remember, I'll find it; it's just that my sister's car battery died. :D So brb.

Joel Hruska

I typically use 50Miner’s front end for CGMiner and the Diablo kernel. I’ve also used Poclbm and the diakgcn kernels.

Singh1699

Cgminer.

Joel Hruska

what workload size, vectorization, and kernel?

Singh1699

I don’t know all those things. I just tried p2p as well as a few pools for a few different days. Last year and earlier this year as well. I just played with that workload setting until I got a peak.

I_WRITE_IN_ALLCAPS

What you guys still use radeons for mining?? that’s so 2012… Do you even cover the cost of electricity with the difficulty so high today?

Singh1699

I mined for two days last October lol. Are the ASICs going for $400 online a good deal? They are 6-8 GH/s.

I_WRITE_IN_ALLCAPS

asics? never got hold of one, was mining from 2011 to early 2013 with two Barts-core Radeons, got almost 600MH/s from them; when the difficulty got too high I just gave up mining. Good luck with trying to get asics, the butterfly labs "asics provider" scammed me out of a whole bitcoin and a half back in april…

Singh1699

They're out on eBay now. There are even ones the size of a USB key that do 300-400 MH/s using 5 watts. Those are $20-50.

e92m3

Don’t forget to include the cost of one of the more powerful powered USB hubs. In truth, even those ‘block erupters’ are showing poor ROI.

e92m3

Don't forget about sCrypt blockchains. Most of the GPU farms have moved from sha-256 to sCrypt due to the effectiveness of 'simple and cheap' ASIC cores for pure sha2.

Anyways, the little mining gold rush is mostly over in my mind, even with sCrypt.

Now, LuxMark on the other hand… That's still excellent. SLG is the best GPU raycasting engine I've used by far and mocks Octane and the like for speed. SLG is a bit more difficult to use, but it's an unbiased spectral renderer; tune materials around real-world values and it works very nicely. Pretty hard to beat, especially when you consider the grand total of $0 USD for software.

e92m3

I agree that a little more detailed information about the metrics would be handy.

Tessellation is a technique that can be applied efficiently, or so broadly that it drastically impacts performance on all architectures while providing the same visible benefit for displaying a texture's height data. It takes careful consideration to yield results that are actually of value with tessellation… Unigine isn't really one of them.

I also propose a new standard for test data: minimum weighted average (or something along those lines). Peaks are pointless and averages are influenced by those peaks. A situation (hardware/software) that provides higher bursts without increasing the minimum is counter-productive in terms of perceived smoothness and input latency.
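For what it's worth, here is one possible reading of that "minimum weighted average" idea, sketched from per-frame times; the helper name and the 1% cutoff are my own interpretation, not an established standard:

```python
# Report average FPS alongside a "1% low" figure, so short bursts of very
# high frame rate can't hide poor worst-case smoothness.

def fps_metrics(frame_times_ms, low_fraction=0.01):
    """Return (average FPS, mean FPS of the slowest frames)."""
    fps = sorted(1000.0 / t for t in frame_times_ms)
    avg = sum(fps) / len(fps)
    n_low = max(1, int(len(fps) * low_fraction))
    low = sum(fps[:n_low]) / n_low
    return avg, low

# Example: a run that is mostly 16.7ms frames with a few 40ms hitches.
times = [16.7] * 95 + [40.0] * 5
print(fps_metrics(times))   # the average looks healthy; the low exposes the hitches
```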

The frame-pacing for this GCN 1.1 part in single-card solutions is actually slightly better than nVidia's if you go by the majority of reviews online. I wasn't stunned to see the card perform on par with the Titan, but the response time is what I found to be really shocking. Cards from either manufacturer are at a point where you can't perceive the difference, but the 290X was faster still :O

In crossfire the XDMA solution appears to be much better than the crossfire bridge. The situation reverses with respect to frame response times with AMD slightly behind SLi 780/Titan, but they’re again at a point where the difference isn’t perceivable.

The most shocking of all of this is that AMD attained that performance with a die that's only about 80% the size of the 551mm^2 Titan, and much of that 438mm^2 die was dedicated to increasing the ROPs for its 4K performance.

Hats off, AMD. That’s an absolutely amazing GPU. Now if only it had a better cooling solution :P

Joel Hruska

The cooling solution on the R9 290X is amazingly good, though I don't talk about this much. If you crank the fan controller up to 100% while mining Bitcoins, you can keep the GPU temperature down to about 66°C. The temperature at 75% fan speed is ~72°C.

Let me put that in perspective. My 7970 can barely hold 92°C while mining BTC, with the fan at 100%.

The *volume* of the R9 290X’s fan at 100% is incredible. Ear-blasting incredible. Makes-the-FX-5900-sound-like-whispering incredible. But that baby can cool.

pelov lov

Was that at uber or quiet mode? From what I understand, the dynamic overclocking should push the GPU temperatures to 95°C and bump up the clock speeds. If the non-reference coolers improve upon the reference design, these GPUs could be significantly better than what we're seeing now. It's just begging to be overclocked with LN2 or under water =P

It’s funny to see how the roles reversed over the last couple of generations. The Fermi cards were absolute power hogs and focused on compute while AMD went for smaller more frugal dies with still great gaming performance. This time AMD increased their die size for compute across the board but nVidia went small with GK104.

Whereas the GK104 is a great gaming die, I really don't understand the point of the GK110 die as a gaming GPU. There are just so many wasted transistors and so much die space that's not doing anything. To me, at least, the Titan makes far more sense than the GTX 780: there the compute stuff isn't fused off and can still be utilized, and the CUDA crowd is quite happy. The GTX 780 is just a big die with fused-off parts meant for a completely separate market. It performs well, sure, but why is it so damn big? I remember asking the same question when seeing the GTX 580.

Joel Hruska

What, the ear-busting noise volume? That’s in uber mode, with the fan manually forced to 100%. The card gets NOWHERE near that loud if left to its own devices.

So, about GK110. The first thing to understand is that the capabilities it adds relative to GK104 appear to take up very little die size. GK104 packs 1,536 cores into 294mm sq, or 0.19 sq. mm per core.

GK110 packs 2,880 cores into 551mm sq, or… 0.19 sq. mm per core. If the “other stuff” in GK110 actually took up much in the way of die space, that ratio would be larger for GK110 than for GK104. It’s also possible that GK110’s abilities are actually baked into GK104, and simply not enabled.

But either way, the fact is, Titan isn’t very good for most compute. The problem, as near as I can tell, is parallelism. A full Fermi implementation had 16 SM units of 32 cores each. Each SM unit had its own dispatch units, warp schedulers, and register file.

Now, Nvidia beefed up all these capabilities for Kepler, and it made the architecture more sophisticated, but it also sharply cut the ratio of SM front-end to cores. Fermi had a 1:32 (front-end to core) ratio. Kepler's ratio is 1:192.

That makes it extremely important for each front-end to extract the peak amount of parallelism possible. If you can’t, you wind up with terrible usage efficiency. This is one of the reasons why Titan is curb-stomped by GCN in Bitcoin mining — with a 1:64 ratio, GCN has far more front-ends that can process parallel operations across its cores.

It’s not the only reason — GCN has a right-shift SHA-256 hardware capability that GK104 completely lacks. Titan can do funnel shifts in hardware, which helps accelerate the process, but there’s a fundamental utilization problem.

Gaming maps really well to Titan because gaming is easily parallelized. I’m certain that some GPU workloads map well, and that fine-tuning can yield impressive results, but generally speaking, GK110 strains to compete with the HD 7970 in GPGPU.

pelov lov

So it's an issue of scaling up? I never considered the Kepler front-end to be a problem, but, yea, there's something clearly off about their architecture here. Is it that the large pool of L2$, which contributes to the die size, isn't being utilized? 1,536KB of L2 cache is going to be very dense and adds a lot of transistors to the package, but I'm just not seeing the gains here.

Because, even in gaming the Titan doesn't look all that great. The benchmarks are proof of that. Tessellation performance is quite good, but as soon as you crank the resolution past 2560×1440 you start to lose efficiency (lack of ROPs). For a die that large, you expect it to be doing better.

Titan is a monster of a card, but it really didn't have to be. It seems to me that AMD didn't just win the benchmarks, they won on architectural efficiency as well. The last time that happened was with the 5870. That's a tough spot for nVidia, as they now have to drop the price on their much larger die despite yielding fewer per wafer. Dropping the price is the last thing they want to do. A price war with the GK110 die is something they won't ever win.

Joel Hruska

I would have to take a close look at fundamentals and really high-rez die shots before I ventured a guess as to why Titan is much larger than the 290X. I am assuming it is larger for a reason.

Remember that Titan still draws less power than 290X — that’s significant if you consider that one card runs about 10% slower than the other. In fillrate-limited scenarios, an extra 10% clock would be quite helpful.

Pineapple

GK110 has 3,840 cores if you include the Double Precision cores in each SMX.

Joel Hruska

Those aren’t cores. Not in the sense you’re using the term. Each GK110 SMX contains 192 cores and 64 double-precision *units.* That’s Nvidia’s own terminology.

This implies that while there is additional hardware baked in for handling DP calculations, it’s not accurate to treat these additional capabilities as equivalent to CUDA cores, or as representing additional untapped potential in gaming performance. According to Nvidia’s own documentation, GK110/Titan contains 2,880 cores in a full implementation.

Pineapple

I don't want to argue semantics, but I never said they could be used for gaming; however, it does account for the larger die size compared to Hawaii.

Joel Hruska

Pineapple,

I don’t think it does.

Look at GK104, which supposedly lacks these units. GK104’s die size is 294 mm sq. It has 1,536 cores. That means 0.1914 sq. mm per core (Obviously we are blending together the ROPs, TMUs, SMXs, etc, under the “core” metric.)

Now, look at GK110. Technically, it’s 16 SMX’s (NV only activates 15 on the current cards). That works out to 0.1913 if we use the 2880 core figure, or 0.1793 if we use the more accurate 16 SMX figure, since that last SMX unit still takes up die space, even if it isn’t working.

That means the amount of die space allocated per core is actually slightly smaller on GK110 than GK104. So an NV core is actually slightly wider than a GCN core.
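For anyone who wants to rerun that arithmetic, here is the same calculation as a tiny sketch (the helper is mine; the die sizes and core counts are the public figures used above):

```python
# Die area per "core" for GK104 vs GK110, blending ROPs, TMUs, schedulers,
# cache, etc. into one rough density number, as in the comment above.

def mm2_per_core(die_mm2, cores):
    return die_mm2 / cores

print(mm2_per_core(294, 1536))       # GK104: ~0.191 mm^2 per core
print(mm2_per_core(551, 2880))       # GK110, 15 SMX enabled: ~0.191
print(mm2_per_core(551, 16 * 192))   # GK110, all 16 SMX counted: ~0.179
```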

But that’s not too surprising, either, because back in Fermi days, one Fermi core was *much* more powerful than one VLIW5/VLIW4 core.

I suspect, strongly, that the GK104 contains the same hardware as GK110, but that the blocks are fused off at build time.

brenro

Hot and loud. But that price!

cougar3429

I wanted to care, but I switched from AMD to Nvidia years ago for one simple reason. I had issue after issue trying to use the Catalyst software. It is one of the worst experiences I've had with a single program. I kept it updated, but it often crashed and caused various issues. It never worked well. I have never regretted switching. I have virtually no issues with Nvidia's drivers.

Phobos

I have been using ati-amd sense the ati 9550 days and never had an issue with the ccc, you must be doing something wrong.

Dozerman

Probably is just repeating what he heard before. Wouldn’t be surprised if he actually owns a mac.

cougar3429

What I heard where? I was stating my experience with the subject matter in the comment section. That’s kind of what happens. You fan boy trolls are ridiculous morons.

Also, I hate Apple.

cougar3429

Congratulations. I’m glad it works for you and don’t care if everyone else uses it. I have nothing against the company. I use their processors and have been very happy with them. You assume that since you have had no problems “sense the ati 9550 days” that nobody anywhere could have any problems with any version of their graphics cards and drivers. That is a ridiculous assumption. Of course, “sense” you can’t even spell since, I can see you have limited cognizant thought.

Singh1699

Google how to properly uninstall AMD drivers. Some registry entries.

cougar3429

First of all, the fact that uninstalling is necessary is a problem. Second, the fact you need to edit your registry to do it properly is another problem. I understand how to uninstall. I have a degree in programming and build computers so I am comfortable doing these types of things. I was simply stating my experience. I’m curious as to what the two down votes were for. They apparently know that it would be impossible for me to have problems with the Catalyst software, or that it must be my fault. Arrogant trolls no doubt.

e92m3

First of all, I question your understanding of basic fundamentals. Nvidia leaves behind values in the registry, along with data in other places…I suggest you do a quick google search (or look on your own system…) and you’ll see that I am correct. If you had actually taken the time to read about the cause of your perceived ‘problem’, you would understand that 99% of the time, un-installation of the current display driver for a standard upgrade is not necessary. Downgrading is a different matter and we needn’t speak of the variability between chipsets when making a comparison.

Again, these are commonalities between essentially all display drivers since before you touched a mouse.

As you should know by now, a ‘degree in programming’ is meaningless. Everyone has a degree these days, >75% of which are useless, not only because they are ‘supported’ by an extreme lack of fundamental knowledge but also because the quality of education is incredibly variable. Allow me to direct you towards the giant gap between ‘programmer with a degree’ and someone who actually understands the interaction of hardware and software, usually with multiple, earned (not purchased) degrees. Or, they taught themselves due to a true passion for the topic and continue to educate themselves, even without the carrot-on-a-stick ‘degree’ for motivation. Knowledge comes in many forms and cannot be accurately quantified through the possession of a mere bachelor’s, let alone any degree….

cougar3429

Everything you said is irrelevant.

A) I’ve had many problems with Catalyst software
B) I’ve had zero problems with Nvidia software

These points are what matter, and what my initial comment described. All you ATI fanboys can say whatever you like, but it doesn’t matter nor alter my previous points. Just like my problems with ATI do not matter to you. My statement about having a degree was merely to indicate my comfort level in basic software functions. I’m not an elitist as it seems you are.

Ivor O’Connor

I need a card that can handle two 2560×1440 monitors under linux mint. Will it be able to handle office work running these two monitors without making much of a noise?

Joel Hruska

Yes, though I doubt you need a $549 video card to drive two 2560×1440 monitors. A much cheaper card should handle this just fine.

Ivor O’Connor

“Yes, though I doubt you need a $4549 video card to drive two 2560×1440 monitors.”

It is $549, not $4549. I’d go for cheaper if I knew what I was doing.

Joel Hruska

What's the brand / model number on the monitor? It might be an Asus VG2438, for example. If you can tell me the brand and model, I can give you better advice on what GPU will drive it.

Ivor O’Connor

Two Nixeus NX-VUE27s. Right now, on my HD 6950, despite using the right cables and the latest drivers (and even beta drivers) on Linux and Windows 7, my resolution is capped at 1680×1050 on one and the other is getting full 2560×1440.

Theoretically this card is supposed to run both at 2560×1440. So I'm very shy of GPU claims.


Joel Hruska

What cables are you using? Are both mini-Display Port?

Ivor O’Connor

Dual-link DVI cables into both of the Dual-link DVI slots on the back of the AMD Radeon HD 6950 card.

Joel Hruska

Ok. Next question. I'm assuming that the second monitor *won't* drive at 2560×1440. You try to set that resolution, it fails *or* the card won't detect that the display is capable of it. (I am not a Linux user — I know my way around Ubuntu, but am only modestly proficient.)

Here is what I’d like you to try.

1). Disconnect the display that normally runs at 2560×1440. Reboot the system. Can you now set the second monitor to the same resolution, or is it still stuck at 1680×1050?

2). If it is still stuck at 1680×1050, plug it into the other DVI port. Does it work now?

I think I know the answer to this problem, but I want to make sure the issue isn’t with the monitor.

Ivor O’Connor

Just got back home. I was going to answer you based on memory but decided to test everything thoroughly.

1) After disconnecting the display that runs at 2560×1440 the other still remains stuck at 1680×1050.

2) After plugging the “stuck” 1680×1050 into the other port it now boots up at 2560×1440.

I hope you have the answer to this problem!

Joel Hruska

Welllll…..no. But the cost is far less than what you were considering.

Without knowing the exact model number of the card, I cannot know for certain, but I’m guessing you have two different DVI ports on that GPU — a single-link and a dual-link. The dual-link port can drive either monitor at 2560×1440, but the single-link *port* can’t.

Having a dual-link cable isn’t helping because the problem isn’t with the cable, it’s with the GPU.
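A rough sketch of the underlying math, assuming a 60Hz refresh: single-link DVI is limited to a 165MHz pixel clock, and 2560×1440 needs more than that even before blanking intervals are counted.

```python
# Single-link DVI is limited to a 165MHz TMDS pixel clock. 2560x1440@60Hz
# needs more than that for the active picture alone (real timings add
# blanking on top), which is why that output falls back to a lower mode.

SINGLE_LINK_LIMIT_MHZ = 165

def active_pixel_clock_mhz(width, height, refresh_hz):
    """Pixel clock for the active picture only, in MHz."""
    return width * height * refresh_hz / 1e6

print(active_pixel_clock_mhz(1920, 1200, 60))  # ~138 MHz: fits single-link
print(active_pixel_clock_mhz(2560, 1440, 60))  # ~221 MHz: needs dual-link
```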

The simplest way to test this is probably to grab an HDMI or DisplayPort cable and hook those up instead. Either one should work, but if I had to bet on one being *more* likely to work, it would be DisplayPort.

That *should* solve your problem.

Ivor O’Connor

I ordered two DisplayPort cables. I did not know what they were until you suggested it, and it got me thinking a "DisplayPort" might not just be a fancy word for a dual-link DVI port. Arrival date is the 30th. Thanks for the input.

Joel Hruska

Erm. Argh. I wish you hadn't done that. You only need one, and you still need a mini-DisplayPort to DisplayPort connector.

If the front of your card looks like that, you need mini-DisplayPort to DisplayPort adapters. Well, one at least.

Ivor O’Connor

No harm. I ordered two mini-DisplayPort to DisplayPort adapters, v1.2, 6′ long, for $8 each with free shipping. I got the impression this DisplayPort interface is the future so I might as well have a second. With two I can do additional testing.

The card is supposed to support up to four monitors. If all it takes are DisplayPort interfaces for the two big ones, maybe I can toss in two unused smaller displays.

Joel Hruska

Alright. Obviously I can’t promise without seeing the card, but that *should* do the trick.

Singh1699

Do you think display port will replace hdmi?

Joel Hruska

I expect DP and HDMI will co-exist for the next few years. DVI will start to fade out if 4K displays become more common.

Ivor O’Connor

I’ll let you know if it worked when I get it. And hold you totally responsible for promising that it would work. ;)

Ivor O’Connor

Ok, the DisplayPort cables finally came in. I hooked one in and restarted the computer and now I have two full 2560×1440 monitors. Thank you Joel.

Joel Hruska

Excellent! I’m glad I could help you. :)

Ivor O’Connor

I’m glad I ordered two DisplayPort cables because I quickly found only the monitor with the DVI plug would resume. To get both to resume properly required they both have DisplayPort cables.


Joel Detrow

You’d probably be fine with a run-of-the-mill HD 7770. If it’s just office work, all the heavy lifting is done on the CPU.

Phobos

Very good video card indeed; it matches and sometimes surpasses an overpriced $1K video card. Now my question is, with such high temperatures, what's the life expectancy of the R9 290X? Assuming it has the proper cooling.

Joel Hruska

I can’t comment on that. I’ve had a card a week. Even if I had it longer, you’re asking a fundamentally impossible question.

AMD has stated that the cards are fully tuned to run at 95°C. This is not impossible; the old Radeon 4850 ran very, very hot, yet did not suffer from high failure rates (as far as I know). Warranty rates remain the same terms as with previous products.

Nvidia's GTX 480 series would run at 93-94°C under heavy load, but I never had a problem with any of them over their lifespans. Conversations I've had with Intel about the lifespan of Core i7s in high-temperature environments have indicated that the chip can absolutely handle 94-96°C for years without significant problems.

Finally, users have the ability to lower the temperature to a level they prefer. If you think 95°C is too hot, set it for 90°C.

There is no intrinsic reason why the 290X can’t be built to run at these temps.

Phobos

I had the Radeon X1950 Pro AGP; it failed after two years, I think. I don't know if heat made it fail. I never checked the temps for the GPU (my fault, I know). I wonder if heat takes more of a toll on the GPU or on the memory. Basically, what I'm asking is: when a video card fails, which component fails, the GPU or the memory, unless I'm missing something else?

Joel Hruska

Impossible to answer. It’s going to be component specific. The GPUs in the original Xbox 360 failed because of problems with the solder, for example. When they heated and cooled repeatedly, the solder cracked and broke.

“Failure” is a vast topic when discussing chips this complex. But properly built, a part can last for thousands of temperature cycles. Your Radeon X1950 Pro may have failed because thermal protection wasn’t as good back then — these days, a GPU will cut its own clock and increase fan speeds dynamically to avoid thermally damaging scenarios.

Phobos

Thank you, very interesting indeed. It does make you wonder if they do a poor job of soldering (I know you can sometimes tell just by looking at the board, though most of us, or at least me, don't inspect the components for defects) or if the solder they use is not good enough.

Joel Hruska

That’s what broke the Xbox.

See, using lead in solder is (or can be) a really good thing. Helps the solder flex. When we transitioned away from doing it, MS went with a solution that they thought would do as good a job. I don’t know if they failed to test it adequately, or if the company doing the material manufacturing failed to mix it well — but one way or the other, the defect crept in.

If the Xbox 360 had used leaded solder instead of lead-free solder, the defect problems wouldn’t have happened. But that was a long time ago — the issues have been worked out since.

solomonshv

“Warranty rates remain the same terms as with previous products.”

no they haven’t. pretty much everyone had 3 year warranties in the past. now it’s 2 years for most cards, and some have a 1 year warranty.

Ting Qian

Great article. I am looking for performance comparison on OpenCL for 290X and found it here. Very useful!

Caught a typo at the end of the article:

“but with the R9 280X coming in at $549 — a full $100 less than the GTX 780 — the Titan’s reign is finished”

should be “R9 290X”?

Joel Hruska

Yes, thank you. Fixing that.

Ned Noodleman

Yeah, maybe that's conflating the TITAN and the GTX 780. While similar, they are not the same.

Joel Hruska

The GTX 780 is a GK110 GPU without some of the workstation features that the Titan offers, like double-precision floating point.

You might notice that in all GPGPU workloads, the R9 290X absolutely shreds Titan. There’s no comparison. Maybe highly optimized CUDA code presents a different picture, but that’s not relevant to general purpose work, which is typically done in OCL.


VirtualMark

Benchmarks for Quake 3@800×600 please.

Joel Hruska

Q3A would fit entirely in cache at this point.

Phobos

oh god yes, Q3 on Android! screw angry birds.

Joel Hruska

Also, due to age, I’m not sure it would even run particularly well, anymore. Still, I wonder…

400+ fps on a GT220, if you blink you’ll miss the benchmark! I don’t know what resolution that was though.

Phobos

Quake 3! I miss that game, or at least games like it; now we are filled to death with BF and COD games.

Joel Hruska

At 640×480, default settings, a GTX 770 hits 900 FPS. The Radeon R9 290X is currently in a different system — an AMD system — where it hits just 746 FPS. Since that’s AMD vs Intel and the entire executable is going to be loaded in cache, I can’t give you a full update yet.

But, uh, I think it’s pretty fast.

VirtualMark

Lol I didn’t expect you to actually try it! Fair play.

Yeah, 900 fps should be just about playable. I imagine it'd be faster if it took advantage of today's hardware, such as shaders and multi-core CPUs. It's probably running on a single CPU core and not using half of the GPU's features.

Joel Hruska

VM, I can now give you an apples to apples comparison. In Q3A, default settings and resolution, on the same platform…

*drumroll*

The R9 290X scores 920 FPS.
The GTX 770 scores 900 FPS.

Checkmate!

VirtualMark

There’s not much in it then! I’m guessing it’s cpu limited? Seeing as how it was a lot slower in the amd system.

It’s actually quite interesting to see how fast such an old game runs, I might have to try the benchmark now.

Joel Hruska

Quake 3: Arena was notorious for having a benchmark that fit very neatly into the P4’s cache structure and ran extremely well there. It was the game that P4’s won, even when they weren’t winning much else.

That implies, strongly, that the entire data set drops into L2 cache, but the FX chip’s L2 cache isn’t nearly as fast as Ivy Bridge’s. Keep in mind, however, that “not nearly as fast” still translates into more than 700 frames a *second.*

But on modern chips, it's a cache test, not a GPU test. When I disabled the vertex lightmapping (for fun) and turned texture detail all the way down, the AMD system scored ~930 FPS, up from the 750s.

jax

Very nice card. Although I switched to Nvidia years ago (you guessed it, horrible drivers; I want to play, not fix the freaking crap Catalyst drivers), it's good to see AMD is lowering the price threshold. But I'll wait for Maxwell to see what the new architecture can do. Maybe number 8 is Nvidia's lucky number. I remember when the 8800 series came out, it demolished everything in its path. Maybe the 800 series will do that too :)

d00hicky

The drivers haven’t been a problem for a couple years now.

bakgwailo

Unless you are on Linux/a non windows platform.

d00hicky

This may be true. I've been using an HD7850 on Win 7 since release and I have had 0 problems. I'm not a fanboy; I just bought this card as a placeholder until the new AMD and Nvidia cards came out. Now I plan to get an R9 290 and stick with AMD because I had zero problems with drivers. Friends of mine experienced major driver issues with their Nvidia cards when we went to play the new Tomb Raider. It took almost 2 weeks before we could play because of driver problems they had with Nvidia.

I say to each their own, but to me it's clear AMD has done a great job keeping their driver problems in the past.

redmane

Still have problems with audio out over hdmi. Drivers crash. Audio disables itself sometimes on computer resets. Happens on both amd computers. My nvidia computers do not have crashing issues.

disqus_1MPBi7N5aG

huge fan of Nvidia, but Amd just took a piss on them… lol

A Anon

Awesome that they are using Bitcoin mining for benchmarks now.

Mike McDermott

Noticed the lack of temperature and power consumption numbers. Could this be because they didn't want to admit that AMD is still sucking down massive amounts of power and getting hot enough to cook a steak to accomplish any of this? I want to see the performance per watt figures. Good thing I already read reviews elsewhere…

Joel Hruska

Nope.

You’re right that the graphs aren’t here, of course. But that’s not why. From the review:

“It should be noted that Uber is considerably louder than “Quiet,” and that if the GPU actually gets up to 100%, you’re going to hear something not unlike a jet turbine.”

“If you push the cooler, or the clock speed, however, the upper noise boundary is considerable.”

The GTX 780 and the GTX Titan remain cooler, quieter cards that draw less power. If that’s what you want, buy those cards. It’s that simple.

Jare Henrik

Sorry to say, but this is a really poor title for your review. If you visit Anandtech and check their review of the 290X, you will see that the Titan came out on top for the majority of games tested. Titan is still using less power, generating less heat, and offering the least amount of noise. I really wish people were not subjected to this nonsense as they are looking to see what will give them the best overall experience. Just to clarify, I ran 4 7970's and have no qualms with AMD, but I am now using a single Titan because I prefer the overall experience. Nvidia has delivered a more consistent driver base as well. So to all you peeps who are shopping for a new card, I'd say get a 290X if you're looking for performance per dollar; however, if money is not a concern, I think you would still be happier with a Titan. My last bit of advice would be to look into the games you play the most and check the resolution of your monitor. Look at the graphs that are applicable to your gaming needs and go from there. Don't let people's opinions affect your buying decision.

Joel Hruska

Let’s check that. Comparing single GPU:

The R9 290X is faster in Metro Last Light, across the board.
It’s faster in Company of Heroes 2, across the board.
It is slightly slower, in Anand’s tests, in Bioshock Infinite
It trades off with Titan in BF3, depending on which settings you use.
It trades off in Crysis 3, depending on which settings you use.
It’s faster in Crysis, across the board.
It trades with Titan in Total War: Rome 2
It’s faster than Titan in Hitman: Absolution, across the board.
It’s faster than Titan in GRID 2, across the board.

There’s a reason why Anand wrote: “Consequently against NVIDIA’s pricing structure the 290X is by every definition a steal at $549. Even if it were merely equal to the GTX 780 it would still be $100 cheaper, but instead it’s both faster and cheaper, something that has proven time and again to be a winning combination in this industry.”

The only game Titan wins all-out is BioShock Infinite, and Titan is a $1000 GPU compared to a $549 GPU.

I am glad you like Titan. Titan is a great video card. It is quieter, it draws less power. But even if you award Titan every single win in the “tie” scenarios, the count stands at 5 wins for the Radeon R9 290X, and 4 wins for the GTX Titan.

Jare Henrik

Now my whole reason for the post was to clearly point out that by no means is the 290X a Titan Slayer lol, it performs extremely well and is great value. I pulled data from two reputable sites and chose a resolution that I feel is applicable to many gamers. Your review consisted of two comparisons and your headline is "To slay A Titan"?

Joel Hruska

You're right, I missed the 1920×1080 win for Titan in Hitman: Absolution and in GRID 2 at 1920×1080, but those titles are, at best, overall ties, not wins. And the performance gap is on the order of 5%.

Do you know what you call a $550 video card that performs at 95% the level of a $1000 video card?

A Titan slayer.

Jare Henrik

I'm not going to bother trying to argue what would deem something a slayer…. But I look at relevant information that is applicable to the "majority" of gamers, which is 1080p gaming on either smaller flat panels or LCD/LED/Plasma TV panels. I work in AV retail and most consumers are not into 4K gaming by any means, and multi-monitor and 3D gaming are also not the majority. Titan still came out on top in 14 of the 20 titles I found compared. At the end of the day, if I see something indicating that I should consider replacing my Titan because a lower-priced solution is "slaying" my current GPU, I expect to see better results than what I have seen. Oh, I almost forgot to help you fill in the Unigine Heaven benchmark comparison that you conveniently left Titan out of: 53.9FPS to 42.4FPS. Win for Titan again at 1920×1080 (courtesy of Forbes' review). Nvidia released their Titan back in Feb and it's still holding its ground against AMD's flagship card, period. That's what being a Titan is all about.

Joel Hruska

Search my reviews. I don't use Heaven here. I only use it for the reviews I write for PC Magazine on full systems, because they require it.

As I have described, the only reason the 290X's tessellation performance was measured here, specifically, is because I was curious to see if the card took a smaller hit than the 7990 in full tessellation mode.

But to speak to your earlier point:

I am not telling you to upgrade. If you pay attention to the video card market to any great degree, you should be aware that the pace of GPU introductions from AMD and Nvidia has dropped off markedly. This has nothing to do with poor performance from either company, and everything to do with the slowing rate of advance at the foundry level.

The R9 290X and GK110 are both "cheats," in the sense that they simply expand existing designs with a few modest tweaks or workload-specific enhancements for target markets, as opposed to introducing dramatically improved performance-per-millimeter.

As a result, no, I do not expect any buyer of a Titan would upgrade from the card this year. I do not expect you’ll upgrade next year. I expect, in fact, that the best way for you to get better performance will be to add a second Titan.

You may upgrade in 2016 – 2017 depending on the pace of GPU introductions, whether or not you choose to go SLI, and how much disposable income you have.

e92m3

And how long did it take Nvidia to release the ‘titan’ after the 7970 was able to defeat the 680 with optimization alone?

Sounds like the marketing ploy of ‘titan’ ( maybe they left off ‘ic’ on the end: titanic and ineffective die) was successful only from the standpoint of fooling children.

Let’s not forget, if you hadn’t become mildly delusional, you would be able to purchase two of these significantly more powerful GPUs for approximately the price of your single…’Titan’.

However, I agree, all benchmarks should be done at 1080p only, it’s incredibly important that you don’t get hurt feelings about who has the most powerful GPU.

Jare Henrik

Firstly, if you read my post above you would notice that I owned a 4-way 7970 Crossfire setup prior to purchasing a Titan card. And if you had a clue about how terrible Crossfire was and is with 7000 series graphics cards, you would realize I'm not delusional at all and have zero interest in running two 290X cards. Here's why: 1) your favorite title gets released and hey, now I have to wait one or two months while they optimize drivers for Crossfire. 2) AMD runs their chips hot as hell, so the 7000 series was not silent by any means (especially when running multiple cards). 3) microstutter: yup, it's there, and if you think it's not, you don't know what you're looking for. If you want the best gaming experience, you want to run the most powerful single GPU you can get your hands on, and the 290X is only a great value at the end of the day. Not to mention, if you do any research on the 780 Ti you will see how quickly AMD's flagship card is going to sink. Now I'm going to try to be very clear with you… I have no issues with AMD. I wish they would get their act together and launch a card that beats out Nvidia's greatest chip in all categories, but as it stands, all the 290X did was excel at titles that are optimized for AMD, and there are still a tonne of titles that they're missing the mark on. And I'm getting really sick of people showing 4K benchmarks as if they are a gaming standard. By the time 4K becomes relevant to all gamers, the 290X is going to be a dinosaur.

Bryan

Pfffttt Another biased review.. show only two games and just happen to be the ONLY two games at a specific resolution the 290x beats the Titan.. hardly a piledriving HAHAHAHAHA

that’s ok .. the GTX 780Ti will crush the dreams of that 290x anyway

e92m3

Perhaps you would prefer the 2k and 4k resolution results? Didn’t think so, little kid. This is the most powerful single-die GPU on the market.

What exactly do you expect the architecture of the 780’Ti’ to be? Obviously, you haven’t a clue about this topic, merely the need to justify poorly-reasoned purchases.

Jare Henrik

Seriously… the most powerful single-die GPU on the market… I posted 20 titles at 1080p gaming and the 290X only won 6 of them against the Titan? On what planet does that make it the most powerful?

Joel Hruska

1). Because performance and price both matter.
2). Because most of those titles were competitive across the resolution board, not sweeps.

All products are evaluated in relation to other products. I was hugely enthusiastic about Titan when it launched. Great GPU. Fabulous on every front. Quickly outpaced.

That's life at the highest end. Just as the GTX 780 was a better deal than Titan for the vast majority of gamers, the 290X was better than the 780. When the GTX 780 Ti launches in two days, it will outperform the Titan handily, on every front, and it'll sell for dramatically less money.

But whether the new 780 Ti will offer better overall performance / dollar than the 290X will depend on your preferences regarding noise, thermals, and gaming resolution.

Jare Henrik

Well, I work hard for my money, and to me it doesn't matter what the price is as long as the performance is there to back it up. AMD has released a card that performs very well at a solid price, but as discussed earlier, any owner of a Titan wouldn't consider this an upgrade. When the 780 Ti launches and dominates the Titan, I will be happy and might even buy one, because I am confident that it will dominate in every comparable title, not just a handful of select titles which are hardware-optimized. People tend to really overlook drivers as part of the package, and AMD is going to have to work really hard to earn my trust in that category. I would have to say I'm more interested in the dual-GPU 790 with rumored GK110 chips. I'm not too fond of my last Crossfire experience, so I'm hoping SLI is better. The only way I will reinvest in AMD is if Mantle proves to be the future of gaming.

Joel Hruska

I would never suggest anyone buy a $1000 GPU and then upgrade 10 months later. If you need to upgrade 10 months after you buy a $1K GPU, then I didn't do my job right in the first place.

Jare Henrik

So you wouldn't recommend buying a $1K GPU? I'm really confused, because to me spending $1K on a GPU isn't that ridiculous at all. What would be ridiculous is spending $5-600 on the 290X, which excels at 4K gaming, and then finding the cheapest 4K monitor, which I believe is around $3K or more. Spending $1K on the best 1080p GPU and pairing it with an excellent 1080p monitor would be my recommendation. But let's forget the Titan, because if you were doing your job you might tell consumers to hang in there until the 780 Ti is released… a card that will most likely dominate the 290X in every category and come in at a similar price point! Bottom line: the 290X is a waste of time at this point and I would strongly recommend against buying it or a Titan at the moment. Then again, I forgot, you're probably getting free hardware and getting paid to hype up AMD's latest release.

Joel Hruska

*rofl*

No. I get to test hardware. I don’t get to keep it. It’s not “mine.”

I recommended Titan when Titan was brand-new, because it was an ass-kicking piece of equipment. When the GTX 780 came out, I recommended most people buy it instead of Titan, because it offered 85 – 90% of Titan’s performance, for about 70% of its price tag. The only exception was people who needed double-precision FPU at full speed.

With the 290X out, I’d definitely recommend that for 4K gaming over Titan. Would I recommend it over the GTX 780 Ti? I don’t know yet. We’ve had a rapid-fire release schedule here.

The idea that AMD buys reviewers off with $550 graphics cards is a little silly, man.

Jare Henrik

Almost forgot to mention that overloading chips to the point where they're running super hot is not an indication of power… it's poor efficiency.

Bryan

I was talking about the 4k results, little girl
Apparently you have not seen the benchmarks for the 780ti tee hee muah ha ha ha ha

Stephen Emhecht

laughing my balls right off now that the 780TI is out

Stephen

I have a powercolor R9 290 unlocked with asus r9 290x bios. Good times

svfoxhotmail

Hopefully AMD can translate this into other middle tier cards. They fail so often I wonder how they are still in business.
