Upgrading GPU: AMD or NVIDIA?

Hello guys, I was wondering if you could help me decide which GPU I should go for. Right now I'm deciding between the MSI GTX 970 and the MSI R9 390. As of now the R9 390 is $20 cheaper than the 970. So which would be the best choice? What would you choose, and why?

I have a 970. It's reliable and does what you want for 1080p gaming, and 1440p as well. As for team red's recent entry, just look at contrasting reviews and decide for yourself, as I believe at this time they're similar. Apparently driver support is an ongoing issue with AMD in general, but that shouldn't stop you; some games handle better on one brand than the other.

The new AMD 300 series cards are rebrands, excluding the Fiji XT (I think; correct me if I'm wrong). The 390 offers minor improvements over the R9 290 card(s). And he's right about the drivers: since launch I have seen a ton of reviews reporting major driver issues, and the same from owners of 300 series cards. I'm not for AMD or Nvidia; I'll pick whichever is best at the time.

They will. Current AMD cards have the first generation of HBM, which may have been exclusive access, seeing as only two cards currently use the technology. Nvidia and AMD will both be using HBM2 for their next-gen cards.

Also, unlike Nvidia, AMD has a much more powerful incentive to launch its next generation of FinFET GPUs first. This is because the company has priority access to HBM2 capacity, which will be limited initially, as a result of co-inventing the technology with Hynix.

Commercializing a product that uses CUDA processing, we've studied all the factors a gamer would be interested in, and a few extra. It's a sad day when you see a machine-learning programmer melt a Titan Z. Nvidia doesn't like supercomputing companies using their retail cards. They are not shy about telling CUDA folks to buy their Tesla series cards instead, at 4x the cost, because they have trouble keeping up inventory when we buy 50 or more at a time. We ignore them, and after all the analysis was over, cost/benefit chose the EVGA (manufacturer matters) 970 over every other card, including the Titan and Titan Z. Those 970s outlast the Tesla series when we hit them hard, and they're $350 vs $4,500.

Recently we made the decision to switch to the 980 Ti, as it leads the 970 in our second round of cost/benefit analysis from a CUDA perspective.
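For anyone curious what that kind of cost/benefit ranking looks like, here is a back-of-the-envelope sketch. To be clear, the prices and throughput numbers below are illustrative placeholders I made up for the example, not the poster's actual measurements:

```python
# Rough cost/benefit sketch for ranking CUDA cards.
# All figures are illustrative placeholders, NOT measured values.
cards = {
    "GTX 970":    {"price": 350,  "gflops": 3500},
    "GTX 980 Ti": {"price": 650,  "gflops": 5600},
    "Tesla K40":  {"price": 4500, "gflops": 4300},
}

def dollars_per_gflop(card):
    """Lower is better: dollars paid per GFLOPS of throughput."""
    return card["price"] / card["gflops"]

# Rank cheapest compute first.
ranked = sorted(cards, key=lambda name: dollars_per_gflop(cards[name]))
for name in ranked:
    print(f"{name}: ${dollars_per_gflop(cards[name]):.3f}/GFLOPS")
```

With placeholder numbers like these, the consumer cards come out well ahead of the Tesla on raw dollars per GFLOPS, which matches the thread's conclusion; a real evaluation would also weigh reliability, warranty, and memory.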

Brutal. And Tesla cards do worse in cost/performance? I thought those were supposed to be more specialized. In that case, is there even an area where they're worth getting over a consumer card? (Not gaming, of course.)

I myself have an AMD 290X in my rig, and to be honest it isn't all that spectacular a card (mine is a stock reference one with no overclocking). AMD is great at OpenGL, but let's be honest, how many games in the past few years even properly support OpenGL? Also, the drivers are a HUGE issue for me, inasmuch as I can't use my Oculus Rift DK2 at the moment in extended mode; the drivers cause a complete black screen when I try. I wanted AMD to come back with a vengeance and give Nvidia a kicking, but the truth is, for the extra $20 you're getting a better card with better software driving it.

From what I understand, Qrow, the 64-bit performance of the 980 Ti ramps up in line with the Tesla, and with a little planning on the architecture end one can avoid having to rely on DRAM. Other than a 10-year warranty, which isn't relevant given the pace of innovation, our data was surprisingly not in favor of the Tesla cards. If we had coders who were still learning the ropes, we would likely want the added insurance. Nonetheless, with some decent memory buffering the 6GB of RAM is sufficient, making the 12GB of a Titan series card less meaningful. At that point you just save yourself 300 bucks a card.
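The memory-buffering idea above boils down to streaming a large working set through a fixed VRAM budget instead of requiring it all to fit at once. A minimal sketch of the arithmetic (the 6GB budget mirrors the card discussed; the chunk math is just the general idea, not a real allocator):

```python
# Sketch: how many sequential passes a working set needs when it is
# streamed through a fixed VRAM budget instead of held all at once.
# Sizes are in bytes; 6 GB mirrors the 980 Ti discussed above.
VRAM_BUDGET = 6 * 1024**3  # 6 GB

def chunk_count(total_bytes, budget=VRAM_BUDGET):
    """Number of passes needed under the budget (ceiling division)."""
    return -(-total_bytes // budget)

# A 12 GB working set (Titan-sized) fits in two passes on a 6 GB card:
print(chunk_count(12 * 1024**3))  # 2
```

That extra pass costs transfer time, which is why the poster notes it only pays off with "decent memory buffering" in the code.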

Interesting to see the 780 Ti still make the top 5 for price/computing. We got 4 X's in to evaluate, and they're out of the box now; 980 Ti's took their place permanently. Again, without strong optimization of code, most people would want the extra RAM. More memory is always better. That said, more profit is better than the "always better" of more memory. (Follow that if you can.)

The 970 can handle 1440p? One thing I don't quite understand is whether it'll impact gaming with, let's say, Battlefront. I hear so much about the 970's false VRAM, 3.5GB instead of 4GB. Does that actually impact games like Battlefield or Battlefront? Because to be honest, that's all I think I'll be playing on my PC.

No; the reviewed performance of the 970 already includes the slower 500MB of VRAM, so the post-launch benchmarks you see factor in this aspect. The short story is that it's a 4GB card. It would have only been a 3GB card, but the Maxwell architecture had some tricks up its sleeve, allowing them to get to 4GB, though with a 0.5GB partition of the total block operating at a slower clock (still faster than your bus, though).
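A rough way to picture the 970's split memory: an allocation only touches the slow segment once it grows past 3.5GB. A quick sketch of that arithmetic (segment sizes from the discussion above; everything else is illustrative):

```python
# The GTX 970 exposes 4 GB of VRAM, but the last 0.5 GB sits in a
# slower partition. This estimates how much of a given allocation
# lands in each segment. Sizes are in GB.
FAST_GB = 3.5
SLOW_GB = 0.5

def split_allocation(vram_needed_gb):
    """Return (fast_gb, slow_gb) used by an allocation of this size."""
    fast = min(vram_needed_gb, FAST_GB)
    slow = min(max(vram_needed_gb - FAST_GB, 0.0), SLOW_GB)
    return fast, slow

# A game using 3 GB never touches the slow segment:
print(split_allocation(3.0))   # (3.0, 0.0)
# One using 3.75 GB spills 0.25 GB into it:
print(split_allocation(3.75))  # (3.5, 0.25)
```

This is why typical 1080p workloads, which sit well under 3.5GB, see no difference: the slow partition simply never gets used.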

In the end, the value of a card's performance beyond 60fps starts to depend on many things, such as the speed, in pixels, of the item moving, the refresh rate of your monitor, the amount of light entering your eye, and whether it is direct or coming in more from your periphery. If you still use a 60Hz monitor, any FPS above 60 is not producible by your monitor. Your program may be reporting it, but your monitor isn't showing it, as it is not capable of refreshing faster.
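The refresh-rate point above reduces to a simple cap. A tiny sketch (the 60Hz default is just the example from the post):

```python
# On a fixed-refresh monitor, frames rendered above the refresh rate
# are never displayed; the panel caps what you can actually see.
def displayed_fps(rendered_fps, refresh_hz=60):
    """Frames per second the monitor can actually show."""
    return min(rendered_fps, refresh_hz)

print(displayed_fps(144))        # 60: a 60 Hz panel caps at 60
print(displayed_fps(45))         # 45: below refresh, you see every frame
print(displayed_fps(144, 144))   # 144: a 144 Hz panel shows them all
```

(Real behavior is a bit messier with V-sync off, since partial frames show as tearing, but the displayed rate is still bounded by the panel.)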

When it comes to getting new hardware, getting what's best for your needs does matter. Given the message under your name and your comment, you should probably reconsider looking at these kinds of threads if you have nothing productive to add.

It's most likely true, then; the Pascal architecture reportedly will support it. WCCFTech isn't necessarily a 100% reliable source of information, though in this instance I think that's probably not the case, but just letting you know if you don't already. Nvidia's Pascal GPUs are something I'm really looking forward to. Nvidia is very good at making advancements/optimizations even when constrained by die space, process node, etc. (as well as drivers, usually). I don't know many things AMD excels at; their drivers usually aren't great, and the only thing I can credit them (and Hynix) for is HBM. Other than that they haven't done much in the GPU market in years, nor in gamer/mainstream CPUs (the SB950 chipset is 6+ years old; that's just ridiculous).

Nvidia all the way. They run cooler and can usually be OC'd 10-20% without anything other than increasing the fan speed. If you're thinking of going with an aftermarket card, I highly recommend EVGA; they like to throw on a better cooling system and tweak the OC, so most of their cards come with at least a 10% OC.
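For a sense of what a 10-20% overclock means in clock terms, here's the arithmetic. The 1000 MHz base is just a round example figure, not any card's spec:

```python
# What a percentage overclock means for the core clock.
# Integer math keeps the result in whole MHz.
def overclocked(base_mhz, pct):
    """Core clock after a pct% offset, in MHz."""
    return base_mhz * (100 + pct) // 100

print(overclocked(1000, 10))  # 1100
print(overclocked(1000, 20))  # 1200
```

Whether a given card sustains that depends on cooling and power limits, which is the poster's point about the aftermarket coolers.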

Right on. I wasn't sure about WCCFTech, but I saw the NVIDIA HBM thing from other sources, though without a big article for support.

Nvidia will run quieter and cooler, and will have a higher price. They'll also give you a better result for straight power in your GPU than AMD will.

Guessing since you're on the AOD forums, you play games. Go with Nvidia. The CUDA cores are extreme when it comes to power, and if you can spare the extra $100 without hurting any other component budgets, get the GTX 980; it's an awesome card with a huge amount of power.