Early DirectX 12 games show a distinct split between AMD, Nvidia performance


One of the major questions readers have raised over the past year is which company's graphics cards would perform better in DirectX 12. It always takes a certain amount of time to answer questions like this, and DX12 is still in the early stages of deployment, with only a handful of titles currently available: Hitman, Rise of the Tomb Raider, Ashes of the Singularity, and Gears of War. Of these four, one (Gears of War) is DX12-only and available solely through the Windows Store; the other titles can run in DX11 or DX12 and support multiple operating systems.

Tweaktown recently put Hitman through its paces in both APIs. In 1080p DirectX 11, Nvidia wins top overall honors with the Titan X squeezing out the Fury X. Switch to DirectX 12, however, and AMD’s Fury X pulls ahead. The gap between the AMD and Nvidia cards continues to widen as the resolution rises; AMD wins 4K in both DX11 and DX12 and the gap in 4K DX12 is large enough that the R9 390X is able to surpass the GTX Titan X, as shown below:

Hitman’s 4K DX12 performance. Image by Tweaktown

These results are broadly similar to the benchmark results we saw in Ashes of the Singularity a few weeks before that game shipped. As in that title, Nvidia gains nothing in DirectX 12 and suffers some small performance regressions.

DirectX 12: A bifurcated story

Of the four DirectX 12 games currently in-market, Ashes and Hitman are wins for AMD and show a marked advantage in that API. Rise of the Tomb Raider, on the other hand, is a major Nvidia win. Benchmarks performed in that title show that AMD still lags Nvidia, even when testing in DX12 and even at 4K.

We can’t really draw many conclusions from Gears of War; the game appears to have been a terrible port with unplayable performance on AMD hardware, and is less than stellar even on Nvidia. The developer has released several patches, but it’s not clear if the game’s been truly fixed yet. With Fable Legends now cancelled, our early performance tests in that title don’t tell us much, either. Still, three games is enough to point to at least the beginnings of a trend.

First, we see AMD picking up performance relative to Nvidia in two of the three titles here. Both Hitman and Ashes use asynchronous compute, but Hitman’s lead render programmer, Jonas Meyer, noted at GDC 2016 that doing so only improved AMD’s performance by 5-10%, while Nvidia gained nothing from the feature.

One reason AMD GPUs do better in DX12 than their Nvidia counterparts is that the new API eliminates a great deal of driver overhead, and AMD's drivers were never as adroit as Nvidia's at handling these workloads in the first place. AMD's 4K performance in DX12 is roughly 10% faster than in DX11, which jibes with Jonas Meyer's comments at GDC 2016.

What's less clear is why Nvidia consistently loses performance in every DirectX 12 game published to date. The GTX 980 Ti is faster than the Fury X in Rise of the Tomb Raider when using DirectX 11 or DirectX 12, but it leads AMD by roughly 9% in DX11 and by just 2.4% in DX12. These performance drops aren't large in and of themselves, but if moving to DirectX 12 makes AMD 8% faster and Nvidia 6% slower, you've got a net performance shift of 14% in favor of Team Red.
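To spell that arithmetic out (an illustration using the round figures above, not a new measurement): to first order the two swings simply add, 8% + 6% = 14%, and the exact ratio lands in the same place:

$$\frac{1 + 0.08}{1 - 0.06} = \frac{1.08}{0.94} \approx 1.15$$

In other words, two cards that tied in DX11 end up roughly 14-15% apart once both switch to DX12.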

DirectX 12 appears to help AMD by both reducing driver overhead and allowing developers to leverage GCN's formidable asynchronous compute capabilities. It's less clear why Nvidia continues to struggle with delivering absolute performance improvements in DirectX 12, even in titles that otherwise favor the company's products.

It's still too early in the DirectX 12 / Windows 10 product cycle to draw absolute conclusions about which architecture will prove definitively better, and the imminent arrival of new GPUs from both companies will render the question at least somewhat moot. So far, it looks as though AMD gamers are generally better off using DirectX 12 when it's available, while Nvidia owners may want to stick with DX11, even when gaming in Windows 10.

We’ll continue monitoring the situation as new titles arrive and will update you accordingly.


Performance varies by title and partner. Ashes of the Singularity showed better performance with AMD than Nvidia because of async utilization. Fable Legends did not, as it was an Nvidia-partnered title.

Nvidia hardware starting with Maxwell is completely geared towards DX11 and will suffer in titles that have heavy async utilization, due to its software-run context switching. AMD hardware is geared for parallelism with low-level APIs.

Rumor has it AMD will fix the DX11 issues GCN has had since its inception with the release of Polaris/Vega, thanks to hardware changes (a new "command processor"). Just a rumor, so be sure to believe it 100%.

Joel Hruska

The only AMD weakness I’m aware of in hardware is GCN’s primitive processing. It’s not very good at small triangles.

Daniel Anderson

I believe you're speaking of tessellation, if I'm picking up what you're putting down. That has to do with the DX11 optimization that AMD's hardware lacks, seeing that they've geared themselves towards low-level APIs since their talks in 2010. With this, they focused primarily on what their hardware can provide, as opposed to what software was being employed.

This can also be seen with the FX CMT design, which is severely hindered in single-threaded processing but does well against its competitor (at the time, Intel's iX-3000-series CPUs) in multithreaded scenarios. This was because the old management were "geniuses." They decided to use a design ahead of its time (e.g., focused on parallelism in a world of serialization, just early enough to really hurt their financial numbers).

So if you throw an older GCN card into DX11, which is heavily focused on serialized rendering as opposed to a lower-level API's parallelized rendering, it's obviously going to hurt.

This is why Nvidia is typically losing a few FPS in DX12 titles (I'd guesstimate Vulkan as well) seeing their software scheduler has to choose between Async Compute or the "traditional rendering" (my super technical jargon) and is completely unable to do both at the same time (will change with volta, potentially, but not pascal).

In DX12 titles like Fable Legends (RIP), which was partnered with Nvidia, you'll notice Nvidia didn't lose much, if at all, fps whereas AMD didn't gain any like what we saw with Ashes of the Singularity. This is because async is optional, not required. Very beneficial, but obviously Nvidia-partnered tech won't focus on it until their hardware can handle it properly. I expect to see the async performance grow as games utilize it more (Unreal Engine 4 has this backlogged, DX12 expected in May).

The multithreaded DCLs are required, which is why AMD sees a huge boost in DX12 as opposed to DX11 when their GPUs are used with an FX CPU, while Nvidia stays relatively the same as far as CPU multithreading goes. The major difference is that DX11 does what I call "core balancing," as the threads are still serialized, whereas DX12 has potential for true parallelism.
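To make that "true parallelism" concrete, here is a minimal C++/D3D12 sketch of the per-thread command-list pattern being described. It assumes Windows 10 and the D3D12 SDK, omits error handling and real draw calls, and the thread count and variable names are purely illustrative:

```cpp
// Minimal sketch: each worker thread records its own D3D12 command list,
// then the main thread submits them all in one call. Link with d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qDesc = {};
    qDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;              // the graphics queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    const int kThreads = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);

    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));   // created in recording state
    }

    // Record in parallel: in DX12 this work genuinely runs on all cores,
    // which is what DX11's deferred contexts only approximated.
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i) {
        workers.emplace_back([&, i] {
            // ... record this thread's slice of the scene here ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // A single submission from the main thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```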

Joel Hruska

Tessellation and small triangle performance are related but distinct issues. Tessellation involves drawing triangles, but not all triangles are tessellated.

You can see some benchmark info on this here (though it’s limited to older cards):

The performance issues around AMD GPUs on AMD CPUs are partly related to the distinct lack of single-thread performance in AMD hardware, yes — NV GPUs often outperform GCN GPUs by larger margins if tested on AMD hardware as opposed to Intel.

“seeing their software scheduler has to choose between Async Compute or the “traditional rendering” (my super technical jargon) and is completely unable to do both at the same time (will change with volta, potentially, but not pascal).”

Keep in mind that we see performance degradation in NV hardware even when asynchronous computing is disabled. I have yet to see a single game that runs faster in DX12 for Nvidia than it does in DX11, regardless of async compute state.

"In DX12 titles like Fable Legends (RIP), which was partnered with Nvidia, you'll notice Nvidia didn't lose much, if at all, fps whereas AMD didn't gain any like what we saw with Ashes of the Singularity."

We don’t know what DX11 performance in Fable Legends would have looked like; the benchmark we got to test was DX12-only. AMD’s performance in FL was still quite strong relative to the GTX 980 Ti. I think Anandtech tested an overclocked model (I tested a default 980 Ti, so our results were slightly lower).

It would be helpful if we found out what is causing Nvidia GPUs to lose performance in DX12.

AMD getting a performance increase with async compute also means that the available compute cores were underutilized all this while in DX11.

AMD GPUs do, however, always debut with higher FLOPS performance compared to Nvidia at any respective price point, which could explain why they would benefit from added features in DX12 such as async compute.

Daniel Anderson

That is the async compute. Nvidia cards have to switch between compute and traditional rendering. This causes a delay, as async compute is supposed to work in parallel with traditional rendering. This is caused by their architecture and the fact that they use a software scheduler.
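For context, this is roughly what "async compute" looks like at the D3D12 API level: the game simply creates a compute-only queue alongside the graphics queue, and whether the two actually execute concurrently (as on GCN's hardware ACEs) or end up serialized (as the comment above argues happens on Maxwell) is up to the GPU and driver. A minimal sketch, with error handling omitted and names illustrative:

```cpp
// Minimal sketch of async compute in D3D12: a second, compute-only queue
// next to the graphics queue. Assumes Windows 10 + D3D12 SDK; link d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;        // graphics (can also do compute/copy)
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // the "async" compute-only queue
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Rendering command lists are submitted to gfxQueue, and compute work
    // (lighting, post-processing, physics) to computeQueue; an ID3D12Fence
    // synchronizes the two wherever their results have to meet.
    return 0;
}
```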

Skywax9016

I know that when async compute is enabled, it will lose performance. But the report still said that Nvidia cards will lose some performance in DX12 regardless of whether async compute is enabled or disabled, hence the question was asked.

"It would be helpful if we found out what is causing Nvidia GPUs to lose performance in DX12."

I think it's down to what they could do in DX11 that they can't do in DX12 with their drivers. No more funky optimizations, I think.

Regarding the higher FLOPS, I think DX12 is going to put the cards in their correct tiers based on that, especially when async allows more efficiency. The 390 will be > the 980 and the Fury X > the 980 Ti, even the Fury > the 980 Ti, at stock.

Skywax9016

If you would no longer need to do funky optimization, wouldn't DX12 be faster even for Nvidia cards? Having it become slower when using DX12 would mean that it would still need optimization, only maybe of a different kind.
From what I understood, async allows more efficiency for GCN cards only (at least for now). Non-GCN cards would not benefit from async, either because they are already efficient in the first place or because they were never built to be utilized 100%.

Yeah, the changes may be needed but may not be possible in DX12, since it's not leaving as much to the driver.

People suggest future GCN cards might not need async, but if those cards are made to run even more compute in parallel with graphics (i.e., truly concurrently), it would still be needed.

Skywax9016

Probably, but if DirectX 12 is as close to the metal as it was meant to be, then it would be up to the game developers to optimize the performance rather than Nvidia or AMD themselves, since before, Nvidia and AMD updated the drivers to reduce overhead between software and GPU and also to better utilize the GPU cores. If DirectX 12 gives the game developers to-the-metal access to the GPU, then Nvidia and AMD would no longer need to update their drivers frequently.

Ammaross Danan

It’s because nVidia didn’t implement much of any DX12 in hardware; it was all software hacks in the drivers to support a larger range of DX12 features until their hardware can catch up.

Skywax9016

I believe it isn't really software hacks, but rather simulation through software to make up for the lack of hardware. But does that mean DX12 and DX11 are so different that nVidia would need to simulate most parts of DX12 due to the lack of hardware?

But that is regarding the use of async compute. The reports are saying nVidia cards get lower fps regardless of enabling/disabling async compute in DX12. That is why I'm rather confused about how a lower-latency DX would result in lower fps.

bugref

I think the best course of action depends on what hardware you have: if you have Nvidia-based cards, then stick with DX11; if you have Radeon or RX 480 Polaris cards, then make sure you are running DX12, because your card runs best in that environment.

overlord

Being ahead of its time unfortunately doesn't help gain market share when firms like MS are slow to adopt. In the past it took MS more than a year to support 64-bit AMD processors. DX11 never fully supported async, which has been around since 2011. HSA adoption is another case in point!

Will Ovtuth

Intel took just as long to support 64-bit too, and actually fought against it years ago, until MS pushed it with XP 64-bit. If it wasn't for AMD's push, we probably wouldn't have had it till now… if we were lucky!

overlord

Wasn't taking a dig at AMD. Just highlighting how the Wintel monopoly severely impacts feature diversification away from the Intel norm, whether in software at the OS level or at the API level. Most apps remain single-threaded long after multithreading hardware has become available!

Daniel Anderson

Which is why "geniuses" is being used sarcastically. They had decent ideas but didn't time them properly. Nothing like launching a primarily multithreaded CPU design into a market that is still primarily single-threaded to really take that share. Not to mention they had to wait for MS to support their chips properly after years of development.

I’d expect to see this disparity grow in titles that heavily use Async in the near future. Although one thing that concerns me is history repeating itself…

ATI introduced tessellation in 2001; after they didn't gain much ground with it, they didn't worry about it for roughly 9-10 years. Fast forward to DX11's release around 2010 and you see Nvidia pick up the tessellation crown, even using their strength against their competitor's weakness. The same could potentially occur with Nvidia's upcoming Volta; Pascal is highly unlikely at this point.

But I find it fun seeing a 390 do better than a Titan, then watching folks scratch their heads as to why that's possible. Rumor has it Polaris won't have to focus on brute horsepower to do well in DX11, as opposed to previous GCN revisions, due to the new components in use ("command processor" is what one forum user stated).

Joel Hruska

In the articles I link above, AMD is ~5-10% faster in DX12 compared to DX11.

I don’t watch 16 minute videos without an exceptionally good reason. If you’d like me to review something that way, please give a time stamp.

I don’t expect Vulkan to differ much. Mantle, DX12, and Vulkan are all similar enough that the difference will likely come down to which API the developer optimizes for, as opposed to which API is actually better.

I was really thinking more along the lines of how well they port. It seems that a lot of these Windows -> Linux ports don't fare as well as a native Linux game, and I've read some speculation that Linux drivers may be to blame. If that really is the case, minimizing the impact that the drivers have on the game as a whole should have some interesting effects.

Aaron

Vulkan is the best API. Period. Mantle is dead and gone, DX12 is garbage, and Vulkan is open and the only sane choice.

Joel Hruska

Aaron,

Serious question: How is DX12 garbage when DX12 and Vulkan both have the same features, extremely similar implementations, and ultimately do the same things?
I can understand arguing that you prefer Vulkan because you're a Linux user, or because you care about open source, or simply because you want an option that works on more than just Windows 10. And if that's your reasoning, I get it. But I don't see how that makes DX12 "garbage" relative to Vulkan, as if they do fundamentally different things.

Hampus Sjöberg

Well yes, basically what you said, in DX12 versus Vulkan, DX12 is garbage because it’s locked to Windows and Microsoft.

fewtcher

That still doesn’t make it garbage. Majority of gamers are on Windows, not on Linux.

Mordakar .

Right, but Vulkan is cross platform. PC gamers don’t make the largest share of the gaming community. Even if DX12 is similar, if you start development on a new game tomorrow, it makes sense to develop on Vulkan to reach the widest possible audience.

This is especially true since AMD now owns the console market, and there’s no reason not to make the consoles compatible with Vulkan. Now, I can develop a game with one API in mind, and launch it across virtually all platforms simultaneously. I don’t have to worry about optimizing for this platform or the other.

Screw Microsoft's walled garden. If the low-level APIs all implement roughly the same features, I'm going to go with the one that reaches the broadest possible audience. That's Vulkan, not DX12.

For games that are destined to be multi-platform, supporting win10 only DX12 doesn’t make sense.

But since Vulkan was delayed, I get why they went with DX12 for the time being, but eventually they will need to also do Vulkan when they port (optionally they might drop DX12 later on if they feel it is duplicate work…).

Not too impressed with DX12 yet; will have to wait for this upcoming gen of GPUs.

DecksUpMySleeve

One must note AMD's cards were released 9 months later and are a couple of months from getting trampled by Pascal.

Joel Hruska

The only version of Pascal we've seen so far won't ship until Q1 2017, by which point AMD will have its own next-generation cards ready to go. Furthermore, the R9 390X is nothing but Hawaii with 8GB of RAM instead of 4GB. It's not nine months old, it's 2.5 years old.

prtskrg

Perhaps he was talking about Fiji. BTW Joel, do you have info about Pascal release?

DecksUpMySleeve

No Joel, the 1070/1080 will come out in July/early Aug; AMD will not be in the driver's seat for more than weeks, if at all.
As to 9 months, I was speaking of the 970/980 beating what you compare it to, to market.
Release:
GTX 980 chipset hit mid-Sept '14
R9 390X hit mid-June '15

When comparing things 9 months apart in the microprocessor world it's hardly a level playing field, and the 970 took the bulk of the initial upgrading segment sales (winner).
This late summer/early fall they both release their flagship consumer cards; then you can finally put apples to apples. Be sure efficiency is a prized attribute; it's hardly discussed what a polluting pastime this can be, and it should be weighed. Gaming is a large enough segment to consider ecological impact/power use, as well as a card's own longevity and stability, which are often gauged by this.

Bri

If you’re so good at predicting the future why aren’t you out playing the stock market?

DecksUpMySleeve

I have. I don't trust that it won't bubble like the crash of '08 at present, though.
Doubled my money in 4 months post-crash and was offered a job by my broker. Any other questions?
I also predict the weather long-term. Want more future?

WasNvidiaSwitchingToAMD

I have one: why such a blind Nvidia fanboy?

AMD has much better technology going forward.
That's why I am buying AMD – and Nvidia BSs more than anyone I have ever known.

DecksUpMySleeve

I disagree. When AMD has greater performance per watt than Nvidia, we'll talk.
That's my primary attraction: efficiency. AMD, no bueno, no fatties.
The funny thing is I'm no fanboy; I ran an XFX 5770 for years, but NV wins atm. I also find it funny that I get called a fan of various things by obviously opposing fanboys, while I actually go on a product-by-product judgement.
No sports teams, no gaming system, no hard drive manufacturer across the board, no cell phone creator, no preferences, no fandom.
Just statistics and parameters.

WasNvidiaSwitchingToAMD

No, we will not, as I do not associate myself with clueless individuals.
Pascal will be Fermi 2.0 – power-hungry and noisy.

DecksUpMySleeve

We shall see. Pretty sure they won't be loud across all their manufacturers.
And I'm eyeing the 1070m; if it benches as I anticipate, noise should not be an issue.
Since you do bring up noise, I've heard complaints of AMD fans being noisy (as are their 'fan'boys at times); your comment is the first I've heard suggest this of Nvidia.
Do me a favor: Google 'AMD noisy fan' and 'Nvidia noisy fan'. Which one returns more pages, and which one has more recent pages?
Exactly.

WasNvidiaSwitchingToAMD

…and you have confirmed your legitimacy.
Kudos.
LOL

DecksUpMySleeve

I detect sarcasm, or trollcasm… yet I could care less.
I nearly went from a 5770 to a 7970m (the last AMD card to hold the best performance per watt on the market), yet you're determined to say I'm biased, sigh.
In fact, I may end up just utilizing a Kaby Lake GT4e iGPU (128MB) and not even use a dedicated card at all if it surpasses my minimum.

Nuruddin Peters

Well… Now that we know HOW MUCH wattage the 1070/1080 will use, we can effectively say you were *dead* wrong in this prediction a month back. Lol.

You're an Nvidia fanboy; if you weren't, you would have known that the 390X is like 2.5 years old.

DecksUpMySleeve

And the 970 is 9 months older; what's your point? Misread something?

disqus_GB8lUuziuG

390X = Hawaii, 970 = Maxwell. Maxwell is much newer than Hawaii; Hawaii is a rebrand. This is common knowledge, meaning if you don't know this but still run your mouth, it only exposes the fact that you are a fanboy.

My point is that you're a fanboy. I've said it three times now, so hopefully you get what my point is. My point is you're a fanboy.

DecksUpMySleeve

I am aware, but this has little to do with the fact that we're talking card (not chipset family) efficiency, and the one released first still holds the torch.
I in actuality shouldn't even dignify another 'fanboy' name-calling trolling comment with a reply; this is your last.
I'm done replying to AMD fanboy halfwits on this post.
Period.

Calling me a fanboy after all I've said about choosing the most efficient product regardless of labeling just makes you another person who shows they have far more bias than me, or can't read.

disqus_GB8lUuziuG

Maxwell was not released first; Hawaii was. A new coat of paint does not make Hawaii all of a sudden the newer tech. This is simple reasoning: yes, Maxwell does better in those areas, but it's also significantly newer.

The fact that you can't accept that means you're a fanboy; a rational person would have no problem with this situation.

Ffej Samoht

But then… you're making unsubstantiated claims about products that haven't been released yet. It's one thing to say that a company has a history, but to suggest that they simply can't ever compete is ridiculous.

DecksUpMySleeve

I haven't suggested that, only that on PPW they don't currently; they did during the 7000 series, and may in the future; it's yet to be seen. At the current rate, though, this next gen will lose in PPW.

Orion4tech

AMD has better performance per watt.
The GPUs are: FirePro W9100, S9170, S9150 and S9300 x2.
The fact that you had or have a low end HD 5770 is irrelevant and doesn’t make you objective.

DecksUpMySleeve

? What?
A 28nm, $6,000 S9300 x2, 'launched' days ago, unpurchasable, that's soon to be dated by 16nm cards?
That's what you bring to this discussion?
How about you wait until the 16nm cards drop from both AMD and Nvidia this late summer/early fall to share a verdict?

Orion4tech

Yes, a dual-GPU card that beats Nvidia's latest announced Tesla in single precision, no problem, while having the same 300W TDP.
What will AMD be able to do on 14nm?

The only reason Maxwell was somehow more efficient is because Nvidia removed most HPC capabilities, and now that they are back with Pascal we are going to see just how efficient they really are.

DecksUpMySleeve

It's one big we'll-see, and bringing up a $6,000 card with a minimum order of $129,000 currently is pointless.

Orion4tech

Maybe for you.
AMD also has a $1,500 consumer variant that will achieve similar efficiency. It's not like I compared the S9300 x2 with a GTX.

DecksUpMySleeve

I myself am only eyeing mobile GPUs presently. I'll probably never own another rig over 150 watts; that, and 28nm is a pretty big turn-off with 14-16nm within sight.
I'd advise most to wait till fall to look into buying, as that's when the next wave truly drops.

Ffej Samoht

Yes, I'd like to know when we will have Jetsons-style flying cars and will live in the clouds on Venus.

DecksUpMySleeve

The cars will not look like the Jetsons', as those materials are impractical. Also, we may roll into a small ice age by 2090, so that'll put a kink in the progress chain. And why live in Venus's clouds, ever? Acidic, poor conditions; we'd be better off elsewhere.
Housing will likely move down into the ground, not up into the sky, due to efficiency and stability.

Of course it is. We compare on price and overall performance level, not when a GPU was released. If AMD or Nvidia had a five-year-old GPU that was still competitive with modern cards and cost the same amount of money, we'd compare against that, too.

All AMD and NV GPUs are currently built on 28nm. All are refinements of older architectures — Maxwell 2 is an improved Maxwell 1, Fiji is a revised version of Hawaii. We compare price point to price point, and draw conclusions accordingly.

“Be sure efficiency is a prized attribute, it’s hardly discussed what a polluting pastime it can be and should be weighed.”

Let’s say you have two GPUs — GPU A and GPU B. GPU A draws 150W, GPU B draws 110W. You game for eight hours a day, seven days a week, 52 weeks a year.

You’ll save more power by turning your computer off for those other 16 hours than you will by switching your GPU vendor. If you want to be ecologically mindful, that’s how to do it.
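As a rough worked version of that comparison (the 40W idle figure below is an assumption for illustration, not something stated above):

$$\text{GPU swap while gaming: } 40\,\mathrm{W} \times 8\,\mathrm{h/day} \approx 0.32\,\mathrm{kWh/day}$$
$$\text{Powering off instead of idling: } 40\,\mathrm{W} \times 16\,\mathrm{h/day} \approx 0.64\,\mathrm{kWh/day}$$

Under those assumptions, shutting the machine down saves roughly twice as much energy per day as choosing the lower-power GPU.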

DecksUpMySleeve

Shutting down of course helps, but why not do both? At the rate you stated, that's $1.40 a year in power difference; after 5 years that's $7.00, which changes the performance-to-cost ratio (slightly). Ecology is more the price, as that would put 8.8 pounds of coal in the air.
I myself game at 78-84 watts, primarily Blizzard games presently, or <200W when I feel like playing some Halo on Xbox. My i7-860 / XFX 5770 system was closer to 380 watts while playing the same game at the same settings. I've grown to prefer mobile cards hooked to the same peripherals because of this; efficient hobbies are never a bad idea.

WasNvidiaSwitchingToAMD

You still rambling on with your illegitimate drivel?
Put a sock in it.

DecksUpMySleeve

You may take this advice yourself, you speak with nothing to say.

WasNvidiaSwitchingToAMD

Like I said, I have nothing to say to clueless individuals.
You have confirmed your illegitimacy here. Call a cab and go home.

DecksUpMySleeve

Derp.

Joel Hruska

"Ecology is more the price, as that would put 8.8 pounds of coal in the air."

Right. Assuming that’s true (I haven’t checked), I happen to live in Buffalo, which runs on hydroelectric power.

Past that, since the act of *starting* a heavy diesel engine and driving it about 500 feet probably puts more than 8.8 lbs of coal in the air, I’m just not going to care much. I could scrimp and pinch my entire life and save the equivalent of 6 hours of weed whacking. I care about the environment. I adjust my habits where it makes sense to do so. This isn’t one of those places.

DecksUpMySleeve

Inches make miles :)

Joel Hruska

The greatest coup of corporate America was framing environmentalism as something an individual chose. Don’t believe the bullshit. The vast majority of environmentally damaging decisions are made in and by corporations and government policies, not people.

Yes, you can choose to buy a fuel efficient vehicle. You can’t choose to reverse decades of policy decisions that ensured most Americans must commute by car to and from work.

DecksUpMySleeve

Oh, I realize full well they're responsible for two-thirds of the filth/damage. I see it all first-hand at times. Of course top-down policy is necessary in the long run; that doesn't mean our daily efforts are fruitless.
It's also worth noting that a small percentage of humans do the majority of the polluting, some 20 times that of others. Heating and cooling of all types account for much of civilian pollution, alongside transport; the structure of society itself must change for anything to truly change.

Scion

I rate your bias 3.7/4 gb ;)

darkich

You do realize that corporations are nothing without “people”?
It’s exactly the small decisions made by “small” people, that ultimately make the difference.

Helios

While you are right that corporations are nothing without people, when it comes to making decisions for a corporation those decisions are influenced more by group dynamics than by personal morals.

Joel Hruska

“It’s exactly the small decisions made by “small” people, that ultimately make the difference.”

No, it really isn’t.

Don’t mistake me. Yes, protest movements and sustained societal change can make a difference. Of course they can. But if you want to see an example of this in action, consider water usage in America. It’s not uncommon to see stories written about how America’s use of water *per capita* is so much higher than other countries. I read one a few weeks ago that talked about how Americans shower so much more than Europeans, and essentially laid the difference in per capita water use at the feet of that habit.

In *reality*, residential water use in America peaked in the late 1970s and declined significantly thereafter. We still use less water per person today and in total than we did then, despite having added 30-40 million additional people. The reason is simple: Low flow toilets and showers dramatically reduced water waste. So yes, individual behavior matters.

The problem with per capita water use metrics, however, is that they're often calculated by dividing the total water use in this country by the total number of households. This ignores the massive amounts of water used for other purposes like farming and power generation. The vast majority of water use in farming is for irrigation, and the majority of irrigation is done using open ditches, canals, and by simply letting water sit in fields. This is phenomenally wasteful — 30-50% of the water used for irrigation evaporates before it ever reaches its intended target.

You can solve this problem using existing technology. Drip irrigation and using enclosed tunnels for routing the water can cut evaporative losses dramatically. Adopting these techniques for conserving water would cut the country’s water use far more than if every single human stopped showering for an entire year. Yet because the issue of water conservation is shaped and conceptualized as something individuals *do* rather than something corporations should consider (and the vast majority of farming is corporate these days), the question of whether or not we should cut our water use by addressing corporate issues is overlooked in favor of the simple, *personal* response.

darkich

I appreciate the extensive reply, and you do bring up some good observations, but none of it challenges the basics of my simple premise.
Yes, vast amounts of water are being wasted by bad corporate decisions, but those corporations still exist because of our individual consumer decisions.

Farming? Tell me what would happen to it if hundreds of millions of Americans decided to go vegetarian, took the extra effort to be informed consumers, and boycotted corporations that work inefficiently.

WasNvidiaSwitchingToAMD

Yep, and beating the 980 Ti in some DX12 titles.

That GPU is LEGENDARY.

onstrike112

Given that AMD's cards will be built on GlobalFoundries' 14nm process and Nvidia's on TSMC's 16nm process, AMD's going to be at an advantage there. Performance per watt will PROBABLY be in AMD's favor.

DecksUpMySleeve

Heavily doubt it. I'm pleased either way, and to the victor goes my purchase (unless, of course, there is a drawback/flaw to either; then that may decide).

onstrike112

Typical fanboy of Nvidia and Intel.

fewtcher

So "I will purchase whoever is better" is typical nVidia and Intel fanboy behavior? Then what are AMD fanboys like? "I'll purchase AMD no matter if it's better or worse than the competition"?

DecksUpMySleeve

~Echhhooo~
Someone already said that; yes, they should be quite comparable. The 16nm and 14nm processes are said to both be FinFET, but it's yet to be seen what the exact arrangement will be. A fanboy has a preference; I only have statistics. Speaking of statistics, I've owned 3 cards, and the first 2 were AMD. If AMD does match or surpass NV's PPW, they'll be my choice till that changes (unless there are other defining drawbacks).

onstrike112

Logic dictates that 14nm chips will be more efficient. Until they drop, that's all we've got to go on.

DecksUpMySleeve

We have more than that to go on, though; the current precedent is AMD lagging behind by about 10% in PPW while both are on 28nm. Not sure why we are having such a lengthy debate on something neither of us knows definitively.
Let's just see in 3-6 months.

This argument that AMD is less efficient is not really taking everything into consideration. In most gaming situations, sure, but if you are going to talk about architecture vs. architecture, GCN does more than Maxwell. Hawaii was about as efficient as Kepler but was smaller and had more features. AMD did not have a chance to refine Hawaii to compete with Maxwell, but it still has far better DP, has a hardware scheduler, typically higher memory bandwidth, etc. If you tried to do double-precision work, Hawaii would be a ton more efficient from that perspective.

In typical gaming, sure, but to judge engineering capabilities, I would say AMD has what it takes. Even Fiji uses similar power to the 980 Ti and Titan X, and is smaller plus has more DP.

shakum

I feel that Nvidia’s Pascal architecture does not seem to be geared towards VR at all and this gap will continue to grow. I think Nvidia may release an entirely new Maxwell-style Pascal and call it something else. Polaris looks like it will be the way to go with VR. On the deep learning and supercomputing front, once that market matures in a year AMD will again have a 14nm turnkey Server solution with far better TCO offerings.

prtskrg

From the rumors, it seems Pascal is Maxwell plus compute improvements.

Bharath Narayan

That COULD potentially be the next Fermi. Great performance, but a power-sucking vampire and a very, very hot card.

While I am skeptical of the Fermi power and heat issues recurring, if it is very similar to Maxwell there will be an issue in those two areas. Maxwell's massive efficiency comes from some very advanced optimizations for the 28nm node. If they convert Maxwell to a new node, those optimizations will be lost. And they have not had enough time with 16nm FinFET to be able to get the same optimizations for it.

Chris Hunter

It's very clear: Nvidia is emulating DX12 support. They are purposely disabling many DX12 features in order to give Nvidia a better stance. They are asking developers not to use features that make them look bad.

solomonshv

Nvidia doesn't have to ask them to do anything. Most PC titles are just terrible console ports. Look at Quantum Break, for example: the only difference between the Xbone and a decked-out PC is slightly better shadows.

Game developers are lazy, and game publishers know that no matter how shitty the game is, people will buy it if they advertise it enough.

Joel Hruska

This is ridiculous. NV isn’t emulating DX12 and they aren’t asking developers not to support it.

Dickie Debbil

He didn't really say not to support it – just to leave out elements that they suck the worst at. I wouldn't put that sort of thing past them, tbh.

RadeonShqip

So you are telling me Nixxes Software, who worked with the Mantle API on Thief, was not able to work magic in Rise of the Tomb Raider DX12?
It is clear that they are paying devs not to enable certain features; there is no other explanation for the bad Rise of the Tomb Raider DX12 score versus DX11.

Joel Hruska

“It is clear that they are paying devs not to enable certain features”

That’s not *remotely* clear. It’s not even likely.

1). Development teams change between games. Look into how the game industry *works* and you’ll discover that it’s absolutely common for a game studio to hire dozens or hundreds of people to finish a title, then lay 70-80% of them off as soon as it ships. DLC helps with this, but it doesn’t eradicate it. The lead design and lead programmer may be the same, but many of the rank-and-file change.

2). The fact that a game does better on Vendor A than Vendor B can absolutely indicate that more time was spent optimizing for Vendor A than Vendor B, yes. But it does not therefore follow that a team was *paid* not to use something. The situation is much more likely to be that the team was paid to include effects or capabilities that *did* run well on Vendor A, as opposed to specifically being ordered *not* to do things that ran well on Vendor B. Optimization isn’t legally actionable, but deliberate attempts to pay a company to create a sabotaged product could be.

3). The plain truth is, some games run differently well on different architectures. You will see this even in games that aren’t part of Gaming Evolved or GameWorks. You will see it in DX11, DX10, and yes, DX12.

Your first conclusion when you see a game that runs well on a company's hardware shouldn't be "I bet the other guys are PAYING THEM OFF." Not when there are other variables to be considered.

Sweetie

I wonder how accurate Jonas Meyer’s claims are about async. I remember reading other claims that it can offer a lot more performance than what he’s saying it can offer.

badsleepwalker86

It depends on the game and how the game is coded. Async is best used for doing a lot at once in parallel… a.k.a. Ashes of the Singularity's massive number of onscreen enemies.

Stephen

It depends on a few things, really. Performance gain through async compute depends on how well the game is optimized, in that if most of the "shaders" are actually in use because the game is well optimized, you won't see a ton of improvement with async compute. That's because async makes use of the shaders that are not being used and makes them available for compute work.
As the shaders are underutilized nearly ALL the time (for various reasons), you typically see an improvement. It's just that the improvement varies wildly between games.

on a side note:
I think AMD cards will be much better for VR, because async compute can be used for low-latency head tracking and other VR-related functions.

prtskrg

If your hardware and driver aren't geared for it, you'll see a performance regression, as we see with Nvidia GPUs.

Bear in mind that he is saying that is what it offered his game. Also, he provided a caveat for that figure, whether he meant to or not. He said that the gains provided by async require developer optimization, and as such will vary from one implementation to another.

Ninja Squirrel

Still, there are not enough DX12 games to judge who will win the DX12 master race. Maybe I should allow some more time to get a clear picture. There is no async compute used in Nvidia-partnered titles. For example, Tomb Raider used async compute only on Xbox One.

And there is no word of async compute being used in the Just Cause 3 DX12 update.

So I guess it's a clear win for AMD in async-enabled games, and Nvidia has no chance. I hope developers will make use of the full DX12 feature set in the future.

Mike

I bought a 7870 this last gen. My first AMD GPU in ages…

It didn't see Minecraft as a 3D game, so it ran extremely poorly. About a year later, an AMD patch finally fixed it.

In Firefox, sometimes the graphics glitched out, with some bad, weird artifacts going on… that too was fixed about a year later.

It also left a sour taste in my mouth that some games actually use PhysX and I missed out on that too.

I do not give a crap if NV is struggling at DX12 on current cards. Four games use it, and they are boring games. 16nm parts, which will no doubt be much better built for DX12, will be out in a few months, and widespread adoption of DX12 by game studios will take some time. I do not care about this minor victory by Team Red. YOU GUYS RUINED MINECRAFT FOR ME! For a whole year! Unforgivable, and no wonder 75% of people on Steam use NV cards.

If the 16nm AMD parts are not amazing, I am abandoning ship back to Team Green. I hate going green *nuke the trees please!*, but this kind of green I can get behind! You Red Russians are going to lose this cold war!

Cruddy Bapz

Why did you buy a 7870??? That card is really old by now.

ForeverACharles

Maybe he's just a guy who saved up for an $80 GPU and didn't feel like spending more.

Mike

Nah, this was years ago that I bought this card. I guess saying "last gen" I should have clarified, but this card was rebranded several times, so the 270 and 7870 are about the same darn thing to me. Last gen.

It ran games like Far Cry 3 very well, near max. It just sucked at Minecraft and a few other things I really like to do, because AMD drivers are the poops, so I find it ironic and silly that the AMD fanboys are raging over this very minor and temporary victory, but they have to… What other victories has Team Red had over the past 6 years? Intel is murdering them on the CPU side, and NV is so far ahead now that you fanboys have to PLEAD with PC gamers to even think about Team Red! Ho ho ho!

NV will win at DX12 in time on 16nm. MARK MY WORDS! RAWR!! EPIC FANBOYISM!

ForeverACharles

I can tell you're very passionate. Might as well admit you're a proud fanboy :P Anyways, I wouldn't go so far as to say that NV is far ahead of AMD at the moment. Both have solid cards and are fairly close to each other, tbh.

gamerk2

As I understand it, the GCN architecture has had dedicated async compute engines since day one. It's likely NVIDIA cards simply aren't optimized for that sort of workload and thus really don't benefit.

The performance loss is likely due to taking a VERY optimized driver layer and splitting it up between cores. This creates bottlenecks should any single CPU core get interrupted by another task, causing a small performance loss.

In short: NVIDIA simply isn't optimized for DX12 workloads.

unkle

People mention the use of Async compute as if it nullifies the lead that AMD has in these titles…
Async compute is just an implementation detail.
Who cares _why_ the card performs better? All that matters is that it does.

Orion4tech

Yeah, GCN can work better with low-level APIs and in very parallel workloads.
Although it does a fine job under DX11, it still has certain bottlenecks under this API. It looks like AMD developed GCN to be more future-proof and took some ideas from Sony (maybe that is why they won both console contracts, not because they were cheaper).
Nvidia's GPUs seem to be better optimized for DX11, and maybe that is why we don't see performance improvements, or even see regressions, in certain situations.

Streetguru

Pretty sure the new Tomb Raider just had DX12 thrown in at the last minute rather than being built with it in mind, hence the complete lack of performance difference.

Also, going by this video, under DX11 Nvidia's GPUs are basically 100% utilized, while AMD's stuff is maybe closer to 75% utilized, which is where async comes in with the save, letting GCN work to its full potential, which ends up really outclassing Maxwell cards.

Something like that, anyway. DX12 may yet save AMD if they can get more devs to optimize for it from the start and the Polaris/Zen APUs are decent.

Martin Emmerset

It seems that the much publicised performance gains by moving to DirectX 12 are non-existent.

This fact is not highlighted in the article.

DX11 versions are faster than their DX12 counterparts.

Ext3h

DX11 versions are usually not visually identical to the DX12 variants, so that comparison is flawed.

Same as back then with the DX9/DX10 comparison: the latter was also slower across the board, but only due to the older path's lack of graphical effects.

Martin Emmerset

At least in recent titles, the differences between the two are currently quite minor. I think it's a valid comparison for now. By reducing the details and special DX12 effects, therefore, we should see that massive jump in performance that has been advertised, and that doesn't happen either. The advertised and projected performance gains aren't there yet.

Joel Hruska

AMD is generally faster in DX12 than in DX11. Tomb Raider is an outlier in that regard.

evolucion8

It's not driver overhead; the issue with current AMD drivers on DX11 is that the main draw-call thread is single-threaded. AMD drivers overall are single-threaded; that is why, for example, when I put a GTX 750 Ti on Dirt Rally, it taxes the CPU more, while the same game on an R9 370 barely uses the CPU. AMD has a proper hardware scheduler able to feed the cores and balance itself; nVidia's approach is to use a compiler that taxes the CPU more in order to overcome the DX11 API and break it into several threads to maximize execution engine utilization. This is a nice approach, but it requires hand-tuning for every game out there; as soon as they pull the optimization plug, like they did on Kepler, it will tank.

Pedro Nunes

How could Nvidia be a winner in DX12 in Rise of the Tomb Raider if both AMD and Nvidia lose frames in it? No one wins there, because there is no improvement in the averages compared to DX11.

Mark Lepore

For the games of the future, async will be necessary to optimize heavy workloads, especially with VR. The clear advantage is AMD's, IMHO.

Wintyr Walton

Nvidia is not going to do optimization on 900-series and older cards, in order to sell up the 1000-series cards by making them look much better.

Hvd

Async compute: Nvidia doesn't support it, and still doesn't with the GTX 1080.

TheLight

It's all about the drivers. AMD has a reputation for releasing inefficient drivers, sometimes for the entire life of a product family. nVidia, on the other hand, is known for having highly efficient drivers from the very first day a product is released. DX12 removes the burden of driver overhead, so naturally AMD cards will see the most improvement running DX12. nVidia cards don't have as much room for improvement, since their drivers are running the hardware close to its theoretical limit already. nVidia is turning out to be a victim of their own success where DX12 is concerned.

Mike DeChellis

Whoa, I thought DX12 was supposed to be some kind of "performance boost." Looks like all the cards lost frame rate making the switch!

Willem Lemmer

I have the MSI R9 380 and gain 12 fps more using DX12 in Total War: Warhammer. 12 fps in such games is a lot. The Warhammer settings are on Ultra. And in Doom with Vulkan I get a massive 60% performance boost: 85 fps. It really works well on AMD cards. But I like games that support both DX11 and DX12, because it makes things more versatile for the user and benefits both cards. I do hope future games will use both DX11 and DX12, not only DX12.

