As DirectX 12 and Windows 10 roll out across the PC ecosystem, the number of titles that support Microsoft’s new API is steadily growing. Last month, we previewed Ashes of the Singularity and its DirectX 12 performance; today we’re examining Microsoft’s Fable Legends. This upcoming title is expected to debut on both Windows PCs and the Xbox One and is built with Unreal Engine 4.

Like Ashes, Fable Legends is still very much a work in progress. Unlike Ashes of the Singularity, which can currently be bought and played, Microsoft chose to distribute a standalone benchmark for its first DirectX 12 title. The test has little in the way of configurable options and performs a series of flybys through complex environments. Each flyby highlights a different aspect of the game, including its day/night cycle, foliage and building rendering, and one impressively ugly troll. If Ashes of the Singularity gave us a peek at how DX12 handles enormous unit counts and intense particle effects, Fable Legends looks more like a conventional RPG or FPS.

There are other facets to Fable Legends that make this a particularly interesting match-up, even if it’s still very early in the DX12 development cycle. Unlike Ashes of the Singularity, which is distributed through Oxide, this is a test distributed directly by Microsoft. It uses Unreal Engine 4 — and Nvidia and Epic, Unreal’s developer, have a long history of close collaboration. Last year, Nvidia announced GameWorks support for UE4, and UE3 was an early adopter of PhysX, first on Ageia PPUs and later on Nvidia GeForce cards.

Test setup

We tested the GTX 980 Ti and Radeon Fury X in Windows 10 using the latest version of the operating system. Our testbed was an Asus X99-Deluxe motherboard with a Core i7-5960X and 16GB of DDR4-2667 memory. We tested the Fury X with an AMD-provided beta driver and the GTX 980 Ti with Nvidia’s latest WHQL-approved driver, 355.98. Nvidia hasn’t released a beta Windows 10 driver since last April, and the company didn’t contact us to offer a specific driver for the Fable Legends debut.

The benchmark itself was provided by Microsoft and can run in a limited number of modes. Microsoft provided three presets — a 720p “Low” setting, a 1080p “Ultra” setting, and a 4K “Ultra” setting. There are no user-configurable options besides enabling or disabling V-Sync (we tested with V-Sync disabled) and the choice between the Low and Ultra settings. There is no DX11 version of the benchmark. We ran all three variants on both the Fury X and GTX 980 Ti.

Test Results (Original and Amended):

Once other sites began posting their own test results, it became obvious that our 980 Ti and Fury X benchmarks were both running more slowly than they should have. It’s normal to see some variation between review sites, but gaps of 15-20% in a benchmark with no configurable options? That pointed to a different kind of problem. Initial retests confirmed the figures shown below, even after wiping and reinstalling drivers.

The next thing to check was power management — and this is where we found our smoking gun. We tested Windows 10 in its “Balanced” power configuration, which is our standard method of testing all hardware. While we sometimes switch to “High Performance” in corner cases or to measure its impact on power consumption, Windows can generally be counted on to handle power settings, and there’s normally no performance penalty for using this mode.

Imagine our surprise, then, to see the following when we fired up the Fable benchmark:


The benchmark is actively running in the screenshot above, with the power conservation mode and the clock speed visible at the same time. And while CPU clock speed isn’t the determining factor in most titles, clocking down to 1.17GHz is guaranteed to have an impact on overall frame rates. Switching to “High Performance” pegged the CPU clock between 3.2 and 3.3GHz — exactly where we’d expect it to be. It’s not clear what caused this problem — either a BIOS issue with the Asus X99-Deluxe or an odd power-management bug in Windows 10 — but we’ve retested both GPUs in High Performance mode.
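If you want to check for the same problem on your own system, Windows ships with a command-line utility, powercfg, that can report and set the active power plan without digging through the Control Panel. A minimal example, assuming the stock plans haven’t been renamed or deleted (the GUIDs below are the well-known identifiers Windows assigns to its built-in schemes):

```
:: Show which power plan is currently active
powercfg /getactivescheme

:: Force the stock "High performance" plan
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

:: Switch back to "Balanced" once testing is done
powercfg /setactive 381b4222-f694-41f0-9685-ff5bb260df2e
```

Watching the reported CPU clock (in Task Manager or CPU-Z) before and after the switch is the quickest way to confirm whether the power plan, and not some other throttling mechanism, is responsible.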

These new results are significantly different from our previous tests. 4K performance is unchanged, and the two GPUs still tie, but 1080p performance improves by roughly 8% on the GTX 980 Ti and 6% on the Fury X. Aftermarket GTX 980 Ti results show higher-clocked variants of that card outperforming the R9 Fury X, and those are perfectly valid data points — if you want to pay the relatively modest price premium for a high-end card with more clock headroom, you can expect a commensurate payoff in this test. Meanwhile, the R9 Fury X no longer wins at 720p as it did before. Both cards are faster here, but the GTX gained much more from the clock speed boost, leaping up 27% compared with just 2% for AMD. While this conforms to our general test trends in DX11, in which AMD performs more capably at higher resolutions, it’s still unusual to see only one GPU penalized so heavily by such ludicrously low CPU clock speeds.

These new runs, like the initial ones, were performed multiple times. We ran the benchmark four times on each card at each quality preset, but threw out the first run in each case to account for warm-up effects. We also threw out runs that landed unusually far from the average.
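For the curious, the aggregation logic is simple enough to express in a few lines. The C++ sketch below mirrors the process described above but is illustrative only; the FPS values are hypothetical placeholders, and the 5% outlier threshold is our assumption rather than a formal rule:

```cpp
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Four runs at one preset; the first is a warm-up. Values are hypothetical.
    std::vector<double> runs = {58.1, 63.4, 63.9, 63.2};
    runs.erase(runs.begin()); // discard the warm-up run

    // Keep only runs within 5% of the mean of the remaining runs.
    double mean = std::accumulate(runs.begin(), runs.end(), 0.0) / runs.size();
    std::vector<double> kept;
    for (double r : runs)
        if (std::fabs(r - mean) / mean <= 0.05)
            kept.push_back(r);

    double avg = std::accumulate(kept.begin(), kept.end(), 0.0) / kept.size();
    std::printf("Average FPS over %zu kept runs: %.2f\n", kept.size(), avg);
}
```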

Why include AMD results?

In our initial coverage for this article, we included a set of AMD-provided test results. This was mostly done for practical reasons — I don’t actually have an R9 390X, 390, or R9 380, and therefore couldn’t compare performance in the midrange graphics stack. Our decision to include this information “shocked” Nvidia’s PR team, which pointed out that no other reviewer had found the R9 390 beating the GTX 980.

Implications of impropriety deserve to be taken seriously, as do charges that test results have misrepresented performance. So what’s the situation here? While we previously showed you chart data, AMD’s reviewer guide contains the raw data values themselves. According to AMD, the GTX 980 scored 65.36 FPS in the 1080p Ultra benchmark using Nvidia’s 355.98 driver (the same driver we tested). Our own results actually point to the GTX 980 being slightly slower — when we put the card through its paces for this section of our coverage, it landed at 63.51 FPS. Still, that’s just a 3% difference.

It’s absolutely true that Tech Report’s excellent coverage shows the GTX 980 beating the R9 390 (TR was the only website to test an R9 390 in the first place). But that doesn’t mean AMD’s data is unrepresentative. Tech Report notes that it used a Gigabyte GTX 980 with a base clock of 1228MHz and a boost clock of 1329MHz. That’s 9% faster than the clocks on my own reference GTX 980 (1127MHz and 1216MHz, respectively).

Multiply our 63.51 FPS by 1.09x and you end up with 69 FPS — exactly what Tech Report reported for the GTX 980. And if you have an Nvidia GTX 980 clocked at this speed, yes, you will outperform a stock-clocked R9 390. That, however, doesn’t mean that AMD lied in its test results. A quick trip to Newegg reveals that GTX 980s ship at a variety of clocks, from a low of 1126MHz to a high of 1304MHz. That, in turn, means the highest-end GTX 980 is as much as 15% faster than the stock model. Buyers who shop on price are much more likely to end up with cards at the base frequency; the cheapest EVGA GTX 980 is $459, compared with $484 for the 1266MHz version.
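To make the arithmetic above explicit, here’s the same back-of-the-envelope calculation as a tiny C++ program, using only the figures already quoted in this article:

```cpp
#include <cstdio>

int main() {
    double ours = 63.51; // our reference GTX 980, 1080p Ultra (FPS)
    double amd  = 65.36; // AMD's reviewer-guide figure for the same card

    // AMD's number vs. ours: roughly a 3% gap.
    std::printf("AMD vs. our result: %.1f%% apart\n", (amd - ours) / ours * 100.0);

    // Tech Report's Gigabyte card (1228MHz base) vs. our reference card (1127MHz).
    double clockRatio = 1228.0 / 1127.0; // about 1.09x
    std::printf("Clock delta: %.1f%%\n", (clockRatio - 1.0) * 100.0);
    std::printf("Scaled estimate: %.1f FPS\n", ours * clockRatio); // about 69 FPS
}
```

If anything, the fact that naive clock-for-clock scaling lands squarely on Tech Report’s result suggests the benchmark responds almost linearly to GPU clock at these settings.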

There’s no evidence that AMD lied or misconstrued the GTX 980’s performance. Neither did Tech Report. Frankly, we prefer testing retail hardware when such equipment is available, but since GPU vendors tend to charge a premium for higher-clocked GPUs, it’s difficult to select any single card and declare it representative.

Amended Conclusion:

Nvidia’s overall performance in Fable Legends remains excellent, though whether Team Red or Green wins is going to depend on which card, specifically, you’ve chosen to purchase. The additional headroom left in many of Nvidia’s current designs is a feature, not a bug, and while it makes it more difficult to point at any single card and declare it representative of GTX 980 Ti or 980 performance, we suspect most enthusiasts appreciate having that headroom available.

The power issues that forced a near-total rewrite of this story, however, also point to the immaturity of the DirectX 12 ecosystem. Whether you favor AMD or Nvidia, it’s early days for both benchmarks and GPUs, and we wouldn’t recommend making drastic purchasing decisions around expected future DirectX 12 capability. There are still unanswered questions and unclear situations surrounding certain DirectX 12 features, like asynchronous compute on Nvidia cards, but the overall performance story for Team Red vs. Team Green is positive. The fact that a stock R9 390, at $329, outperforms a stock GTX 980 with an MSRP of $460 is a very nice feather in AMD’s cap.

As with Ashes of the Singularity, the usual caveats apply. These are pre-launch titles and early drivers on a still-young operating system. So far, however, the DX12 results we’ve seen have been very positive for AMD — lending credence to the company’s longstanding argument that GCN would fare well under the new API.

Update (9/24/2015): After results from other sites began to go live, it became apparent that our performance figures for Fury X and the 980 Ti were oddly low. We’ve inserted a section above (and new benchmarks) to explain what happened and examine the new data. Relative performance between lower-end AMD and Nvidia cards is also now addressed.


“The results for these two cards show the Fury X with a marked advantage at 720p, but no one buys a GPU like this to play at a resolution that low.”

That’s where it’s most interesting, imo. That’s precisely where AMD would have shown a disadvantage in DX11 due to the load shifting to the CPU.

Joel, quick question:
I’m under the impression that with the Vulkan/DX12 APIs, game devs will need to update/patch their games in order to properly support hardware released post-launch (e.g., a GPU that comes out 2 months after RTM), and AMD/nVidia can’t handle that on the driver side. Does this seem to be the case?

Joel Hruska

You’ll want to re-check the new performance figures and details. Turns out I had a major power problem.

You raise a very interesting question. I’m not sure.

Joel Hruska

Contacts I’ve spoken to say that this is not a problem. Older code should run just fine on newer hardware from all vendors. At least it won’t be any more of an issue than it has sometimes been in the past.

Provided there’s an influx of equity/$$$, it should. ATi has historically fared far better without the noose that is AMD around its neck.

randomguy48

Did you miss the part where AMD has a new beta driver and Nvidia hasn’t had a Windows 10 driver in like 6 months? lol. And it is still just neck and neck. Let’s check back again when they both have a driver released around the same time.

More like the article specifically states this fact. Nvidia generic driver, AMD beta driver specifically provided to them for this test. Seeing that no released game supports DirectX 12, you can’t assume Nvidia’s normal drivers are optimized for it.

Chase Masters

Nvidia releases a new driver for every released game.

Will Ovtuth

That could be a year from now at the rate nvidia has been churning out garbage drivers, lol.

Chase Masters

Compared to AMD once a year at best?

200380051

Is this benchmark making any use of Async Compute?

Georg Dirr

Yeah seems rigged. No amd cpu’s either.

Joel Hruska

True fact: AMD doesn’t test with AMD CPUs.

http://vgsage.com Azix

They don’t send out tests with AMD CPUs, but they would test them internally. I bet they avoid regular i7s as well. It’s for the same technical reason that sites often use Haswell-E or other more extreme processors for graphics card reviews. AMD and Nvidia would at least want to show their hardware in the best light, so even if an i3 is putting out good numbers, use the 50-core processor.

It was amusing at first, but then you realize there are no ITX/mATX boards for FX processors. They couldn’t even if they wanted to.

Quenepas

Why wouldn’t AMD test with their own CPUs? Why wouldn’t they work with the CPU team to improve performance? It seems that while the GPU team is fighting the good fight, the CPU team just threw their arms in the air and said: “F’ it. Budget 4ever”. They even worked together for a while (now separate entities), so they had time for this, unless it was an APU-only affair. Why oh why?

Joel Hruska

Because AMD CPUs don’t show AMD GPUs in the most competitive light.

Domaldel

Current ones, no.
I expect that to change with Zen

Sweetie

FX chips are supposed to get a sizable boost from DX 12 though.

Domaldel

Engineering time costs money, and developing a new CPU architecture takes a lot of engineering time; years, in fact.
The base of the current AMD CPUs is not in any shape to beat Intel at the high end, where these GPUs belong.
Carrizo can barely beat Intel in some workloads in its price class and power envelope.
There won’t be anything high-end till they can start selling the brand-new Zen architecture.
Zen is expected to beat Excavator by 40% in IPC, roughly matching Haswell.
(Of course, depending on workload.)
That’s a processor that might work in the quantum.
But while the high-end Zen CPUs will arrive in 2016, most Zen processors won’t be out till 2017.

Esdese

Nope. Unreal Engine doesn’t utilize async compute. The Xbox version does, though. Check the latest Unreal Engine demo benchmarks as well; they don’t use async compute. I don’t know if they’ll EVER code in async compute due to their partnership with Nvidia, but hopefully third-party developers will utilize that key feature. Otherwise, there will be very minimal gains to be had.

If Nvidia is actually able to prevent this, it would be really sad. If they just don’t bother, that is one thing, but intentionally doing a disservice to gamers for Nvidia’s sake? Not good.

exjohn

Don’t worry, once Pascal is out, and if it fixes async compute, Nvidia will champion the adoption of async and all engines will suddenly start to support it … magic!

http://www.hikingmike.com/ hikingmike

So they are actually removing/leaving out async compute usage when porting from console? That has to be annoying extra work for devs. I guess they are still better off than before with x86 consoles and PCs though.

exjohn

Nvidia at its best… slowly becoming the most toxic company in IT, a position proudly held by Intel till last year…

Isa

Minimal async compute usage. I’m sure the developers could have pushed for more use, but that would have been exponentially more difficult to optimize for nVIDIA’s Maxwell 2 architecture.

Ben Mitchell

I think that’s why Nvidia’s results at 720p increased so much with the switch from Balanced to High Performance in Windows. As I understand it, Nvidia’s async implementation requires more CPU overhead to work properly. As it stands, from dev forum testing of whether it works at all (and it does, barely), Nvidia needs a lot of CPU power just to use async shading in any meaningful way. This isn’t something that can be optimized, since even in a simple test Maxwell 2 can saturate a weak CPU just trying to use async shading. Others testing this bench found Nvidia hardware scaling better up to 6 cores, where AMD scaled up to 4.

Doesn’t really matter, as you can add quite a lot to the 980 Ti’s performance by overclocking it to 1500MHz, so it’s still the better buy.

Carlos Oliveira

Do you really think all 980 Tis are capable of 1500MHz? 1500MHz is a very impressive overclock that only a few will achieve because they got lucky. A realistic overclock is around 1400MHz.

Brushy Bill .

My PNY Titan X and my EVGA Titan X will both do 1565MHz core and 8000MHz memory while in SLI. They do a little more if run by themselves. So I don’t see 1500MHz being unrealistic at all.

Carlos Oliveira

Yea thats a shame tho because my Fx-8350 is overclocked to 18Ghz and outperforms 2 Intel Core i7 5960X, so suck it

Ruben Mirmo Lopes

because the cpu will affect so much the graphics in gaming :3 xD wtf thats nothing to do even with the article xD fanboys are awesome tho good job there

Carlos Oliveira

My point still stands: few 980 Tis are capable of 1500MHz and up.

Peter Den Gamer

Actually, using different CPUs influences fps depending on how taxing the game is, so it’s not fanboy-related. Crysis 3, for example, gets more fps with a 6820K than with a 6700K, so obviously it DOES matter. You’ve got to do some better googling.

qubit86

If you check under the hood, it’s the China Syndrome

druout1944 .

lol

obababoy

Yeah…but you are a texans fan…Your stats are thrown out and you lose by default! ;)

druout1944 .

Neither of my two 980Tis will do 1500Mhz; 1361Mhz max on boost.

Wrinkly

My Gigabyte 980Ti G1 breaks 1600mhz with the F3 BIOS. 1500 does appear to be a realistic average.

Peter Den Gamer

if you only want to use your gpu for 1 year, be my guest..

libastral

You can get something like Zotac 980 ti amp extreme with a high factory OC.

druout1944 .

I agree; also, I don’t really view either the Fury or 900 series as true DX12 cards, since both are missing DX12 features. Whatever the next series of cards is will be the one to show what DX12 can really do. This is like the one-legged version of what DX12 can do.

Reaper7799

I’m at 1500…hybrid…most will hit upper 1400’s.

[img]http://i.imgur.com/L456GRj.jpg[/img]

solomonshv

Yes, they pretty much all can. I used to have 2x MSI GTX 980 Ti Gaming LITE EDITION cards in SLI. Now I have 2x EVGA ACX 2.0. I had to switch to the ACX because I decided to go with water cooling and they had the better waterblocks available. Didn’t really need it; I was just bored as hell one day.

In any case, all 4 cards had ASIC scores of less than 65, and yet they were all able to comfortably run at 1470MHz MINIMUM! One of my MSI cards that I sold off could do 1520MHz, and by itself it posted higher graphics and overall scores in Firestrike than any 295X2 on record.
link: http://www.3dmark.com/fs/5739719

My brother has a Gigabyte GTX 980 Ti G1. It does 1490MHz.

Carlos Oliveira

Your 980ti could beat a 295×2? Ok, whatever. You’re a fake.

solomonshv

i even posted a link to my VERIFIED firestrike score you retarded f*ck. there is no 295×2 on record that’s even close to that score.

Carlos Oliveira

Yea. Now go play an actual well optimized game instead of a synthetic benchmark and tell me that your 980ti is beating a 295×2

Zymenth Hanzo

Dude, you keep denying other people’s facts and valid results about Nvidia’s cards, but you didn’t even show or post any evidence of your ATI performance. Prove them wrong by posting your card’s performance… or right now you’re the wrong one.

tachyonzero

In addition to the proprietary GameWorks API:
if an AMD card is detected, increase tessellation; if just a regular Nvidia card, normal tessellation.

Hah, this shows the opposite, with Nvidia in the lead by a tiny bit. AMD still pwns at 720p though. 1080p is like the standard so… yeah.

Joel Hruska

You’ll want to re-check the performance figures. I had a power issue I had to address.

rp1367

Hey Brush, the AMD driver from your link is different from the one ExtremeTech used. AMD provided a beta driver here for this benchmark. ExtremeTech’s review is more representative compared with your link. Ask AnandTech to redo their test.

Brushy Bill .

Thanks for the heads-up! I’ll pass along the info.

wowgivemeabreak

Good that they are pretty much equal, but damn, only a bit above 30fps at 4K Ultra?

obababoy

That is good. It means the game’s visuals are pushing the GPUs to the limit, so the game will look better longer. When better GPUs come out, the game will still look good and be relevant. Think about it that way. That is why games like Crysis visually last a long time. When we got Crysis, we all played on medium or low and the game still looked great. Then two years later we played on max settings and it still looked good. Catchin’ what I’m throwin’?

Wizedo W

Agreed. Although I’d like to see GPUs “punished” at 1080p. I mean, let the “ultra” setting be really ultra.

obababoy

True, but with PC gaming, 1440p is going to be the new 1080p, I think. I’m sure GPUs will struggle a bit at 1440p.

Wizedo W

Makes sense.

JustinConstantino

hell yeah. recently got a acer 1440p freesync monitor… OMG i will never game at 1080p again if i can help it!

Ekard

1080p, even with everything maxed, has not been sufficiently taxing for the past few card generations. Display tech has held back the need for GPU advancement for a bit now. With 4K and VR, though, I really hope to see some serious advancements again. Honestly, if you are using a 1080p screen, the 950/380 are more than enough to keep up.

Wizedo W

Yes, that’s the analysis of the current situation, but 1080p could be made more taxing. I’m thinking about G.I., more polygons, soft-body physics, more accurate soft surfaces (mud and snow, for instance), high-res textures, actual moss on rocks and trees, shadow-casting grass and foliage, etcetera and etcetera…
I think PC gaming needs another Crysis, another title for which a GPU capable of delivering decent “ultra settings” performance has yet to hit the market.

http://vgsage.com Azix

Low fps in this benchmark is not good. The benchmark doesn’t represent gameplay; it would be much worse in gameplay.

obababoy

Played it and you are wrong…

http://vgsage.com Azix

at 4k ultra? below 30fps? cool

obababoy

1440p Ultra and proportionally my FPS was better than the benchmark.

obababoy

Oh, and 4K is fucking stupid right now with ANY single GPU. 4K monitors are not worth it right now, and I can enjoy 95% of the experience with better FPS at 1440p. Not to mention Windows scaling issues, etc.

obababoy

This doesn’t really help AMD, because I am going to keep my R9 290 longer since it performs great lol :) Good stuff!

Isa

That’s a big issue for AMD right now. I’ll be keeping my Dual R9 290x configuration for some time to come. Only Greenland and Pascal might sway me at this point in time.

obababoy

What CPU is in your X99 test platform? PCPer.com used the 6700 Skylake, and Nvidia seemed to fare better for them.

Ashley Gann

where can we download this benchmark???

pranav0091

@joel : What CPU did you use ?

Joel Hruska

The Core i7-5960X.

rp1367

Hey Joel, you should also perform benchmarks using the FX-8350 or 8370, just to show AMD FX consumers the performance of AMD products. This would allow your tech site to be balanced in the eyes of its avid fans.

Joel Hruska

Doubles the workload. Maybe with Zen.

Domaldel

Yes please do when Zen arrives. =)
The 8XXX series is old and we know roughly what performance it has in different situations.
Zen is a different matter entirely.

Joel Hruska

Rest assured, when Zen ships, I intend to benchmark the snot out of it.

Domaldel

Good. =)
I’m looking forward to seeing how it performs together with the different drivers for the GPUs and in DX12 and 11 as well as vulkan and openGL in various games. =)

Non-gaming-centric uses are also interesting.
I fully intend to run BOINC on it pretty much 24/7 when I get it, just like I do with my 8350 right now. =)

Sweetie

Everyone will be happy to wait over a year. (lol)

Or, maybe you could find an intern who is willing to bench an FX at 4.5 GHz.

Joel Hruska

Don’t have one. I see no reason to do so.

John

I wonder if AMD’s almost theoretical advantage in DX12 has to do with the hardware being optimized for their Mantle API, which could be similar to DX12 since they’re both low-level APIs?

Ext3h

Yes. DX12 is largely the same as Mantle with regard to most core principles. AMD relies on massive concurrency and handles most of the workload in hardware, not in the driver, which simply wasn’t possible to provide with plain DX11. But it is also getting a lot easier for developers to express what they “actually mean” regarding dependencies in the graphics pipeline and the like. AMD’s hardware was designed to interpret that purely in hardware.
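In API terms, that concurrency is exposed through separate command queues. A minimal C++ sketch of the idea, purely for illustration (error handling omitted, and obviously not Fable’s actual code):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The usual "direct" queue handles graphics work...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...while a separate compute queue can feed the GPU independently.
    // Whether the two actually execute concurrently is up to the hardware
    // and driver, which is the crux of the async compute debate.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```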

Michael F

There’s that, but Mantle was designed for the already-existing GCN architecture, not the other way around. AMD already had GPUs heavily designed for GPGPU work (think asynchronous compute and independent parallel tasks, the big features of DX12, Mantle, and Vulkan/openGL 5.0).

AMD made Mantle because they knew their GPUs had more raw performance (comparing the R9 200 to the GTX 700 series), but DX11 and openGL 4 couldn’t take advantage of it.

Vlad001

How (or where) can i download this, to bench my pc?

rock1m1

720p made me chuckle lol. Anyway, until the game is released and a final set of DX12 drivers is delivered by Nvidia, it is still too early to make final judgments. But a great show from AMD nonetheless.

loguerto

Wow!, Am I seeing a 390 ahead of a 980?! Looks like AMD smashes nvidia in dx12.

Robert Johnson

Can anyone get this benchmark and if so where can it be downloaded?

http://www.gpuandmore.com/ Gregster

I feel there is something up with your testing, as several sites that have benched Fable Legends have the 980 Ti out in front, and by quite a bit.

Postcards

From Anandtech
“AMD sent us a note that there is a new driver available specifically for this benchmark which should improve the scores on the Fury X, although it arrived too late for this pre-release look at Fable Legends”

I think PCPer is using the newer beta drivers.

Sweetie

Don’t forget that they’re waiting for a better time to look at Ashes.

and to deliver the GTX 960 review

(lol)

Joel Hruska

I have amended the story to address these issues.

Jigar

You are talking about Tech Report? They are using a highly factory-overclocked GTX 980 Ti against a mildly overclocked Fury X, plus an old driver.

Ninja Squirrel

According to many other websites (AnandTech, The Tech Report, and PC Perspective), the 980 Ti got slightly better results than the Fury X at 4K and 1080p resolutions.

Surprisingly, some AMD high-end cards such as the 390X and 390 are clearly beating Nvidia’s high-end cards such as the GTX 980 and 970 at cheaper prices.

Undoubtedly, the 980 Ti is still holding the performance crown. I don’t think the Fury will exactly beat the 980 Ti even with DX12, but it is good to see the 980 Ti and Fury X head to head in DX12 games. AMD has strong mid/high-end cards which can destroy Nvidia’s entire mid/high-end lineup.

Ext3h

All I can say about the GTX 980 Ti vs. Fury is: wait for it.

The 980 Ti isn’t winning by raw performance, nor by driver latency any more. (Actually, quite the opposite in both cases.) It’s only leading in two domains now: geometry throughput (AMD’s cards hate small polygons) and shader compilation. Nvidia’s driver is just much better at optimizing the provided shader programs, while AMD mostly executes the shader programs as provided, which often doesn’t fit the hardware very well or may simply contain unnecessary computations.

But the 980 Ti also tends to break down completely if the driver isn’t getting enough CPU.

Joel Hruska

This story has been updated to address both of these points and amend performance figures that were improperly low on both cards.

Chaos

Tech Report’s 980 Ti scored that much higher than the Fury X because it is an Asus Strix GTX 980 Ti that easily boosts to 1350MHz. The rest of them aren’t all that much better (5-10%).

rp1367

The 980 Ti’s small lead will not hold for long. You’ve already seen the fate of both the GTX 970 and 980: beaten hard into the ground, by a large margin, by the R9 390, which is cheaper and based on older AMD GCN technology.

loguerto

Other websites have compared the Fury X to custom overclocked 980 Ti cards; the small lead comes from that. Tech Report got 37 fps with the 980 Ti at 4K, which is 20% more than the average numbers the others got. However, for DX12 gaming, people should clearly buy a 290, 290X, 390, or 390X and play everything at very good framerates, spending $100 to $200 less than the Nvidia rivals are priced.

mart mean

This benchmark removes the advantage AMD has by removing the need for super-fast memory and async shaders. It’s so slow that the pipeline DX12 opens up to use more of the GPU on AMD cards is clearly not being used; no draw calls happening here.
This benchmark needs a kick. Let’s speed it up with more movement, because as it stands it’s allowing Nvidia to keep their pipeline open. You’re all being lied to and you don’t realize it.

Carlos Oliveira

The 390 outperforms the 980… how is it removing the advantage AMD has?

Georg Dirr

Where are the AMD CPU figures? I don’t have an Intel 6-core CPU.

Miura Mestre do hiato

AMD FTW

Hans Olo

Is Nvidia still around?

Ninja Squirrel

No async computing enabled, so AMD can’t win by a huge margin.

Joel Hruska

Async computing is enabled.

http://www.flickr.com/photos/catchphotography/ H23

To what degree, and how heavily integrated, seems to be the discussion. Not too much, if at all, seems to be the opinion so far.

Joel Hruska

About 5%. I have paperwork. Would’ve had that written up already, but I was sidetracked by having to re-test everything.

3R45U5

wait! i thought only the xbox version of the UE engine can do async compute Oo now i am confused

No, the Windows version does as well. But it doesn’t matter much with Fable Legends. It’s just some ultra-light load on the compute queue where timing constraints don’t even matter: a single, tiny compute command every few graphics batches.

3R45U5

thanks for the clarification.

Chaos

If you can find some time, would you please clarify what exactly you meant by 5%? It’s kinda vague.

Joel Hruska

Yes.

The average asynchronous workload per frame (meaning work that could be handled by an asynchronous compute engine) is 5%. It’s not a large amount, in other words.

Chaos

Thank you.

Sweetie

So is it fair to say this benchmark is the anti-Ashes?

Joel Hruska

No. Ashes used about 10% async, and the gap between AMD and NV was very small in that test, as in this one.

People made Ashes out to represent far more than it shows. The GTX 980 Ti and the Fury X tie in that title.

I don’t understand how he’s turning async on or off. That option is not exposed in the benchmark anywhere I could see.

Sweetie

So why does the 290X do so well against the 980?

Behrouz Sedigh

Because of the removal of CPU overhead on AMD cards.

Sweetie

“Why does Hawaii perform so close to the Fury family?”

Because HBM isn’t all that yet.

Joel Hruska

I think the better question is “Why does Hawaii perform so close to the Fury family?” Normally the gaps between the cards are much greater.

I see two things from these results:

1). AMD’s performance is bottlenecked by something that Fury doesn’t address well. I’m not sure what it is. But you can see that Hawaii puts up excellent performance relative to its higher-priced replacement.

2). Nvidia cards are at a relative disadvantage here. The GTX 980 is still absolutely playable, but it doesn’t match the Hawaii family very well. Still, the 980 Ti scales much better relative to the 980 than the Fury X does relative to the older 390X. That’s what creates the odd skew in the results, IMO.

UltronGabeN

Lel the 390 smokes the 980

rp1367

GTX owners are weeping now. They are hiding in their respective toilets like gremlins.

Phobos

I’m starting to wonder if UE4 was created with consoles in mind, kind of like UE3 was.

William Fenton

The Fury X costs almost the same as a 980 Ti; those two are pretty much equal. The real questions are why the 980 and 970 are doing so badly, and why the Fury X isn’t taking more of a lead if it’s supposedly the case that AMD trounces Nvidia in DX12 — and, by extension, why there isn’t more of a lead between the 390X and the Fury X, considering the Fury X costs about twice the price.

tachyonzero

“why isn’t the Fury X taking more of a lead?”
Gameworks

Joel Hruska

In Fable Legends? Not that I’m aware of.

William Fenton

Oh you poor AMD fanboy. Then how does that explain the other results? You know the ones where the 390x and other AMD cards do outperform the more expensive 980.

tachyonzero

No, I have an MSI GTX 970 and am not a fanboy. I want an equal playing field, not some trickery.

Sweetie

I can understand that after their 970 trickery (fewer ROPs, less cache, 1/2 a GB of VRAM that runs at half the speed of a 2007 midrange card, false VRAM performance number)…

RP, stop posting misinformation. I have explained what is going on with Nvidia async compute, and how it’s there, but it’s so bad in mixed workloads that it is better not to use it.

rp1367

Hey Ben, tell that to TPU and not to me. And how come the so-called cheaper “rebadge” R9 390 is kicking the ass of both the GTX 970 and GTX 980? Could you explain that to me?

Ben Mitchell

I don’t need to, since it’s just the removal of the overhead you are seeing; besides, TPU aren’t going to just delete the article. The misinfo was about you saying that Nvidia doesn’t support DX12. Technically everyone does now: Intel supports it at 12_0, Maxwell 2 has the most advanced DX12 tier support, and GCN has 12_0 plus some tier 3 stuff and the better async solution. ExtremeTech also used a beta AMD driver specifically made for this game while using the last WHQL-certified Nvidia driver, and that will likely have something to do with it, as AnandTech didn’t, and their results are slightly worse for AMD while Nvidia’s are mostly the same.
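For what it’s worth, anyone with the Windows 10 SDK can query what their own card actually reports instead of taking anyone’s word for it. A minimal C++ sketch using the real CheckFeatureSupport API (error handling mostly omitted):

```cpp
#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No DX12-capable device found.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    // These are the tiers people argue about when they talk 12_0 vs. 12_1.
    std::printf("Resource binding tier:      %d\n", opts.ResourceBindingTier);
    std::printf("Tiled resources tier:       %d\n", opts.TiledResourcesTier);
    std::printf("Conservative rasterization: %d\n", opts.ConservativeRasterizationTier);
    std::printf("ROVs supported:             %s\n", opts.ROVsSupported ? "yes" : "no");
}
```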

rp1367

You did not answer my question.

And how come the so called cheaper “rebadge” R9 390 is kicking the ass
of both GTX970 and GTX 980? Could you explain that to me?

Ben Mitchell

As I said, it’s just the removal of the driver overhead and the fact AMD gave out a driver specifically for the bench. There’s almost no async compute going on here; if you were to add more compute tasks to the bench, the AMD cards would pull ahead more, so there is that to consider.

rp1367

Even if Nvidia provides a beta driver for this benchmark, it will not fix the performance hit, especially for the 970, as the hit is almost 1,000 points. The 980 could “possibly” be fixed through a driver update, but it will still be an underperforming card in terms of price/performance against the R9 390.

Joel Hruska

Claiming that Maxwell has no support for DX12 is patently untrue. Its async compute support is still being determined, but it is clearly capable of handling light asynchronous workloads.

rp1367

“but it is clearly capable of handling light asynchronous workloads.”

If Maxwell indeed supports DX12 and async, then it is just half-baked compared with GCN cards, based on your benchmark here.

Joel Hruska

Maxwell absolutely supports DX12. That’s not even a question. And it supports limited amounts of async compute.

Isa

Maxwell 2.0 supports async compute, but with quite a few caveats. An async command can only be slotted in at the end of a draw call, and a long draw call can lead to a performance hit. Programmers need to be careful when using async compute in their titles, as the performance hit on nVIDIA’s Maxwell 2 architecture can be quite large. One has to remember that Fable Legends relies on only minimal use of async compute, even compared with Ashes of the Singularity.

shadowhedgehogz

x99 ftw.. skylake was kinda a flop. Also glad to see neck and neck performance, i have the MSI 980Ti gaming which should plow through UE4 @ 1080p. Won’t be changing my card until 1-2 years now unless something with a good price and large performance leap comes out. 4k performance is still looking dire, probably be a few yrs before any single cards come out to handle it.

rp1367

The GTX 970, currently priced at Newegg at $350, is running itself to death trying to beat the AMD R9 390, priced at $310. Funny how Nvidia cards are performing poorly in DX12 games. Everywhere you look, from Oxide’s Ashes to Unreal Engine, Nvidia without async support on the PC is losing in every new DX12 benchmark release.

Yes, it does. He also said that it only uses a minimal amount of it, and that Maxwell can do async compute, although in a very limited fashion. Your point is?

http://www.flickr.com/photos/catchphotography/ H23

Is this bench available for public download?

Joel Hruska

I do not think so. Though it’s possible MS may release it in the future.

don681

Useless, though, since I’m not gonna buy Fable or Ashes. By the time the compelling DX12 games are out, there will be new-generation cards from both camps already. Current-gen performance is not an indicator of future-gen performance.

Isa

Precisely what I had assumed would occur. Something tells me that Deus Ex is going to be a rather interesting title once it releases in Feb 2016. It should be quite an eye opener in terms of its performance for Team Red relative to Team Green.

I have a feeling that tomb raider might also follow this trend as well.

druout1944 .

Wonder what happens when a DX12 game comes out that uses more than 4GB of VRAM.

Domaldel

Well…
That’ll lead to interesting times…

prtskrg

Since both DX12 and Vulkan support stacking of VRAM, by the time games arrive that require more than 4GB of HBM VRAM, they’ll have coding to support stacking of VRAM too, since it’ll make XFire/SLI a very good option.

druout1944 .

There are already games that use well above 4GB of VRAM at 1440p and above, and CF/SLI have never been a mainstream thing among pc gamers due to cost and/or performance issues.

prtskrg

4GB of HBM VRAM is not yet a problem. A low-level API like DX12 can make CF/SLI a thing. I don’t expect it to happen within a year, but it’ll happen as the PC gaming industry gets more experience with DX12/Vulkan coding.

amilayajr

Just for fun, Joel, can you benchmark older cards? That would give owners of older cards a nice feel for how their cards perform in DX12. What do you think?

exjohn

So… Fable Legends uses the Nvidia-oriented DX12 feature level 12_1… and AMD is slower with its “old” global illumination algo… this is a gfx showcase and not an actual gameplay bench, and it showcases how well global illumination works hardware-accelerated vs. “emulated.” Still, the scores are so narrow it is hilarious. Nvidia and its BS marketing at its best.

Can’t wait for true PS4/XB1 ports to hit desktops and use async compute, a STANDARD DX12 feature btw (unlike this global illumination using ROVs & CR); Nvidia will be in a world of hurt. But don’t worry, they will have Pascal out and it will take care of it; it will cost you exactly $1K to upgrade last year’s Titan X to the new Titan to have the best performance.

godrilla

1440p is more popular among gamers with these GPUs than 4K and 720p combined. Please include that resolution in future benchmarks. Thanks.

Jigar

Since DX12 almost kills driver overhead, we are seeing increased vigor from GCN; that’s nice for all of us.

Benchen

Nvidia is in trouble.

dy

I knew it: AMD has always been the more future-proof brand.
