Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

Over the past few months, Nvidia has made a number of high-profile announcements regarding game development and new gaming technologies. One of the most significant is a new developer support program called GameWorks. The GameWorks program offers access to Nvidia’s CUDA development tools, GPU profiling software, and other developer resources. One of its features is a set of optimized libraries that developers can use to implement certain effects in-game. Unfortunately, these same libraries also tilt the performance landscape in Nvidia’s favor in a way that neither developers nor AMD can prevent.

Update (1/3/2014): According to Nvidia, developers can, under certain licensing circumstances, gain access to (and optimize) the GameWorks code, but cannot share that code with AMD for optimization purposes. While we apologize for the error, the net impact remains substantially identical. Game developers are not driver authors and much of the performance optimization for any given title is handled by rapid-fire beta driver releases from AMD or Nvidia in the weeks immediately following a title’s launch. When developers do patch GPU performance directly, it’s often after working with AMD or Nvidia to create the relevant code paths.

Understanding libraries

Simply put, a library is a collection of implemented behaviors. Libraries are not application-specific — they are designed to be called by multiple programs in order to simplify development. Instead of implementing a GPU feature five times in five different games, you can point all five titles at one library. Game engines like Unreal Engine 3 are typically capable of integrating with third-party libraries to ensure maximum compatibility and flexibility. Nvidia’s GameWorks contains libraries that tell the GPU how to render shadows, implement ambient occlusion, or illuminate objects.
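
To make the library idea concrete, here is a minimal sketch (in C++ for Windows, since these libraries ship as Windows DLLs) of how a game might call into a redistributable library at runtime. The DLL name and the exported function below are hypothetical stand-ins, not real GameWorks exports; LoadLibraryA and GetProcAddress are the standard Win32 mechanism.

```cpp
#include <windows.h>
#include <cstdio>

// Hypothetical export signature -- purely illustrative, not a real GameWorks API.
typedef int (*RenderAOFunc)(void* d3dContext, float radius, float bias);

int main() {
    // The same DLL can be redistributed with, and called by, any number of games.
    HMODULE lib = LoadLibraryA("GFSDK_SSAO.dll");  // hypothetical library name
    if (!lib) {
        std::printf("library not found\n");
        return 1;
    }
    auto renderAO = reinterpret_cast<RenderAOFunc>(
        GetProcAddress(lib, "RenderAmbientOcclusion"));  // hypothetical export
    if (renderAO) {
        // The caller never sees what happens inside this call: the
        // implementation -- the "black box" -- lives entirely in the DLL.
        renderAO(nullptr, 1.5f, 0.1f);
    }
    FreeLibrary(lib);
    return 0;
}
```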

In Nvidia’s GameWorks program, though, the libraries are effectively black boxes. Nvidia has clarified that developers can see the code under certain licensing restrictions, but they cannot share that code with AMD — which means AMD can’t tune its own drivers to run those functions optimally, or suggest changes to the developer that would improve the library’s performance on GCN hardware. This is fundamentally different from how most optimization is done today, where Nvidia and AMD might both work with a developer to optimize HLSL code for their respective products.
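
For contrast, here is a rough sketch of the traditional flow the article describes, in which the developer owns the HLSL source and either vendor can read it before it becomes GPU bytecode. The shader is a trivial placeholder; D3DCompile is the real D3D11-era compiler API. Under GameWorks, only something like the resulting blob ships.

```cpp
#include <d3dcompiler.h>   // link against d3dcompiler.lib
#include <cstdio>
#include <cstring>

int main() {
    // A trivial placeholder pixel shader. In the traditional workflow this
    // source lives with the developer, so AMD or Nvidia can read and tune it.
    const char* hlsl =
        "float4 main(float4 pos : SV_POSITION) : SV_TARGET"
        "{ return float4(1.0, 1.0, 1.0, 1.0); }";

    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(hlsl, std::strlen(hlsl), "example.hlsl",
                            nullptr, nullptr, "main", "ps_5_0",
                            0, 0, &bytecode, &errors);
    if (SUCCEEDED(hr)) {
        // A GameWorks-style library ships only something like this blob --
        // the compiled bytecode -- never the HLSL above.
        std::printf("compiled blob: %zu bytes\n", bytecode->GetBufferSize());
        bytecode->Release();
    } else if (errors) {
        std::printf("%s\n", static_cast<const char*>(errors->GetBufferPointer()));
        errors->Release();
    }
    return 0;
}
```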

Is GameWorks distorting performance in today’s shipping games?

To answer this question, I’ve spent several weeks testing Arkham Origins, Assassin’s Creed IV, and Splinter Cell: Blacklist. Blacklist appears to use GameWorks libraries only for its HBAO+ implementation, and early benchmarks of that title showed a distinct advantage for Nvidia hardware when running in that mode. Later driver updates and a massive set of game patches appear to have resolved these issues; the R9 290X is about 16% faster than the GTX 770 at Ultra detail with FXAA enabled. Assassin’s Creed IV is more difficult to test — its engine is hard-locked to 63 FPS — but it showed the R9 290X as 22% faster than the GTX 770, roughly on par with expectations.

Arkham Origins’ performance is substantially different.

Like its predecessor, Arkham City, it takes place in an open-world version of Gotham, is built on the Unreal 3 engine, and uses DirectX 11. Both games are TWIMTBP titles. I’ve played both games all the way through — many of the animations, attacks, and visual effects of Arkham City carry over to Arkham Origins. Because the two games are so similar, we’re going to start with a comparison of the two games side-by-side in their respective benchmarks; first in DX11, then in DX9.

Previous Arkham titles favored Nvidia, but never to this degree. In Arkham City, the R9 290X has a 24% advantage over the GTX 770 in DX11, and a 14% advantage in DX9. In Arkham Origins, they tie. Can this be traced directly back to GameWorks? Technically, no it can’t — all of our feature-specific tests showed the GTX 770 and the R9 290X taking near-identical performance hits with GameWorks features set to various detail levels. If DX11 Enhanced Ambient Occlusion costs the GTX 770 10% of its performance, it costs the R9 290X 10% of its performance as well.

The problem with that “no,” though, is twofold. First, because AMD can’t examine or optimize the shader code, there’s no way of knowing what performance could look like if it could. When neither the developer nor AMD ever has access to the shader code in the first place, that concern is valid. Arkham Origins may exact an equal performance hit from the GTX 770 and the R9 290X, but control of AMD’s performance in these features no longer rests with AMD’s driver team — it’s sitting with Nvidia.

There’s a second reason to be dubious of Arkham Origins: it pulls the same tricks with tessellation that Nvidia has been playing since Fermi launched. One of the differences between AMD and Nvidia hardware is that Nvidia has a stronger tessellation engine. In most games, this doesn’t matter, but Nvidia has periodically backed games and benchmarks that include huge amounts of tessellation to no discernible purpose. Arkham Origins is one such title.
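
For a sense of scale, here is a back-of-the-envelope sketch of why gratuitous tessellation is expensive: with integer partitioning, a quad patch tessellated at edge factor N yields on the order of 2*N^2 triangles, so the workload grows roughly with the square of the factor. The patch count below is a made-up illustration, not a measurement from Arkham Origins.

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    // Hypothetical scene: 10,000 tessellated patches. With integer
    // partitioning, a quad patch at edge factor N yields roughly 2*N*N
    // triangles, so cost grows with the square of the factor.
    const long long basePatches = 10000;
    for (int factor : {1, 8, 16, 64}) {
        long long tris = basePatches * 2LL * factor * factor;
        std::printf("tess factor %2d -> ~%lld triangles\n", factor, tris);
    }
    return 0;
}
```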

Comments

The article keeps saying that Nvidia is favored in these games, but your percentages and graphs all show the R9 290X outperforming the 770. Am I missing something?

Bruno Gonçalves

Yeah, the gap should be wider; the R9 290X is capable of outperforming even the 780.

Carlton Moore

The R9 290X is a higher-end card that typically performs about even with the GTX 780 (occasionally beating the GTX Titan), as reviews from Anandtech, Techpowerup, Guru3D, Tom’s Hardware and Bit-Tech have shown.

The GTX 770 outperformed the R9 280X, but both of those cards are below the GTX 780 and R9 290X performance tier. So this isn’t good news for AMD.

davidcianorris

I must say this is not good news for developers or end users either. I keep seeing the same “I win, everyone else loses” politics that made me move away from nVIDIA years ago, only now reinforced.

Joel Hruska

I answered this question by checking the GTX 770’s performance against the R9 290X in aggregate. I referenced multiple reviews and performance figures, including my own. Statistically, at 1920×1080, the R9 290X is 1.24x faster than the GTX 770 across a broad spectrum of tests.

There are exceptions to this. The gap between the two cards is smallest in games like BF4, where I believe it’s just 11%. (I’m on Christmas break and don’t have my PC in front of me). But they’re never just equal.

Video Game Chat

Still, why didn’t you put the GTX 780 to the test in this article?

Joel Hruska

Because I don’t have one.

Tito John

Nobody seems to have answered your question, which is the same one I have. The graphs show AMD outperforming Nvidia. Are the legends on the two graphs reversed, maybe?

Joel Hruska

The graphs show an AMD vs. NV matchup between two cards with a price delta of $200. That’s a $550 card versus a $350 NV card. And in Arkham City, the AMD card is about 20% faster.

In Arkham Origins, they tie, despite being based on the same engine and the same version of DirectX. That tie is an erroneous performance result.

HisDivineOrder

Instead of comparing unalike cards, why not just compare similar cards and then let us see the real comparison?

Why make us jump through hoops?

Joel Hruska

Because a $350 GPU tying a $550 GPU is a bigger story than a $350 GPU beating a $300 GPU. Every story is a balance of time and research.

Harley J.

wow, you’re dumb. Have you read the article, or did you just limit yourself to reading the graphs?

Hru Setya R. Soekardi

it’s not an apples-to-apples comparison… to be fair to AMD, it should be compared with the GTX 780, nvidia’s higher flagship… LOL…

Joel Hruska

If the GTX 770, a $330 card, ties the R9 290X, how is it unfair to show that comparison, given that the R9 290X is typically 25% faster? The point is that we see unusual performance here.

A GTX 780 might show a greater-than-expected NV advantage, but the results would be unusual to the same degree.

e92m3

Sorry you have to spend so much time explaining things like this :(

People need to read. I don’t think the other tech publications that have covered the GameWorks topic actually read what you wrote, or at least they do not fully understand what you said. That Forbes guy who backlinked you needs an unbiased analysis of how the situation is pretty much unchanged by Nvidia’s statements; you clearly have the more complete knowledge.

With the latest NV PR spin, I don’t feel that nvidia has told us anything you didn’t already know about gameworks.

It’s still bad, and I don’t really care for the whole ‘allow ubisoft to produce the next game even faster’ trend in the first place…

I’m not buying their stuff and I honestly don’t want to.

It’s just the principle of the thing…. This is not good for consumers.

ShaneMcGrath

Clearly you are; the 770 is a mid-range card and the R9 290X is high end. I sure as hell hope the 770 gets pounded by the 290X. And this is coming from an Nvidia user!
The 290X will beat a 780 also, but then the latest 780 Ti now beats the 290X.
Times are good for all; competition brings them all back into line!
I look forward to the 20nm series from both manufacturers.

andrewi

I’m pretty sure they meant to confuse us, or maybe they made mistakes themselves. Why else would you make an AMD card green and the Nvidia one red???

pelov lov

Is this really a wise decision on nVidia’s part? Given AMD’s advantage of providing hardware for the next-gen consoles, I reckon it may not be. One studio and a few games on the PC would be a small blip compared to the developer attention that the consoles receive.

Heath Parsons

Clearly you know nothing about PC gaming.

HisDivineOrder

AMD provided hardware for the 360, which was the lead platform for most multiplatform games last generation. It also happened to be the platform most PC games were ported from.

nVidia still won the lion’s share of the ports over to its developer program until the last year and a half, when AMD switched from focusing on pushing out new hardware to maximizing as much as possible with the same hardware for over two years.

Who makes the console hardware matters far less than the money a GPU maker is willing to invest in the developer/publisher. AMD is a company that is losing money hand over fist. They had to sell their headquarters, lay off huge swaths of people, and make the most money on console hardware they’re going to make in one quarter just to mostly break even.

They’re also about to fail to buy the proper number of chips from GloFo, which is going to lead to a charge for this coming quarter. They’re focused on the short term, trying to stay afloat, so don’t expect investments in developers to continue.

And since the hardware used by so-called “next-gen” consoles is antiquated even by the PC standards of last year or the year before, it should take very little time before hardware-specific optimizations mean very little at all with regard to porting.

The only thing current GPUs are going to have any trouble keeping up with is potential memory pressure, and the move to 3-4GB cards is going to fix that. Meanwhile, PC ports are going to continue to well outstrip their console versions in every way.

Yeah, console hardware means less this generation than ever before, especially if developers continue to develop for PC first and then port to the other two mostly as a matter of convenience.

e92m3

Ah, another clueless fool who bends reality to fit his own distortions.

They just increased their orders at GF considerably. The wafer agreement charge you’re mentioning was from ~2009, and the last portion of the required payments was paid last quarter.
Done. It’s over. Go look it up now.

Somehow you had all the relevant details completely mixed up until it fit your poorly-informed agenda of spreading ignorance like a disease.

Your lack of knowledge about the vastly superior compute performance per dollar offered by the latest consoles is obvious. I’m not going to write a whole bunch of paragraphs trying to teach you; your ignorance is palpable.

Kids, if you don’t know what the hell you’re talking about, just be silent.

Gdom

I wouldn’t call that unfair. Nvidia spent the time to develop their own libraries. AMD could do the same. If they don’t want to, then that’s the price that they have to pay. And then you have to factor in the fact that AMD is in every new major console, so games will get optimized for their platform. The Intel scumbags though… That just hurts everyone.

Joel Hruska

Gdom,

If the only goal was to provide optimized code paths for NV hardware, NV could provide open-source libraries to do it. Closed libraries lock off control and prevent developers from doing their own optimization. It’s possible that some developers have insight into how the libraries get written (I can’t speak to that), but this doesn’t help the wider gaming world. While AMD could theoretically create its own closed-source libraries, that’s going to represent a substantial amount of work for both the developer and AMD.

Chris Wraith

So because AMD would have to actually put some effort and time into supporting their customers like Nvidia does, that’s wrong of Nvidia? Because it makes AMD look bad? Why should Nvidia let their main competitor gain from their hard work? This is a business first and foremost, and competitors in business don’t help each other. Mercedes doesn’t help BMW by giving away its technology and research for free; they don’t allow BMW to simply copy and implement Mercedes-specific features, and so on. To expect Nvidia to do so is illogical.

Joel Hruska

This has nothing to do with customer support. NV has created a system in which developers have no insight into their own code base. That’s vendor lock-in.

Chris Wraith

It has everything to do with customer support. It’s also nothing at all to do with vendor lock-in; Nvidia is locking no one out of anything, they just aren’t doing AMD’s work for them. If Nvidia are responsible for writing particular shaders/code that runs well on their architecture, and developers are happy for them to do that, then there is no problem. All AMD has to do is pull their finger out and work with the same devs to get their own optimisations in. As I said, people don’t complain when other companies refuse to share their hard work and research with their main competitor.

Joel Hruska

“Nvidia is locking no one out of anything, they just aren’t doing AMDs work for them,”

In the past, developers could share code with AMD and NV. Code could then be optimized for both solutions.

Gameworks code cannot be shared with AMD for optimization to the best of my knowledge. That’s a fundamental difference. In the past, AMD and NV did their own optimizations working with the developer.

Let me try to give you an analogy. When AMD and NV build a GPU, they build a reference card. That reference design is available to all the third-party board partners. MSI, Asus, Visiontek — they all get to use that design. Some vendors choose to build their own designs as well, for really high-end boards. That’s fine. But they aren’t *required* to do that to bring a product to market.

Imagine that AMD built a reference design and then said: “Ok, we’re only selling this design to Asus. MSI, we’ll sell you chips, but you have to do your own design from scratch.”

This would be a distinct change from the status quo — and it would represent a significant challenge for MSI, which is now forced to do a great deal of additional work that other companies aren’t required to do. Since non-reference designs take longer than reference cards, AMD would be preferentially giving market share to its preferred customers, and locking others out.

It wouldn’t be accurate to claim that AMD “just doesn’t want to do MSI’s work for them.” AMD would have changed the shape of the market and the ability of one company to quickly launch a product.

SAIPRASAD P M

What I understood from this article is that GameWorks-enabled games are not running optimally on AMD hardware, because AMD doesn’t know how the thing works, the libraries being inaccessible. If AMD did the same to NVIDIA, you would be complaining as well. I may be wrong, but from your post it looks like a developer has to create two different versions of the same game, one optimised for Nvidia and the other for AMD. Games are there for everyone to enjoy; no one should get preferential treatment.

J. Andrew Lanz-O’Brien

Just what the computing world needs: more closed code. :/

john

I say PhysX all over again… nothing to see here, it will be adopted by a few games then rapidly dropped in favor of something with wider support… it is just one developer… ubi was always nvidia’s little bitch so nothing new here…

Elijah Daugherty

Physx was offered to AMD a few years ago, but they turned it down.
I foresee the same thing happening regarding Mantle. Just because it’s offered to a company doesn’t mean they will accept it.
These companies wouldn’t exist without competition. Accepting other people’s tech isn’t very competitive.

Aurel

That is what NV fans are saying, but things were not like that.
Ever since NV announced PhysX the big question was “Will AMD adopt this technology?”
At first it was a question both AMD and NV tried to ignore, but at a certain point the people from AMD had to give an answer. And they said NO. At that point Nvidia also came out and said they would give PhysX to AMD, but AMD didn’t want it. Nvidia did not directly offer PhysX to AMD; it was just a media game.
And how would PhysX work with AMD when it’s tied to CUDA?? AMD would also have to license CUDA. Things are not as simple as “PhysX was offered to AMD”.

Mantle is practically open, and Nvidia can decide whether to support it or not. But Mantle doesn’t need NV’s support, because it brings bigger advantages than PhysX ever did. For example, it allows developers to more easily target laptops for their games. I bet future Kaveri laptops will be able to run Mantle games without any problems at near-maximum settings. And I’m talking about budget laptops.

PhysX has been out there for, what, 7 years now, and it’s still irrelevant. And now, with Mantle and the next-gen consoles, it seems a lost cause.

Elijah Daugherty

Who’s to say the offering of Mantle isn’t a media game also?
No one knows. They say it wasn’t designed with a certain architecture in mind, but we’re not engineers, so it is really difficult to tell whether it would be so easy to support Mantle.

‘We are committed to an open PhysX platform that encourages innovation and participation,’ and added that Nvidia would be ‘open to talking with any GPU vendor about support for their ARCHITECTURE.’

Consoles already have their own APIs. That argument is moot.

nvidiasucks

That article mentions “open” so many times I lost count. Tell me one thing open about Nvidia. Just one? Good luck with that one.

The bit at the end is speculation and guess at best. Nothing has been offered from Nvidia to AMD and vice versa to this day.

John Mellinger

Nvidia has brought more to the table for PC gaming than AMD has ever dreamed of. And GameWorks is even more proof… WTF has AMD done!??! Hair FX and Mantle, which isn’t even out yet because they picked the worst publisher to deal with… EA.

http://www.hikingmike.com/ hikingmike

I don’t think Gameworks helps your case. Did you read the article?

Elijah Daugherty

Well.
It sounded like they wanted to make physx open.

Also, your name doesn’t help your argument.

Elijah Daugherty

You didn’t read the article.

Also, your name doesn’t help you in arguments about AMD vs. Nvidia.

George

What a bunch of AMD fanbois.
It’s been known for years that Nvidia offered PhysX to ATI but they refused.

The PC gaming community is sad.
Who the fuck gives a shit about a company?
My telargo vacuum cleaner is better than your ikea.
Stupid bull.

Stacey Bright

There was once a time when you could run PhysX on an Nvidia card while your main card was an AMD one. Then later revisions of Nvidia’s drivers would disable that ability if they detected the use of an AMD card. That should be a big clue to Nvidia’s true intentions with PhysX.

EtaYorius

AMD did not accept because PhysX was not an OPEN STANDARD. nVidia offered PhysX… yes, but they wanted A HELL OF A LOT OF CASH for sharing their locked PhysX code.

AMD will support anything that is an open standard; PhysX was not, and therefore they won’t.

djnforce9

I wish AMD had accepted PhysX because the few games that utilized it produced some fantastic looking effects. Look no further than Assassin’s Creed 4, Borderlands 2, and the Batman Arkham series. It would have been used a LOT more if it worked on both card brands. I am hoping nVidia does not make the same mistake as AMD with PhysX should THEY be offered full Mantle API support.

e92m3

Tell us more about how you implement GPU physx without CUDA, and how much was that CUDA license?

You have the incomplete nvidia version of events.

John Mellinger

At least Nvidia’s tools give PC gamers a true edge in the gaming market… And don’t bash on Ubi; at least with NVIDIA they bring out the performance, and they are a way better partner than AMD’s, cough cough… DICE

Phobos

If Nvidia takes that route then I think AMD should do the same with Mantle.

john

That would be stupid… in an ego battle, the third party invariably wins. If amd opens the sources on mantle, makes it totally open, and it is as advertised, it will crush GameWorks and push it into obsolescence really fast, because mantle would have much wider adoption. Otherwise it would become another PhysX-like nvidia badge nobody cares about… in fact amd has little choice but to make mantle open source if it wants it to succeed…

Joel Hruska

It might be tempting to link them, but this isn’t about Mantle vs. Gameworks. A game that supports Mantle does not penalize DX11 on any other solution. Nvidia retains full control over their own DX11 performance and can optimize the title in all the usual ways.

john

And now let’s remember amd has reverse-engineered silicon… decompiling some libraries and optimizing them for their hardware should be child’s play compared to that…

Joel Hruska

You can’t decompile an encrypted DLL in that fashion. Not simply, easily, or quickly. And even if you do, you still aren’t getting source code that you’d need for optimization.

Nvidia writes the HLSL and compiles it into a blob. You can maybe break the encryption of the blob, but you can’t turn the blob back into HLSL.
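
As a rough illustration of that point, assuming you already had the bytecode in hand: D3DDisassemble (a real d3dcompiler API) is about the best recovery tool available, and it yields DXBC assembly, not the original HLSL.

```cpp
#include <d3dcompiler.h>   // link against d3dcompiler.lib
#include <cstdio>
#include <cstring>

int main() {
    // Stand-in for a blob extracted from a black-box library: here we just
    // compile a trivial shader ourselves so the example is self-contained.
    const char* hlsl = "float4 main() : SV_TARGET { return 0; }";
    ID3DBlob* blob = nullptr;
    if (FAILED(D3DCompile(hlsl, std::strlen(hlsl), nullptr, nullptr, nullptr,
                          "main", "ps_5_0", 0, 0, &blob, nullptr)))
        return 1;

    // Given only bytecode, the best you can recover is DXBC assembly
    // (dcl_*, mov, mad, ...) -- never the original HLSL source.
    ID3DBlob* asmText = nullptr;
    if (SUCCEEDED(D3DDisassemble(blob->GetBufferPointer(), blob->GetBufferSize(),
                                 0, nullptr, &asmText))) {
        std::printf("%s\n", static_cast<const char*>(asmText->GetBufferPointer()));
        asmText->Release();
    }
    blob->Release();
    return 0;
}
```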

john

You’re kind of right, but you don’t necessarily need to do that… after all, those things have to reach the GPU eventually; true, it’s still compiled code, but anybody with decent assembly experience will be able to read and interpret it. I’m not saying it would be easy, just saying that if amd were poised to do it, they definitely have the resources and know-how to do so… plus we’re talking about shaders, not THAT much code, considering that most shaders are below 100 lines of code

Joel Hruska

There are shaders in certain Gameworks titles that are over 6000 lines long. I can’t say more than that.

Edited to add: You may be able to read a decompiled library, but you still have to write your own implementation of the entire function. Some of the Gameworks libraries are 18-25MB. That’s a lot of code — and keep in mind, you’re doing that for every single GameWorks title.

john

Yes, there usually are a few big ones, but even 6k lines is not THAT much considering that entire software stacks have been decompiled before… I’m just saying it’s not impossible, and amd is among the ones that can do it if need be.

john

Well, at a certain point things are bound to repeat, so once you’ve decompiled 25% you’re actually some 90% of the way through the work.

My point here was that amd has options here, as opposed to the intel case. Here they can look at the tricks done by nvidia and offer something similar… maybe they don’t even need to decompile, just offer a competing stack of functionality and be done with it. With intel, amd really had no option other than suing them or beating intel’s compilers in both performance and adoption, which wasn’t a few months’ work like it would be here.

Peng Tuck Kwok

I don’t think it will be terribly difficult for AMD engineers to figure out what Nvidia is doing. They could probably just watch and observe what calls are being made while the game code is running on an Nvidia card. No need to decompile anything (that usually leads to a jumble of nonsense :D)

Joel Hruska

Peng Tuck,

They can get some very limited information from this but cannot see the shader code directly, and therefore cannot optimize the driver.

http://www.hikingmike.com/ hikingmike

Well, hopefully AMD is gifted a leak then. Fight fire with fire.

Any chance they can reduce triangles in places where they don’t matter much? Like if the triangles are basically part of the same plane? Cut some of the penalty that GameWorks is applying.

Joel Hruska

This is what happens when you manually set tessellation levels in the Catalyst Control Panel.

http://www.hikingmike.com/ hikingmike

Cool, then they may need to update Catalyst to do this adjustment automatically for some Gameworks games. Bam, better performance woo, buy AMD cards. What’s that Nvidia? You say they cheated and just reduced polygons. Welllllll let’s tell the story.

Phobos

I just think NVidia might have a point; what’s so different about AMD and NVidia when they perform very similarly? If each one has unique features, I do believe that will attract more people, because right now I hardly see enough differentiation. Unless I’m missing something, by all means clarify.

Joel Hruska

Phobos,

Here’s what you’re missing: If you write a game “the normal way,” partnering with AMD or NV means that one company has better, more optimized drivers ready for launch. Nothing prevents the other company from optimizing drivers post-launch. So in the long run, games get optimized on both platforms.

Optimized through GameWorks, games are never optimized for AMD at all. That’s a fundamental change from how we used to do things. Instead of working with a developer to add support for specific NV functions, Gameworks actively works against the implementation of any AMD-specific functions.

Nvidia can optimize their drivers. AMD can’t. That’s not an “Nvidia advantage” like PhysX, or TXAA, or G-Sync.

Phobos

Ah ok, makes sense now when you put it that way. If that is the case, I don’t think developers will jump in; that would only limit their audience even more.

Elijah Daugherty

What do you think mantle is then?
I don’t see how nvidia are the bad guys when AMD are trying to get the performance crown in other ways than just making new cards.

Joel Hruska

Mantle: Optimized for AMD. Does not prevent Nvidia from optimizing its drivers for DX11 games.

GameWorks: Optimized for Nvidia. Prevents AMD from optimizing its drivers for DX11 games.

If you do not understand how these things are different when it’s broken down in that fashion, I do not know how to explain it to you. GameWorks prevents AMD from ensuring games run well on AMD hardware. Mantle does NOT prevent NV from optimizing games for NV hardware.

Elijah Daugherty

I’m saying both companies need some way to be competitive.
Otherwise, they wouldn’t be here. Mantle unevens the playing field and inspires proprietary tech. Just like physx.

Joel Hruska

Proprietary technology, like PhysX, isn’t automatically bad. Let me try to give you an example of what I see as the distinction.

PhysX was definitely an NV-only feature. Some people really liked it. It made some games look a good deal better. But PhysX didn’t change anything about DX11 optimization for AMD.

In the DX10 era, AMD’s HD 4000 cards had a built-in tessellation unit. No games took advantage of it — but if they had, and if that had made the games look better, this would not have been unfair to Nvidia. The GPU API playing field remained level.

Yes, AMD has Mantle, and yes, AMD is forecasting really great gaming performance increases from *using* Mantle. Will those improvements materialize? Unknown. Will many games use Mantle? Unknown. Does Mantle prevent DX11 optimization? No.

Nvidia’s ability to improve the performance of their product is not hampered or diluted by Mantle. It may be that they have a weaker performance position *because* of Mantle, but that’s not the same thing as removing their ability to optimize.

Elijah Daugherty

I can agree with that.
I just feel that these technologies make it tough on the consumer.
Take star citizen for example. Has both physx and mantle. I can’t truly play star citizen the way it was meant to be played without getting both brands.

Joel Hruska

Depending on the PhysX implementation, it may run on the CPU. Alternately, I know there’s a crowd of people that focus on using a combined AMD/NV solution, though NV isn’t fond of this and support may have dried up. I’m not certain.

Reckoning

So what you’re saying is that you want them to give up their competitive advantage? How is this any different than every other manufacturer on earth for everything? They all spend money on their R&D, they all want to have features that set them apart from the other products in the same category. It’s hilarious that you could even think to side with AMD on this.

Should Nvidia share their whole architecture with AMD? Should Intel have to give AMD all their tech so they can play catch-up? This is a failure on AMD’s part to compete, not Nvidia being unfair, and it’s disgusting that everyone is too busy coddling AMD to see this. If they can’t compete properly then they’ll fail and go bankrupt. That’s how the market works.

Joel Hruska

“How is this any different than every other manufacturer on earth for everything? “

Call me when you have to buy Ford gas for a Ford car in order to get optimum performance, when your Philips LED lights only work at full potential in a Philips branded light, when you have to use manufacturer-branded parts as a replacement for any product with no recourse to third parties, ever, or when Microsoft is allowed to mandate that only programs compiled with its own Visual Studio will run on a Windows platform.

I do not want Nvidia to give up its competitive advantage. I want Nvidia to create competitive advantages for itself that give it an edge without harming other companies. The distinction between “We build the best product on Earth” and “We design products that purposefully handicap our competition” is not a small one.

This is why I have no problem with PhysX, CUDA, or G-Sync, three NV programs that were designed to give it a competitive advantage but do not harm the function of AMD or Intel chips in any fashion.

Goran Petrevski

You know that your name is the same as D’s standard library?

massau

mantle replaces the OpenGL/DX API; it is less CPU-bound and better threaded, which shifts the bottleneck more to the GPU than the CPU. Nvidia can support mantle because it is just an open standard.

These libraries are precompiled binary blobs (DLLs), which makes them really hard to optimize for, because AMD cannot change parameters in the source to make them run as well as they do on Nvidia.

Pat D.

The problem is, apparently, that Mantle HEAVILY favors the GCN architecture, meaning Nvidia would have to retool their hardware to use it.

It’s technically “open, but only if you use our architecture”, LOL.

Joel Hruska

Pat D,

Do you have a source for that? I suspect it’s true, but haven’t seen confirmation.

Pat D.

Just various unconfirmed sources around the web…theoretically, Nvidia *could* adapt it to their hardware, but one would think if AMD went out of their way to usurp DirectX completely that Mantle would provide a significant boost to their architecture.

Otherwise, why wouldn’t MS make something very similar in DirectX 12 (and thereby make Mantle irrelevant)?

Joel Hruska

Everything I know points to Mantle being very similar to the Xbox’s DX implementation. I’m assuming it’s much like HSA, where the Xbox One and Sony’s PS4 clearly implement some of that functionality but don’t claim to be HSA-compatible.

However it’s absolutely possible that MS tuned the API to make it as GCN-friendly as possible.

Pat D.

Agreed… I don’t see why they wouldn’t, as the video chip is highly unlikely to ever change throughout the console’s lifetime.

Joel Hruska

The fundamental difference between Mantle and GW, to the best of my knowledge, is this: mantle does not hurt NVs ability to optimize games in DX11. Developers who agree to use Mantle can still optimize for NV. There are no new hurdles.

GW creates near-impossible hurdles for AMD. I seriously doubt a GW title can support Mantle without developers committing to enormous additional work.

Ken Luskin

Joel, You really need to update your understanding of HSA and Mantle… this is really getting embarrassing.

HSA is for APUs. HSA creates standards for how different types of chips are integrated to work better together on the same die.

HSA is implemented in APU chip architecture.

The first innovations that are in the NEW KAVERI chip are:

1) Unified memory of CPU and GPU.
2) heterogeneous queuing of orders between the CPU and GPU

Both of these innovations make APUs more efficient.

Mantle is an API!
It is NOT a chip architecture

The Kaveri chip has GCN architecture, and support for Mantle.

Because Mantle reduces the need for a more powerful CPU, the Kaveri chip will perform much better than other APUs that do NOT support Mantle.

The Kaveri chip is the first major step towards the elimination of low end discrete GPUs.

HSA innovations will destroy Nvidia low end discrete GPU business.

Meanwhile, AMD’s new high-end GPUs are a much better value than Nvidia’s high-end GPUs.

AND.. once ALL the new games are using Mantle the value/performance of AMD’s GPUs will greatly exceed Nvidia’s lineup.

Wall st is CLUELESS about this situation!

If a so-called “techie” who writes for ExtremeTech does NOT understand, it makes sense that few on Wall St understand this situation.

Joel Hruska

I did not say HSA and Mantle were identical or equivalent. I said that the degree of technological implementation between the consoles and the PC was likely similar. Specifically:

The Xbox One and PS4 both appear to implement HSA-like functionality, but do not call their capabilities “HSA.”

The Xbox One is rumored to use an API that’s very similar to Mantle, but Microsoft does not call it “Mantle.”

That’s all.

Ken Luskin

Since Almost ALL the NEW games are being optimized for AMD, this article does not make any real sense to me.

Instead of confusing people by discussing this nonsense from Nvidia, you might want to try to explain why AMD will be dominating the gaming chip market.

The BIGGEST thing that is happening in gaming are:

1) The NEW Consoles

2) Mantle

3) HSA implementations in Kaveri

The combination of #2 and #3 is what you should be thinking about for the present and the future.

Review the Oxide video about Mantle.

They “down clocked” the CPU from 4GHz to 2GHz, and there was NO degradation in performance.

That is HUGE!

It means that people do NOT need super powerful Intel CPUs!

Nvidia is acting like a cornered animal.. thrashing about and sucking you into writing articles that are irrelevant to the biggest trends in gaming.

The future differentiating factor in computing will be GRAPHICALLY oriented functionality.

Recognizing and analyzing the real world is a task that is more efficiently handled by the GPU.

AMD and its large and powerful partners in the HSA are producing standards that will increase the use and efficiency of the GPU in SOCs.

That is completely wrong. What makes Mantle so good is that it multithreads the GPU completely, gives developers fine-grained control over how resources are used, and eliminates the checks and rechecks of Windows and DX.
It can be implemented with CUDA cores the same way it is used with GCN. The performance benefits of Mantle are not tied to GCN in any way. The only reason it is not coming to NV in the beginning is that AMD is not going to do NV’s dirty work. If NV wants it, they have to write their own front end. In the end, the benefits of Mantle for NV far outweigh the cost and effort of coming up with their own proprietary solution.
Another thing: the devs that support Mantle are well educated about it and will likely tell NV to make a Mantle driver. Developers are sick of NV’s bullshit.

Pat D.

Completely disagree. If it was as easy as you say, you can bet MS would make it a part of DX 12.0. It simply does not make sense for AMD to spend the R&D on an API that doesn’t favor their architecture, especially if it was so simple that MS could implement it (and thus save AMD the development/support costs) in DirectX.

If Mantle was architecturally agnostic, there would be no need for it to exist—there would be absolutely no reason for Microsoft to not develop an equivalent in Direct3D.

UNLESS…. being more low-level, it would significantly degrade the stability of Windows in the case of bugs, in which case a higher-level abstraction layer like DirectX would be a wiser choice. Sort of like the old DirectSound deprecation from XP to Vista.

Mike H

You are arguing with the assumption that Microsoft has a vested interest in making an API for AMD and Nvidia. I’m pretty sure MS would never spend that much money on something that could ultimately end windows.

As for why NV never created something quite like Mantle: who knows. NV has always been tied to MS financially in the form of chipset licenses, and creating an API designed to run on any OS would be kind of like biting the hand that feeds it. That was years ago, but the mindset could have persisted.

IMO NV was pretty content with its TWIMTBP/PhysX/CUDA advantages that sit on top of DX. Mantle completely bypasses DX entirely.

AMD was the only player in a position to influence so many developers because of complete console dominance. Mantle shares a huge amount of similarities to the PS4 low-level API, and the Xbox One API as well. I’m not talking about code per se but the balancing act of optimizing game engines C2M (close to metal). Once optimizations are done for, let’s say, the PS4, it is a lot easier to port to PC-Mantle/Xbox One, and vice versa, because of the common architecture.

Ken Yap

I believe the advantage of NV developing their own API isn’t as great as it is for AMD, simply because they don’t have a CPU line to worry about. By making games less CPU-bound with Mantle, CPUs will be less relevant in benchmarking games, thus holding off intel’s advantage in gaming. And given that GPUs are getting integrated into CPUs in the form of APUs, Mantle could also improve performance there.

Ken Luskin

>>>”By making games less CPU-bound with Mantle, CPUs will be less relevant in benchmarking games, thus holding off intel’s advantage in gaming”<<<

In the Mantle demo by Oxide, they “down clock” the CPU from 4GHz to 2GHz, and there is NO degradation….

This is HUGE!!!

Mantle completely eliminates the need for a super powerful CPU in gaming machines.

Ken Luskin

>>>”Mantle completely bypasses DX entirely.”<<<

>>>”AMD was the only player in a position to influence so many developers because of complete console dominance. Mantle shares a huge amount of similarities to the PS4 low-level API, and the Xbox One API as well. I’m not talking about code per se but the balancing act of optimizing game engines C2M (close to metal). Once optimizations are done for, let’s say, the PS4, it is a lot easier to port to PC-Mantle/Xbox One, and vice versa, because of the common architecture.”<<<

EXACTLY!

It will be difficult and take time for Nvidia to "support" Mantle, in my opinion.

It is doubtful that Nvidia will be able to support Mantle until 2015.

Phobos

That was one of my suspicions when they were talking about Mantle, especially when one of AMD’s representatives said that developers were asking AMD for it.

Pat D.

Considering how much money MS has invested in DirectX, you can bet they DO have a vested interest in making sure it is the API of choice for developers. If MS hadn’t thrown all that money into DX development and SDKs, there would be more than one (IIRC) triple-A title using OpenGL in the past couple of years (Rage).

OpenGL is the ideal—if MS didn’t care about keeping DX the prime API, it could be the standard, as it is portable to just about any gaming-worthy platform you can think of. But the MS cash infusion keeps DX on top. And thus, you can bet that MS has a vested interest in keeping it that way. If Mantle is as much of a performance boost as we think it is, suddenly Redmond has a problem on their hands.

Russell Collins

Pat, Microsoft has already admitted that they’ve been neglecting DirectX for some time now. OpenGL had tessellation way before MS decided to implement it in DirectX.

Moreover, you’ve drawn many erroneous conclusions, and I’m not sure how you arrived at them. Part of the reason AMD has been developing a low-level API is for their CPUs/APUs. AMD has great multi-threaded processing speeds when on an even plane with Intel (meaning no Intel compilers are being used); however, games especially have terrible multi-threading support, which makes AMD look especially bad in benchmarks.

I’d highly recommend you watch the video of Oxide showing off their 64-bit engine. They go over all the reasons why Mantle will rock your socks.

You are completely missing my point—the only thing I am trying to say here is that Mantle is most likely highly beneficial to the GCN architecture. This is not in doubt. The question is whether it can be anywhere near as beneficial to the Nvidia Kepler architecture, or whatever their new chips will be. If this API is completely vendor-agnostic, don’t you think that MS, with its desire to keep a stranglehold with its DirectX API, would implement such features in DirectX?

I agree on OpenGL—and that also proves my point: the only reason that OGL isn’t the standard for gaming, given its universality, is MS spending boatloads of cash funding easy-to-use DirectX SDKs for developers, and (mostly) keeping it modern. Thus, why would MS NOT implement Mantle-style performance improvements if, as you say, it can be done for any vendor’s GPU? Suddenly, nobody would want to use Direct3D anymore, and MS’s near-monopoly and billions of dollars invested would go down the drain.

And if it IS easy to implement for any GPU, Mantle doesn’t have any purpose, because MS will implement it in a future DirectX.

Russell Collins

MS hasn’t done those things in my opinion for the same reason Vista, Windows 8 and ME were created. Because sometimes they make poor decisions.

Pat D.

And yet, Win 8/8.1 for all its *interface* faults, is at its core a more optimized, more efficient kernel. Which is along the lines of what Mantle provides over current DX.

Russell Collins

I’m glad we agree.

Pat D.

Sort of, I guess, although my point was that MS has shown at least *architecturally* they have gotten their act together since the bloated mess that was Vista.

Trying to draw a parallel between that and the possible next version of DirectX, showing that they would be open to streamlining technology (assuming that Mantle’s advantages ARE actually doable on all GPU architectures and not just GCN).

Seems pretty straightforward… unless MS wants to lose its stranglehold on the graphics API: if Mantle is really that much better, AND its advances are doable for all current GPUs, they would have to adopt these advantages in DirectX.

Russell Collins

I really have to find the source on that. It was fairly recently that MS admitted it had been neglecting DirectX for other ventures and vowed they’d be working on it more soon.

Massively oversimplifying things. Mantle is a very low-level API; anyone who’s worked with such a thing or understands the basic concept would know that you can’t have a truly low-level API that’s hardware-agnostic. I’m not saying it wouldn’t work on an NVidia GPU, but it’d likely need some form of translation layer (if it’s well made, it could still be quite a thin one, since GCN and CUDA aren’t miles apart). Due to the translation layer, there would be some minor performance hit compared to using an AMD card. Also, NVidia developed a kind of similar API in the past; no one wanted to use it.
Also, never think that the reason NVidia won’t use it is purely technical. NVidia has a great set of engineers and coders, and the benefit, even if very small, would definitely be cost-effective and fairly easy to obtain. The only real thing stopping them is the sub-par management and JHH’s ego.

Ken Luskin

>>>”Mantle is a very low-level API; anyone who’s worked with such a thing or understands the basic concept would know that you can’t have a truly low-level API that’s hardware-agnostic”<<<

>>>”The only real thing stopping them is the sub-par management and JHH’s ego.”<<<

I don’t think it’s “fairly easy to obtain”, but I do agree with the above statement about Nvidia management’s runaway hubris.

Another layer has to have some negative effects, even if they are small.

Nvidia will be FORCED to support Mantle after AMD's discrete GPU market share exceeds 50% in Q1 2014.

john

Well… mantle still has an underlying driver that does the GCN talk… what would be the problem for nvidia to adapt their DX driver to mantle, and ta-da, nvidia support… I would say about 1-2 months’ work for a driver team… with amd’s support, even less.

tgrech

Except Mantle doesn’t have an underlying driver that the GPU talks through. The whole point of MANTLE is that it skips the driver, and it does this by allowing the game engine to communicate directly with the GPU. This is why things like Crossfire are no longer driver-dependent on MANTLE (it has been renamed Crossfire Unleashed, I believe); developers now have the freedom to move away from AFR with multi-GPU setups if they wish.

A driver which is called the Mantle driver… because it’s part of the Mantle layer; at no point did I state I was referring to the Mantle API layer only in my post. The driver here is far from your traditional GPU driver, and even naming it a driver is so oversimplified and confused that it’s more of a marketing term than anything. I doubt there is much, if any, translation/interpretation going on in the Mantle driver, which is normally the main purpose of a driver. Instead it is likely just a light suite of software required to run games with the Mantle API, for several reasons such as verification and settings (of course different cards, even on the same architecture, will need to do things slightly differently), and possibly to give users a few more low-level options.

john

I do understand your point, and it may very well be valid; you may have a dummy driver that just exposes, in a library, all the functionality of the GPU without doing anything itself. In this case NVIDIA would have more to do, but given the syntax similarities to DX (from what I’ve seen so far it’s really just DX on steroids with better underlying support & functionality) they would just need to update it and widen the support. If it is not, and the driver does the compiling, then NVIDIA needs to replace the driver only. Either way it’s not that much work, as nvidia most probably has all that low-level functionality anyway, so exposing it and extending the compiler with mantle function calls would most likely just be a somewhat bigger driver update

Russell Collins

Wrong, Pat. They already talked about it in the Dice presentation at the AMD summit. It’s open source, so to speak, and does not rely on GCN. AMD said that originally, but later changed their minds and announced it during the aforementioned presentation.

Pat D.

Sorry, that doesn’t make any sense. If it were truly vendor-independent (and isn’t a massive stability issue, due to such “close to the metal” coding), such improvements would almost certainly be in a future DirectX, and thus AMD just wasted money for nothing, because if Mantle has no benefits over a future DirectX, nobody will use it.

john

Well, it might just be; as you can see in all the mantle info, it is a layer on top of the driver, so… mantle does not write in GCN language, it is a better API to the driver. It is basically what dx11 should have been.

Everybody talks of mantle as if it would write directly to GCN… it does not! The code still needs to get compiled, loaded, and run. Having the driver abstraction layer in may very well even be hw-agnostic… it most likely has a few requirements of the driver. These, however, can usually be implemented or otherwise emulated. So yeah, it’s only natural that mantle is optimized for the GCN driver… that doesn’t mean nvidia can’t offer the driver requirements of mantle with its own driver… it might even be that it actually already does; adapting data structures and function calls is not child’s play, but it’s not rocket science either….

Pat D.

And if all of that is true, there is no reason why MS can’t and won’t include its special optimization features in future DX revisions, and thus there is no reason for Mantle to exist.

Therefore, for Mantle to make any sense, it either (a) goes so close to the hardware that MS believes it seriously threatens Windows stability, and thus would never make a DX version with its ideas, and/or (b) despite the iffy statements by AMD, it really DOES give a significant boost to GCN cards due to architecture-specific optimizations.

I can’t think of any other reason for Mantle being developed.

john

How about this one: MS has neglected DX for years to keep the Xbox competitive? As I see it, mantle is what dx11 should have been… and dx12 might still be, if MS picks up the ball it dropped with the last versions… I doubt mantle is that close to the metal; game engines wouldn’t jump aboard something like this for just 20-30% more fps… hell, not even for double… a several-factor improvement would possibly convince them, or another huge motive.

The way I suppose it is, it will be adaptable to any modern GPU, probably not pre-GCN or pre-CUDA. Nvidia will have to expose the same kind of functionality. If it does, MS will have to decide either to launch a new DX to include mantle functionality and transparency on compatible hardware, or to drop DX in favor of an open standard… given MS’s track record, I’d say option 1.

Pat D.

“How about this one: ms has neglected dx for years to keep the xbox competitive?”

Again, that makes no sense. If MS purposely seriously neglected DirectX development to allow the 360 to “keep up” with monster gaming PCs, then developers might start looking at OpenGL or another alternative if such a HUGE performance boost was available.

Master OpenGL coder and supporter Carmack used OpenGL instead of DirectX 10/11 for Rage, and while the game did look nice, IMHO it didn’t look nearly as impressive as Crysis, a DX10 game released three years prior.

Ken Luskin

Almost ALL the NEW games are being OPTIMIZED for AMD, because AMD is in ALL the major Consoles!

ALMOST ALL the major engines are adopting MANTLE for their NEW games.

These games will run significantly better on newer AMD GPUs and Kaveri APU.

What Nvidia is doing is going to backfire in their face in a big way!

AMD is EMPOWERING developers, while Nvidia is trying to abuse them.

Developers are adopting MANTLE because it allows them better and direct access to the GPU.

The APIs on the consoles are very similar.

AMD created MANTLE so as to give developers a similar API for PC games.

The R9 290X is already outperforming Nvidia’s 770…

Just wait until these games are supporting Mantle!!!

IKROWNI

Except for the fact that the console scumbags won’t use Mantle, because it would give the PC another leg up on the consoles, as always, once again making life harder on the devs.

Ken Luskin

Who are the “console scumbags”? The same developers who create games for consoles port them to PCs.

AMD created Mantle so developers could have a similar API for PCs, because a similar API is ALREADY available for the consoles.

It is NOT wise to let your lack of knowledge and an enemy-based mindset rule your thinking…..

IKROWNI

The console scumbags are Microsoft and Sony. They could have easily jumped on board with Mantle, making the devs’ jobs way, way easier. But they wouldn’t want it to be easy to create ports for another system. They are scumbags holding back development and making lives harder. Plain and simple.

Ken Luskin

It is relatively easy to port NEW games from the NEW consoles because they use PC architecture!

This whole enemy thing between consoles and PCs is really infantile.

Live and let live!

John Mellinger

yeah, and NVIDIA is working on getting games made first on PC and then ported to console… INSTEAD of the way it has been for DECADES… As for tech tools, Mantle is just a low-level API that has yet to PROVE itself. If you pay attention to what NVIDIA has been doing with game devs then you will notice that they are doing everything they can to make PC gaming stronger and more mainstream.

Russell Collins

You can’t use Mantle on the consoles. It doesn’t work that way. Besides, the APIs that exist for consoles are already very similar to Mantle, as they’re low-level and allow the developer to program directly for their specific hardware. Life isn’t harder for the devs due to different APIs; life is far easier due to the new consoles being built on x86 architecture.

Carlton Moore

The R9 290X outperforming the GTX 770 isn’t an accomplishment, since it was designed to compete against the higher-end GTX 780 & GTX Titan. I agree this will probably backfire, but I don’t see console development trends being much of an issue.

John Mellinger

You talk about consoles like they are going to be the saving grace of PC gamers… WTF dude, at least Nvidia is trying to get developers to make games on PC the best they can and then port to console, instead of the other way around like it has been for way too many years. Mantle has yet to prove itself, so please stop sucking it off, cause it has not proven jack yet. And the R9 280X is supposed to outperform a 770, you idiot… they are showing that because of how close the 770 is to it in the game… meaning the 780 Ti would crush the ever-loving hell out of it. Get a CLUE!

john

@joel… calling this a new “cripple AMD instruction” is a bit much, don’t you think?

Joel Hruska

John,

I did not use that phrasing, and stated only that the situation is similar. I agree with you that this is not as egregious, but the end impact is actually quite similar. In both cases, consumers who buy a product expecting a certain level of consistent performance are disappointed, with no insight into why.

Back when Intel was crippling AMD, the conclusion people drew was: “Well, AMD just can’t build consistently good chips,” or “AMD can’t get their code right.” Programmers tended to use the Intel compiler because it had a reputation for producing the fastest x86 code. The idea that Intel was deliberately crippling AMD’s functions didn’t occur to anyone.

Now, we have a situation where AMD’s performance cannot be optimized for these DX11 functions. Writing new libraries may be technically possible, just as it was technically possible for AMD to write its own compiler, but the costs are prohibitive. Again, AMD’s performance is resting in the hands of a company other than AMD.

The reason this situation isn’t as bad as Intel’s compilers is because AMD hasn’t paid Nvidia for the right to use GameWorks. Nevertheless, I believe it creates a similar impact. People look at DX11 or the poor performance of Crossfire in Arkham Origins, and they blame AMD’s drivers without realizing that AMD *cannot* optimize the drivers for those functions without access to libraries and support from the developer.
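
For readers unfamiliar with the Intel compiler episode referenced here, the disputed pattern looked roughly like the sketch below: dispatch keyed off the CPUID vendor string rather than the CPU’s actual feature flags. This is a simplified illustration (using the GCC/Clang __get_cpuid intrinsic), not Intel’s actual generated code.

```cpp
#include <cpuid.h>   // GCC/Clang x86 intrinsic header
#include <cstdio>
#include <cstring>

// Returns true only for CPUs whose CPUID vendor string is "GenuineIntel".
static bool IsGenuineIntel() {
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return false;
    char vendor[13];
    std::memcpy(vendor,     &ebx, 4);   // "Genu"
    std::memcpy(vendor + 4, &edx, 4);   // "ineI"
    std::memcpy(vendor + 8, &ecx, 4);   // "ntel"
    vendor[12] = '\0';
    return std::strcmp(vendor, "GenuineIntel") == 0;
}

int main() {
    // The contentious part: branching on the vendor string instead of the
    // CPUID feature bits, so a non-Intel CPU with identical SIMD support
    // still lands on the slow path.
    if (IsGenuineIntel())
        std::printf("fast vectorized code path\n");
    else
        std::printf("generic fallback path\n");
    return 0;
}
```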

john

I guess it’s high time to start open source project with shaders and ocl cores for any and all triks in the book…

Dozerman

That would be great to have. +1 for interest.

john

Btw… no idea where that idea was born, but back then I was using the Borland C compiler and it was a piece of art, really… many were saying back then that the only thing better than Borland was assembly. Many RTOSes were compiled with it; QNX springs to mind… when and where the Intel compiler became the norm is beyond me. I switched to Java somewhere around the '00s. To be frank, when I was first told about the "cripple AMD instruction" I was very skeptical, especially because I thought that very little software would be affected… boy, was I wrong…

massau

Damn it, who switched the colours on the graphs?

I was just looking at the graphs and thinking green would probably be Nvidia, orange/red AMD.

Dozerman

Hilarious.

simon

Nvidia are doing what they can to make AMD's Mantle look bad. Suck it, Nvidia; you had your day ripping consumers off, but that world is no longer here. Your proprietary software sucks. You don't even have the decency to make anything open source in order to benefit everyone, you money-grabbing a-holes.

Ken Luskin

Nvidia has LOST, and they will try anything, even a scorched-earth policy.

But, it won’t work!

Developers do NOT need Nvidia!!!!

Elijah Daugherty

And having an AMD-only monopoly is so much better.
You guys are geniuses.

John Mellinger

Sorry but Dev’s do need Nvidia.. So keep your ignorant AMD only love out of the concept of what NVIDIA has been doing for PC gaming. If you like hair fx then stick to AMD and go buy a Ps4/xbox1…

Shahnewaz Maqbul Ahmed

Yes, they needed Nvidia, and guess what Nvidia replied when devs came begging to break away from the horrible DirectX API: "GTFO!"
Microsoft had already shown devs the middle finger, and there was no love from Nvidia either. Then they went to AMD, and they got what they wanted: Mantle.

Dozerman

Looks like in another year, I’ll be buying based off of what games I want to play, not what company I like…

john

You should never buy based on what company you like… What kind of ridiculous criterion is that? It delivers what I need and want: buy. It doesn't: don't. Simple.

I hate Intel for what it did with the "marketing budgets" and the "cripple AMD instruction"… I hate Nvidia for Fermi and all the locked sources and marketing stunts it has pulled over the years. I hate AMD for Bulldozer and the hype around it, and for the good-enough approach… And the list goes on and on… yet I still buy Intel-powered laptops and servers, Nvidia workstation graphics, and AMD desktop APUs for pretty much anything not needing dedicated GPU grunt or high IPC.

A purchase decision has nothing to do with personal preferences, other than when everything else is really equal, which it never is given sufficient documentation…

havor

- you should never buy based on what company you like… What ridiculous criteria is that even? -

Actually, it's not ridiculous. Not wanting to buy Nvidia because of all its locked sources and marketing stunts is a good and legitimate reason to punish them for it.

And we as end consumers can only do that with our wallets; it's the main reason I also don't play Arkham.

It's also the reason I prefer AMD over Nvidia.

Dozerman

Well, aside from what you have already stated, I also chose AMD for improved GPGPU performance, lower prices, FOSS drivers under Linux, and because there is just something "romantic" (so to speak) about tiny little AMD taking on the behemoths that make up the rest of the tech industry.

And yes, it is perfectly acceptable to buy based on personal preference and liking a company "just because." There's a distinction between that and rampant fanboyism, which I think you are mistaking my OP for.

john

Well… AMD may be small compared to Intel ($50B vs. $6B), but compared to Nvidia at <$4B… well, AMD is slightly bigger :D

However, I do agree the distributed focus at AMD probably means that the entire AMD GPU department is about half the size of Nvidia's…

Plus, AMD laid off 15% of its workforce in 2013 if my memory serves, so the employee numbers should be closer to Nvidia's now.

zapper

Why not put all these damn libraries in silicon and be done with it forever? All you would need to do then is make a call.

Joel Hruska

You can’t implement a software library in hardware. Then you can’t patch it. You want to do that, you build an FPGA.

zapper

I am not such a low-level techie; all I can say is that if PC BIOSes and routers can be upgraded and patched, then why not GPU cards with a hardware implementation? Network routers run Linux, so why not create an external hardware device with everything included (Linux as the OS, all the low-level libraries, the graphics engine, and so on) so that this device, like a network router, can work with any supported game in the future? This would greatly reduce the development time, effort, and cost for game developers, who could concentrate more on their core activities.

Joel Hruska

The BIOS is a small block of memory that tells your computer how to turn on. It is utterly different from a graphics card. Implementing an arrangement like this externally would add unacceptable latency penalties.

john

This is kind of idiotic, you know… precompiled shaders and kernels are as close to silicon as you're ever going to get… Pulling them into silicon as CISC-like instructions is just stupid with current tech… plus you would need something like an x86-style standard and to cross-license it… and by then you're so deep in it you'll never get out… OpenCL/OpenGL offers a much higher abstraction layer, leaving independent vendors to implement drivers and compilers for their own architectures… and it's much better like this… Maybe in the future we will be able to have this level of hardware independence on CPUs too… not in the foreseeable future, that is… HSA and x86 macro-ops may pave the way to it, but there is still a long way to go…
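
To make the abstraction point concrete: with OpenCL, the host program ships kernel source, and whichever vendor driver is installed (AMD, Nvidia, Intel) compiles it for its own architecture at runtime. A minimal sketch against the standard OpenCL C API (error handling omitted for brevity):

```c
/* Host-side OpenCL sketch: the kernel source below is compiled at run time
 * by whichever vendor driver is installed, so each vendor's own compiler
 * gets to optimize it for its own architecture. */
#include <CL/cl.h>
#include <stdio.h>

static const char *kSource =
    "__kernel void scale(__global float *buf, float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] *= k;\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* The vendor driver's compiler runs here, not at application build time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    float data[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float k = 2.0f;
    clSetKernelArg(kern, 0, sizeof(buf), &buf);
    clSetKernelArg(kern, 1, sizeof(k), &k);

    size_t global = 4;
    clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("%f %f %f %f\n", data[0], data[1], data[2], data[3]);
    return 0;
}
```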

David Stanley

Oh, so in some games Nvidia has reduced the gap by 10%, whereas with Mantle games AMD adds to the gap by as much as 50%.
Sounds like marketing mumbo jumbo to stop people from running to AMD and Mantle.
Tune in to CES '14 and watch HSA and Mantle outperform the world. Be ready for a shock.

ArtGlu

They should have named it “Gameworks for Windows Live”

David Stanley

Mantle and HSA are so far ahead that Nvidia and Intel are crapping themselves.

john

Lol… you do realize both are working on pretty much the same tech… AMD is just first in consumer products and has gained a lot of traction…

Ken Luskin

Almost ALL the NEW AAA games are being OPTIMIZED for AMD!

Very few games are NOT first written for one or BOTH of the NEW consoles!

Nvidia has LOST!!!!

It's GAME OVER!

Mantle is the STAKE through the heart of Nvidia in PCs.

quasibaka

I like AMD too, but can we please stop the SHOUTING IN CAPS?
Your rhetoric is so strong that I can't figure out if you are a Poe ;D

Gregster

So nobody has thought that AMD are too busy to send out any devs to WB Montreal, since they are all working on getting Mantle running? Did ExtremeTech follow this up, or is this just a way to get site hits?
A non-story is a non-story.

Joel Hruska

WB Montreal refused game code updates and improvements when AMD attempted to contribute them. The actions of the studio, however, are secondary to the shape of the program and its impact.

Gregster

Not sure if you have seen the actual frame rates, but this game is massively playable on a 7870 at stock/1080p, so I don't see what the issue is. TWIMTBP titles having more fps on Nvidia than on AMD cards is to be expected in the majority of cases. I am not a fan of anything that is proprietary, but AMD owners are not missing out on anything except Batman smoke (something I keep getting told is rubbish) and a few fps.

I would love to see this followed up with a statement from WB Montreal, and until any clarification from them, I see this as a non-issue.

Joel Hruska

If you don’t understand how this is different from TWIMTBP, you didn’t read the article.

Gregster

You have released half a statement on AMD being locked out, and the code they sent for AMD optimizations was apparently refused. Now why was it refused? Did Nvidia instruct WB Montreal to refuse it, or was there something else? AMD have the capability of stripping down the GameWorks libraries and seeing what is what.

Conspiracy theories aplenty, but why have you not dug deeper?

Joel Hruska

WB Montreal refused to comment on the situation.

Gregster

I looked for poor performance in B:AO and, apart from the early teething problems, I couldn't find anyone unhappy. It seems that AMD optimized the game to run better on their hardware back in October: http://www.hardwarepal.com/amd-release-batman-arkham-origins-fix/ I still don't see what the issue is, and my point stands about it being playable on low-end AMD hardware. If Nvidia were purposefully crippling performance, this article would have been golden, but from what I see, they haven't. The snow looks fantastic to me, and because AMD can't handle tessellation so well, I would hate to see it dumbed down.

On a local forum, an AMD user has expressed his delight in B:AO running far better on his GPU than either of the previous incarnations. Nvidia are renowned for locking down their designs and the designs they have invested money into, but I would think that they have no desire to cripple performance on AMD hardware, and maybe WB Montreal had other reasons for declining the code from AMD?

I firmly believe that there will be a statement from WB Montreal clearing up what happened (for good or bad), and I feel this article has little substance.

Joel Hruska

And having given WBM a month to reply, sent multiple emails, and talked to AMD about the situation, I think any statement will be CYA.

But since you want more info.

When AMD contacted WBM in October and offered to contribute code to improve tessellation and multi-GPU scaling, they were given three days to do so. AMD sent the code for both fixes over and was subsequently informed that the code would not be included.

That was early November. WBM has gone radio silent since.

You could call that hearsay, and you’d be right. That’s why I don’t lean on it. I present two statements I can personally verify and a third I have no reason to distrust:

1) WBM did not return my emails.
2) WBM couldn't optimize the GW libraries even if it wanted to. (Meaning the greater issue exists and is problematic regardless of developer friendliness to AMD.)
3) AMD's ability to improve Crossfire or tessellation without WBM's assistance is limited.

Edited to add: I spent a month on this story. It's easily one of the longer efforts I've made. I investigated multiple titles and performed a great deal of performance testing to arrive at the conclusion that overt sabotage was not, in fact, occurring.

Gregster

Fair comments, and I appreciate your answers. I still find it strange that AMD can't optimize this at the driver level (like they did previously), and I feel WB would only damage themselves if they were to alienate AMD owners. If WB Montreal have no answer, I can see many AMD owners not buying future WB Montreal games. With Nvidia having over 300 developers, I can see how they would have sent a few of those devs to work alongside WB Montreal on Batman: AO, and it is common practice for AAA titles to get the same from both competitors.

I also remember AMD owners kicking up a stink when a 680 was hammering a 7970 in BF3 (pre-12.11 drivers) and the claims that Dice weren't allowing AMD near the code, which turned out to be untrue and was simply a case of AMD not bothering to send any devs to work with Dice. I can't help feeling the same thing is happening here, and that AMD are happy to see these kinds of articles.

I am sure it will all come out in the wash :)

ffsfsf

LOOOOOOOOL, are you seriously trying to suggest that Nvidia losing ALL CONSOLE HARDWARE SALES FOR THE NEXT 10 YEARS is somehow offset by a dev kit/toolkit pandering play? LOOOOOOOOOOOOOL, get fucked, you pathetic blogger.

Mike H

Um…that’s very mature.

allmabaconwashere

290X vs. 780 Ti would be more fair, or 290 vs. 780, something like that.

Silviu

GTX 780 all the way for hardcore gaming in 1080p, 24/7!!!!

waltc3

I’m not sure I understand this article…;) It seems as if you’re simply talking about a crummy, lazy game developer who doesn’t care about how his game performs on his customer’s hardware–or else you’re talking about a crummy, lazy game developer who takes bribes from IHVs. But, this article reinforced my decisions to pass on the Arkham games as they never much interested me from the start. So, thanks for that…;)

How the game runs on everyone’s hardware is completely the decision of the game developer, and imo he’s nuts if he turns down IHV optimizations from either IHV. Why should he? Unless he’s being paid a lot of money under the table, his real customers are the people buying his games–not the IHVs.

BTW, hey, Joel! Didn’t realize you wrote this at first!

Joel Hruska

What about IHV optimizations from Vendor 1 that lock out IHV optimizations from Vendor 2? The only way AMD can match this is if the developer agrees to work with them from day one to include AMD optimizations. By the time the game launches, it's too late.

Remember, it's publishers making this call more than developers. And that matters for the devs that aren't big enough to call their own shots.

Effm

You’re crying about some libraries when AMD is pushing Mantle, a completely different near-metal API?

NV is committed to PC gaming, and has been for years. It shows in everything they do: PhysX, adaptive V-Sync/G-Sync, ShadowPlay, Shield (with streaming). AMD has nothing like it. Quit taking a dump on Nvidia's head and start recognizing the companies that are really innovating in the PC gaming space. Intel and Nvidia aren't riding consoles and closed APIs to profits; they're cranking away in the PC gaming space, creating new technologies and advancing the state of our passion.

Joel Hruska

Supporting Mantle does not hurt DX11 performance on NV or AMD hardware. It does not prevent NV from optimizing for DX11.

GameWorks does prevent AMD from optimizing its own performance.

Pyrophosphate

Mantle is not “near-metal”. It has also not been shown to be GCN specific, beyond the fact that drivers currently only exist for GCN GPUs. It would be absolutely insane for AMD to lock themselves into GCN by pushing an API made specifically for it, and they couldn’t even PAY a developer to support something so stupid.

Conceptually, AMD and Nvidia’s GPUs are very similar to each other, and very different from how Direct3D works. Mantle throws out the rendering pipeline built into D3D and OpenGL and allows a developer to define their own pipeline, composed of some number of shaders.

If the basic data structures of the API are shaders (written in HLSL) and pipelines (composed of shaders), along with memory management and CPU-to-GPU communication functions, as is loosely described in the Oxide demo, any GPU made in the last 6 years should be able to efficiently run a Mantle driver. It is AMD’s intention that Mantle becomes an open standard, available anywhere. Given AMD’s track record of supporting open systems (HSA, OpenCL, VESA standards, etc.), I’d say that is almost guaranteed.
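
To make that shape concrete, here is a toy model of an API whose core objects are shaders and a pipeline composed of them. Everything in this sketch is hypothetical; Mantle's actual interface has not been published, and the shader stages here are just function pointers standing in for real GPU stages:

```c
/* Toy model (not real Mantle code): the application, not the API, decides
 * what the pipeline contains, assembling it from shader stages and then
 * pushing vertices through whatever it built. */
#include <stdio.h>

typedef float (*ShaderStage)(float input);

typedef struct {
    ShaderStage stages[4];
    int count;
} Pipeline;

/* Two toy "shaders". */
static float vertexStage(float v)   { return v * 2.0f; }  /* transform */
static float fragmentStage(float v) { return v + 0.5f; }  /* shade */

static void pipelineAdd(Pipeline *p, ShaderStage s) {
    p->stages[p->count++] = s;
}

static float pipelineRun(const Pipeline *p, float v) {
    for (int i = 0; i < p->count; i++)
        v = p->stages[i](v);
    return v;
}

int main(void) {
    Pipeline p = { .count = 0 };
    pipelineAdd(&p, vertexStage);
    pipelineAdd(&p, fragmentStage);

    float verts[3] = { 0.0f, 1.0f, 2.0f };
    for (int i = 0; i < 3; i++)
        printf("vertex %d -> %f\n", i, pipelineRun(&p, verts[i]));
    return 0;
}
```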

Nvidia’s software creations have almost exclusively been either useless gimmicks (PhysX) or ideas better served by open standards (gsync). Every single one of them has been a closed system with no benefit for the wider gaming community. Nvidia actively pushes CUDA over the overall superior OpenCL and PhysX over the OpenCL-based (open to anyone) Bullet library. Their GPUs do support OpenGL and OpenCL, but only because they would literally be laughed out of the professional market if they didn’t.

Brian Ornawka

This just seems so blatantly anticompetitive I can’t believe they can be allowed to do this. Even more reason for me to avoid NVIDIA.

nobodyspecial

I see this as no different from Mantle. NV can't use Mantle optimizations without making a GCN GPU (at least for now, maybe forever), so AMD working directly with devs (DICE, cough cough) to get Mantle used is the same thing. It's an optimization that NV can't make. NV is just responding to AMD's Mantle here. Until Mantle works on ANY GPU, NV can't optimize for it, and this leaves NV at a disadvantage for any Mantle game, right? I wouldn't expect NV to do anything but this GameWorks stuff. It doesn't matter whether I like either of them; they are the same. Programming anything for one means the other guy is screwed.

Mantle HURTS NV because they can't use the close-to-metal stuff that AMD can, right (at least without coming up with their own version of it, or making GCN GPUs, which will never happen… LOL)? How is that any different? A dev can ignore GameWorks just like they can Mantle, correct? Is NV forcing anyone to use this stuff? Supporting Mantle makes AMD run faster, right? NV gains nothing from that, right? Same with GameWorks.

The Montreal studio took NV's TWIMTBP money, so it had to pass on AMD's help, probably just as BF4 would pass on NV's help, since it's an AMD Mantle title. I pay you to make my stuff run better in game X; I don't expect you to go helping my enemy. That's why I paid you to begin with… LOL. No surprises here; it happens a few times a year on both sides (Gaming Evolved titles and TWIMTBP titles).

I don’t quite understand why as any hardware maker you wouldn’t push devs to use the features you do best (tessellation, physx etc). Whatever I dominate I will push. That’s just good business sense. Accentuate your strengths, highlight your enemies weakness and do whatever you can to hide your own weakness…Well duh…Business 101…LOL. Why would ANY company want all of their enemies on equal footing? That’s just nonsense. Nobody forces exclusives on any console, but if MS or Sony pays enough you get one.

And to the Mantle lovers: having Mantle in the engine means nothing. You still have to write code that uses those parts of the engine. If AMD doesn't pay a dev to do this, they will not be used. Nobody codes for a small part of 33% of the discrete market for FREE. Coding for Mantle doesn't magically allow you to charge AMD users $10 more; you don't make an extra dime. Might as well code ONCE for both sides and forget it, unless AMD pays you extra to do it. It is the same with PhysX, etc. That is the beauty of a technology like G-Sync: no dev help is needed to support it. If the user has the right hardware, they just get a better experience with no extra coding. Support for Mantle in the engine just means you CAN get it to work if you code using those parts; your game won't be magically Mantle-optimized without you putting in the time (that's why they're patching AFTER the fact). If I were NV, I'd put out my own "Mantle" just to discourage game devs from using either option. AMD is effectively forcing NV to split the devs' time even more, as NV has no choice but to play just as dirty by making its own "Mantle," which I guess is GameWorks currently :( AMD should have expected nothing less from a company with more money and no debt. You will get a response. Here it is.

Will you be complaining about Mantle speeding up games 20% (I doubt this, but whatever) in ways NV can't optimize for? Will you be massively complaining that Mantle gives AMD a serious, unfair advantage? ROFL. I doubt it. AMD restarted the proprietary wars with Mantle; expect NV to end them, as they have the money to fund whatever they come up with FAR better than AMD, which is billions in debt and running low on cash. NV has $2.7B in cash and no debt. Who do you think wins a proprietary war? The guy with more money, in almost all cases (I'd say all, but I'm sure there is an exception at some point in history). The $8 million AMD spent to get Frostbite 3 to use Mantle, NV can spend 20 times over and still profit. I.e., if you can afford to get your tech (Mantle) into ONE engine for $8M, expect them to be able to get into two dozen engines with whatever they come up with. While that would break AMD and put them back into the red, NV would still come home with ~$400M in profit even after driving you into the ground with game-engine optimizations. Mantle was a bad move for AMD, especially with it NOT in the consoles. You started a war for nothing, and it's a war you can't FUND.

Joel Hruska

Including Mantle support does not prevent Nvidia from optimizing for DirectX 11. It does not harm Nvidia’s ability to work with developers. It does not prevent Nvidia from building a software solution that can execute a game optimally.

Using closed libraries prevents AMD from optimizing a game to run on its own hardware because it cannot see the shader code. If a developer uses Mantle, shader code remains open and available to NV.

The meaningful difference here is which company is in charge of its own performance. Nvidia can continue to optimize its own drivers for DX11 and its own hardware, whether Mantle support is built into a game or not.

The “bridge too far” is not that AMD might have a hardware advantage over Nvidia, or vice versa. It’s the question of who gets to decide if that advantage exists or not.

nobodyspecial

The developers decide whether they support one side or the other; NV doesn't get to tell devs what to do. So I don't see how your point matters. I doubt a dev will use GameWorks without also doing a path that AMD (and Intel) can optimize. They won't want the majority of the PC audience at NV's mercy. NV's total market share of GPUs is ~16%, so it would be ridiculous to optimize for that and think you can get away with ignoring the other 84%, which is mostly integrated stuff. Today you certainly want to include the Intel/AMD integrated parts that can at least play your title, albeit at reduced settings. Years ago it wasn't worth chasing integrated junk; that isn't true today, as people can actually game on it to some extent (I don't call 1366x768 gaming at all, but people CAN game there reasonably, and in some cases above this). Decent integrated graphics (if you call it that) have opened up a much larger PC audience to sell to, and game devs are noticing, as GDC shows PC second only to mobile for game devs.

I suspect in the end both moves will mean next to nothing, for precisely these reasons. Helping either side is pretty much NOT optimizing for anyone else. Any resource (time, money, etc.) spent on one or the other means less for everyone else.

You are still kind of acting as though NV tells a dev, "Hey look, you need to use this or you can't make games anymore." That isn't the case. A dev can say, "Umm, this GameWorks stuff is crap. Have a nice day, Nvidia." No different from a dev telling AMD to go fly a kite on Mantle support. You end up saying, "Hey, here's a CHECK, now please support my tech." :)

I really don’t see NV pushing this past ensuring AMD’s Mantle fails. This type of paying people off to win crap is an expensive fight that can just plain fail and waste profits (surely NV doesn’t see physx as a huge money maker right? – differentiators at best nothing more profit wise). Something like Gsync is something hardware makers will sell FOR YOU. You don’t have to sell gsync, it sells itself as soon as a user sees it. You have to SELL Physx, Mantle, Gameworks etc to devs and it costs you dearly. I highly doubt NV has to do much to get monitor devs on board for gsync (4 already, not many major ones left to sign up) as it will allow a premium price to be attached to a NEW monitor sale, which of course sells more NV cards. I don’t have to beg someone to use something that will sell more of THEIR own stuff easily, they will do it on their own. If it gets them nothing though (mantle, physx etc, no extra earnings), I have to pay them a lot to favor me and screw everyone else right?

I hope AMD starts spending money on tech that sells itself on merit (like G-Sync) rather than stuff they have to fund for eternity while their enemy can outspend them for ages. In an arms race you have to join the race no matter what, and I think that is what NV is doing here. If they didn't respond to Mantle, and AMD successfully paid enough devs to actually shift cards their way (say, a 50/50 discrete split instead of the current 30/65 AMD/NV split), it would become a big issue. GameWorks will either hurt AMD (worst case) or just be used to stifle Mantle adoption (best case for all of us, as this junk steals resources from the GAMES themselves).

To be clear, I hope BOTH ideas go down in flames. I hope NOBODY adopts Mantle past BF4, and I hope GameWorks DIES the second Mantle does. We don't (as gamers) want either side getting its way here, as both will (generally speaking) steal resources from our actual games (at least for a portion of the audience). I hope more tech like G-Sync (which doesn't steal game devs' resources to support) comes out and survives.

Joel Hruska

Publishers have far more influence on what tech gets incorporated into a game than small dev studios do. Obviously this depends on the studio in question.

id Software picks their own tech. The Fallout creators probably do, too. Ditto for Blizzard. But if you work for UnprovenStudios and you’re trying to launch a title with a midrange budget, and your publisher says: “We can cut two months off estimated dev time and save $5M if we incorporate X technology,” then yes, you incorporate that tech.

Alex B

Devs don’t need to support one or another GPU manufacturer… they need to support CUSTOMERS, gamers. And gamers uses both platforms. And devs wins, if they use techs that are not limiting they ability to provide to more users.

nobodyspecial

And writing for Mantle supports how many gamers? A VERY small subset of AMD users. Meanwhile, the largest portion of users (Intel + NV + non-GCN AMD) sees zero gains. You're proving my point. Thanks. Devs will write for OpenGL or DX and call it a day without a check from AMD, because both of those cover ALL cards/APUs.

Alex B

What an ignorant fool.
– "NV can't use Mantle optimizations without making a GCN GPU" – fallacy! NV CAN optimize for Mantle; there are no hurdles.
– "I pay you to make my stuff run better on game X" – if NV pays devs, that's plain corruption. I'm sure game devs mostly care that their product runs best on ANY platform, not only on one of them. If devs take payments from NV to use NV's proprietary lock-in solution, they are shooting themselves in the leg.
– "And to the MANTLE lovers: Having Mantle in the engine means nothing. You still have to code to use those parts of the engine. If AMD doesn't pay a dev to do this, they will not be used." – why do devs need to be paid by anyone? Devs mostly want to do their best to create flawlessly running software, and they will use Mantle or whatever tech without any payment. Mantle (and other techs) gives devs advantages in itself.
I won't comment on the rest of the BS, since it's based on a false assumption from the beginning.

nobodyspecial

Eight months after my post, I'm still correct. Let me know when Mantle runs on anything OTHER than GCN (even from AMD, whatever). Until then, I'm right. GCN only. There are ZERO non-GCN cards optimized for Mantle; even AMD's own non-GCN cards do NOT work with Mantle. Let me know when that is no longer the case. You should look in the mirror before calling someone ignorant.

AMD paid $8M for Frostbite to be optimized for Mantle. This isn't corruption unless you're paying someone to SLOW DOWN the other guy. That's not the same as what I'm saying, which is paying to OPTIMIZE for your cards (this happens all the time on both sides, though usually not at that price; then again, Mantle is a whole API, not just a feature like PhysX). Also, if you already have to code for OpenGL, DirectX, etc., and there is such a SMALL share of Mantle cards, it isn't worth your time. You have to be PAID to support something like that (or PhysX) purely because you gain ZERO financially from using it. You can't charge an AMD GCN card owner an extra $10 because your game uses Mantle; you charge everyone the same. So there's not a lot of inspiration to program for something ON TOP of everything else that runs EVERYWHERE. This is why you don't see many PhysX games; you only see them when NV pays to get PhysX used in a presumably big hit they want as a showcase. That might be different in Mantle's case if AMD owned two-thirds of the entire GPU market, but they don't.

There are no cards without GCN (even from AMD) that can use Mantle so far. Until there are, you are the ignorant fool. Currently, I'm correct. Please explain why ALL AMD cards can't use Mantle today. Oh, because it takes GCN cores to do it. Thanks for playing. Saying NV can optimize for Mantle when even AMD can't get it done for all of its OWN cards, and Intel can't either since AMD isn't giving them squat (denied four times), really makes you sound ridiculous.

My proof is that even AMD can't do it for non-GCN cards so far. What is your proof? Please cite an example of a non-GCN product that is optimized for Mantle (from anyone).

Gregster

Looking here http://forums.overclockers.co.uk/showthread.php?t=18567859&page=10 it looks to me like AMD should be begging Nvidia to do more optimizations, as they have done a better job than AMD can. The 290X clearly beats my Titan at the same settings. AMD users should feel happy that AMD are shut out. Look at the mess that is BF4, lmfao!!!

Joel Hruska

Please provide your proof on this. I benchmarked these titles across multiple driver sets using the default benchmark. I used the latest patched version of AO available as of late November/early December. I benchmarked the game across three different system builds to ensure I wasn't seeing other issues.

I benchmarked play tests. I benchmarked the official test. I tested every single DX11 feature. I benchmarked at driver defaults, and I benchmarked using AMD driver settings. And that's just the tests you see in the graphs. You don't see the tests I ran in Splinter Cell or Assassin's Creed, because I chose to summarize that data.

I ran about 30 different benchmarks on Arkham Origins alone. I ran them all three times each. I ran them on both cards.

So please, tell me how these results are “massively wrong.”
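
For what it's worth, the principle behind those repeated runs is simple: report min/avg/max over several identical passes, never a single pass. A minimal sketch of that discipline (illustrative only, not my actual tooling, with a dummy CPU loop standing in for a game's built-in benchmark):

```c
/* Run a workload several times and report min/avg/max, since a single
 * run of a game benchmark is too noisy to draw conclusions from. */
#include <stdio.h>
#include <time.h>

static double runWorkloadOnce(void) {
    /* Stand-in for one pass of a game's built-in benchmark. */
    volatile double x = 0.0;
    for (long i = 0; i < 50000000L; i++) x += 1.0 / (double)(i + 1);
    return x;
}

int main(void) {
    const int runs = 3;
    double min = 1e9, max = 0.0, sum = 0.0;

    for (int i = 0; i < runs; i++) {
        clock_t t0 = clock();
        runWorkloadOnce();
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        sum += secs;
        if (secs < min) min = secs;
        if (secs > max) max = secs;
    }
    printf("min %.2fs  avg %.2fs  max %.2fs over %d runs\n",
           min, sum / runs, max, runs);
    return 0;
}
```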

Gregster

Just read that thread on OcUK. You can clearly see that the 290X beats me, a 780, and a 780 Ti (290X/Titan/780/780 Ti all at stock clocks); the 290X beat us all.

TommyBhoy ran the bench on a stock 290X and a 3770K @ 4.5GHz and got min 70 / avg 102 / max 136.
Gregster ran the bench on a stock Titan SC and a 3930K @ 4.625GHz and got min 67 / avg 94 / max 127.
We both had the same settings in game.

AMD released a driver update which, it seems, gave AMD cards much better frame rates, but if they are locked out, how can they? GameWorks devs visit on site and teach the game's developers how to use the GameWorks libraries, but you say that only Nvidia are allowed access to these.

Your article took a month to do, and yet it took me a couple of hours with Google to find this basic stuff. Like I said earlier, this article is a non-issue and made for site hits (which has worked :))

Edit:

Why did you not run with 8x MSAA in this bench? FXAA is weak and will skew the results toward Nvidia as well.

Joel Hruska

What drivers are you using? Because while your AMD results are not far off, your Titan results are low.

They log 141, I log 145. They log the R9 290X at 136, I log 148. My review was written with more updated drivers.

I benchmarked Arkham Origins on a GTX 680 and a GTX 770 on an Ivy Bridge-E, Haswell, and Sandy Bridge-E system using Nvidia Forceware 331.93 and 320.49 on a Windows 7 system using SP1 and all additional patches. Driver versions were removed using DriverSweeper and then installed clean. Windows 7 was installed fresh to every testbed. AMD’s Catalyst 13.11b2 and 13.11b9 were also tested.

Furthermore, I worked closely with AMD to confirm numbers on both platforms. I additionally tested a 7970 GHz Edition to spot-check data.

Your performance results for the Titan are far too low. Either you haven’t updated your drivers, you updated without a full previous removal, or you’ve got another program running that interfered with test results.

Gregster

My drivers are 331.93, and others who posted results on a 780 and a 780 Ti validate my results. You benched without MSAA and used FXAA for some reason. I am sure three of us can't have messed-up systems.

Also, if Nvidia are the ones who control the GameWorks libraries and lock out AMD, how come AMD released drivers which gave a 35% performance increase? Read this:

LtMatt

Greg failed to realise that AMD improved AA performance in Batman with that driver he mentioned earlier. GCN is more powerful than Kepler with AA activated, so AMD can overpower the GameWorks advantage and pull ahead with that feature enabled. MSAA is something you don't need game code or dev cooperation for. Greg also seems to be under the impression that GameWorks is an open standard. No idea where he got that from, LOL. I think he may have been drinking, again.

Gregster

Why did Joel run it with just FXAA if GCN is better than Kepler? Seems a daft thing to do to me. As for the open standards, Nvidia have said this:

"GameWorks in Action

We've dispatched our engineers to work onsite with top game developers and add effects, fix bugs, tweak performance, and train developers in open standards and work hand-in-hand with our game laboratory."

LtMatt

So you expect us to believe Nvidia over three different unbiased journalists from three completely different sites? Also, it says they will 'train devs' in 'open standards.' Seems a bit rich coming from Nvidia. It does not say anything about GameWorks itself being an 'open standard.' If that were the case, then why can neither the dev nor AMD optimize for it? If AMD cannot optimize their own drivers for a game using GameWorks, how can that be described as an open standard?

Gregster

Believe what you want, I don’t care either way.

LtMatt

Yes you do, we both do. Now let's make up and have a hug, you big softie. xx

Gregster

lol, I certainly will not fall out with you over some silly locked out/not locked out, driver optimizations/no driver optimizations argument. This is a good discussion, and I look forward to Nvidia/AMD/WB Montreal releasing something (if they care enough) about all this kerfuffle xxx

LtMatt

That would allow us to sleep easy at night. As it stands, I expect Nvidia to deny it and AMD to remain quiet. Just to annoy us, lol.

Gregster

It has kept me awake and stewing for the last couple of days… First world problems and all that :D

LtMatt

Hehe we love it.

Gregster

I think AMD fed Joel this to distract from the fact that Mantle isn't ready; it takes your mind off that and away from AMD :P

Joel Hruska

I used FXAA because switching to MSAA is precisely what allows the 290X to leverage enough raw power to overcome the GTX 770. My benchmark results are not “massively wrong,” simply because you don’t like the tests I chose. My test results accurately summarize performance in the mode tested.

When you turn MSAA to full, you push fill rate hard enough that card performance comes down to brute force. Yes, the 290X wins that comparison. This does not explain the performance gap between FXAA performance of AO and AC under identical circumstances.

Finally: GameWorks locks out optimization of specific functions. It does not lock out everything, just the cutting-edge parts. In Arkham Origins, the following GameWorks libraries are used:

Clearly the GW library loadout is customized and tailored depending on the title. These are the libraries and functions AMD cannot optimize. The fact that AMD can optimize the game and improve performance 35% due to other changes does not change the fact that GW-specific changes are locked out. And I believe the original story makes this distinction quite clear.

LtMatt

Joel, the 35% AMD improved performance by is largely (if not completely) due to improving AA performance, as far as I know.

Gregster

So you are saying that you ran FXAA to skew the results. OK, that's cool, and you are also saying that GameWorks is proprietary, which is also cool, but I'll wait until AMD tells me different, as clearly you can't be bothered to read anything I am showing as evidence to the contrary. You have told me I have something up with my computer, but strangely, LtMatt has just posted a bench thread which shows a 290X beating a Titan, and weirdly, the guy who tested and I are getting the same frames as that review.

Did you ask Nvidia how come the tech they describe as 'open standards' isn't?

Joel Hruska

Testing in FXAA is not “skewing” the results. FXAA is an option in the game menu.

The “something up” with your computer is that you’re testing in 8x MSAA.

Finally:

The R9 290X wins the 8x MSAA tests precisely because once you hammer the GPU *enough*, the heaviest-hitting solution barely manages to eke out a win. That does not change the fact that the results are quite different when we *don’t* crank up the MSAA enough to counteract the various engine optimizations that are tilting the game towards Nvidia.

Why is AMD concerned about these results? Probably because a growing number of titles rely on FXAA as the default anti-aliasing method. A number of games released in the last few years either default to it or prioritize it.

If you always play every game in 8x MSAA, I acknowledge that you may not find these results particularly relevant. That does not make them *wrong.*
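
The arithmetic behind that fill-rate point is simple. As a back-of-envelope sketch, assuming a 1080p RGBA8 color target plus a 32-bit depth/stencil buffer per sample (an upper bound, since real hardware compresses aggressively):

```c
/* Back-of-envelope: why 8x MSAA hammers fill rate and memory bandwidth
 * in a way FXAA (a single post-process pass) does not. Illustrative
 * numbers assuming 4 bytes of color + 4 bytes of depth per sample. */
#include <stdio.h>

int main(void) {
    const long w = 1920, h = 1080;
    const long bytesPerSample = 4 + 4;   /* RGBA8 color + 32-bit depth */

    for (int samples = 1; samples <= 8; samples *= 2) {
        long pixels = w * h;
        long bytes  = pixels * samples * bytesPerSample;
        printf("%dx: %ld samples, ~%.0f MB of render targets\n",
               samples, pixels * samples, bytes / (1024.0 * 1024.0));
    }
    /* FXAA instead adds one full-screen pass over the 1-sample image:
       ~2M extra shaded pixels, with no multiplied storage or resolve. */
    return 0;
}
```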

Gregster

You tested two cards that are capable of running full 8x MSAA, so that makes the results pointless. Why you would drop down to FXAA in the first place is beyond me, as anything averaging over 60fps is more than playable. In fact, I am willing to wager that lower-end cards like a 7850 or a 660 would need settings lowered, more so on the 660, purely because the game runs better on AMD hardware.

So you are now saying that AMD are concerned about these results… That doesn't make for an unbiased read, and it looks like they have set you up nicely for this article ;)

The bottom line is if you own a lower-end card such as a 7850 or a 660, you should be prepared to drop settings.

As for this whole article, it has all been said before in the Batmangate scandal.

Joel Hruska

AMD did not "set me up" for this article. They made me aware of their concerns. I investigated the issue and believe the concerns are warranted. I also spoke to Nvidia and attempted to speak to WBM.

The benchmarks, data collection, and research for this story are entirely my own. Vendors regularly communicate with journalists regarding product performance of both their own hardware and that of their competition.

Gregster

Interested to know what nVidia had to say on this.

LtMatt

I’d like to thank Joel for bringing attention to Nvidia’s shady practices. Its not the first time they’ve been caught deliberately harming AMD performance, even at the cost of their own users before. However this time it appears it’s only affecting AMD users. The ‘Dark’ side of GameWorks in full force. I can only hope we don’t see too many engines using this. It would be nice to see some other sites pick up and report on this. That could be hard though as i imagine most are in Nvidia’s pocket.

Nightmare106

Very good article. Not many end users know this, so it's good that this article was published. It offers a short summary of the information for the end user in the simplest way. What's also interesting is that PhysX only works with Nvidia cards, while AMD actually took the time to make TressFX open source and available to both manufacturers, Nvidia and AMD alike. Meanwhile, Intel and Nvidia are basically doing everything they can to undermine their competition. I am not favoring AMD; in fact, I'd rather buy an Nvidia card, seeing how cheap they are now while offering performance equal to AMD's counterparts. I think choosing between AMD and Nvidia should just be a matter of taste. That would also offer much more to end customers and developers alike.

Alex B

At the beginning you said that it's good to know such info, but at the end you say that the buying decision doesn't need to be based on this info. Weird conclusion. Then I need to ask you: why do users need to know this if consumers still buy the malpracticing vendor's products? I see that I need to vote with my wallet against NV's practices and buy AMD.

Magio

One of the best articles I've read in a while.
The explanation is simple and easy to understand, and that is why it's just sickening to read Nvidia fans' failing attempts to defend the company.

Joel Hruska

Whether Mantle is the future of PC gaming is an open question. But I'll tell you what I expect.

I expect Mantle will deliver a performance advantage for AMD hardware over DX11. I would expect that advantage in shipping titles to be somewhere between 10% and 30%, and I expect the gain to land at the 20-30% end only because I think that's more or less the minimum developers would need to see to be interested in the product.

If Mantle performs as advertised, it should weaken the impact of the CPU on performance testing and, by extension, make AMD’s CPUs a better gaming option.
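
The arithmetic behind that expectation, with purely illustrative numbers rather than measurements: if a CPU-bound title spends more time submitting work than the GPU spends rendering it, cutting the CPU cost raises the frame rate until the GPU becomes the limit.

```c
/* Illustrative model only: frame rate is limited by whichever side is
 * slower, CPU submission or GPU rendering. Cutting API overhead (the
 * claim made for low-level APIs) helps most when the CPU share is large. */
#include <stdio.h>

static double fps(double cpuMs, double gpuMs) {
    double frameMs = cpuMs > gpuMs ? cpuMs : gpuMs;
    return 1000.0 / frameMs;
}

int main(void) {
    double gpuMs = 14.0;                 /* fixed GPU rendering cost */
    double cpuBefore = 18.0;             /* CPU-bound: overhead dominates */
    double cpuAfter  = cpuBefore * 0.5;  /* assume the API halves CPU cost */

    printf("before: %.1f fps\n", fps(cpuBefore, gpuMs));  /* ~55.6 fps */
    printf("after:  %.1f fps\n", fps(cpuAfter,  gpuMs));  /* ~71.4 fps */
    return 0;
}
```

Under those assumed numbers the gain is roughly 29%, in line with the range above; a GPU-bound title would see little or nothing.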

But none of that makes it the future of PC gaming. Historically, single-vendor standards have a poor track record. DX 10.1 was AMD only. Supported in a handful of titles. Nvidia threw a *lot* of development cash at hardware PhysX. Supported in a larger handful of titles, and not much more.

DX11.2? No one cares. The original tessellation unit that debuted in the HD 2000 family and was present for both that GPU and the HD 4000? No one used it. ATI TruForm? No one used it. Even Glide was on the way out by the time 3dfx died.

Historically, the market favors evolutionary tech over revolutionary and dual-vendor solutions to single-vendor ones. If Mantle improves performance in DX11 games by 50% I’ll be the first person to say “Holy crap,” but that doesn’t mean Mantle will automatically replace DX11 in next-gen titles.

γιαννης

I really don't understand the point of this.
You start the article by describing a set of libraries developed by Nvidia in order to close the industry even further into a shoddy ecosystem, but then you benchmark two games that massively favor Nvidia and put a card that costs $300+ against a card that costs $500.
And then I wonder, since it's a biased benchmark, why didn't you also test AvP or Metro: Last Light or other games, to show how the lower-end Nvidia cards match up against the best of AMD?
How come you run benchmarks for an article about a closed low-level ecosystem using only games that support it?
Frankly, this article is a joke, really very biased, and it seems like someone got a paycheck hehe.

Joel Hruska

The point of the article was to compare GameWorks titles. I did compare non-GameWorks titles, which is why I mention that, statistically, an R9 290X is roughly 24% faster than a GTX 770. That's why it's notable that the gap is nonexistent in GW titles.

Comparing a $550 GPU to a $330 GPU when the two cards perform equally was meant to show that this is statistically unusual.

When the point of the article is to investigate the GameWorks program, you cannot complain that the article tests and focuses on the four games that are part of that program, with the further focus on the one title with concerning issues.

I was paid a standard fee for this story and received no other compensation, overt or implied, from AMD.

Alex B

The joke is you and your comment. If you don't understand, stay silent and don't write this BS.
P.S.: Read this article again and again until you understand. I only had to read it once.

RuskiSnajper

Mantle will smoke this shady approach… Nvidia seems like it wants the industry chained down in the DX cage for even longer. Maybe they get some of that bailout money, hmm?

SPARTdAN

This is bullshit but not surprising. Do Nvidia just want to lock down PC gaming?

Kelly Michaels

Mantle has so far proven to be another AMD paper tiger, and with MS firing back with the new DX12 API to come, this move by Nvidia is all the better. You AMD users got your Mantle; now Nvidia has one-upped AMD, and you don't like it.

Joel Hruska

I wrote this story in December. My opinion of GW has nothing to do with whether AMD’s Mantle is a paper tiger or a runaway success. I do not view them as equivalents.
