Why not, if Mantle makes that playable? I've been playing COD Ghosts on a 2.66GHz Core 2 Duo E8500, and an E7300 before that. Do you know why? Because that's one of the gaming machines I have! Both were above the minimum requirements, but I do want to do a new build now, as it is certainly underpowered. I had already upgraded the video card, which is much easier; now I'll have to replace the CPU, motherboard, and memory, and reinstall the OS and all my software.

Cool it fanboy, cool it. We all know AMD gets a free pass way more often than any other tech company out there because they're the underdog, and well... because of people like you. If Nvidia or Intel had announced up to 50% performance boosts in the months leading up to the release of their new API, then missed their target release window twice, and then these results came out, I'm pretty sure it would've been declared a failure by the community on the spot. Hell, if Nvidia had even announced Mantle the way AMD did, it would've been over before it began. But since it's AMD, it doesn't count. Objectivity, get some.

No, AMD's beta driver came second; the BF4 Mantle patch was first. And since it's STILL a beta driver, we're still waiting on AMD's finished product here, right?

I'm unimpressed. I have no intention of buying anything less than a quad-core Intel today, with a high-end card on top, so Mantle would yield about what every driver revision gives in games anyway.

Johan Andersson said they worked with AMD for 2 years on Mantle for Frostbite 3, so I doubt many will use this if AMD isn't paying them. Such a small market, and it earns you NOTHING over everyone else. You can't charge an AMD game buyer more than everyone else. You are essentially coding to speed things up for some people who won't give you an extra dime. Maybe if they start selling Mantle patches for games for $5-10 we'll see it used. AMD can't afford $8mil per game, or even $2mil.

I'm guessing Nvidia is prepping their response to AMD's proprietary Mantle as we speak. I don't think they really want it used, but it's what you'd do to get Mantle killed: just put something out there so nobody wants to code for either, then let them both die. We need games in OpenGL, WebGL, HTML5, Java, etc. that are portable everywhere in days or weeks, not MONTHS or years. Making anything in Mantle is the same as DirectX: it isn't easy to port from either to anywhere else. Nobody wants to code for DirectX (Xbox One), then Mantle, then OpenGL (PS3/PS4), then whatever on mobile, then port to PC (a game made on consoles is hard to port from DirectX, but easier from PS3/PS4 OpenGL now that K1 is coming out for mobile).

Any time wasted on Mantle junk takes away from the GAME itself. Think about it. If you could code ONCE (say, in OpenGL) and run everywhere, you'd get a ton more sales and have a lot more resources to put toward, say, making a 30-hour game vs. today's 7-15 hour games. You might have the money to put into decent AI for your game, etc.

Now that Steam will be pushing Linux, you have yet another reason to use OpenGL and the like, which port easily to anywhere. This was a waste of money by AMD. They should have spent it on avoiding the phase 1, 2, 3 (more?) drivers this year and last, on making better GPUs (the 290/290X was a botched launch, and HardOCP says this chip wasn't really designed for 28nm and needs 20nm), and on enhancing their CPUs. Instead we got console crap (which also stole from GPU/CPU/driver development) and Mantle, which they just can't afford to support, along with Free-Sync, which may never even come out; AMD itself said it isn't a product for market yet, if ever.

It's proprietary: the SDK and source code aren't available for download, and AMD has only said it might be open to releasing/licensing it to other IHVs at some point in the future.

AMD's talk about opening up to other IHVs is just lip service, so they don't sound hypocritical after beating the open-standard drum for so long, and to help alleviate industry concerns by likening it to an "open standard".

Mantle can run on anyone's hardware, it's just that no one else has an architecture design like GCN that would actually support it. It's the same reason that Mantle doesn't run on AMD's older GPUs.

And why wouldn't they offer it to Nvidia to use? Because it would be like handing someone who only reads/writes English an instruction manual written in German. They can't use it.

If Mantle offered the widespread support of DX or OGL, I'd be more excited. Sadly, I fear that with only moderate gains (when you need them) it will fall by the wayside like other fast APIs already have. I suppose if its design is close enough to the console APIs then we might see enough support (developers tend to do the least work they can get away with) for it to be successful.

Can OpenGL implement these optimizations? If it's abstracted by a good graphics engine/API, then it wouldn't affect the game designer. Frostbite is a full-blown game engine for game designers, but if they can do it, why can't it be implemented in OpenGL for game developers?

1. Mantle costs 2-3% of a game's budget.
2. Nvidia has no response; that is why they talk about how OpenGL can do things like enable higher frame rates.
3. Even if Nvidia had a response, it wouldn't do anything. __Mantle only works because the consoles both use AMD GCN GPUs__, which means programmers are already programming GCN close to the metal.
4. Mantle closely resembles what the PS4 uses; it is not as close to the Xbox One due to the ESRAM. You also realize that the K1 is not going to be in any real mobile devices? What did Nvidia talk up with the K1? Oh, yeah, __CARS__, and how those are the future. Remember the Tegra 4 comparisons to the PS3/Xbox 360... Oh, yeah. Not to mention power draw.
5. Time "wasted" on Mantle is small compared to everything else, especially when you realize Mantle is extremely similar to the coding done for the PS4 and fairly similar to the Xbox One.
6. You do realize that "console crap" is such a stupid thing to say.
7. You do know that "free-sync" isn't some proprietary technology? It is supported by VESA. You know, the group that is the de facto leader in what monitors and such use.
8. You do realize that AMD didn't make Mantle because it decided to. It did so because developers asked for it.

Actually, Mantle is very similar to the PS4's low-level APIs, so porting games from the console to Mantle should be much easier than porting them to DirectX, or at least it won't take much more time to build against both APIs. Console low-level APIs, however, are completely different from high-level PC APIs; DirectX and OpenGL need a lot of optimization when games are ported from consoles. Besides, Mantle was demanded by developers because, as they have said, DirectX cruft is heavily blocking them from expanding their vision for newer video games; they made the same demand to Nvidia, but AMD was the only one to take action, for now. You should really take a look at the Mantle presentation made by Oxide, because you have a lot of wrong or contradictory ideas compared to what I've heard from expert developers.

"The plan is, long term, once we have developed Mantle into a state where it’s stable and in a state where it can be shared openly [we will make it available]. The long term plan is to share and create the spec and SDK and make it widely available. Our thinking is: there’s nothing that says that someone else could develop their own version of Mantle and mirror what we’ve done in how to access the lower levels of their own silicon. I think what it does is it forges the way, the easiest way,” explained Mr. Corpus."

Summary: Mantle is proprietary. I know AMD has been feeding you the Open Standard mantra far too long, but it does help to read and think critically for once.

I think you might have a few misconceptions here. First of all, AMD isn't going to pay per game. That would just be silly. The developers aren't going to, either. Instead, AMD pays the people who make the engines (e.g. Epic for UE3/4, DICE for Frostbite). They do the work once, and then the vast majority of developers don't have to touch it. That's kinda the whole point of an engine: so you don't have to do that work and can focus on the game.

Incidentally, changing just the rendering backend really isn't difficult at all; it's only about 6,000 lines of code. Once you throw in cross-platform methods for input, sound, multitasking, etc., it becomes a significant amount of work, but Mantle by itself is something like a man-week's worth of work. Maybe a man-month, if significant debugging is required.

On top of that, for heavy CPU-based games (like what Oxide Studios is trying to do), the CPU resources freed by Mantle let you do things you couldn't with DirectX/OpenGL. It can also vastly improve the play experience by preventing CPU spikes from interfering with rendering, and can help with frame pacing. Ultimately, the games probably feel like they're getting more than an 8% FPS boost... which is still more than the difference between an R9 290 and a Titan, and people were willing to pay an extra $600 for that.

Mantle isn't perfect, though. The CPU usage boost is nice, but that 8% or so GPU increase absolutely locks AMD to the GCN architecture, which means they can't make significant strides, like Nvidia did between, say, Fermi and Kepler, without losing the Mantle advantage over Nvidia.

Honestly, what probably needs to be done is to take OpenGL, lift the heavy work out of the driver's hands, and shift it to the program itself. You'd get the same CPU boost but with wider hardware support.

As far as WebGL, HTML5, and (ugh) Java go, these platforms tend to offer less security and tend to be absolutely terrible at multithreading; Java especially, the buggy piece of junk it is. Frankly, you could not make Crysis 3 in Java. If you wanted it to run in Java, you'd probably need a high-end machine from circa 2020. Do you want all your games to look like they're from 2008 right now? Nooo? Then don't develop for Java!

I also seriously doubt Nvidia is developing a "Mantle killer" in any large capacity. No doubt they've looked into it, but considering how much better their drivers tend to be at managing the CPU and whatnot, they would not get as large a boost from Mantle; Nvidia almost never does open-type designs willingly; and they're not the first to this market. Mantle would need to provide a consistent 20%+ bonus before I can see Nvidia really trying to tackle it.

As far as Free-Sync vs G-Sync is concerned, I imagine it'll be something along the lines of CUDA, where the Nvidia-specific solution is actually inferior, but popular because of marketing and because it was "first". Still, Nvidia cards will undoubtedly be able to support both, whereas G-Sync monitors will only work with Nvidia cards, so they'll make some money that way... and I wouldn't exactly call G-Sync market-ready, either.

By AMD, right? And just because it is similar doesn't mean it is easy to program for. That's like saying I wrote a program in C, so it must be copy-and-paste to move it to Java? They're similar, but not the same.

Every driver rev produces a 7-10% performance increase? And I've been replacing my hardware all these years! Silly me!

Developers have been asking for a lower-level API for years. OpenGL isn't any lower than DirectX. I fail to see what kind of major advantage SteamOS is going to have over DirectX or Mantle. In fact, the things you're saying about Mantle actually apply MORE to SteamOS: developers have to waste time and resources porting their code over to Linux and OpenGL in order to support a software platform that is fragmenting their PC userbase. OpenGL has been around for years and still isn't commonly used for PC games. Ever wonder why that is?

DICE releases a patch for a DirectX game that improves performance anywhere from 7-30%, or possibly more for slower CPUs. Seems like a no-brainer, especially considering that the paradigm this benefits matches the new consoles exactly. To me, it seems that SteamOS has much more to overcome than Mantle.

If Mantle gains support, then there won't be any reason to buy expensive Intel CPUs for gaming computers that don't need them. Not that there is a good reason as it is, since most games aren't really CPU-bound with modern CPUs anyway, as these benchmarks clearly indicate.

@TheJian - uhhh, did you notice the performance increase? Because your comment made it sound like there is none. That could be a reason why developers would want it. It's a competitive advantage, not something like hypothetically trying to charge AMD users more than everyone else. And you probably could charge more if your game runs better and you can make the graphics look better... or you could keep the price the same and just enjoy the broader gamer base it enables.

How could you possibly think that, when EA launched the Mantle update on time, while AMD repeatedly delayed their driver launch over the last few days, constantly changing support levels and acknowledging "nasty bugs" that prevented their Mantle driver from releasing?

By that logic, AMD also had a driver running then - since that is a prerequisite for EA/DICE having BF4-Mantle ready.

Hmm... guess not. Anyway, the real story is that BF4's bugs were more problematic than expected, so man-power was dedicated to repairing those bugs. Mantle was delayed by EA/DICE in case there were compatibility issues with the bug fixes. In this time, EA/DICE and AMD were in close communication, and AMD took the time to more carefully evaluate the drivers.

It's what joint product development always experiences. One party has a delay, so the other takes the time to improve their product. This happens inside each company as well as between the two companies.Reply

Yes, of course AMD had a driver ready, and as we have seen, it is buggy and full of support holes. None of which changes the fact that EA hit the targeted January release and AMD did not. Not sure how this is so hard to understand. EA released on Thursday with Mantle support; AMD did not release until Saturday (February 1) due to their own admission of "a nasty bug". And that's before you get into their huge outstanding bug list and reports of performance regression in other titles besides BF4.

No, I think most Nvidia users would discourage Nvidia from developing a whole new API for a meager 7-10% gain in GPU-limited situations; those resources are better used elsewhere (G-Sync, GameStream, GameWorks, PhysX, 3D Vision, etc.)

I think Nvidia is listening to the devs they had on their GeForce panel back in October. While Johan Andersson clearly wanted a low-level API like Mantle, no one really seems to want to turn back the clock to the days when they had to support multiple renderers for every IHV on the same platform.

What a joke. Nvidia users crow about any kind of performance advantage they can. I've even heard a few of them try to explain why a performance disadvantage was actually a GOOD thing. Anyone with a brain knows they'd jump at the chance to brag about something like this.

Not really; 7-10% is what one might expect from a big driver update or optimization, not from a two-year project to build an API from scratch for a small slice of vendor-specific hardware.

If AMD feels this is how to best use their resources to the benefit of their customers, more power to them. I think as an Nvidia user I am much more interested in Nvidia's efforts to work within existing APIs like DX and OpenGL and to focus their close-to-metal efforts in hardware, i.e. their embedded Denver core in upcoming Maxwell. Another example would be G-Sync, which focuses on improving frame quality, especially at low FPS, and would provide a benefit in every game instead of just a handful that require specific hardware, an API, or dev support of that API.

I certainly don't want the industry to move to multiple vendor-specific APIs; that does no one any good and would certainly be a step backwards to the days when every game had to support multiple vendor-specific APIs and codepaths.

Would Nvidia users discourage Nvidia from developing a new API for a 30% gain in CPU-limited situations? Hell no. Would Nvidia do it? Yes, probably, since that's a pretty big performance advantage. However, they don't make CPUs, so they don't have the demand on both sides like AMD does.

You just cut out a small slice of the picture (the minimum performance advantage, of course) and used that to argue against it, and even then I don't agree.

Would I discourage Nvidia from developing an API for larger gains in unlikely scenarios??? Absolutely! It's a big performance gain in scenarios that are unlikely to occur (slow CPU + fast GPU) or unlikely to majorly benefit in the real world (low resolution/settings or multi-GPU already at high framerates).

I would certainly prefer the alternative that Nvidia took, developing new technologies like G-Sync that can improve frame and image quality at ANY FPS in ANY game that uses Nvidia hardware.

There's also growing evidence that Nvidia took steps to improve their drivers within existing APIs that AMD did not, mainly with support for Deferred Contexts and Command Lists in their DX11 multi-threaded rendering implementation. It has been known for some time that AMD does not support these features while Nvidia does, and the evidence is clear: even with Mantle, comparable Nvidia hardware running DX11.2 MTR (Win8.1) is faster.

I hate fanboys like you. So far it seems Mantle works best in certain scenarios. If I have to run my games at low settings to get a 25%+ increase in FPS, who cares: 180+ FPS to 210+ FPS is nothing anyone should care about. What matters is a good CPU with a good card, and if they can increase performance by 25%+ there, then that's good. 7-10% is not good enough to have developers spend time on a Mantle path. Developers are not even using DX11 fully yet; it will take them forever to adopt Mantle. If AMD can't make the Mantle API also run on Nvidia hardware, then it's useless.

You can, however, say "nVidia cards got x result in this benchmark. AMD cards with DX11 got y result. AMD cards with Mantle got z result." So long as the benchmarks remain the same, you can still compare performance numbers just as in comparisons up until now between AMD and nVidia cards.

3DNow was basically an earlier version of SSE, with the disadvantage that it re-used the MMX register file instead of adding new registers. Once SSE became standard and AMD adopted it, 3DNow became redundant: why make something AMD-specific when you can just as easily make a version that performs just as well and works on both Intel and AMD chips?

"We’re also working with our ISV and IHV partners on future efforts, including bringing the lightweight runtime and tooling capabilities of the Xbox One Direct3D implementation to Windows, and identifying the next generation of advanced 3D graphics technologies."

Well, when AMD adopted SSE in their CPUs, they were offering parts that were as fast as, if not faster than, equivalent Intel CPUs. It took great performance and made it better.

3DNow! was from the K6/K6-2/K6-III days, and their FPU performance was so bad that 3DNow! was the only way to get decent performance in many games. It's a moot point now anyway, but I'm reminded of that situation when I read about Mantle.

Because companies won't invest in new stuff unless they see its actual usefulness.

In this case, 3DNow! was a jab at MMX, so Intel developed SSE to counter it. AMD has maybe 1/8th the push factor Intel has, so no one adopted 3DNow!, and the rest is history.

Same with Mantle. We had Glide back in the day, 3dfx proprietary, but devs loved it. Now devs ask for the same thing, Mantle appears, and everybody seems to give it backlash, even when AMD says anyone can use it if they want. Not open, but at least AMD won't deny access to whoever wants to use it. I don't know if they'll charge a pretty penny for it, but I'd guess they need to keep any fee very low so it catches on. If Intel or Nvidia don't want to adopt it, it will be because they are suckers. The irony is that 3dfx was bought by Nvidia, and they never ran a proper Glide initiative, because it wasn't profitable to push their own standard even when devs asked for it.

Glide died because it was proprietary, just like Mantle. It was superb, but too costly to develop for it plus the non-proprietary APIs; developers did NOT want to develop for multiple APIs. Now AMD is trying to go backwards, so developers will have to spend more time and money developing for multiple APIs again. Not a good thing.

Agreed. Busy multiplayer titles will see a large benefit, but most benches will focus on single player scenarios. Low end CPUs will also benefit. So it gives affordable i3 and AMD chips a boost in budget gaming. Nothing wrong with that.

I'd like to see support for Mantle in strategy games, for certain.

The consoles have down-to-the-metal APIs; they don't need Mantle. All those benefits are already in use there. That's why you hear people like Carmack saying consoles are 2-3x faster with the same hardware.

Agreed, I want to see this too, as it directly affects me: multiplayer FPS games, and I have been CPU-bound. I'm upgrading now, but I obviously see the need for it, since it happened to me. I could have waited out one more generation of games if I'd had this CPU-bound improvement!

As expected, what this helps most is weaker CPUs. This should be good for AMD APUs, if Mantle takes off. It also shows what not using high-level APIs on consoles can do for the low-end Jaguar cores they use.

Ryan, have you enabled/disabled the cores via the BIOS or inside the Windows environment (with Task Manager)? Is it possible that Mantle can bypass the core-affinity settings in Task Manager? I'm asking because the performance scaling with cores and frequency seems strange.

I would also love to see a performance comparison with more CPUs, to see if Mantle levels the playing field between AMD and Intel CPUs.
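For what it's worth, the affinity mask Task Manager edits can also be poked at from a script, so the question is testable. Here's a minimal sketch (Linux stdlib calls shown; Windows exposes the same concept through Task Manager's "Set affinity" dialog): if a driver or API spawned work that ignored the mask, a per-core monitor would show load on cores outside it.

```python
import os

# Pin the calling process (pid 0 means "this process") to logical core 0,
# the same restriction Task Manager's "Set affinity" dialog applies.
os.sched_setaffinity(0, {0})

# Read the mask back; a well-behaved process (and all of its threads)
# should now be scheduled on core 0 only.
print(os.sched_getaffinity(0))  # -> {0}
```

Whether a low-level API could actually escape this is for the OS scheduler to decide; the sketch just shows how you would verify which cores end up in the mask.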

Thanks for the prelim. I have to add, however: I don't see how a 4960X and a 290X are the most "wanted" items to benchmark. I can wait, but I believe most people would like to see benchmarks on mainstream hardware, i.e. an R9 270X and an FX-6300 with 8GB and an SSD, or an R9 280X and an i5-4670K with 16GB. These are the most common setups.

You obviously can't repeat it identically (unless there is some replay option), and it may take too much time, but:

1. You can test the same map with the same number of players, as I'm sure you would.
2. Run your character through the same path in the map, running into a lot of people. You should be dead quickly enough and be respawned, so it may take a reasonable amount of time to test.
3. Even getting a consistent minimum FPS would be nice.

4. I know it is hard and not accurate, but if the results are consistent enough (like the minimum FPS) on some maps, at least it can give players some idea of BF4 multiplayer performance.

I also guess that if the difference between Mantle and non-Mantle gaming is big enough in multiplayer, we'll probably see it.

Personally, I've tried to benchmark the multiplayer, and though it was hard due to a lot of bugs, the FPS numbers were quite consistent for the same map.

Benchmark server populated with 10-20 editors or volunteers, just parked in vehicles and fixed locations. Even have them fire on fixed, non-destructible locations. If the results are different enough to show a solid CPU load and difference in performance from the control run, just go with that.

The server should still have to account for the players even though they aren't really doing anything, but once you have them acting and destroying things, that starts introducing variability.

The argument for validating these results is that even in real multiplayer games, your framerate should remain relatively consistent between the round opening, when you're just running to the first objective, and the heat of combat; i.e. you don't necessarily get more CPU load just by having more action onscreen.

Now, cutting cores on a high-end Intel processor is not the best way to reach conclusions. An Intel core is very different from an AMD core, and may be faster than a core in a Pentium processor (more cache available, etc.). In the link above, Mantle looks really, really interesting, much more so than what you found here with your preview.

I think it is great that AMD is getting this out finally. I'm curious how much it helps in APU only scenarios like laptops. This looks like a good software workaround for pairing AMD cards with their performance disadvantaged CPUs.

I'm in agreement that just disabling cores on Intel's finest CPU is not exactly equivalent to simulating those lower SKUs; specifically, the cache is still there. Handicapping the 4C/4T config to only 2GHz is an outlier when the other two scenarios keep things at 3GHz or higher. I completely understand the benefit of handicapping the CPU to speed up evaluation.

I'm curious as well. I bought an MSI GX70 last year. The A10-5750M APU in it isn't all that great and bottlenecks the 8970M quite often. A little disappointed that the driver is limited to a select few GCN chips; I'll just have to wait a little longer. AnandTech did a review of the GX60 a while back and talked about how the APU bottlenecked the GPU. I'm really curious to see what kind of performance improvement I'll get. I'm happy with the laptop as is, but I won't turn down a free performance upgrade.

I'm still running a Core 2 Quad with a 7870, so something Core 2-based would most interest me. And actually, I've got an old 3.8GHz Pentium 4 with HT. I may have to get a copy of BF4 and throw the 7870 into it just for fun.

Yes! I have a Core 2 Duo in one of my systems and game in Call of Duty Ghosts. It's still above the system requirements and has a new GPU, but it's obviously CPU-limited. If I could get a 30% improvement in this CPU-limited scenario, it would basically save me from having to buy a new CPU, motherboard, and memory, and reinstall the OS and all my software, for at least another year. As it is, I am doing all of that now, but the situation exists! My other system is a Phenom II X4 965, which still plays games great, and something like this with a newer GCN video card would likely keep it going a long time.

Honestly speaking, I am quite disappointed by Mantle's improvements in GPU-limited scenarios. 7-10% better results could have been achieved with a well-developed driver too. Why waste resources building a new API? AMD has more important issues to focus on, like frame pacing and better 4K support.

OTOH, I am extremely impressed with the CPU-bound results. It's a slap in the face for Intel, since gamers won't need to upgrade CPUs, provided Mantle takes off.

I strongly believe that Mantle's sole target should have been alleviating API overhead and reducing the CPU bottleneck. GPU performance should have been improved via drivers, after properly implementing frame pacing.

Good job on the quick preview. I'd also like to see some benchmarks with GCN 1.0 GPUs and various CPUs, like an i3, FX-6300, and A10, if possible. Even granting that AMD has exaggerated this, it's awesome that GCN users can get additional performance for free, be it 7% or 40%. I hope they improve Mantle even further.

You may have this planned already, so I'm sorry if this is telling you something you already know. I think it may be quite useful to report the standard deviation (or variance, take your pick) of the frame rates. Averages and minimums may hide an overall smoothing effect, which could actually count as a performance increase. In other words, Mantle, by removing CPU bottlenecks, may reduce the frequency of frame-rate dips. The best way to quantify this is with a measure of spread such as the standard deviation.
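To make that concrete, here's a small sketch with made-up frame times (the numbers are purely illustrative): a single stutter barely moves the average FPS, but it shows up clearly in the minimum and in the standard deviation.

```python
import statistics

# Hypothetical per-frame render times in milliseconds: one 33 ms spike
# in an otherwise steady ~17 ms (about 60 FPS) run.
frame_times_ms = [16.7, 16.9, 17.1, 33.4, 16.8, 16.6, 17.0, 16.9]
fps = [1000.0 / t for t in frame_times_ms]

avg_fps = statistics.mean(fps)
min_fps = min(fps)
sd_fps = statistics.stdev(fps)  # sample standard deviation

# The average stays in the mid-50s, but the minimum and the large
# standard deviation expose the stutter that the average hides.
print(f"avg={avg_fps:.1f}  min={min_fps:.1f}  stdev={sd_fps:.1f}")
```

Two runs with identical averages can feel completely different to play; reporting the spread alongside the mean and minimum is what would reveal any smoothing effect from Mantle.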

Can't see this being worthwhile for devs to implement for a 10% gain on a minority of hardware (AMD has only 40% of the market, and only GCN cards are supported). Anyone with an Intel quad core since the first-gen i7 will likely see minimal gains from Mantle, and that's before we see how Mantle fares against its Nvidia counterparts.

I think the real advantage will be for those who use AMD CPUs, but the chances of someone using an AMD CPU and owning a high-end GCN graphics card are pretty slim based on known data sets. Steam, for example, has Intel at a commanding 74% share with only 24% for AMD.

But AMD has 100% of the consoles. I know, I know, Mantle is not what they use on the consoles, but it is similar, which makes porting a lot easier for developers. Also, relieving that CPU overhead lets developers use the freed resource for other things. This is just a quick preview; wait for the full article.

Mantle is as similar to the consoles as they are to each other, and as we know, each platform still takes development time and resources. I can't see how it makes porting easier when they have to produce one more build for the same platform, as DirectX is still not going away.

Relieving CPU overhead is of questionable benefit given that many of these games and hardware use cases are already GPU-limited, meaning the CPU overhead was never really an issue to begin with. We see significant benefits (>10%) from Mantle only in fringe CPU-limited scenarios that are relatively unlikely to begin with (multi-GPU at low resolutions, single GPU at low resolutions/settings).

Nvidia's counterpart would be their competing GPUs in each price/performance category. If Nvidia performs as well or better in BF4, and as well or better in other titles, again, is it really a worthwhile endeavor? Would anyone change their buying tendencies for a single game, or a few games per year, if it meant crippling performance in the myriad titles that do not use Mantle?

Mantle resembles PS4 coding as much as PS4 coding resembles XB1 coding, they are all different and as such, take time and resources to produce and validate. The difference of course, is that the PC already has an API that supports 100% of IHVs (AMD, Nvidia, Intel) vs. Mantle which supports just a small fraction (GCN only for a minority AMD share of hardware).

As for 2-3% being a small part of development: I wouldn't consider that small, and it introduces more variables that can go wrong during the development process. We have already seen Mantle delayed nearly two months in BF4, and it took nearly two years to develop in the first place. For what? 10% gains in most use cases for a small fraction of the user base?

I guess we will see how it goes, but I can't see widespread adoption for these types of gains.

The real performance gains will be evident when game engines (like Nitrous, made by Oxide) have their roots deep in Mantle. They said the AMD API shows up to a 300% performance gain compared to DirectX in their Star Swarm pre-alpha release (downloadable on Steam).

That's because they are implementing everything as a separate draw call instead of instancing it like everyone else does. If they instanced, they wouldn't have too many draw calls and it would all work fine, like every other game out there. They are doing it for publicity, obviously, but in the end they'll have to program it properly and the benefits will be similar to BF4's.

I'm sure it's on your list of things to do for the full review, but I just want to make sure there's a comprehensive image-quality comparison done. Not saying there's any foul play in terms of IQ, but it's been an issue in the past from both Nvidia and ATI/AMD under the guise of "optimizations".

Thanks Ryan, will be interested in seeing the results. There have already been numerous reports, however, of graphical artifacts, anomalies, and bugs while running the Mantle codepath. Mainly a "fog" that DICE has already said is a bug, but it looks an awful lot like the washed-out console "optimizations" that could be the result of reduced shader precision, fewer lighting passes, or lower detail settings. Will be interested to see what you find if/when these bugs are fixed.

Ryan, you're probably pretty far along in testing already, but are you doing any regression testing against the 13.12 drivers? There have been a lot of reports of reduced performance in virtually every other game, so it might be worthwhile to check that BF4 performance hasn't dropped from 13.12 to 14.1 Beta. Ideally I'd like to see at least these 4 tests per CPU and resolution:

@ the rabid Nvidia fanboys, and even the AMD fanboys: get your heads out of your butts. Nvidia has already been doing this for years, which is why they didn't take up the offer to adopt Mantle. Mantle and NVAPI are effectively the same thing.

Nvidia controlled the original Xbox and the PS3 on the GPU side. Therefore they had NVAPI there to bring out extra performance in cross-platform games, as it was set up to automatically bypass certain parts of Direct3D and OpenGL for better performance. Battlefield 3 is a classic example of this; CryEngine games are another. NVAPI was also part of making console-to-PC ports easier on the graphics end.

AMD now controls the console market. Therefore they're bringing out their own low-level driver API to do the same thing Nvidia did with NVAPI. This will continue for the next seven or so years, the average console refresh cycle. There will be plenty of games that support Mantle, just like there were plenty of Nvidia "The Way It's Meant to Be Played" games over the lives of the previous-gen consoles.

On a side note, this does mean AMD CPU cores will be less of a bottleneck on AMD's own GPUs when it comes to Mantle-enabled games. I'm not, for example, going to be upset over a free ~20% performance increase on my own HD 7850 / Phenom II 1090T setup.Reply

We all love to see those benchmarks on high-end systems, but how many gamers nowadays sit at home with 8-core CPUs and 1-3 high-end GPUs? Only 1-5% play on those monster machines. Of course, a lot of those are probably reading AnandTech. Anyway...

Let's say the average player has an i5/i7 (2500 to 4770) at around 3.4-4.2GHz and either an AMD Radeon 7870/270X or an Nvidia GTX 670/760 GPU.

Let's say they play BF4. How many play on Ultra settings in multiplayer? Only, I would say, the people who are oblivious to their sluggish performance of ~20-30 fps in a worst-case scenario on a large multiplayer map. Any serious and knowledgeable gamer today tries to keep a minimum of 60fps in a worst-case scenario, and more and more gamers have a 120Hz/144Hz monitor and aim for 100+fps.

So playing on low-high is the only viable option if you want to have good performance on all maps at all times.

And even if all readers here have better systems than that and see only 8% gains because you all play on Ultra... what will happen when you buy a 4K monitor this year or next, with 4x the pixels for the GPU to push? Will you still play on Ultra?

So it all adds up. I think 8-30% is huge, because this will help 90% of the gamers out there; the rest probably already have 100+ fps at Ultra and don't need the boost.

HardOCP did a test on BF4 with a Radeon R9 270X and got an average of 44.7 fps at 1920x1080, 4X MSAA, highest in-game settings. But this was with an Intel 3770K overclocked to 4.6GHz! And the minimum FPS was 26.

Just like a console, which has average hardware compared to high-end PCs, Mantle will benefit the largest portion of all gamers, and in the future you can probably buy a cheaper CPU, put more money toward the GPU, and get even more bang for the buck.Reply

Do you guys still have the MSI GX60 with the AMD A10-5750M and the AMD Radeon HD 7970M? (The 7970M is a Pitcairn part, similar to a downclocked desktop 7870.) If so, it would be a very good candidate to see whether Mantle actually improves the performance of that mismatched laptop and lets it compete more closely with Intel chips paired with the 7970M.Reply

Why use a high-end CPU? Mantle is meant to increase performance for lower-end CPUs, IIRC. If you already have a CPU that fast, Mantle wouldn't do much for you. Put in some quad-core Kaveri-type CPU or an Intel i3 and test it and see. Mantle was meant to benefit low- to mid-end systems, IIRC, not top-end systems, which already have high FPS and full eye candy and are bottlenecked by the GPU.

My understanding of Mantle may not be correct, since I have an Nvidia video card (and thus would get no benefit), so I haven't done a lot of research on Mantle personally. But I would still like to see it reviewed with a lower/mid-end CPU that is actually in more people's computers than an i7 like that.Reply

Excuse me Ryan, but isn't it a mistake to treat the Core i7 in a 2C/4T configuration as equivalent to a Core i3, given that the i7 in 2C/4T still has 15MB of L3 cache while a Core i3 has only 3MB? (Sorry for my bad English; it is not my native language.)Reply

What's causing the DirectX bottlenecks? Shouldn't this effort be put into getting rid of those instead? Or are they inevitable, because Mantle is a lower-level API than Direct3D?

I'm very uncomfortable with Mantle, particularly since AMD STILL needs to get their drivers in order. They're still years (decades?) behind Nvidia, and instead of fixing that, we get this presumably marketing-driven Mantle effort?

Even if some real games support it for a while, what happens when AMD releases a new architecture that doesn't fit Mantle? Do they keep supporting it, at which point it's effectively just another high-level API? Do they keep supporting it but alter it, so games have to support yet another version of it?

I'm feeling like this is a bad idea all around. If AMD had five years of rock-solid driver releases under their belt, that would be one thing, but with micro-stuttering, broken Enduro, generally terrible notebook support, short hardware support windows, etc., while Nvidia actually manages to do Optimus well, SLI well, and the basics besides... get the basics right before you do this.Reply