We all know that driver-specific and per-game optimization happens at all major GPU vendors, including AMD and NVIDIA, but also Intel and even mobile SoC vendors. Working with game developers and tweaking your own driver is common practice that helps deliver the best possible gaming experience to your customers.

During the launch of the Radeon Vega graphics cards, AMD discussed with the media an initiative to lower input latency for some key, highly sensitive titles, mostly the likes of Counter-Strike: GO, DOTA 2, and League of Legends. The team targeted very specific use cases, low-hanging fruit the engineers had recognized could improve gameplay. This included better management of buffers and timing windows to decrease the time from input to display, but it could only address a very specific selection of games and situations.

And while AMD continues to tout its dedication to day-zero driver releases and having an optimized gaming experience for Radeon users on the day of release of a new major title, AMD apparently saw fit to focus a portion of its team on another specific project, this time addressing what it called “the best possible eSports experience.”

So Project ReSX (Radeon eSports Experience) was born. Its goal was to optimize performance in some of the “most popular” PC games for Radeon GPUs. The effort included driver-level fixes, tweaks, and optimizations, as well as direct interaction with the game developers themselves. Depending on the level of involvement a developer would accept, AMD would either help optimize the engine and game code from its own offices or send AMD engineering talent to work with the developer on-site for some undisclosed period of time to help address performance concerns.

In PUBG, for example, AMD is seeing an 11% improvement in average frame rate and a 9% improvement in 99th percentile frame time, an indicator of smoothness. Overwatch and DOTA 2 are included as well, though the numbers are a bit lower at 3% and 6%, respectively, in terms of average frame rate. AMD claims that its “click to response” measurement (tested with high-speed cameras) was as much as 8% faster in DOTA 2.
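For readers unfamiliar with the 99th percentile metric: it summarizes the slowest 1% of frames, so a lower value means fewer stutters even when the average frame rate looks fine. Here is a minimal sketch of how both numbers are typically derived from a per-frame capture; the frame-time data below is made up for illustration and is not AMD's.

```python
# Derive average frame rate and 99th-percentile frame time from a
# capture of per-frame render times in milliseconds.
def summarize(frame_times_ms):
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    ordered = sorted(frame_times_ms)
    # 99th percentile: 99% of frames rendered at least this fast
    # (lower is smoother).
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    p99_ms = ordered[idx]
    return avg_fps, p99_ms

# Mostly 60 FPS (16.7 ms) frames with a few stutters mixed in.
frame_times = [16.7] * 97 + [25.0, 30.0, 33.3]
avg, p99 = summarize(frame_times)
```

Note how the handful of slow frames barely moves the average but dominates the 99th percentile, which is why reviewers treat the two numbers separately.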

This is great news for Radeon owners, and not just RX 580 customers. AMD’s Scott Wasson told me that, if anything, the gaps may widen with the Radeon Vega lineup, but that AMD wanted to start where its graphics card lineup struggled most with this class of game. PLAYERUNKNOWN’S BATTLEGROUNDS is known to be a highly unoptimized game, and seeing work from AMD at both the driver and developer relations level is fantastic.

However, there are a couple of other things to keep in mind. These increases in performance are measured against the 17.12.1 release, the first Adrenalin driver, launched in December of last year. Several drivers have been released between then and now, so we have likely seen SOME of this increase along the way.

Also, while this initiative and project put AMD on the right track, the company isn’t committing to any future releases along these lines. To me, giving this release and direction a marketing name and calling it a “project” indicates that there is, or will be, continued work on this front: key optimizations and developer work for very popular titles even after the initial launch window. All I was told today was that “there may be” more coming down the pipeline, but AMD had nothing to announce at this time. Hmph.

Also note that NVIDIA hasn’t been sitting idle during this time. The last email I received from NVIDIA’s driver team indicates that driver 391.01 offers “performance improvements in PlayerUnknown’s Battlegrounds (PUBG), which exhibits performance improvements up to 7%.” The website even lists a specific table with performance uplifts:

While I am very happy to see AMD keeping its continued software promise for further development and optimization for current customers going strong, it simply HAS TO if it wants to keep pace with the efforts of the competition.

You forgot to mention the driver the uplift is measured from with the Nvidia drivers?

The website doesn't mention it. Just says,

In addition to delivering the best day-1 experiences possible in the latest releases, our Game Ready driver program continues to optimize and improve the games you care about long after their launch. In today's driver, we're delivering performance improvements of up to 7% in PlayerUnknown's Battlegrounds, on our complete range of GeForce GTX 10-Series graphics cards, giving you an even-faster, even-smoother experience in the world's most popular action game.

If they meant the day-1 driver, that is the same time frame as the AMD 17.12.1 being used as the comparison point for their 11%.

“While I am very happy to see AMD keeping its continued software promise for further development and optimization for current customers going strong, it simply HAS TO if it wants to keep pace with the efforts of the competition.”

It looks like you're comparing development efforts, and isn't 11% higher than 7%?

Also, in the comparison AMD points to Ultra settings, while Nvidia doesn't and references their PUBG graph, which is at HIGH settings.

Not sure how one comes to such a conclusion with the information provided in the article and links.

No, read between the lines. He is saying AMD is known for shit drivers and even shittier fixes, but it looks like they are finally trying to fix the shit storm they created over the years. Oh, and by the way, Nvidia has been bragging about driver optimization for years and also has a similar graph to AMD's, if you are new to the game and were wondering about the competition.

Would you rather they not even post news about your beloved AMD? I know you guys like to support the underdog, so why don't you go find another hobby, like bragging about Walmart and how it's going to take down Amazon?

This just means that AMD now has the funds to invest in better drivers, and it's not like AMD currently has enough of the total gaming GPU market share to justify investing even more in gaming. Nvidia still has more money to do more base die tapeouts (GP100, GP102, GP104, GP106, GP108), but maybe AMD can now afford two or more base die tapeouts instead of the one it started with (Vega 10), which had to do double duty as a base die for compute/AI (Radeon Pro WX 9100 and Instinct MI25) and as a base die for gaming (Vega 56/64).

That one Vega 10 base die tapeout, the only one AMD could afford at the time, maxed out at 64 ROPs and was really only able to compete with Nvidia's GP104 base die, which also maxed out at 64 ROPs in the GTX 1080. AMD had already frozen the Vega 10 design (to compete with GP104) before Nvidia even pulled out its GP102 tapeout, with 96 ROPs in total, for the GP102-based GTX 1080 Ti (88 of 96 ROPs enabled). The Ti can really push the gigapixel fill rates that Vega 10 could never match.

The Vega 10-based Vega 64 is not too bad in gaming compared to its targeted rival, the GTX 1080, and AMD's shader-heavy Vega 10 tapeout carried an excess of shaders that proved even more popular on the compute side. That is what AMD/Raja/RTG purposely designed the Vega 10 base die for, since it was needed for the Radeon Pro WX 9100 and Radeon Instinct MI25 AI/inferencing SKUs.

How fortunate for AMD that the Vega 10 base die, the only tapeout AMD could afford at the time, also proved so popular with miners that Vega 64 (based on that tapeout) now sells for more than even the GTX 1080 Ti. And gamers on both the Red and Green teams need to get over themselves, because the compute/AI markets are willing to pay much better markups than any gamers ever could. Nvidia needs to watch out for an AMD that can afford more base die tapeouts next time around, because the Vega GPU micro-arch is not that bad compared to Pascal.

And you cannot judge the Vega GPU micro-arch for gaming on that single Vega 10 base die tapeout that AMD could barely afford. If AMD decides to tape out a gaming-oriented Vega die with more than 64 ROPs available, then that's a different game for sure. AMD would only need a Vega die with slightly fewer shaders (to use less power) and a few more ROPs to match the GTX 1080 Ti in pixel fill rate (GPixels/s).
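The fill-rate claim above is simple arithmetic: peak pixel fill rate is the ROP count times the clock speed. A quick sketch, using approximate reference boost clocks for illustration rather than official spec-sheet figures:

```python
# Peak pixel fill rate in GPixels/s = ROPs x boost clock (GHz).
# Clocks below are approximate reference boost clocks (assumption).
def peak_fill_rate(rops, boost_clock_ghz):
    return rops * boost_clock_ghz

vega_64 = peak_fill_rate(64, 1.55)      # ~99 GP/s
gtx_1080_ti = peak_fill_rate(88, 1.58)  # ~139 GP/s
```

By this arithmetic, a hypothetical 96-ROP Vega at the same clock would land near 149 GP/s, which is exactly the gap the comment is pointing at.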

Nvidia defeats AMD not with better GPU technology but with money, because it takes money to tape out five different base dies (GP100 through GP108) the way Nvidia does. Those five tapeouts come with varying numbers of available ROPs, and ROPs are why Nvidia wins the FPS gaming metrics contest every time. Watch out for Navi if it is in fact made up of modular dies like AMD's Zen/Zeppelin scalable die SKUs, because then AMD will be free to scale up with one or two modular base die tapeouts and compete with Nvidia without having to do five different monolithic tapeouts each GPU generation.

AMD could get by with one ROP-heavy, lower shader count Navi modular/scalable GPU die for gaming-oriented builds and one Navi modular die for compute/AI builds, and scale those like it does the Zen/Zeppelin modular die. Both Zen and Vega already make use of the Infinity Fabric, and on Vega the Infinity Fabric is not tied to the memory clock domain or any other clock domain, only its own Infinity Fabric clock domain.

The Vega GPU micro-arch still has a few features that are not widely used in gaming: Rapid Packed Math (FP16) and explicit primitive shaders. AMD was unable to get implicit primitive shaders working (a complex coding task) so legacy games could take advantage of them, but game makers will definitely be making use of AMD's explicit code path to primitive shaders. So that and FP16 will see more use from game makers looking to improve their games' performance on any Vega-based GPU. Mobile/desktop integrated Vega is already here on APUs and will be arriving shortly on discrete mobile Vega SKUs with 4GB of HBM2, and game makers will be targeting that discrete mobile Vega using Vega's FP16 packed math, explicit primitive shaders, and HBCC/HBC (HBM2) IP. So that's not so bad for AMD, as it gets more revenue from GPU sales wherever they land, plus Epyc sales in the server markets (AMD will make more on Epyc than on any consumer gaming GPU).
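On the Rapid Packed Math point: each of Vega's FP32 ALUs can execute two packed FP16 operations per cycle, which doubles peak half-precision throughput. A back-of-the-envelope sketch using Vega 64's approximate reference figures (4096 shaders, ~1.55 GHz boost, both assumptions for illustration):

```python
# Peak throughput in TFLOPS = shaders x clock (GHz) x ops per clock / 1000.
# An FMA counts as 2 ops per clock; packed FP16 doubles that to 4.
def peak_tflops(shaders, clock_ghz, ops_per_clock):
    return shaders * clock_ghz * ops_per_clock / 1000.0

fp32 = peak_tflops(4096, 1.55, 2)  # ~12.7 TFLOPS single precision
fp16 = peak_tflops(4096, 1.55, 4)  # ~25.4 TFLOPS with packed math
```

The doubling only materializes when a game's shaders are written to use FP16 where full precision isn't needed, which is why developer buy-in matters so much here.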

You probably deserve an achievement if you read that thing, that monstrous WALL OF TEXT. Because reading is so hard. Let us all go back to playing COD on our glorious consoles; no reading or comprehension required.

No. The tone was intended to be sarcastic, and the reply was aimed at the person who referred to the post as a "wall of text". The comment section is at fault here; it looks like I replied to the person who actually read the text.

You cannot wrap that single enfeebled brain cell, floating in that sea of lipids beneath that thick crust of bone, around what was stated in that post, so you must direct your anger at the one who posted the "Wall-O-Text". You do not really have the ability to reason, and you have very little idea of how the real world works; engineers and software engineers are not low-paid positions in any technology-based business/market. And that "Wall-O-Text" is not really a wall of text, as there appear to be some breaks in it, yet it angered you nonetheless, with you being so much your usual self, in that state that can be defined as mostly endless partisan rage.

You cannot even work up a single cogent reply to what was stated in that post, which angered you so much that you actually could be bothered to reply to it. You are, as they say in the vernacular of the interwebs, very TRIGGERED by that supposed "Wall-O-Text". That little child inside of you that still has control over your actions stomps its feet and gnashes its teeth in a fit of rage.
Stomp stomp stomp, gnash gnash gnash! And be careful not to clench your lower sphincter so tightly, as the resulting compressional forces may just cause the nuclear forces in the nuclei of those sphincter atoms to be overcome, resulting in a rather large release of energy and widespread devastation out to a radius of many miles!

Just on my phone here, but you're 100% correct. With a modular design for their dies, AMD can lower the number of failed dies per wafer, which reduces per-die cost and increases margins. It also lets them fit more dies per wafer, since the smaller dies don't leave so much wasted space around the edges. In addition, with new Intel CPUs now shipping with Vega GPUs, they'll need better drivers for many of the major titles.

Actually, that's what Ryan is doing. The article is based on a driver press release, and Ryan felt the need to counter it with a competitor's driver press release. Have you heard of anyone ever doing that?

Indeed, but you've taken those figures as gospel, and by the looks of it you are attempting to imply that the fault lies in the reporting of those figures, when it's fairly clear that's not the intention of the article.

In case it needs spelling out, the intention of the article, from what I can tell, is to highlight that AMD is paying more attention to optimising its drivers and has seen decent uplifts in performance, but it can't rest on its laurels, as its main competitor is doing the same and has been doing so for quite some time.

APUs (Zen/Vega) are where AMD will get the majority of its gaming graphics market share. And AMD needs to get all of its driver ducks in a row for its APUs, and even for its discrete mobile Vega GPU variants that come with 4GB of HBM2. The desktop flagship contest can wait, as that's pretty much in Nvidia's hands until AMD can field some desktop GPU designs with loads of ROPs to pump up the pixel fill rates and please the flagship fiends, who need their FPS bragging rights even more than they need to actually game.

The mini desktop form factor is very popular, and AMD needs to focus on getting its APUs into systems similar to Intel's NUC series. Even Valve's Steam Machines probably should have waited until AMD had its Raven Ridge desktop APUs ready; that's where Valve should double down with a new line of Zen/Vega RR-based Steam Machines going forward, what with mining still taking most of AMD's higher-end desktop GPU SKUs lately.

Lots of folks are purchasing Nintendo's Switch, and there is enough interest in Vega for some handheld RR APU-based SKU that may or may not make it into production. So AMD needs to shepherd its own mini/micro form factor AM4 motherboard standard among its MB partners and take that to the home-built mini-desktop/HTPC market, gaining a larger share in graphics via Zen/x86 APUs with Vega graphics, including games development that can make use of rapid packed math, explicit primitive shaders, and those "eSports" sorts of gaming titles that millions spend billions of hours playing.