Is Nvidia’s PhysX causing AMD frame rate problems in Gears of War?

Ever since Gears of War Ultimate Edition came out last week, there’s been a rumor floating around that one reason the game runs so poorly on AMD hardware, with so much stuttering, is that Nvidia’s PhysX is actually running on the CPU. We were alerted to this possibility last Wednesday, so I installed the base game and consulted with Jason Evangelho over at Forbes, who had written the initial article on Gears of War’s low performance, to check performance settings and the like.

Update (3/11/2016): I’m inserting a point of clarification here about PhysX and how it functions. Nvidia historically licensed PhysX in two distinct ways — as a general software middleware solution for handling physics that was always intended to execute on the CPU (software PhysX), and as a GeForce-specific physics solution that added in-game visual effects and was intended to execute on Nvidia GPUs (hardware PhysX).

The problem with this distinction is that hardware PhysX can be executed on the CPU as well. This is a distinct third operating case, best referred to as “Hardware PhysX executing in software.” Some websites have claimed that Gears of War uses this mode by default, therefore harming performance on AMD GPUs. Our results refute this claim.

Original story below:

I used the built-in Windows performance monitoring tool, Perfmon, to grab a screenshot of what CPU utilization looked like within Gears of War when benchmarking at 4K on an AMD Radeon Fury X GPU. I also examined the game’s configuration files in the Windows\Apps folder for PhysX settings. What I found — and I wish I had screenshots of this — was that every single game-related INI file contained the following: “bDisablePhysXHardwareSupport=True”. Since I was testing on an AMD Radeon R9 Fury X, that’s exactly what I wanted to see. I turned the system off and went back to working on other articles. (All tests below were run on a Haswell-E eight-core CPU.)

Data from March 2. PhysX disabled according to INI.
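If you want to repeat the INI check on your own machine, scanning the game’s configuration files for that flag is straightforward. Here’s a rough Python sketch; the WindowsApps path is illustrative (Windows Store titles install to a protected directory whose exact name varies), and reading it may require an elevated prompt:

```python
# Sketch: scan a game's INI files for the PhysX flag discussed above.
# The directory below is illustrative -- adjust it to your own install.
from pathlib import Path

GAME_DIR = Path(r"C:\Program Files\WindowsApps")  # protected on most systems
FLAG = "bDisablePhysXHardwareSupport"

for ini in GAME_DIR.rglob("*.ini"):
    try:
        for line in ini.read_text(errors="ignore").splitlines():
            if FLAG in line:
                print(f"{ini}: {line.strip()}")
    except OSError:
        # WindowsApps is locked down; expect access-denied errors unless
        # you run elevated or take ownership of the folder.
        print(f"{ini}: access denied")
```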

Fast forward to today, when reports are still surfacing of the “bDisablePhysXHardwareSupport” variable being set to False rather than True. I fired the testbed up again, allowed the game to update, checked the same INI files, and found that the value had changed. On Wednesday, five files had defaulted that value to “True,” meaning PhysX should’ve been disabled. On Sunday, the value had changed to “False,” which implies it’s now enabled.

Data from March 6. PhysX enabled according to INI.

If you compare the CPU graphs of False versus True, however, you’ll note they’re more or less the same. Allow for some variation in when the benchmark run started, and you’ve got the same pattern of high spikes and dips. The average value for the disabled/True run was 13.63%; for the enabled/False run, 14.62%.
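Those averages are just the mean of Perfmon’s logged samples. As a minimal sketch of the arithmetic, assuming the “% Processor Time” counter has been exported to CSV (the filename here is hypothetical; Perfmon’s column headers embed the machine name and counter path, hence the substring match):

```python
# Sketch: average a "% Processor Time" counter from a Perfmon CSV export.
import csv

def average_counter(csv_path: str, fragment: str = "% Processor Time") -> float:
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        # Headers look like "\\TESTBED\Processor(_Total)\% Processor Time".
        col = next(name for name in reader.fieldnames if fragment in name)
        # Perfmon leaves blanks for missed samples; skip them.
        samples = [float(row[col]) for row in reader if row[col] and row[col].strip()]
    return sum(samples) / len(samples)

print(f"Average CPU utilization: {average_counter('fury_x_run.csv'):.2f}%")
```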

What about Nvidia? I dropped in a GTX 980 Ti, installed Nvidia’s latest drivers, and ran the same, simple test. I allowed the benchmark to run twice, then grabbed the final CPU utilization result.

Data from March 6. PhysX enabled according to INI.

The average CPU utilization on this graph isn’t much lower, at 11.77%, but the shape of the graph is distinctly different. The GTX 980 Ti’s frame rate is roughly double that of the R9 Fury X (we benchmarked with ambient occlusion disabled, since that mode causes continual rendering errors on the AMD platform), but the CPU utilization doesn’t keep spiking the way it does with the AMD cards.

Smoking gun or poorly optimized game?

It’s true that the INI default setting for Gears of War appears to have changed between the original game and the latest update pushed via the Windows Store. But there’s no evidence that this actually changed anything about how the game performs on AMD cards. Nvidia’s own website acknowledges that Gears of War uses HBAO+, but says nothing about hardware PhysX. Given the age of this version of Unreal Engine 3, it’s possible that this is a variable left over from when Ageia owned the PhysX API; Unreal Engine 3 was the first game engine to feature Ageia support for hardware physics.

Right now, the situation is reminiscent of Arkham Knight. It’s true that Nvidia cards generally outperformed AMD cards in that title when it shipped, but the game itself was so horrendously optimized that the publisher pulled it from sale altogether. As of this writing, there’s no evidence that hardware PhysX is active or related to this problem.

All we have is evidence that the CPU usage pattern for the AMD GPU is different from the NV GPU’s. Since we already know that the game isn’t handling AMD GPUs properly, even with ambient occlusion disabled, we can’t draw much information from that. Our ability to gather more detailed performance data is currently curtailed by limitations of the Windows Store. (None of the game’s configuration files can be altered and saved — at least not using any permission techniques I’m familiar with.)

If you’re an AMD gamer, my advice is to steer clear of Gears of War Ultimate Edition for the time being. There’s no evidence that hardware PhysX is causing this problem, but the game runs unacceptably on Radeon hardware.

Update (3/11/2016):

After we ran with this piece, we realized that while we can’t edit the INI files of a Windows Store application, we can change how PhysX runs via the Nvidia Control Panel. Previously, the application was set to “Default,” which means that if hardware PhysX was enabled, the game would execute that code on the GPU.

We retested the game in this mode and saw essentially identical results to our previous tests. The CPU utilization curve for GeForce cards remains somewhat different from what we see on AMD GPUs, but it’s consistent whether PhysX is forced to run on the GPU or the CPU.

If Gears of War actually used hardware PhysX, CPU utilization would increase when we offloaded that task back onto Intel’s Haswell-E. The fact that we see no difference should put to rest any claim that Gears of War is using PhysX to damage AMD’s performance.

Have you tried looking at Tessellation levels? I know that is one area where AMD has always suffered compared to NVIDIA and that it was brought up when looking at NVIDIA’s hair effects in GameWorks. It seems like the first thing they would do on porting an old game to new hardware would be to dramatically up the poly count.

Butcher

GameWorks sucks. What’s the point of using more than 16x tessellation? To make the boxes look smoother? There’s no reason for Nvidia to do that except to hurt the competition. The worst thing is that people are supporting a company that is killing PC gaming and trying to monopolize the video card market. AMD should sue them for the market share lost to these anti-competitive practices.

Ian Skinner

Have to agree. The last card I bought from Nvidia was the 8800 GTS. I can’t stomach the unsavory behavior of the company: nothing overt most of the time, just low-level, underhanded, consistently poor ethics.

eonvee375

8800 series FTW!

~ 2006

Ian Skinner

Yeah, I know. That was the first time I had a dual-card setup. It pissed me off at the time: I had three monitors, and I couldn’t do SLI in anything other than a single monitor.

Mosab Al-Rawi

As long as you can turn GameWorks on and off in some games, AMD can’t sue Nvidia for anti-competitive practices.

Rex Lajos

You can’t turn it off in most Unreal Engine 4-based games. Also, I would like to see the test done on an old FX-8370 to see what happens.

Mosab Al-Rawi

From a legal perspective, calling it joint development would put the matter to rest.

Butcher

What about GoW? The last patch allows you to turn HBAO+ off, but every time GameWorks is present in a game, it has problems…

John Kiser

Could it be because you are trying to run features that AMD doesn’t support on its cards?

bobby

GameWorks is acting like a cancer.

Joel Hruska

It is difficult to perform that kind of analysis on tessellation — you have to be able to use Visual Studio to gather data on the game while it’s running, and I don’t think that’s possible on Windows Store titles. I could try forcing the game to minimal levels of tessellation within the driver, but I don’t think that’s what’s causing the lengthy drop-outs and stutters.

Tessellation can make a game run more slowly, but it shouldn’t make the engine stutter for 0.5–1.5s. That’s what I see on the Radeon platform.
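For what it’s worth, quantifying those hitches from a frametime log is easy enough. A quick sketch, assuming a plain one-value-per-line dump of frame times in milliseconds (the filename and format are hypothetical; tools like FRAPS log frametimes, though their exact formats differ):

```python
# Sketch: flag stutter events in a frametime log (one value per line, in ms).
# A 60fps frame is ~16.7ms; the 0.5-1.5s hitches described above are
# 500-1500ms, so a 100ms threshold catches them with room to spare.
STUTTER_MS = 100.0

with open("frametimes.csv") as f:  # hypothetical log file
    times = [float(line) for line in f if line.strip()]

stutters = [(i, t) for i, t in enumerate(times) if t > STUTTER_MS]
print(f"{len(stutters)} stutter(s) out of {len(times)} frames")
for frame, ms in stutters:
    print(f"  frame {frame}: {ms:.0f}ms ({ms / 1000:.2f}s hitch)")
```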

Bryan Meyers

That was more in reference to the FPS effects that you saw with HairWorks in The Witcher 3:

Frame rate counters are so overused on every tech site that once you show readers any other type of graph in a gaming-related article, they will instantly assume it must be those pesky FPSes.
Most gamers only know FRAPS and 3DMark, and Perfmon is just a scary-looking tool they might have heard of at some point.

Mosab Al-Rawi

That tells me exactly what I’ve suspected for some time. Microsoft didn’t like what AMD did when they launched Mantle. Mantle itself is nothing new, but AMD used some methods that were jointly developed between Microsoft, AMD, and maybe other partners. These methods utilize a little-used unit found in many GPUs, called async for short, to help the poor APU inside the Xbox One draw some extra performance. And to make it worse, AMD contributed the code for these methods to Vulkan. What harm was done? I don’t know.

Joel Hruska

…..

If I understand you correctly, you think that because MS wasn’t thrilled that AMD launched Mantle separately, it somehow did …. something … to “help the poor APU inside Xbox One draw some extra performance.”

This is not an argument for why Gears of War Ultimate Edition would run poorly on AMD hardware.

Mosab Al-Rawi

Microsoft is punishing AMD.

Joel Hruska

….No, they really aren’t.

Why would MS punish AMD? Because AMD launched an API three *years* ago?

Virtually the same GCN hardware that powers the Xbox One is what’s running in AMD GPUs today.

Mosab Al-Rawi

The async is an old unit; in fact, Fermi GPUs have async units similar to the ones found in GCN, and then Nvidia started developing a similar unit specifically for CUDA. Async was used in GPGPU jobs a long time ago, and Microsoft, to squeeze out all the performance they could get (from the cheapest offer they got), used async, the long-forgotten unit that existed in all dGPUs at that time. Now AMD has used the Xbox One strategy without Microsoft’s permission. What do you expect Microsoft would do?

bytetracer

My dear friend, you don’t have a clue what you are talking about…

Mosab Al-Rawi

I do. But do you?

Joel Hruska

We don’t know what Pascal’s async compute capability is, not yet. But unless you’re talking about HyperQ, there’s no evidence of a CUDA async unit. Games aren’t programmed in CUDA or OpenCL.

Async units?
They can execute async compute with preemption because their GPUs don’t have any hardware for async.

Mosab Al-Rawi

It turns out that Maxwell has something like the old async unit, but specifically developed for the complicated tasks in CUDA.
We all know that Nvidia keeps working to make CUDA more optimized, and the integration between the CUDA language and CUDA cores makes it logical to build a CUDA async unit designed for CUDA’s purposes.

Ok. This is a much older CUDA presentation for much older cards. It’s referring to different ways to perform dedicated compute workloads, mostly with the goal of hiding latencies and keeping the GPU busy. These slides all refer to a dedicated compute environment.

When we talk about asynchronous compute in DX12, we’re talking about a GPU’s ability to execute compute workloads while it’s also rendering graphics workloads. Nvidia’s support for async compute is very different from AMD’s, and more limited as far as what kinds of workloads it can run asynchronously.

Mosab Al-Rawi

If we can agree that async units existed in Fermi, we can move on to the next point. If we can’t, we can discuss this more.

No. Fermi did not have any specific capabilities for asynchronous compute. Kepler and Maxwell support this feature only in a very limited context.

What you are referring to is a compute-specific discussion on launching asynchronous *workloads* within a compute environment.

Asynchronous compute in a DX12 context means: “Compute + Rendering.”

Asynchronous workloads being used to hide memory accesses are an entirely different animal. They’re not related, not the same, not equivalent, not identical, and Fermi does *not* have asynchronous compute units in hardware.

Mosab Al-Rawi

Isn’t “hiding latencies and keeping the GPU busy” just the async job in different words?
The problem is that before the DX12 fiasco, Nvidia used “asynchronous” for everything, from memory fills through loading commands from memory to the GPU to debugging jobs. So yes, there is no mention of asynchronous units by name, but the jobs done asynchronously in GPUs are numerous and sometimes complicated, so maybe Nvidia didn’t call part of their GPUs an asynchronous unit because all their hardware works asynchronously at the hardware and software levels. The only problem here is that the world outside of CUDA programmers didn’t hear the word asynchronous this much, and then AMD wanted to make a big show out of a long word ((asynchronous)). The oldest mention of the word I found was for the GTX 260: https://hashcat.net/forum/thread-383.html

Joel Hruska

I consulted with Ext3h, who programs GPUs and drops in here on occasion.

Imagine that each GPU has a different set of capabilities, like cars driving down a road. Each car has a different number associated with it.

1). A synchronous GPU with serial execution is like a road with one lane. Each vehicle drives down the road, one after the other. One car can *pass* another (that’s called pre-emption), but two cars cannot drive side-by-side.

2). A synchronous GPU with parallel execution is like a road with multiple lanes. Cars can now drive side-by-side, but they still start and finish at the same time.

3). An *asynchronous* GPU with serial execution is like a road with a manager sitting at the starting line who tells each car when to leave. Each vehicle drives at full speed. They never have to pass each other, but they still can’t drive side-by-side.

Finally:

4). An asynchronous GPU with parallel execution is like a road with multiple lanes and the same manager. Each car is sent to drive when it is time for them to go, each drives at full speed, only now the manager can send cars driving along multiple lanes, not just one.

AMD’s GPUs are like the 4th case. Nvidia’s GPUs are like the third case. This includes Fermi and Tesla.

This is what I’ve meant when I’ve said that there are fundamental differences between the AMD and NV GPUs. I hope this clears it all up.
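If it helps to see the analogy in motion, here’s a toy Python sketch, purely illustrative and nothing like real GPU scheduling, contrasting case 1 with case 4. The jobs are just sleeps standing in for workloads:

```python
# Toy illustration of the road analogy above -- not how GPUs actually work.
import time
from concurrent.futures import ThreadPoolExecutor

def job(name: str, seconds: float) -> str:
    time.sleep(seconds)  # stand-in for a graphics or compute workload
    return name

work = [("graphics", 0.4), ("compute_a", 0.2), ("compute_b", 0.2)]

# Case 1: synchronous, serial -- one lane, one car at a time.
start = time.perf_counter()
for name, secs in work:
    job(name, secs)
print(f"sync/serial:    {time.perf_counter() - start:.2f}s")

# Case 4: asynchronous, parallel -- a manager dispatching cars across lanes.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as manager:
    futures = [manager.submit(job, name, secs) for name, secs in work]
    for f in futures:
        f.result()
print(f"async/parallel: {time.perf_counter() - start:.2f}s")
```

The serial version takes the sum of the job times; the parallel one finishes in roughly the time of the longest job. That overlap is the whole difference the analogy is getting at.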

Mosab Al-Rawi

Your friend is great at GPU programming but needs some work on his explanation methods.
I think 1 & 2 are without asynchronous.
Second, he needs to switch the cars to trucks, with each group of trucks carrying a specific type of cargo. AMD can mix trucks with various types of cargo (D3D, OpenGL, OpenCL, etc.), though of course no D3D + OpenGL, while in Nvidia’s Fermi case you can only send batches of trucks with similar cargo each time, so no mixing.

Joel Hruska

I gave you four examples of different types of execution to illustrate asynchronous vs. synchronous as well as asynchronous parallel vs. serial.

Mosab Al-Rawi

And I think they were among the best examples of how parallel workflows operate under the four major types of async.
It was difficult for me to explain these theories to my people who work with big machines (not computers); now I hope it will be easier for them to understand. I hope you won’t ask for royalty fees :)

Jigar

LOL, what are you talking about…

pohzzer

AMD enabled Vulkan, a direct competitor to DX12 that is likely to eclipse DX12 in the future.

It’s ridiculous to think an AMD-unplayable GoW Ultimate was somehow made available in the Microsoft Store by ‘accident’. It’s improbable to the point of IMPOSSIBLE that the game wasn’t tested on AMD boards by the developer AND Microsoft before it was released and that its unplayability wasn’t known.

It was done deliberately and it cast a bad light on AMD cards.

It is what it is, and it stinks to high heaven.

Joel Hruska

AMD also enabled asynchronous compute, a feature the Xbox One takes advantage of, and AMD is a key partner with MS on DX12.

“It’s ridiculous to think an AMD unplayable GOW Ultimate was somehow made available in the Microsoft store by ‘accident’. ”

Is it? Is it really?

Windows 10’s Store is a vital component of the Windows 10 ecosystem, yet Microsoft continues to demonstrate an utterly terrible lack of quality and curation. This problem is four years old and dates back to Windows 8. It has improved only modestly since then.

The Xbox One was a vital component of Microsoft’s effort to compete in next-gen consoles, yet the company’s initial unveil and subsequent E3 were total disasters. It took Redmond months to backtrack and completely rewrite the book on what the Xbox One would look like.

Kinect 2 was a boat anchor that nobody wanted, yet Microsoft insisted it *would* be the linchpin of the Xbox One experience. It took Redmond almost a year to acknowledge that nobody wanted the damned thing, and to stop requiring people to buy it. People could’ve told them that while the Xbox One was still in the design phase. How many sales did Microsoft lose to Sony as a result of spending such a huge chunk of its development budget on a meaningless peripheral?

Critics argued that Microsoft’s handset efforts were failing, that the Nokia purchase was a bad idea, and that its OS was hemorrhaging market share. Microsoft ended up writing down the value of Nokia and Windows Mobile is on life support.

Finally:

When Microsoft announced that it would bring Gears of War to the present day, it did not commit to an updated engine or reworked design that would make this practical. Instead, it decided to stuff DX12 into a DX9 version of Unreal Engine 3. That’s a horrific kludge. It’s exactly the opposite of what you’d actually want to do — unless you want a bullet point on a check list about how you now have a DX12 game ready to go.

I have spoken to AMD about the Gears of War situation. I have also talked to other sources with some knowledge of these topics. You are, of course, free to draw your own conclusions. But when I look at Microsoft’s overall performance these days, I see nothing that makes such shoddy work an unusual state of affairs.

Mosab Al-Rawi

OK, so the problem is with AMD’s cards only.
You can scroll through the DX12 features that Microsoft added, and you will find that none is DX12-specific and some are DX12.1. For the first time in history, two versions of DirectX launched on the same day (DX12 and DX12.1).

pohzzer

The debacles you delineated occurred or originated under Ballmer’s tender ministrations. There has manifestly been a sea change for the better under Nadella’s guidance. It’s factually erroneous to mash the two together.

I maintain it’s implausible in the extreme that the state of playability of this game slipped under the radar. It’s far more plausible that the fact AMD cards were severely crippled was known and the game was released anyway. I also consider it highly implausible that it was an ‘accident’ the end result cast AMD in a very bad DX12 light.

If Pascal is in fact, as rumored, lacking async hardware, and Pascal cards cannot compete with Polaris on price/performance in DX12 games, and by extension in VR games, it’s totally plausible JHH will throw whatever scruples he might still retain into the gutter, break out the brass knuckles and lead pipes, and get to work.

He is, after all, looking at a probable explosion of Zen CPU sales followed by very potent APUs that will finally have the capability, recently verified in AMD slides, of fully additive APU and dGPU graphics. Add in AMD dominating price/performance across the market and Nvidia GPU sales will be in serious trouble.

What better opening move than to position AMD as unplayable on the FIRST AAA ‘DX12’ game to appear in the Microsoft store?

Joel Hruska

Pohzzer,

I have reason to believe that this is not the case in this title. I’m not claiming that NV doesn’t play hardball in other arenas or that your scenario isn’t accurate in other contexts. But I don’t think it applies here, for reasons I’m not at liberty to disclose beyond what I’ve already said.

pohzzer

Understood.

Jigar

LOL… Tinfoil hat on

Bob

Well, if Microsoft took the game from the Xbox One and put it on Windows, it should run perfectly on AMD hardware.

The difference?

GameWorks is integrated into the PC version; it really is that simple. There are no architectural differences here, just gimped middleware designed to bring the competition down.

Mosab Al-Rawi

Joel Hruska worked hard on this article. The least we can do is read it carefully before commenting.

Bob

He is a journalist and can’t spout articles without evidence to back the claims up.

But we, the community, can fairly conclude that it is obviously GameWorks causing this issue. And even if we assume it doesn’t directly correlate,

PhysX is an issue because it won’t turn off, running only on the CPU, and can only be offloaded to Nvidia GPUs, not AMD (even though AMD’s would be better at doing it). It’s fairly easy to conclude that any PhysX-related software AMD handles on the Xbox works fine.

To conclude: it is Nvidia, just like Batman before it.

Mosab Al-Rawi

Please read the article again and you will notice that the opposite of your conclusions has been demonstrated, with evidence.

Rex Lajos

It’s not just this game; many games have been tainted. This article is inconclusive.

Bob

It doesn’t mention that Nvidia deliberately put all PhysX on the CPU for AMD hardware,
and
will gladly hurt their own hardware by 10% to ruin AMD’s completely.

Mosab Al-Rawi

The maximum CPU usage is less than 15% in any scenario, and that isn’t something to worry about.

godrilla

Nvidia will always accept a performance hit for its own users as long as the competition fares even worse.

Joel Hruska

Look, I literally wrote the original article on GameWorks that kick-started a lot of these investigations. But there’s factual analysis and then there’s baseless accusation.

As a reader has already pointed out, “bDisablePhysXHardwareSupport” is a standard variable included in the UE3 engine. There’s no evidence that enabling PhysX does *anything* in Gears of War. I’m not even certain that GoW *uses* PhysX or, if it does, that it uses the hardware version. Nvidia licenses PhysX as a software-only middleware SDK for use on multiple gaming platforms, and the GoW team made a great deal of noise about sticking to the original source code.

Right now, that’s *still* the overwhelmingly likely reason for this problem. Yes, the game uses GameWorks for ambient occlusion, and yes, HBAO+ by NV runs more slowly on AMD cards, but there’s a difference between “runs more slowly” and “results in an unplayable game.”

Right now, I’m standing by what I said last week: This is a terrible port, period.

pohzzer

The real point is it was released in a functionally unplayable condition on AMD cards, and frankly, it’s utterly nonsensical to posit this game wasn’t tested/played on high-end AMD cards before release and to assume the playability issue wasn’t known to both the developer and Microsoft. It’s grasping at non-existent straws to posit otherwise. ANY game, much less an important Microsoft DX12 game, will be tested and played on various AMD and Nvidia cards before release. PERIOD.

Patrick Proctor

And games continually ship broken, period.

Kulikov Ivan

“I’m not even certain that GoW *uses* PhysX”
It does. Only low-level physics calculations, like the character controller or triggers, use custom code written by Epic.
Anything more advanced (ragdolls, vehicle physics, rigid body physics, destruction) uses software PhysX.

There were a few exceptions, like Wheelman or Stranglehold, which featured Havok integration, but pretty much every UE3 game sticks to the default PhysX engine.

Also, some titles with only basic physics (DmC, for example) seem not to use PhysX code at all.

UE4, by the way, has much deeper PhysX 3 integration (though purely software): all the low-level collision detection relies on the PhysX SDK. So every UE4 game uses PhysX one way or another.

spoffle

How can it be a terrible port, when games aren’t ported?!

Jigar

Dude, let’s not do this again. You are a delusional Nvidia fanboy with no knowledge of the technicalities. I do appreciate the effort, though, but it’s getting long in the tooth.

blair houghton

Have you tried running it on an nVidia card?

Joel Hruska

Yes. That’s why I discuss the performance of NV cards.

Kulikov Ivan

“bDisablePhysXHardwareSupport” is a standart variable for any UE3 game, as PhysX SDK is used in this engine as default physics solution.
If the game does not have extra physics effects (like the particles in Borderlands), and Gears of War clearly does not, it should not affect framerate at all.

spoffle

Standard, not standart

godrilla

Even as an Nvidia GPU owner myself (980 Ti AMP Extreme), I would never purchase this junk or any poor port that follows these practices!

They’re using DX12 features for promotional purposes instead of bringing something beneficial to the table.

Ninja Squirrel

PhysX hurts AMD so much in DX11 games because AMD’s DX11 driver has more CPU overhead than Nvidia’s. I hope it won’t hurt AMD as much in DX12.

I guess the developer and Microsoft should be held responsible for the failure of this game. It runs poorly on both Nvidia and AMD graphics cards.

As fans of the PC Master Race, we should unite and fight against a greedy Microsoft for the future of the PC Master Race.

Tim said all that without once mentioning GameWorks (hilarious… NOT). It’s just MS trying to capitalize on Nvidia GameWorks, something Nvidia started.

nVidia: “Do as I say, not as I do”

JamJams

It would depend on the development platform. Some games/software are optimized for Nvidia, others for AMD. Such is the case with BFBC2’s load times being reduced on AMD GPUs and Tomb Raider running more smoothly, versus other games optimized for the Nvidia platform. With the Khronos Group’s Vulkan drivers, “close to the metal” APIs like DirectX 12, and the open-source development from GPUOpen, I think you’ll start to see more use of C++ AMP and heterogeneous processing.
The software is finally about to catch up to the hardware, so I think some exciting things are to come from both companies.

Natural Gamer

Anyone with an Nvidia card claiming HairWorks runs flawlessly is either an idiot or simply a liar. My 980 Ti drops to 50fps in intense areas at 1920×1080. Without HairWorks, I get a stable 60fps throughout the game.

Phobos

So what are the chances they are going to fix that? None? Then AMD users are screwed again.

onstrike112

Does Nvidia GameWorks hurt AMD performance? Yes! Is it a poorly optimized system? Yes! Do the games that use it use bad code? Yes!

Carl

I read elsewhere that if a GameWorks-optimized game detects an AMD GPU, it essentially offloads the PhysX software to run on the CPU and thus gimps the Radeons, because PhysX can NOT be turned off, hurting CPU performance. That may be why so many Digital Foundry articles have stated that AMD GPUs require quad-core CPUs to run properly when an Nvidia equivalent does well with just a dual-core CPU.

Dickson

If that’s the case, then there’s very little evidence of PhysX causing the game to run badly on the Radeon card in this article. CPU utilisation is low on the Haswell-E 8-core CPU with both AMD and Nvidia cards, yet the game runs dreadfully on both, so it looks more likely that the game just had really, really bad quality control. A crap port, more than anything else.

“You’ll also note that we’ve actually benchmarked both GPUs twice and offered both minimum and average frame-rate metrics. The standard way of benchmarking graphics cards is to pair them with an overclocked Core i7 in order to isolate pure GPU performance. However, this ignores that entry-level cards are more likely to be paired with less capable processors, so we’ve included Core i3 benchmarks too. You’ll see from the minimum frame-rates that the GTX 750 Ti holds more of its performance than the R7 360, and that’s because AMD’s driver actually consumes quite a lot more CPU power than its Nvidia equivalent – something to be aware of at the budget end of the market.”

0_freaks_0

As far as I’m aware, it depends on how the game was programmed whether PhysX runs on the GPU or the CPU. Take, for instance, The Witcher 3, whose PhysX engine is CPU-based by default (I tested it), even for Nvidia users who haven’t changed anything in the Control Panel.

Joel Hruska

There are two varieties of PhysX. Software PhysX is a solution that runs entirely on the CPU, no matter what GPU you own. XCOM from 2012 uses software PhysX, and some other games did as well. In this mode, PhysX competed with Havok or Bullet as a physics engine.

Hardware PhysX means code written to run on an NV GPU. Confusingly, hardware PhysX can be run on a CPU. If you do this, you are running hardware PhysX in software — which is different from running software PhysX.

hargs sgrah

Important, yet slightly annoying to hear. xO

Do you know of any reason why hardware PhysX would need to run on the CPU?

Carl

I didn’t know all of this, but I still think forcing hardware PhysX to run on the CPU (when an AMD GPU is detected, no less) is a bad decision. Obviously the blame is on the developer side for implementing a hardware vendor’s software as mandatory in the game engine, no matter what hardware you have. Instead of PhysX, they should just implement Havok, which is hardware agnostic, or some other third-party physics engine.

Joel Hruska

There’s no evidence that hardware PhysX is running in this title, though. UE3 was capable of integrating multiple physics middleware SDKs.

No game, to the best of my knowledge, *ever* required hardware PhysX to run. Hardware PhysX was used to add debris or rippling cloth, for example. It was never required. Nvidia licensed the PhysX SDK as a software solution in multiple titles, but when used in this mode it gained no special advantage from being run on an NV GPU and was simply a competitor to other physics SDKs like Havok or Bullet.

@joelhruska Are you sure that PhysX was actually turned off by the INI file? I ask because I’ve seen other reports of the INI file being reset back to its default state once you launch the game.

Joel Hruska

Michael,

Yes. I ran the benchmark multiple times on Wednesday, March 2, and again on Sunday. I checked the initial state of the file only after installing and running the test, since I had to find out where the files lived and wanted to get a base-state reading before I went mucking around in that part of the OS installation.

Michael Barton

Gotcha, I was just curious. I remember reading an article elsewhere that suggested the file would be restored from the cloud once the game ran again. Since I don’t own the game, I was interested in other people’s findings.
