AMD: Nvidia PhysX Will Be Irrelevant.

AMD does not consider the lack of PhysX support on systems that feature ATI Radeon graphics processing units (GPUs) a problem. In the end, says AMD, the PhysX application programming interface (API) will simply become irrelevant.

Advanced Micro Devices, the world’s second largest developer of x86 central processing units (CPUs) and a leading designer of graphics processing units, said that it hardly regrets Nvidia Corp.’s decision to disable support for hardware physics processing via the PhysX API on GeForce GPUs or Ageia PhysX PPUs in systems where an ATI Radeon graphics card is used for rendering. In fact, AMD believes that with the rise in popularity of the DirectCompute and OpenCL APIs, the proprietary PhysX will soon vanish into oblivion.

“Physics can be a good thing for gamers, but it should be for all gamers. When it’s available for everyone, game developers will be able to make physics an integral part of gameplay, rather than just extra eye candy. This requires a physics solution built on industry standards. That’s why DirectX 11 is such a great inflection point for our industry – DirectCompute allows game physics that can be enjoyed by everyone. There are several initiatives (some open-source) that will deliver awesome GPU-based physics for everyone, using either DirectCompute or OpenCL. Industry standards will make any proprietary standard irrelevant,” said Neal Robison, director of global independent software vendor relationships at AMD, in an interview with the Icrontic website.

In addition, the AMD representative accused Nvidia of not acting in gamers’ best interests: disabling both GPU- and PPU-based hardware PhysX processing does nothing to help end users.

“There’s a real discrepancy between what Nvidia says, and what they do. They “say” that they are looking out for gamers’ best interests. However, decisions like this are the exact opposite of gamers’ best interests,” added Mr. Robison.

Discussion

The story:
NV bought PhysX when no other hardware physics API existed, so everyone was using it. Now, with DX11, there's a viable alternative. Soon PhysX will die, even if opened up. So in the meantime, NV is being an ass about this by not sharing: they're trying to squeeze out whatever marketing value PhysX has left to gain as many customers as possible until time's up.

What should've been:
The best-case scenario for them would have been to lobby for PhysX to be integrated into DX11, giving them the early advantage they needed and a possible leg up in understanding how it worked, all without alienating their (potential) customers.

Consumer result:
I have an 8800GT here, and my Radeon 5870 arrived today. After reading this article, do you think I'm going to buy Nvidia next time?


DX11 does not provide a hardware physics library - in bold just to make this very obvious!
DirectCompute isn't even exclusive to DX11 - it's a competitor to OpenCL and CUDA, and it's available on DX10 cards too. A hardware physics library could be built with it, but at the moment I don't think anyone is building one.

If you want hardware physics, currently the only option is PhysX.

Meanwhile, one other company is developing the "Bullet" physics library on top of OpenCL. Since OpenCL works on both ATI and Nvidia hardware, this will provide a PhysX competitor. However, it is going to take years to take off because:

1) PhysX is available on both consoles (software only), so it was incorporated into the multi-platform engines. Hence, if you want hardware physics in your PC port of some UE3 game, you will use PhysX, because that is what the engine supports. That's not going to change until the next round of consoles - software PhysX on the consoles is cheap and works well, and developers aren't going to switch to software Bullet just for better PC hardware-physics coverage.

2) The only reason we see hardware physics in games at all is that Nvidia is leveraging TWIMTBP to get it put into games. Without Nvidia's backing there would be no hardware physics. Obviously Nvidia isn't going to spend time and money convincing devs to use hardware Bullet physics when they already have PhysX.

ATI talks the talk but isn't willing to actually do anything - they would have to work closely with the Bullet project and push hard to get it adopted. The only people currently working with Bullet are Nvidia (their website says it was developed using Nvidia hardware).

It seems that all of ATI's games-relations team is busy trying to push some DX11 into games; everything they say regarding hardware physics is just a load of FUD at the moment.

(This is not to say that I approve of Nvidia disabling PhysX when an ATI card is detected - a very bad move. Fortunately, it has already been hacked to work with the latest drivers.)


Hardware physics has failed to impress us with any aesthetic display of physics processing power that wasn't already achieved in games years earlier, such as Half-Life 2 or Crysis. Whatever we see being done with PhysX could be done just as well using Havok or similar engines, such as Infernal, which runs physics on more than one CPU core. Nearly all four cores of a quad-core CPU were fully utilized for the impressive "tornado" physics in the Ghostbusters game.

When I played Mirror's Edge with PhysX enabled, it pained me to see how much more impressive the effects would have been with Havok, even a version from the 2004-2005 timeframe.

The mere fact that it is proprietary could mean it will go the way of the dodo, just like 3dfx's Glide API. Glide did the job well for a while, but it still failed to impress after a couple of years, stuck with 16-bit color and the like, while OpenGL and DirectX were widely supported and rapidly improving.

Software PhysX on the consoles does not mean that the PC ports will utilize it. Mass Effect, for instance, has an option in its .ini file to change PhysX support from the default OFF to ON, and it makes no difference whatsoever to either visuals or performance, even with a dedicated PhysX card and the latest drivers.

Also, PhysX causes stutter even at 40+ fps in more PhysX-supported games than not. There have been numerous complaints across the internet about how unimpressive PhysX is compared to games of 4-5 years ago. With the number of CPU cores rising and waiting to be utilized, most of those cores sit idle, while the GPU is already used to the fullest for graphics.

With Intel so set on competing with ATI and Nvidia, and highly unlikely to start paying Nvidia astronomical royalty fees for PhysX, future iterations of Larrabee will give us one more reason to accept that PhysX (an API that refuses to utilize multiple CPU cores) will go the way of Glide.