Comment

Say what you want, but you have no representative site showing that most Linux users (not only geeks) use Nvidia GPUs, and even if you take those Smolt results: only 0.7% of users in that statistic use Nvidia GPUs, while 0.9% use Intel or AMD solutions, so even ignoring the other entries, Nvidia has less than 50% market share. (And within that share there are surely also a few Nvidia IGPs or older cards that don't support PhysX.)

Nobody ever said they were the majority shareholder. They are, however, the largest piece of the pie. The sites that have stats on the subject agree with each other. The burden of proving otherwise is on you; until then, your evaluation of the subject is backed by unfounded speculation with no supporting evidence.

Comment

Also, you can look at the Steam hardware survey. It represents gamers, of course, but that is exactly who PhysX is for, so the stats are relevant. Of all that hardware, the vast majority of those Nvidia cards support Linux. Let's face it: even on Linux, people do not go out and buy an Intel IGP for gaming.

PhysX isn't used for OpenSSL, it isn't used for web surfing, it isn't used for email, and so on. It's used for gaming, so you have to look at the hardware that gamers use. Target product for a target audience.

Comment

The problem with physics on a graphics card is two-fold:
a) the graphics card must also do graphics (funny, that);
b) it's really only useful for eye candy. Actual physics that affect the game suffer from a performance bottleneck when reading results back from the graphics card. This was one reason games would often run slower when "hardware accelerated" physics was enabled (back when Ageia owned it).
Bullet and Havok are more than capable of being used instead of PhysX - naturally I'd recommend Bullet due to its open-source nature.

Comment

One thing you have to remember, though: as a scene gets more complex, with multiple effects, you quickly start running out of threads on a CPU. The developers of Trials HD, for example, spent many hours of tweaking with Bullet just to get acceptable gameplay out of the Xbox 360's six hardware threads. A good place to see where even Bullet slows down is using it with one of the many 3D rendering apps out there; it bogs a system down quite handily.

Comment

I quite agree - the nature of physics calculations lends itself nicely to the highly parallel environment of a GPU, but a game will also slow down and suffer in complex scenes if the data is required for anything more than eye candy. Which of course just means: leave the eye-candy parts to the GPU in order to take load off the CPU.
The Open Physics initiative should hopefully improve Bullet's standing. It will be interesting to see where that goes.

See my comment above about Bullet. Bottom line: the more complex the scene gets, the faster you approach the CPU's limitations.

OK, I agree. But since we have powerful CPUs with four cores, current games barely use two of them, and physics is a perfect streaming application, wouldn't it be a good idea to use SIMD and parallel execution on today's multicore x86 CPUs?

Back to PhysX: for all the PhysX aficionados, it has been shown that it uses no more than one CPU core (bad) and no SIMD at all (very bad):