Wondering if there's a way to measure the memory usage of a stand-alone PhysX card (actual memory used for PhysX processing only!). I see there are several GT 640s available. Memory ranges from 1, 2, or 4 GB per individual card. I thought the amount of RAM used, memory bandwidth, or a combination of both might make a difference. ??

Of course this will be used in conjunction with a single higher-performing card... GTX 660 on up.

Already thought about using the card with and without PhysX enabled. NOT what I was looking for. I also wasn't looking for opinion; that part you must have missed. I was also NOT looking for fanboyism, but if you want some... PhysX brings a lot to gaming. I enjoyed it when I used it in the past and will be using it again in the future. Nvidia is finally starting to make the cards they should have made after the GTX 200 series. By not using it you are ripping yourself off.

I downloaded the batman demo to try it out, but there was no way to bind your controls so I quit before loading the first level. Apparently it's a good game, but I am not going to bother playing something so consolized when I have so many other games to choose from.

About PhysX: for the games I play, like Mirror's Edge, it adds a couple of cloth bits that slow down my frame rates and make it harder to see in the game. I gave it an F and disabled it.

Maybe there is some poster child for how it can add stuff that is good, but then I see something like Portal 2 and what Valve can do with CPU physics, and what ATI does with stuff that is not vendor-locked, and I just end up disliking Nvidia for trying to push people around.

You suggested to the site owners that testing PhysX would be good. I said it was a waste of time because it's worthless. I wasn't giving the opinion to you; I was giving it to the site owners.

And yeah, I am a fanboy... I'm all Matrox, baby. Their 3dfx Glide wrapper for Q2 rules.

PhysX does knock the hell out of performance; that's the reason for this thread. Right now I have no way of testing because my Nvidia cards are all "old".

With the 670... I have seen some benchmarks run that show low-end cards do drag down PhysX performance. That was the idea of the newer 640 and its RAM/CUDA cores in conjunction with a 660 and above.

But I would like to know exactly what the dedicated physics card is actually doing and what it brings to the mix. If I knew for sure it was how much RAM it used, I would buy a card with "more" RAM than I normally would with such a "low-end" card. If it were the CUDA cores (depending on how much of that it used), I would base my purchase on that...

You guys that have Nvidia products... there are PhysX-based demos on Nvidia's site that might help vs. the dreaded console ports. But again, older hardware won't give me what I'm looking for.

Would be nice for the pros on this site to test it. I think it's a valid test that really needs to be performed. I never saw anything anywhere about this, just benchmark frames per second. FPS never tells the real story.

The problem is that this sort of work has been done... years ago. The sad truth of it is that PhysX was measured and found wanting. DX11's DirectCompute has marginalized (not replaced) PhysX. There are still some companies fully entrenched in Nvidia's TWIMTBP program that are going to work diligently on PhysX. Per some reviews I've read, Borderlands 2 has a good implementation of PhysX. But your requirements and needs regarding a review of PhysX... it's just not going to happen. It's a niche market, and it just doesn't matter that much anymore for the reasons above.

Truly, it's the opposite of CUDA. CUDA has the market share to garner inclusion in reviews, where PhysX doesn't. There just aren't enough games that support PhysX, and DirectCompute has been tacitly declared "good enough."

I probably read that article years ago. All the information dates back to 2008 and doesn't give me the answers I wanted. The cards have changed, the demands have changed. It doesn't tell me what the parts of the card are doing and how much (and of what) each uses in respect to what's happening at any given time. Please don't say nobody is ever going to go back and take another look at this; it's something I think needs to be done. In that article it just says PhysX this, PhysX that, and shows a few FPS charts. I'd rather have more information. That's what it's all about: learning and understanding. I hope PhysX lives, whether it's the "free" version (that ATI is trying to push) or not. I have a hard time grasping that Nvidia would just cede. It's more than just a gimmick.

Back to the FPS thing. This site, and only this site, goes beyond FPS when they do benchmarks, something more MORONS on the net should do but don't. It's only one way of measuring performance, but the arguments never stop... I want to pick the "actual" performance apart. Unfortunately I personally can't afford to and don't have the time to do that.

Edit: I also don't understand the architecture of the cards and how the parts relate to any given performance. Re-edit... actually I do. I understand the processor and what the memory is for, but not anything else from a PhysX standpoint.

south side sammy wrote: Please don't say nobody is ever going to go back and take another look at this; it's something I think needs to be done. In that article it just says PhysX this, PhysX that, and shows a few FPS charts. I'd rather have more information. That's what it's all about: learning and understanding. I hope PhysX lives, whether it's the "free" version (that ATI is trying to push) or not. I have a hard time grasping that Nvidia would just cede. It's more than just a gimmick.

While I understand your desire to see more PhysX-related effects, it's simply been marginalized. DirectCompute in DX11 just about replaces its functionality in a hardware-agnostic way. At this point, there just isn't any incentive for PhysX to be coded for, because it requires specific hardware. Adding PhysX support is just that: it has to be added to the game. Once nVidia turns off the incentives for developers to code for it, we'll see an even bigger decrease in the number of titles that give it a place of prominence. It's just not in enough games, and can't be run on enough hardware adequately, to be "the next big thing."

If you want to see those benchmarks, it's going to have to be you that runs them. I'm sure there are some disreputable sites that have looked into it for Borderlands 2, but it's a crapshoot whether they would even give you the level of detail you are looking for.

I can understand your frustration. Had nVidia simply opened up the API, or had they not crippled it for CPUs, you wouldn't have this issue. PhysX would have taken off.

The worst problem with PhysX (and GPU Physics in general) is the chicken and egg scenario where it's really only used for eye-candy because it's not widely supported. It could be used to enhance gameplay physics as well as AI in incredible ways, but the support simply isn't there. And sadly, PhysX runs great on AMD GPUs as well as CPUs. Nvidia marketed it into a corner.
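The point that this sort of eye-candy physics is just ordinary arithmetic that any CPU or GPU vendor can run is easy to illustrate with a toy particle step. This is a hypothetical sketch only, not PhysX code; real engines use far more elaborate solvers, but the core of a debris or cloth effect is loops of math like this:

```python
# Toy illustration: the kind of debris/cloth math GPU physics accelerates
# is plain arithmetic that runs on any hardware. Hypothetical sketch only.

def verlet_step(pos, prev, dt=1.0 / 60.0, gravity=-9.81):
    """Advance one 2D particle with position Verlet integration."""
    x, y = pos
    px, py = prev
    # new position = current + implicit velocity + acceleration * dt^2
    nx = x + (x - px)
    ny = y + (y - py) + gravity * dt * dt
    return (nx, ny), (x, y)

# Drop a particle from rest at y = 10 and step it three frames at 60 fps.
pos, prev = (0.0, 10.0), (0.0, 10.0)
for _ in range(3):
    pos, prev = verlet_step(pos, prev)
print(round(pos[1], 5))  # → 9.98365 (it has started to fall)
```

Multiply that by tens of thousands of particles per frame and you get the kind of embarrassingly parallel workload that maps equally well to any GPU's shader cores, which is the point made above: nothing about it is inherently vendor-locked.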

If you just want to have a second card for PhysX, you can go pretty low-end. It doesn't need to be all that good. You won't have any issues with the fps hit. The PhysX API is pretty simple. Just have another card and you should be fine. A low-end GT 240 should be all you need.

That's not true. If you have a GTX 580-GTX 660 or above and you toss on a 9600 GT or 8800 GT or something less than a GTS 450/550, you'll lose performance. It does matter. That's why I'm trying to find out what parts, and how much of those parts, really matter for PhysX.

You might troll the [H] forums, I know I've seen a thread or two testing that in the last year.

I can say that after reading what I've read on PhysX, I decided that trying to maximize game performance while running it wasn't worth the cost. Basically, you need another dual-slot card to keep up with any real load.

This article has a performance analysis of Mafia 2, which has a lot of PhysX effects. They test a dedicated PhysX card versus a single Nvidia card, a hybrid setup (Radeon for graphics and a dedicated Nvidia card for PhysX), and finally some runs with just the CPU doing the work. It doesn't have memory information, but Mafia 2 was probably as demanding as any game when it came to PhysX (that I've seen, with the possible exception of Borderlands 2).

I have Borderlands 2 running on a laptop GT 650M with PhysX on medium details. It looks good, and the game doesn't take too much of a performance hit (from 60 to 50 fps, typically). If the PhysX work can be done with whatever power is left over after a lowly GK107 outputs 50+ fps of a shader-heavy Unreal 3-engined game, then I wouldn't have thought PhysX implementations require much heavy lifting in terms of GPU power.

I suspect nVidia have to consider that most people will not have a dedicated Physx card and therefore don't want to negatively impact the TWIMTBP experience too much with heavy, yet gameplay-irrelevant eye-candy.

Some people ask me why I have always enclosed my signature in spoiler tags; There is a good reason for that, but I can't elaborate without giving away the plot twist.

Off and on I would run across a decent article about PhysX. Unfortunately I'm an internet whore and can't find those specific threads/sites, as too much time has passed. But they all have the same thing in common: FPS charts and nothing else. I want more information about which parts of the card PhysX actually leans on.

Here are a couple of vids somebody may find interesting; they'll answer a couple of things already posted here... hope the admin doesn't mind the information provided. Okay to delete; won't hold it against you.

How's about this: you take 2 top-end cards (680s)... make one a PhysX card. Play Batman and/or Borderlands 2.

If there was a way to find out what exactly the PhysX card uses to perform its part... you could get your answer. No need to test 47 games. No need to use 47 different cards.

As I said before, performance is a combination of many different factors. Unfortunately, we only see the frames-per-second side. I've gotten high frames and rotten performance before, and I've gotten low frames and good performance before.

south side sammy wrote: How's about this: you take 2 top-end cards (680s)... make one a PhysX card. Play Batman and/or Borderlands 2.

If there was a way to find out what exactly the PhysX card uses to perform its part... you could get your answer. No need to test 47 games. No need to use 47 different cards.

As I said before, performance is a combination of many different factors. Unfortunately, we only see the frames-per-second side. I've gotten high frames and rotten performance before, and I've gotten low frames and good performance before.

This is way overkill. Good lord, you must really like Borderlands 2 / the Batman series. PhysX uses a combination of cores and the shader clock, but there is a point of diminishing returns. If you want a decent experience, spring for an old 460 and it'll be just fine.

It's a mix of shader clock and core clock; that's about it. If you are looking for something more than "is it going to handle it as a secondary card," we've covered that. Other than that, I'm at a loss for what exactly you're looking for. It's just such a niche area.

south side sammy wrote: I don't own and haven't played any of the Batman games nor the Borderlands games. You still don't get it. It's not about ANY game. It's about actual usage of whatever PhysX uses to perform its tasks.

It's going to depend on the specific game, but in general, the number and speed of shader cores are going to be the most important factors.
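A rough way to compare candidate PhysX cards on "number and speed of shader cores" is peak single-precision throughput: cores × clock × 2 FLOPs (one fused multiply-add per core per cycle). The core counts and clocks below are approximate launch specs quoted from memory, so treat the specific numbers as assumptions to verify, not gospel:

```python
# Back-of-envelope shader throughput: cores * clock * 2 FLOPs per cycle.
# Card specs are approximate launch values (assumptions) -- verify before
# basing a purchase on them.

def gflops(cuda_cores, clock_mhz):
    """Peak single-precision GFLOPS, assuming one FMA per core per cycle."""
    return cuda_cores * clock_mhz * 2 / 1000.0

cards = {
    "GT 640 (GK107)":  (384, 900),   # assumed ~384 cores @ ~900 MHz
    "GTX 660 (GK106)": (960, 980),   # assumed ~960 cores @ ~980 MHz
}
for name, (cores, mhz) in cards.items():
    print(f"{name}: ~{gflops(cores, mhz):.0f} GFLOPS")
```

By this crude measure a GT 640 lands around a third of a GTX 660's shader throughput, which fits the earlier observation that a too-weak secondary card can drag the main card down rather than help it.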

The years just pass like trains. I wave, but they don't slow down.-- Steven Wilson

If you like awful, nasty games produced by no-name developers, which score less than 50% on Metacritic, then there are still barely a dozen games on that list, and I wonder how many of those games exist solely because nVidia propped them up with money to complete a "Physx title".

You know they're desperate to artificially pad out their list of games when it includes:

90-second interactive nvidia tech demos like Supersonic Sled

Nurien, a social networking "app"... scrap that, it's just a benchmark; the PhysX game portion never made it any further than an alpha release.

Depth Hunter, which is barely a game and doesn't have much, if any, physics simulation.

Shattered Horizon from Futuremark, which (though fun to play) is a four-map game concept rather than a full game.

In fact, Physx games are so rare, you're lucky to get more than one or two decent titles per graphics generation. Is it any wonder nobody cares enough to do a full, time-consuming benchmark?

south side sammy wrote: You still don't get it. It's not about ANY game. It's about actual usage of whatever PhysX uses to perform its tasks.

I already answered you: GPU-Z. Fire up a game, use GPU-Z to log the load and memory use of the PhysX card. Though I have no idea what you hope to achieve with those numbers, it may satisfy your curiosity.
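For anyone who'd rather capture those same counters as numbers than watch GPU-Z interactively, here's a minimal sketch. It assumes an NVIDIA driver with the `nvidia-smi` tool on the PATH; the query fields used (`index`, `memory.used`, `utilization.gpu`) are standard `nvidia-smi` options, and the parser can be exercised without a GPU present:

```python
import subprocess

# Sketch: log what a secondary PhysX card is doing during a game session.
# Assumes nvidia-smi is installed and on the PATH (ships with the driver).

QUERY = ["nvidia-smi",
         "--query-gpu=index,memory.used,utilization.gpu",
         "--format=csv,noheader,nounits"]

def parse_smi_csv(text):
    """Turn nvidia-smi CSV rows into (gpu_index, mem_used_MiB, load_pct)."""
    rows = []
    for line in text.strip().splitlines():
        idx, mem, util = (field.strip() for field in line.split(","))
        rows.append((int(idx), int(mem), int(util)))
    return rows

def sample_gpus():
    """One snapshot of every GPU; call in a loop while the game runs."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    return parse_smi_csv(out.stdout)

# Example with two rows shaped like nvidia-smi output
# (index, MiB used, % load); row 1 would be the dedicated PhysX card.
print(parse_smi_csv("0, 812, 97\n1, 143, 31"))
```

Sampling `sample_gpus()` once a second while toggling PhysX settings would give exactly the "how much RAM and how much load does the second card actually use" numbers asked for in the opening post.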


Chrispy_ wrote: This is nvidia's complete PhysX list; I count only nine games worth mentioning on the list.

To be fair, I'm pretty sure that's just recent(ish) releases though because I know of several games that employ Physx and aren't there. I do agree though that gpu-z is probably the only way to get what he wants (although I'm not sure even he knows what he wants).

Physx implementations really are game-specific, and they really diverged after nVidia bought the tech since before it all at least had to run on one of the two dedicated cards and now there's a lot more flexibility.

Here's the thing: PhysX is dead. Its original incarnation was the Ageia card, and Nvidia bought it after it was discovered that with enough software threads you can do the same thing as the hardware. So Nvidia virtualized it, killed the hardware, and made it part of the SDK.

In hardware it slowed games by 10-15% and added eye candy. In software it still slows games by 10% or more, even when a dedicated card is used.

You need a GPU with around 256 CUDA cores to use it; more than that *WILL NOT* reduce the LOSS of frame rate, which, depending on the SDK used by the game, can be minor, like a 10-20 fps loss, or huge, like early demos where games lost 50+ fps.

And depending on how much coding the dev does, the return can be small, like in Warfighter from years ago, or decent and do more than make pretty leaves, dust clouds, or flappy flags.

Bottom line: PhysX has been virtualized and is run on the GPU as code, and the game determines the resources used.

Your best bet is to email Nvidia, the owners of the software...

Cybert said: Capitalization and periods are hard for you, aren't they? I've given over $100 to techforums. I should have you banned for my money.

Please don't speak about me as if I'm not here; I'm sitting in the same room as you. I've only stated what I wanted 15 times. I'm sure memory/memory bandwidth is part of it, but most certainly NOT all of it. GPU-Z won't really give me what I want: a good look at what the architecture of the card puts forth in regard to PhysX... and just how much of a "PhysX" card would be needed to provide that.

I don't want to know anything about ancient technology (GTX 260, etc.); I'm talking about now. I can't believe nothing has advanced (in the past 4-5 years) to get to the bottom of this. There has to be some kind of calculation to measure this.

And by the way, PhysX isn't dead... maybe for you, but not for the people who are interested.