
31 Comments

All this review shows is how well the AGEIA software engine can use a second core to its full advantage. In fact their goals are slightly two-sided. It's true the hardware is probably an order of magnitude faster than CPUs, but CPUs are reaching four cores and beyond with shared caches and very wide buses. Unless AGEIA puts their hardware implementation on a PCI Express / HTX slot, it's not going to outrun even a quad-core Core 2, or an Athlon 64 for that matter.

Ageia's PPU should be integrated with the motherboard for maximum results, ladies and gents.

Can you imagine having the raw horsepower of a motherboard with a local GPU that's as powerful as a 7800 GTX? Local components work better with any CPU (meaning they have a better data transfer/flow rate) than any serial bus. Motherboard bus speeds (PCI/AGP/PCIe) limit the performance of most video and other cards (even the high-end ones) to some degree. However, PCIe is the next best thing without having it on the die with the CPU. But with chipsets that help with physics, the data has to be caught BEFORE it hits the CPU, which is why there is a major performance hit in City of Villains. The CPU has to collect all of the data before everything else gets it, provided that it's not integrated. Then it has the laborious task of dealing with data it isn't prepared to use.

The problem is O(n^2) in complexity -- the reason being that each entity can interact with every other entity. You can't just blindly update n entities independently of each other (consider collision detection, for instance).

I won't elaborate here because it is beyond the scope of this article, not to mention these forum posts, but just think about what would happen if you blindly updated the physical attributes of n entities in place... you would lose information. If you don't get what I'm saying, don't worry about it -- just take my word for it. :)
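To make the point above concrete, here is a minimal Python sketch of one physics tick. The entity fields, contact radius, and collision response are all hypothetical; the point is that pairwise interactions cost n*(n-1)/2 checks (O(n^2)), and that new states must be computed from the old states before being committed, or information is lost.

```python
from itertools import combinations

def step(entities, dt):
    """Advance n entities (dicts with position 'x' and velocity 'v') one tick."""
    # Phase 1: compute proposed states from the OLD states only. Updating
    # entities in place would let early updates corrupt later ones.
    proposed = [{"x": e["x"] + e["v"] * dt, "v": e["v"]} for e in entities]

    # Phase 2: resolve pairwise interactions -- a naive 1D collision check,
    # n*(n-1)/2 pair tests, hence O(n^2) in the number of entities.
    for i, j in combinations(range(len(proposed)), 2):
        a, b = proposed[i], proposed[j]
        if abs(a["x"] - b["x"]) < 1.0:        # hypothetical contact radius
            a["v"], b["v"] = b["v"], a["v"]   # elastic swap, equal masses

    return proposed
```

Real engines avoid the full O(n^2) scan with spatial partitioning (broad-phase culling), but the pairwise nature of the problem is why it cannot be split into n fully independent updates.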

I have an issue with your review: your testing doesn't include screenshots of the things tested.

quote:There's no question that a PhysX card will give better performance in City of Villains at the highest settings, and at times that difference can be pretty sizable. But as we found out, using a slightly lower quality physics mode will result in graphics similar to the highest mode where the PhysX card shines, but at performance levels nearly equal to the PhysX card just by using a dual-core CPU.

Please explain "slightly lower quality physics mode" -- what setting was this? 75%? 50%? Without screenshots it's hard to verify what you claim. The slider states "up to", implying that some frames will have the maximum but some will have far less. Is it hard to tell the difference between 500 and 1000 particles? What would be a reasonable number of particles?
Also, you exclude any test showing what happens when you have the card installed but run the game at minimum physics -- I think in CoV that's 100 particles (I don't know if this is the case; it also isn't tested).

I imagine that most games which have support for offloading work to the PhysX card will also support offloading the same work to a separate "software physics" thread. If they've already made the effort to separate the physics work from the main application so it can be sent to the PhysX card, it should be trivial to run that work in a second thread (or multiple threads). Therefore games that make use of the PhysX card will most likely also be able to use a dual-core processor to good effect, as is the case with CoV.
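The hand-off described above can be sketched in a few lines of Python: once the physics work is separated from the game loop, the same job can go to a worker thread instead of a PPU. The job queue layout and the stand-in workload are assumptions, not any real engine's API (and note that in CPython the GIL limits true parallelism for pure-Python math; the sketch shows the structure, not the speedup).

```python
import threading
import queue

def physics_worker(jobs, results):
    """Runs on its own thread: drains physics jobs the game thread
    would otherwise compute inline (or ship to a PhysX card)."""
    while True:
        job = jobs.get()
        if job is None:                 # sentinel: shut down cleanly
            break
        results.put(("done", job()))    # job is a callable returning a result

jobs, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=physics_worker, args=(jobs, results))
worker.start()

# Main game thread: hand off a physics step instead of computing it inline.
jobs.put(lambda: sum(i * i for i in range(1000)))   # stand-in for a sim step
tag, value = results.get()                          # collect it next frame
jobs.put(None)
worker.join()
```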

I assume more games support dual cores than the PPU right now (and the trend probably favours dual cores rather than the PPU).
Anyway, a dual core will help with processor-intensive tasks (even very short ones) that appear "out of the blue" -- antivirus, some operating system tasks/schedules/other activities.

I don't really care for the PPU, but it would be really interesting to see what quad cores or AMD's 4x4 would do with City of Villains. Can the PPU hold its ground, and does the game scale to 4 cores? You can emulate the 4x4 platform with 2xx Opterons.

That's a shame, because I was wondering the same thing about quad-core processors and whether they can match or even exceed the throughput of the PhysX card. After all, quad-core processors should be available this time next year and will be commonplace by 2008.

The application should ideally spawn as many physics threads as there are cores available, so on a dual-core system I would like to see two physics threads (rather than just one) in addition to the main game thread, thus ensuring all spare CPU power can be used for physics work. Having only two threads in total, each performing a different type of work, will usually result in one of the cores being partially idle.
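The "one physics thread per core" idea above can be sketched with a thread pool sized to the machine. This is an illustrative Python sketch, not any game's actual code; `integrate` is a hypothetical per-entity update function, and the per-core split is what matters.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_physics(entities, integrate, workers=None):
    """Split one physics tick across as many worker threads as there are
    cores, instead of pinning all physics work to a single thread."""
    workers = workers or os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order, so entity i's result stays at index i.
        return list(pool.map(integrate, entities))
```

On a dual-core machine this yields two physics workers alongside the main game thread, matching the layout the comment asks for; on a quad core it scales to four with no code change.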

I personally see the PhysX card as a short-lived product, because CPU power is set to rise dramatically in the coming years now that the focus is on ever more cores (doubling every couple of years or so); there'll be so much CPU power available for easily multi-threaded tasks like physics that there will be no need for a dedicated physics processor chip in any form.

It would be nice to have more of a "real world" test. Today when people are playing games online, they usually have: the game, TeamSpeak, a game scanner, browsers for forum/game/clan info, and perhaps a torrent client running.

I myself noticed a huge difference in minimum FPS with a dual core vs. a single core with just the game and TeamSpeak -- just as you tested in an earlier article. So the question is, which HW takes the biggest hit in a "real world" situation?

Maybe if DirectX 10 has a physics API that AGEIA's PhysX card can hook into, we'll see games that can use the card well. Otherwise no one is going to buy a card that only works with a few games that go out of their way to support a 3rd-party API for a small frame rate increase.

I don't think they're targeting people with more money than sense; as with all newly introduced products, they're recouping the R&D costs of bringing the product to market. Once they do that -- or people stop buying, or don't buy this PPU at all -- you'll see prices go down. It always happens, unless a competing product comes out at a better value.

Going from no PPU to using one seems to be about the same as the difference from switching from onboard audio to a dedicated audio card (not very much of a difference). Less than 6 FPS minimum on a low-end (Conroe ?) CPU just doesn't seem to be worth the additional $250.

Now, since there are no standards for PPUs, I think this makes it even worse. I bet GPU manufacturers will end up winning the race in this arena.

MMX and its successors were used to accelerate the hell out of many Photoshop plug-ins (the P4 in its worst days was faster running the optimised routines than the Athlon 64, and in many cases faster by a large margin).
I think video cards could be better at offloading this kind of calculation -- maybe even more optimised routines will come soon (in many cases, graphics professionals use top-of-the-line cards, or even workstation builds like the NVIDIA Quadro and ATI FireGL).

Using the Ageia PPU gives you about 35% or more extra performance in CoV, or 6 FPS. Using a dedicated sound card instead of lowly integrated sound will give you 6 FPS in benchmarks of Quake 3-era engines, for a 5% or so difference in frame rate. For just this purpose, the PPU is better than a dedicated sound card (even if more expensive).

You could certainly find a more expensive audio solution -- but I don't think you would be able to reduce the frame rate compared to a $200 Creative 7.1 Channel Sound Blaster X-Fi Fatal1ty FPS (the price from AnandTech's own RealTime Pricing).
Anyway, thanks for the article -- nicely written and interesting. Thumbs up!

How about trying the PhysX card on a board that has integrated graphics? Would it help those stuck with a POS Intel onboard, or even the better ATI/NVIDIA onboard graphics? Onboard usually covers 2D OK and has some 3D, but maybe with a little help onboard could move up at little cost (cost as in when the PhysX comes down to real-world pricing)?

Also, a physics card "creates" more debris, which is not only physics-intensive to compute (paths and so on), but GPU-intensive to render.
Anyway, integrated graphics usually reduce the quality and resolution of possible gaming -- spending the money for the physics card on a new (or additional) graphics card would be the cheapest route to fast, quality gaming. Not to mention you could get multi-monitor capabilities for the price, maybe DVI and so on.

This is quite true... if you don't have a GPU that can handle what you are already throwing at it, a PhysX card won't do much. Sure, it'll take off some CPU load, but chances are you aren't CPU-limited. And if you tried turning up the debris settings you'd just be adding to the load on the GPU, which could cause a performance decrease.

We will keep this in mind for future tests and try to address the issue later. For now, it's safe to assume that you'll need at least a midrange-quality graphics board to gain anything from PhysX.

£107 for an X2 3800+ or £187 for a PhysX accelerator.
I think I made the right choice upgrading from a single core to dual core.
It's nice to see there are some improvements, but money is almost always better spent elsewhere, like with the Killer NIC.