Now for the tank... I've converted it to OBJ format first to ease testing, so I could reuse my existing test applications without changing anything except the filename and maybe the scaling.

I wasn't able to load it with Java3D (it bombed out with a NullPointerException when loading the materials), so I skipped it. The numbers for JME were taken with the built-in stats tool and backed up by FRAPS (I had to write the fps to a file, because FRAPS' on-screen display stops at 999). I've added 5% to the JME score to compensate for the stats tool's display overhead, which seems reasonable judging from the results with the car. Of course, lockMeshes() has been used. For 3DzzD, I've tested it in IE8 and Firefox 3 and got the same results. An applet isn't a very good environment to test performance IMHO, but it's all I've got, so...
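For anyone wanting to reproduce the file-based fps logging (since FRAPS' on-screen counter stops at 999), a minimal sketch of such a logger might look like this; the class name and file path are made up for illustration:

```java
import java.io.FileWriter;
import java.io.IOException;

// Minimal fps logger sketch: count frames and, once per second,
// append the rate to a file instead of relying on an on-screen display.
public class FpsLogger {
    private long windowStart = System.nanoTime();
    private int frames = 0;
    private final String path;

    public FpsLogger(String path) {
        this.path = path;
    }

    /** Call once per rendered frame; returns the last completed fps value, or -1. */
    public int frameRendered() {
        frames++;
        long now = System.nanoTime();
        if (now - windowStart >= 1_000_000_000L) { // one second elapsed
            int fps = frames;
            frames = 0;
            windowStart = now;
            try (FileWriter w = new FileWriter(path, true)) {
                w.write(fps + "\n"); // append, so the whole run is kept
            } catch (IOException e) {
                e.printStackTrace();
            }
            return fps;
        }
        return -1;
    }
}
```

In a render loop you would simply call `frameRendered()` once per frame and read the resulting file afterwards.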

Here are the numbers:

jPCT: 3500 fps
3DzzD: 2100 fps
JME: 2000 fps
Xith3D: 1800 fps

Edit: Test system was the usual quad-core mentioned above.

Edit2: Updated the 3DzzD scores with the results of the fixed version.

I think Java3D does some optimisations like collapsing TransformGroups, but the impact is minimal. The important optimisation here is compiling display lists for the static geometry. This is done anyway, since it checks the capability bits and sees that the geometry is static. I'm a little disappointed with Java3D; I had expected it to run at about 50% of the others. I guess you don't have a monitor set at 120 Hz? Java3D will try to lock to the refresh rate.

OK, you will probably get better results with software... as I said, this is the case on my computer (software is about 2 times faster than the hardware). The hardware renderer is basically a bunch of display lists (cut up into an octree), so nothing renderer-specific; it should give around the same result as the others, as everything is done GPU-side. I'll try to see if I can catch what is going wrong with it.

NB: It will start in software mode; press H and accept the signed-applet popup to switch to hardware. The fps is printed in the Java console (3000 means 30 fps).

EDIT: One interesting thing to notice is that (for 3DzzD) it uses 80 MB with IE6 in software mode and around 100 MB once switched to hardware (but no more than 65 MB of Java heap space, so it doesn't exceed the applet memory limit).

These tests make little sense. You are measuring the rendering speed of a single 1-million-triangle model, and with lockMeshes() you are actually measuring the rendering speed of a single OpenGL display list. Code written and optimized to render specifically that model would beat all the engines, though it would not make a big difference because of the display list rendering. Choose a test with lots of small models and specific features (physics, particle systems, animations, shadows, etc.), a test where the Java code of the engine actually matters, not such a minimalistic one. You need a test where different subsystems of the engine are active; then you can see how well those subsystems work together, and whether they slow down when used together compared to other engines.

Set a minimum fps requirement (for example 100 fps), then start adding more objects and features and see what you can add without going below that requirement. That's a more realistic test.
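The proposed add-until-it-drops methodology could be sketched like this; the cost model is invented purely for illustration (a real harness would measure actual frame times from the engine under test):

```java
// Sketch of the proposed benchmark: keep adding objects while the
// simulated frame rate stays above a required minimum (e.g. 100 fps).
// BASE_FRAME_MS and COST_PER_OBJECT_MS are hypothetical numbers; a real
// test would time rendered frames instead of computing them.
public class FpsFloorTest {
    static final double BASE_FRAME_MS = 2.0;       // invented fixed overhead
    static final double COST_PER_OBJECT_MS = 0.125; // invented per-object cost

    static double simulatedFps(int objects) {
        return 1000.0 / (BASE_FRAME_MS + objects * COST_PER_OBJECT_MS);
    }

    /** Returns the largest object count that still meets the fps requirement. */
    static int maxObjectsAt(double minFps) {
        int objects = 0;
        while (simulatedFps(objects + 1) >= minFps) {
            objects++;
        }
        return objects;
    }

    public static void main(String[] args) {
        System.out.println("Objects sustainable at 100 fps: " + maxObjectsAt(100));
    }
}
```

With these invented costs, 2 ms of overhead plus 0.125 ms per object leaves room for 64 objects within the 10 ms budget that 100 fps implies.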

Quote

These tests make little sense. You are measuring the rendering speed of a single 1-million-triangle model, and with lockMeshes() you are actually measuring the rendering speed of a single OpenGL display list. Code written and optimized to render specifically that model would beat all the engines, though it would not make a big difference because of the display list rendering. Choose a test with lots of small models and specific features (physics, particle systems, animations, shadows, etc.), a test where the Java code of the engine actually matters, not such a minimalistic one. You need a test where different subsystems of the engine are active; then you can see how well those subsystems work together, and whether they slow down when used together compared to other engines.

Set a minimum fps requirement (for example 100 fps), then start adding more objects and features and see what you can add without going below that requirement. That's a more realistic test.

You have a point about this particular test only comparing one part of all the engines (i.e. rendering a single model). However, I believe this test was done correctly - it compared performance of a particular test case for each engine, which was the point. I do not think that throwing together a ton of various features of the engines to see how much you can do within some minimum fps would be a useful way to compare the engines at all. To be scientific and unbiased, you must compare apples to apples in whatever test you run. If you want to see how the engines compare in other situations besides rendering a single model, then you must set up individual minimalistic unbiased tests for each thing you want to compare. For example lots of small models as you suggested might be one such test. In each different type of test the ranking may very well be different (in fact, I would expect it to be), but that doesn't mean the other tests or rankings are invalid. It would just mean that different engines are better at different things (which is probably the point you were trying to make in your post).


Quote

Set a minimum fps requirement (for example 100 fps), then start adding more objects and features and see what you can add without going below that requirement. That's a more realistic test.

Go ahead! Nobody prevents you from doing this. Personally, I have no time to learn how to use 4 additional engines properly and write the tests. I've read a lot of talk about why this test is wrong, why I'm biased, that it can't possibly be, etc. ... and I tried my very best to make this test as unbiased as possible and to make each engine perform as well as it can given my limited knowledge of them. I've not read anything from anybody except one person from JME about doing his/her own tests. And BTW: the model has 1.1 million vertices, not triangles (around half a million triangles IIRC).

I am very interested in how many unneeded OpenGL calls are made, because when I fiddled with JME, there were a whole bunch of states set each render that weren't needed.
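A common way engines avoid those unneeded calls is to shadow the GL state and filter out redundant changes before they ever reach the driver. A minimal sketch, with a plain counter standing in for the real glEnable/glDisable calls (all names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Shadows boolean render states so redundant enable/disable requests can be
// filtered out. The driver call itself is replaced by a counter, purely to
// make the filtering visible.
public class StateCache {
    private final Map<String, Boolean> shadow = new HashMap<>();
    private int driverCalls = 0; // stands in for real glEnable/glDisable calls

    public void set(String state, boolean enabled) {
        Boolean current = shadow.get(state);
        if (current != null && current == enabled) {
            return; // redundant change: skip the driver call entirely
        }
        shadow.put(state, enabled);
        driverCalls++; // a real engine would issue the GL call here
    }

    public int driverCalls() {
        return driverCalls;
    }
}
```

Setting "LIGHTING" to true twice in a row would then cost only one driver call instead of two.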

Yes, that is one point that makes things faster: avoiding unnecessary OpenGL calls. Also, I think the main difference is using quads rather than triangles: 3DzzD uses triangles to be homogeneous with the software renderer; using quads it would probably be faster.

Quote

Yes, that is one point that makes things faster: avoiding unnecessary OpenGL calls. Also, I think the main difference is using quads rather than triangles: 3DzzD uses triangles to be homogeneous with the software renderer; using quads it would probably be faster.

Regarding the quads comment: that's probably true, but if the model is cached (VBO/display list) with proper indices used and such, I assume it would perform just the same as quads, as the video card needs to break up the quad into triangles before rendering anyway (this used to be the case with old video cards; I don't know if it still is).
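For reference, the split the card does amounts to this in index terms: quad (a, b, c, d) becomes triangles (a, b, c) and (a, c, d). A small sketch of that conversion (class name invented for illustration):

```java
// Converts quad indices (4 per face) into triangle indices (6 per face),
// roughly what happens when quads are decomposed for rendering:
// quad (a, b, c, d) -> triangles (a, b, c) and (a, c, d).
public class QuadSplitter {
    public static int[] quadsToTriangles(int[] quadIndices) {
        int quadCount = quadIndices.length / 4;
        int[] tris = new int[quadCount * 6];
        for (int q = 0; q < quadCount; q++) {
            int a = quadIndices[q * 4];
            int b = quadIndices[q * 4 + 1];
            int c = quadIndices[q * 4 + 2];
            int d = quadIndices[q * 4 + 3];
            int o = q * 6;
            tris[o] = a;     tris[o + 1] = b; tris[o + 2] = c; // first triangle
            tris[o + 3] = a; tris[o + 4] = c; tris[o + 5] = d; // second triangle
        }
        return tris;
    }
}
```

So a quad mesh submits 4 indices per face where a triangle mesh submits 6; whether that saving survives the driver's own decomposition is exactly the open question above.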

Quote

Regarding the quads comment: that's probably true, but if the model is cached (VBO/display list) with proper indices used and such, I assume it would perform just the same as quads, as the video card needs to break up the quad into triangles before rendering anyway (this used to be the case with old video cards; I don't know if it still is).

I don't know, a proper test is necessary. OpenGL display list compilation is not that good when it comes to state changes => they are not collapsed even when they are not required or are meaningless.

Egon, let's just be fair. I don't know what you expected besides some rivalry and suggestions on why the tests may not be a proper representation when you kicked the thing off with a comment like this (emphasis is mine):

Quote

To come back to the beginning of this thread, I think it's time now for a little pissing contest. I haven't much experience with Xith3D or jME, so I relied on the tests that come with both engines and modified them (to be exact, I simply changed the loaded models) to do some simple benchmarking on two machines.

Still, I think most people were pretty civil and clear-headed.

As an aside, I might add that it's a little easier to deflect criticism when you provide the exact test code and such so other people can try... Trust me, I've been through this before. It's also easier when you are comparing engines that are open source, as it is simpler to know that the same features are being used on both sides, etc. *shrug*

Anyway, as was said here before, nice work to all the engines really. The 3d Java world is so far from where it was a few years ago isn't it?

Quote

Egon, let's just be fair. I don't know what you expected besides some rivalry and suggestions on why the tests may not be a proper representation when you kicked the thing off with a comment like this (emphasis is mine):

That's a quote from the jPCT forum. I wouldn't have used the same wording if I had posted it here. I have no problem with people questioning my tests; after all, it led to better tests in the end. But it's cheap to say that a test is wrong if you don't do anything but talk to prove (or improve) it. In fact, we have NO comprehensive test for all engines, and we never will. The tests and the test procedure are the best that I could come up with given limited time and resources. It may be primitive and may not reflect most real-world scenarios, but it's still the only test that has been run on all 5 engines so far.

Something every library writer should know is that example code will always be used as base code and/or copy and pasted into a larger app. Example code will be taken as the canonical method, and anything non-standard or non-optimal (like extra debugging output) needs to be clearly flagged as such.

It's entirely reasonable to use the example code from each of the libraries to do a simple model rendering test. Yes, in a way it has elements of a micro-benchmark, since it only tests a small set of functionality, but rendering a model is such core functionality that it needs to be as optimal as possible, since everything else builds on top of it. In theory all the libraries should boil it down to just a handful of identical calls, and so should see near-identical results. But since this obviously isn't happening, it highlights deficiencies in the libraries themselves.

An interesting variation would be to see how many instances of the same model each library could render whilst still maintaining a certain framerate (say, 60fps), which would give the state handling code a bit more of a workout.
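That instance-count variation could be automated with a doubling phase plus a binary search over the measured frame rate. The linear cost model below is invented purely for illustration; a real harness would render and time actual frames:

```java
// Sketch of the instance-count test: find how many copies of the same model
// can be drawn while holding a target frame rate (e.g. 60 fps). fpsFor()
// uses invented per-instance costs; replace it with real frame timing.
public class InstanceCountTest {
    static double fpsFor(int instances) {
        double frameMs = 1.5 + instances * 0.0625; // hypothetical costs
        return 1000.0 / frameMs;
    }

    static int maxInstancesAt(double targetFps) {
        int hi = 1;
        while (fpsFor(hi) >= targetFps) {
            hi *= 2; // doubling phase: find the first count that fails
        }
        int lo = hi / 2; // last count known to pass
        while (lo < hi - 1) { // binary search between last good and first bad
            int mid = (lo + hi) / 2;
            if (fpsFor(mid) >= targetFps) {
                lo = mid;
            } else {
                hi = mid;
            }
        }
        return lo;
    }

    public static void main(String[] args) {
        System.out.println("Instances sustainable at 60 fps: " + maxInstancesAt(60));
    }
}
```

The doubling phase keeps the number of expensive measurements logarithmic, which matters when each measurement means rendering for a second or two.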

Hello. I generally keep to the jME forums, but I've been following this discussion since it was linked there. I don't have a problem with the author of an engine posting a test which shows that it's, well, really good. Nor do I have any interest in pissing contests. As a user, such things are unlikely to be very relevant.

However, I do think that some things have been taken out of context, because everyone is thinking in performance terms. I will talk only about jME, because that is what I know.

jMETest code is there to demonstrate features of the engine. The object loading tests demonstrate loading objects. They are not performance tests. I would say that if you are trying to do something highly unusual (display 1million triangle objects), then you might have to check into possible optimizations. If the loading tests were optimized for this, they would contain code not relevant or perhaps appropriate to their purpose. For example, they would not work with animated meshes.

Quote

Example code will be taken as the canonical method

Which it is, as far as loading OBJ files goes. It is quite possible to write great-looking jME games without delving into any performance optimization at all. But again, if you are trying to do a meaningful performance test, you need to do a little more. Let's be honest, there are not many users around who are pushing the limits of these engines like this (I know I'm not).

It is reasonable to point out the flaws in a test if it is flawed. It is preferable to create a better one. But that takes work, and people may not have the time (or inclination to piss).

In my case I thought the jME number very low. So I tried it myself, got a similar result. Added a line and the result was good enough for me. Since rendering ultra high poly models is not what I'm interested in, there would be no reason for me to go any further than that.
