They both have strengths. My advice: look at the starter guide for jME on their web page, look at how the code is structured, and go through about 2-3 programs. Then do the same with the Xith3D starter guide on their web page. Pick the API whose code feels more natural to you.

The Xith3D API is based on Java3D, so if you have experience with it, you can start coding almost straight away. Other than that, jME is probably a bit easier to grasp for a beginner.

As far as internals are concerned, Xith3D's state management/sorting is more sophisticated at the moment, so there is a chance you might get better performance - but this comes at the cost of less trivial library code, so changes are rather harder to make.

Xith3D is more complete currently, but jME is a lot more active.

jME contains a bit more inside - while Xith3D is mainly a visualisation API (game oriented, but focused on graphics/the 3D world), jME seems to extend into other areas with more game-engine functionality outside this core. You have to decide if you want AI routines bundled in the same package - there are pros and cons to this.

The AI routines will never be incorporated into the same package as jME. They will be a separate jar to download. So if you feel you can't be bothered to write your own AI routines/library, you can use jME's, which integrates nicely into jME.

I tested the last release version of Xith using the latest JOGL against jME 0.7 using LWJGL 0.9. The test consisted of loading 600,000 triangles of geometry with one light source. Xith rendered at around 16fps, while jME was giving me 4-5fps.

We've recently added a few things in CVS that should give jME a nice boost in scene graphs with many objects. If your scene is one mesh of 600,000 tris... well, is that normal for games? Even so, we haven't run into that situation much yet, so there's probably room for good improvements.

The model has around 30,000 polygons. I was experimenting with large models because I am working on a visualization system that will use high-res models.

I can't provide the source code for the Xith demo because it is big, but it is pretty much the same concept as the jME test: I loaded all the objects into the scene graph, added one light source and moved the camera. The Xith objects were loaded from a 3DS file, while on the jME side I used the MilkShape loader. I coded my own camera movement in the Xith app, while on the jME side I used the default stuff.

I just found that I used a spot light in Xith and a point light in jME - could that make such a big difference?

Ah, well, are the models static or animated? I'm not sure how/if Xith handles this, but I would recommend enabling VBO support in jME. We ran into a similar thing with large terrains running at 40-45 FPS, and after we developed VBO support that number jumped up to 300-400 FPS. I'd be happy to help you fix that up if you like... Also, you might think about converting your .ms3d file to a .jme file prior to execution for quicker loading (it won't likely affect FPS, but the jme loader is easier to use and easier to enable features with).
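For context, the VBO technique mentioned above amounts to packing static vertex data into a direct native-order buffer once and uploading it to the GPU at load time, instead of resubmitting it every frame. Here's a minimal sketch of the packing step in plain Java - the GL upload calls are shown only as comments since they need a live GL context, and none of this is the actual jME VBO API:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class StaticMeshBuffer {
    // Pack (x, y, z) positions into a direct, native-order FloatBuffer -
    // the layout the driver expects when the buffer is uploaded once.
    static FloatBuffer pack(float[] positions) {
        FloatBuffer buf = ByteBuffer
                .allocateDirect(positions.length * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buf.put(positions).flip();
        return buf;
    }

    public static void main(String[] args) {
        float[] tri = {0f, 0f, 0f,  1f, 0f, 0f,  0f, 1f, 0f};
        FloatBuffer vbo = pack(tri);
        // With a GL context you would now do, once at load time:
        //   int id = glGenBuffers();
        //   glBindBuffer(GL_ARRAY_BUFFER, id);
        //   glBufferData(GL_ARRAY_BUFFER, vbo, GL_STATIC_DRAW); // static geometry
        System.out.println(vbo.limit()); // 9 floats ready for upload
    }
}
```

The `GL_STATIC_DRAW` hint is what makes this pay off for 100% static geometry: the driver can park the data in video memory and never touch the Java side again.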

Sure - I need 100% static geometry, so it would be great if you could give me a hint on how to use VBO. I also suspected that this would improve speed a lot.

Quote

One other thing, if your model is something you are willing to share, I'd love to get a copy to use for stress testing and general improvement of jME.

Yes, you can get a copy of it; I am willing to share it for the purposes of testing.

Quote

Could you please run the jME program in fullscreen and see what FPS you get?

Just tested running in fullscreen, but got exactly the same results.

Quote

Also, in jME, with a poly count that big, you would generally use LOD on the furthest models to decrease poly counts and hence increase FPS.

Well, my point was that jME has all the LOD tools, while Xith has only the Switch node, which has some problems, as I read on the Xith forum. I just wanted to compare raw rendering speed. I definitely like the LOD stuff in jME, but I will still have large poly counts (maxing at 300K with LODs) since I am not building a game.
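The LOD idea being compared here boils down to picking a coarser mesh as distance to the camera grows. A minimal, engine-agnostic sketch of the selection logic (class and method names are made up for illustration - this is not jME's actual LOD API or Xith's Switch node):

```java
// Minimal distance-based LOD selection - the idea behind both jME's
// LOD tools and a Xith3D Switch node driven by camera distance.
public class LodSelector {
    // Distance thresholds, ascending: below ranges[i] -> detail level i
    // (level 0 = full-detail mesh). Beyond all thresholds -> coarsest.
    static int selectLevel(float distance, float[] ranges) {
        for (int i = 0; i < ranges.length; i++) {
            if (distance < ranges[i]) return i;
        }
        return ranges.length; // farther than all thresholds
    }

    public static void main(String[] args) {
        float[] ranges = {10f, 50f, 200f};
        System.out.println(selectLevel(5f, ranges));   // full-detail mesh
        System.out.println(selectLevel(120f, ranges)); // low-poly mesh
        System.out.println(selectLevel(500f, ranges)); // coarsest level
    }
}
```

For a non-game visualization capped at ~300K polys with LODs, the thresholds would simply be pushed far out so close-up models never drop detail.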

Email me the model you're using. I wrote the ms3d loader, so I may be able to debug it if there's something that needs fixing. I'd only use the model for testing. There's a newer version of the loader in the current CVS. cep221atgmaildotcom

PS: Is it just me, or do the jME models look way, way better? I guess it's the lighting direction.

I've not used jME (I intend to at some point soon). I have, however, used Xith quite a bit. jME appears to be more feature-rich, although most features are bespoke to certain types of games.

jME does seem to be more targeted towards first-person shooters (at least that's my perception looking at the feature list). Xith was originally attempting to be more generic (a la Java3D).

The main point I've seen as an issue so far is the superb HUD support in jME and the lack of anything reliable or decent in Xith (the original overlay stuff is unpredictable at best). However, I've just seen that work has started on providing a HUD system for Xith.

In short I don't know, but I thought the viewpoint might be useful.

Kev

PS. I suspect the models are absolutely identical - I would say that, though.

jME does seem to be more targeted towards first-person shooters (at least that's my perception looking at the feature list). Xith was originally attempting to be more generic (a la Java3D).

I'd say Java3D is still more generic than Xith3D. Java3D is targeted towards all kinds of 3D simulations, whereas Xith3D concentrates more on games (it's thread-unsafe, uses only floats, doesn't use capabilities, ...), which leads to better performance.

It would be good to have people using both Xith3D and jME, so they can provide feedback on the strengths and weaknesses of both APIs. Maybe some kind of official "project goal comparison" would be helpful.

I tried, pretty hard, to get some people to write up biased articles on their particular engine about 6 months ago. I was then going to work with a couple of less-biased people to mesh these into a series of unbiased articles. The problem with starting out writing unbiased articles is you end up only covering the highest common factor - which is useless to games programmers. I knew I could trust myself to be impartial IF I had the evidence/facts from biased people in front of me (because I could check everything they said!), and I know a couple of people around here who have shown their ability and willingness to be impartial already - and who have useful real experience with using some or all of these 3D engines.

OK, so it was a miserable failure. All I ever achieved was getting a short article from Cas on LWJGL (too short, sadly - it needed more biased details. I was surprised he was so unevangelical - he must have been tired or something).

So. Chalk that one up to experience.

....

Here's another idea, which I suspect will work MUCH better: having thought about it, I wondered what I'd do if I were an expert in each engine already and able to write the entire review series myself. Ummm...aha! Unit tests! Use cases!

The *users*, between us, should be able to work out a series of small independent tests - much like the demos that already exist for each API. The API authors / most prolific users then have to code up those tests. It would be best if they could do two versions of each - one which is just the "off the top of my head, not trying to be clever" and then one which "optimizes performance, and perhaps cheats a little here and there" (e.g. in the screenshots in this thread the plain version would have to render all those polys, and the optimized version could use LOD to its heart's content). This answers two of the most common questions:

- How easy is it to do X in Y (don't care about speed, just as long as it displays correctly)?
- How fast is it *possible* to do X in Y using optimizations etc.?

I'm too busy to do much on this myself right now, but...I'd suggest the following list for starters (would like to cross-post to Xith forums...):

- a Terrain demo, plain multicoloured shaded polygons, fills the view window mostly to the horizon
- ditto, but with blended textures (no harsh edges when the texture changes from one to another)

- a clone of the nVidia ToySoldiers demo (very large number of identical models loaded at once, all animated in lock-step - i.e. under the hood it ought to be using onboard T&L etc., as well as testing the SG's culling effectiveness)
- ditto, but with each model in a different state of animation at the same time (c.f. what happens when the robot appears in the NV demo - they scatter and are no longer in lock-step, so their polygons are no longer identical)

- a 3D maze, from a 3rd-person floating perspective, with a model running around it at random, and a simple HUD overlaid on top, showing various things
- ditto, with a detailed HUD and lots of models running around it (these simple tests would expose current known problems in Xith's HUD/Overlay system)

...etc. Of course, each test needs MUCH more detailed descriptions than that, with a checklist of requirements, and precise dimensions, and actual model files and texture files, to make for fair comparisons (and to make implementation easy!).

I have a feeling this might go the same way as the last idea, simply because it's too much like hard work

But...surely not!

The test cases can be invented by anyone who has a decent grasp of what they'd like to do in a game, and then they can be implemented by any user of the API - each could easily be written by a different person (a la the NeHe conversions to JOGL, LWJGL, etc.).

The individual tests themselves are pretty trivial - they should be typically hardly any more complex than the demos that each engine already has and maintains, but the advantage is that they come from a common root, allowing an (approximate) like for like comparison (which is what no-one has access to right now).

I agree the best thing to make sure this happened would be to define a couple and implement them myself for one API, and I'm sure other people would then say "what the heck" and have a go at doing it for their own API...but, sadly, I'm too busy.

Unfortunately I think most everyone else is also too busy for this sort of task. There just always seem to be more important tasks at hand.

On the comparison to the NeHe ports: no disrespect to the providers (of what I'm sure is a fundamentally useful resource), but those are a trivial (in terms of engineering) porting exercise compared to developing the sort of demos you're talking about.

As to the validity of these things as a benchmarking tool after development: while I don't think there is any reason to believe there would be anything wrong with them, they'd never be kept up to date with the latest developments in the APIs or be 100% understood and accepted.

As to the validity of these things as a benchmarking tool after development

Perhaps this explains our different perspectives a little - I'm not thinking of benchmarking as being part of the purpose (despite references to making versions tuned for performance) but instead ONLY thinking of using them as a comparison for people trying to appreciate the differences as potential users.

For instance, 10 minutes looking at JOGL code side-by-side with LWJGL code to do the same kind of thing is enough for a lot of people to quickly decide which one they want to use. In that case it's easier because you're only considering low-level differences, e.g. method-signatures etc. You either see C syntax and think "yes! Want that!" or you see a more OOP approach to OGL and have similar thoughts (both of those being aspects that leap out at you as soon as you get the code in front of your eyes).

But right now we can only compare apples to oranges with X and J etc. - there are no "common" cases.

It's not a joke. They look absolutely nothing alike and is exactly like the example you gave.

" For instance, 10 minutes looking at JOGL code side-by-side with LWJGL code to do the same kind of thing is enough for a lot of people to quickly decide which one they want to use. In that case it's easier because you're only considering low-level differences, e.g. method-signatures etc. You either see C syntax and think "yes! Want that!" or you see a more OOP approach to OGL and have similar thoughts (both of those being aspects that leap out at you as soon as you get the code in front of your eyes). "

One big difference between Xith and jME is that jME has translation, rotation and scaling for each Spatial in the scene graph, even a TriMesh. Xith only has this data on a TransformGroup. To me it seems the jME solution would waste more resources, both memory and CPU.

and the TransformGroup has the translation, rotation and scaling - there's no difference. Actually, Xith3D's approach is more memory-wasteful, as you need to create a further object (a TransformGroup) rather than just 3 Vector3fs as you do in jME.

I have a feeling Phazer was saying that jME effectively has a transform group built into every node and has to account for each one when rendering.

Example: 5 TriMeshes in a Group, not transformed in any way. In jME the rendering process has to at least check whether the transform components of each mesh have been used and hence need to be honoured during rendering. In Xith, there would be no transform information, since there is no TransformGroup involved.

Memory usage would be higher in jME because every node would use memory for transform information even if it's not used.
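The tradeoff being argued here can be sketched with two hypothetical node types - the names below are made up for illustration and are not the real jME or Xith3D classes:

```java
import java.util.ArrayList;
import java.util.List;

class Vec3 { float x, y, z; }

// jME-style: every spatial carries its own transform fields, so the
// renderer must check/apply them on every node, used or not.
class SpatialWithTransform {
    Vec3 translation = new Vec3();
    Vec3 rotation = new Vec3();
    Vec3 scale = new Vec3();
}

// Xith3D/Java3D-style: plain leaves carry no transform; transform state
// exists only where a TransformGroup is inserted in the graph.
class PlainLeaf { }

class TransformGroupLike {
    Vec3 translation = new Vec3();
    Vec3 rotation = new Vec3();
    Vec3 scale = new Vec3();
    List<PlainLeaf> children = new ArrayList<>();
}

public class TransformDesigns {
    public static void main(String[] args) {
        // Five untransformed meshes, as in the example above.
        int jmeVecs = 0;
        SpatialWithTransform[] jmeScene = new SpatialWithTransform[5];
        for (int i = 0; i < jmeScene.length; i++) {
            jmeScene[i] = new SpatialWithTransform();
            jmeVecs += 3; // translation + rotation + scale, always allocated
        }
        // Xith-style: the same five leaves carry no transform state at all;
        // a TransformGroup (extra node + 3 Vec3s) is only paid for when needed.
        int xithVecs = 0;
        System.out.println(jmeVecs + " vs " + xithVecs + " transform vectors");
    }
}
```

So for untransformed geometry the Xith-style graph allocates nothing, while the jME-style graph pays for the vectors up front; the counter-argument in the thread is that once you *do* need a transform, jME's 3 vectors are cheaper than a whole extra TransformGroup node.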
