In the Javagaming forum community there are several efforts to provide 3d model file loaders for the various 3d APIs: for JOGL, for LWJGL, for Xith3d, maybe also for Java3d. Would it make sense to try to create a common base for parsing the raw 3d files with all their special stuff (animations, etc.), let's call it a low level 3d loader, and then provide an interface that offers the data to high level loaders? That way the same low level code for a given 3d file format could be used by several high level "clients" (Xith3d as well as JOGL/LWJGL, ...).

What do you think? If it's silly, just say so, please; I'm not a 3d expert, and I find parsing 3d file formats extremely difficult. So it's rather a discussion for experts with good knowledge of 3d and 3d file formats. :-)

We already started a discussion about this in a thread in the Xith3d forum. Please have a short look at it, but let's reply here.

These are old formats, but common on the Net. The current formats of the 3d modelers (.MA/.MB, .MAX) are very difficult to parse... (commercial converter tools like DeepExploration or PolyTrans need the applications' DLLs to manage the job).

For geometry itself, you can use the geomdata package I created for xith3d (only 1 class depends on xith3d, and it is designed to be substituted by other bindings). But if you start to think about anything more than geometry, I don't see a chance of creating anything common. In fact, even creating a reasonable loader for xith3d alone is a problem. In the NWN format, for example, I have emitters and danglymeshes. Both of them require at least a simple physics engine. Emitter data is _extremely_ NWN specific; I have not seen half of the options in any other game/format. For java3d, I added a bunch of Behaviours to the created models, which managed all the complications, but they did not interact with the rest of the application (so, for example, no particles bouncing off the floor/objects). I don't think there is any way to do it in a portable, application-independent manner (not to mention binding-independent).

Even if you only think about the hierarchical structure, which is common to almost all model formats, you will need to create things like Group, Geometry, TransformGroup, Light, etc. in this common binding, which means creating a certain subset of the java3d/xith3d scenegraph. If you start to add material colors, polygon attributes, etc., you will end up with 75% of their scenegraphs. I think it would just be easier to take the xith3d scenegraph out of xith3d, change xith3d to work on its abstract form (by removing things like references to Atoms), and then require all people who want to use the 'common loader infrastructure' to be able to render this scenegraph. An alternative option would be to define a corresponding set of interfaces and require a number of factories to create instances of all the classes - but either way, you will impose a scenegraph architecture on all renderers that want to use it.
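The "set of interfaces + factories" alternative could be sketched roughly as below. All names here (NodeFactory, SimpleFactory, the load method) are invented for illustration; a real common scenegraph subset would be much larger (materials, lights, polygon attributes, ...).

```java
public class CommonSceneSketch {
    interface Node { }
    interface Geometry extends Node { void setCoordinates(float[] coords); }
    interface Group extends Node { void addChild(Node child); }

    /** Each binding (xith3d, java3d, ...) would supply one factory. */
    interface NodeFactory {
        Group createGroup();
        Geometry createGeometry();
    }

    /** Trivial in-memory binding, standing in for a real renderer. */
    static class SimpleFactory implements NodeFactory {
        static class SimpleGroup implements Group {
            final java.util.List<Node> children = new java.util.ArrayList<>();
            public void addChild(Node child) { children.add(child); }
        }
        static class SimpleGeometry implements Geometry {
            float[] coords;
            public void setCoordinates(float[] c) { coords = c; }
        }
        public Group createGroup() { return new SimpleGroup(); }
        public Geometry createGeometry() { return new SimpleGeometry(); }
    }

    /** A loader builds the scene only through the factory interface. */
    static Group load(NodeFactory factory) {
        Group root = factory.createGroup();
        Geometry geom = factory.createGeometry();
        geom.setCoordinates(new float[]{0, 0, 0, 1, 0, 0, 0, 1, 0});
        root.addChild(geom);
        return root;
    }

    public static void main(String[] args) {
        SimpleFactory.SimpleGroup root =
            (SimpleFactory.SimpleGroup) load(new SimpleFactory());
        System.out.println(root.children.size()); // prints "1"
    }
}
```

The point of the sketch is exactly the objection above: every renderer that wants the loader must implement this whole node hierarchy, so the scenegraph architecture is imposed on it.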

[thinks of OpenGL and wonders why a similar approach can't be taken here]

Ignoring Cas for the moment (sorry), what we have here is either a multi-layer protocol problem (hey, I'm a networking guy), a data-structure problem, or both.

Abies' approach really just looks at it as a data-structure problem, and I agree with the broad conclusions. However, if you look at it as a layering issue as well (e.g. start thinking about OpenGL's far-from-perfect-but-still-works-quite-well extensions), you see that while it is true that you would have lots of non-shared, unique data structures, you would ALSO have lots of shared ones.

As to layers, you might have a set such as:

1. file checker: examines the file, works out the format, chooses a parser
2. parser: parses the file into a set of standard data structures.
   2.1. first pass: just gets the raw triangle data
   2.2. 2nd pass: filters out the "base" tri data, i.e. the non-animated starting point for the model
   2.3. nth pass: progressively parses extra info, sometimes just by going sequentially through the file, sometimes by re-parsing it from scratch (depending upon the file format). Each pass is implemented as a separate module, using more and more esoteric data structures (e.g. for NWN, one of the last passes would deal with its emitter data).
3. ...etc. (I haven't looked at more than 2 or 3 3D file formats)
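The layering above could be sketched as a detector plus an ordered list of passes over a shared data structure. Everything here (FormatDetector, ParserPass, RawModel) is an invented name, just to make the shape concrete:

```java
import java.util.ArrayList;
import java.util.List;

public class LayeredLoaderSketch {

    /** Layer 1: examines the file header and picks a parser. */
    interface FormatDetector {
        ModelParser detect(byte[] header);
    }

    /** Layer 2.x: one pass over the file, filling in more of the model. */
    interface ParserPass {
        void run(byte[] fileData, RawModel target);
    }

    /** A parser is just an ordered list of passes, from basic to esoteric. */
    static class ModelParser {
        private final List<ParserPass> passes = new ArrayList<>();
        void addPass(ParserPass pass) { passes.add(pass); }
        RawModel parse(byte[] fileData) {
            RawModel model = new RawModel();
            for (ParserPass pass : passes) pass.run(fileData, model);
            return model;
        }
    }

    /** Shared data structure filled in by successive passes. */
    static class RawModel {
        final List<float[]> triangles = new ArrayList<>();
        Object formatSpecificData; // extras parked by late, esoteric passes
    }

    public static void main(String[] args) {
        ModelParser parser = new ModelParser();
        // pass 2.1: raw triangle data
        parser.addPass((data, model) -> model.triangles.add(new float[9]));
        // pass 2.n: format-specific info (e.g. NWN emitter data)
        parser.addPass((data, model) -> model.formatSpecificData = "emitters");
        RawModel model = parser.parse(new byte[0]);
        System.out.println(model.triangles.size() + " tri, " + model.formatSpecificData);
    }
}
```

Clients that only care about geometry would stop after the early passes; only the few that understand the esoteric data structures would register the later ones.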

It's taking what Bomb suggests, but instead of gunning for 2 layers, having several more. This is the tried-and-tested approach of translators/compilers which have to solve almost exactly the same kind of problem: have loads of layers!

Can we not just get a quick-and-dirty common XML DTD sorted and then work out how to write exporters for the tools rather than importers for Java?

Cas

Exporters increase the number of formats that a given importer can use. However, they do nothing intrinsically to let you use any of the esoteric features of the different exporters, unless you have an intermediate format (i.e. your XML *Schema* [...death to DTDs!]) which has *all* the features of *all* formats... and which you can actually load! If you can write it, great, but it seems pretty tough going AND you're in serious danger of merely shifting the difficult part from "reading arbitrary formats" to "interpreting an incredibly complex format that includes *everything*".

I thought one of the main use-cases here was that people wanted the data loaded into in-memory data structures. This isn't solved by exporters, because people then have to write their own importers for the Schema, OR someone has to invent a generic set of data structures and write an importer from the Schema to this generic set... which means effectively doing all the work that Bomb suggested in the first place, only now you've done the extra work of writing the exporters and the Schema as well, with the benefit of being able to achieve the overall system in two discrete stages.

Well, in the very little I can comment in a programming thread, in my experience as an artist... A dedicated exporter for an application, e.g. Max, is very good, as you will surely be supporting lights, cameras, modifiers, lots of very specific stuff. Quite good for the artist, as it gets more WYSIWYG, but... Now the package gets updated to whatever .214b version, and you need to at least recompile your plugin. Often it involves much more. I have seen great exporters die because at some point the developer got tired of updating for every little package update...

Just multiply that by n packages...

A common format already supported by most software (such as 3ds (though it has weird limits on the artist side), OBJ or *.x), imho, is better, as you make the effort only once. All those *.x exporters already made (Panda Exporter for Max, the exporters for LW, for Maya...) must maintain *.x compatibility, or they won't be of much use for DirectX engines, nor for import into other engines. (I read recently that Panda Exporter didn't add a certain new *.x feature just to keep compatibility with other applications. Also, Gamestudio and DarkBasic Pro import *.x animations; a standard must be maintained there, as those packages cost quite a few bucks...) So it must be quite safe to think that once you stay compatible with dx8, or dx9 (if it can also import the more extended v8 exporters), you stay compatible with all those packages at once. Just like using OBJ or 3ds. Or md2 and md3. One thing I haven't said yet, though: there are fewer, or fewer free, exporters for md2 and md3. Still, there are some... the qtip plugin for md2 is commercial, but its limits still allow you to export... for md3 there are quite a lot more exporters and importers for Max, though. But there are fewer free or cheap tools that export md2 and md3 than there are for *.x files (besides, md2 and md3 have neither bones nor weights).

Well, I should not be talking here; from this point on it's more a programmers' decision.

...and speaking of the needed features in 3d importers/exporters, at least to my knowledge:

- mesh vertices
- uv mapped vertices
- material vertices (which vertices have which material applied)
- material settings. OBJ, for example, does it nicely: an ascii file is produced by whatever 3d software, listing the materials applied to that object, each with specular, shininess, opacity, etc. values. The actual mesh file (with all the other info) is another ascii file. The X format has an ascii variant, maybe better or easier than binary, though I read somewhere that binary reads quicker. Bump map, reflective map (a relative path to the tga, and a value for the strength with which it is shown, as for any type of map in a material), etc., if the engine can afford it. Double sided rendering and opacity (both features let the artist do leaves, etc.) tend to be way more important than bumps...

- smoothing groups. In OBJ, I think, it's just an "s" for a bunch of vertices. It allows setting the normals equal in that area of vertices, while making hard edges at the frontiers. In 3d software it is usually exported if "export smoothing groups" is checked. Some software breaks the mesh (duplicates the vertices at those "frontiers"), some does some trick...
- smoothing value. I don't know how this is written in a format. It is like the last one, but it automatically sets the hard edges based on an angle threshold (between a face and its adjacent one), determined by the artist. In 3d software it is usually exported if "export normals" is checked.
- bones (which form a skeleton, and have too many advantages on the artist and coder side to list here...)
- weights. They make the bending of the mesh organic, nice and human-like. I consider them essential. (Even more: if the option were bones with no weights, I would strongly prefer md2 or md3, as they will at least bend shoulders and the mesh in general OK.)
- animation (I suppose rotations and translations of joints). Somewhere I read there's usually a place where the mesh vertices are stored, another file or chunk for the weights (how much strength each bone (often several) has over each vertex), and another chunk for the joint rotations and translations. Also a field for interpolation, linear and/or spline. Frame rate (time scale) too.
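The data the list above asks for could be held in a few plain classes like these. All field names here are invented for illustration; real formats each slice this differently:

```java
public class ModelDataSketch {
    /** Per-material settings, OBJ/.mtl style. */
    static class Material {
        float[] specular = new float[3]; // r, g, b
        float shininess, opacity;
        String bumpMapPath;              // relative path to the texture file
        float bumpStrength;              // how strongly the map is shown
        boolean doubleSided;             // needed for leaves, etc.
    }

    /** One bone influence on one vertex; several per vertex, summing to 1. */
    static class BoneWeight {
        int boneIndex;
        float weight;
    }

    /** One animation sample for one joint. */
    static class Keyframe {
        float time;                      // in the file's time scale
        float[] jointRotation;           // e.g. a quaternion
        float[] jointTranslation;
        String interpolation;            // "linear" or "spline"
    }

    public static void main(String[] args) {
        Material leaf = new Material();
        leaf.opacity = 1f;
        leaf.doubleSided = true;
        BoneWeight w = new BoneWeight();
        w.boneIndex = 0;
        w.weight = 0.75f;
        System.out.println(leaf.doubleSided + " " + w.weight);
    }
}
```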

For a scene, while bones, weights and animation don't apply, it's better if cameras, lights and multitexturing are supported. But if not, those are just positioned by the coder. Indeed, yesterday I read to my surprise that 3ds and vrml2 support cameras and lights... unluckily, most 3d software uses only a subset of each format's capabilities, 3ds being an especially dramatic case of this. The multitexturing issue: it seems dx8 only supports 2 levels of texture, while dx9 supports a lot more. Again, it depends on compatibility... the C++ dx9 engines I tested with (3 right now) were loading my dx8 files OK, so I guess compatibility is not lost... but this could easily end up as a compatibility problem; if this format is finally chosen, I suggest some prior tests with the exports of the various software packages...

Surely I left out many settings... I think those were the most important...

I was thinking about the same thing a couple of months ago. Let's see: this is an encoding for vrml, but it could be xVrml or xml just as well. It would support all the basic stuff a game needs, plus it could be extended ad hoc. A parser and tree builder for this can be created with JavaCC in minutes. An example inspired by vrml and xml:
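A minimal guess at what such an xml scene encoding could look like, parsed here with the JDK's built-in DOM parser rather than a JavaCC grammar; all element and attribute names (scene, transform, mesh, coords, ...) are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlSceneSketch {
    // Hypothetical scene encoding: basic stuff only, extensible ad hoc.
    static final String SCENE =
        "<scene>" +
        "  <transform translation='0 1 0'>" +
        "    <mesh name='box'>" +
        "      <coords>0 0 0  1 0 0  0 1 0</coords>" +
        "    </mesh>" +
        "  </transform>" +
        "  <light type='point' position='5 5 5'/>" +
        "</scene>";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(SCENE.getBytes("UTF-8")));
        Element mesh = (Element) doc.getElementsByTagName("mesh").item(0);
        System.out.println(mesh.getAttribute("name")); // prints "box"
    }
}
```

A JavaCC-generated parser would give you a typed tree instead of generic DOM nodes, but the shape of the document would be the same.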

There are some very cool and FREE modelers like Wings3d or Blender. I don't know about Wings, but Blender supports the whole framework: model / texture / light / animate / render. And since it is fully scriptable in Python and has absolute control over datablock references (linking, duplication, etc.), it would be easy to write an exporter in Python for it.

Something I would like (this is a bit tough, however) is to have two scenegraph representations: one closer to the model (the classes returned by JJTree would do fine for the example above) and another closer to the implementation (jogl, java3d, xith3d, software renderer, whatever). Neither has to know about the other. The first has parts of the scenegraph cached and mapped into the second representation. Once the first is changed, the cached representation is automatically invalidated and reconstructed for that branch. This would be very good for the freedom and independence of the 3d model, since the cached scenegraph would have the freedom to be something completely different from the model scenegraph. It would be the responsibility of the cached scenegraph (through the mapping between the two scenegraph branches) to know when, and to what extent, its own graph nodes would have to be rebuilt. Like I said, this would be somewhat tough, but it wouldn't be such a weight on speed if done well.
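The cache-and-invalidate part of that idea fits in a few lines. This is only a sketch with invented names (ModelNode, peer); a real mapping would track dirtiness per branch, not per node:

```java
import java.util.function.Function;

public class CachedPeerSketch {
    /** Model-side node that lazily caches its renderer-side peer. */
    static class ModelNode {
        float[] coords = {0, 0, 0};
        private Object peer;          // renderer-side representation
        private boolean dirty = true;

        void setCoords(float[] c) {   // any model change invalidates the cache
            coords = c;
            dirty = true;
        }

        /** Rebuilds the cached peer only when the model has changed. */
        Object peer(Function<ModelNode, Object> builder) {
            if (dirty) {
                peer = builder.apply(this);
                dirty = false;
            }
            return peer;
        }
    }

    public static void main(String[] args) {
        ModelNode node = new ModelNode();
        Object p1 = node.peer(n -> "built");
        Object p2 = node.peer(n -> "rebuilt");     // cache hit: builder ignored
        System.out.println(p1 == p2);              // prints "true"
        node.setCoords(new float[]{1, 2, 3});      // invalidates the cache
        System.out.println(node.peer(n -> "rebuilt")); // prints "rebuilt"
    }
}
```

The builder function is where the mapping lives: the cached peer can be anything the renderer wants, completely unrelated in shape to the model node that owns it.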

Mmh - the discussion seems to be going in the direction of an xml format. Hmm, doesn't such a format already exist?:

collada ?!?

I believe this is also going to be one of the standards in the near future.

Correct me if I'm wrong...

Collada definitely will become a standard! (or maybe already is today)

BUT its primary purpose is to make it easier to transfer animated models (with soft-skinning, morph targets, ...) from one 3d modelling tool to another (much like FBX, apart from the missing SDK). It's great seeing such an effort going on, since it is really a pain to export an animated human with skeletal and facial animation from one DCC and import it into another.

It is NOT designed for real-time applications like games. One major reason is that loading a Collada file takes too long. Nearly every upcoming game engine features seamless (re-)loading of objects from files when the player moves to another location. Using a format like Collada would be a great waste of computational power in that case.

Can we not just get a quick-and-dirty common XML DTD sorted and then work out how to write exporters for the tools rather than importers for Java? It makes more sense to code exporters than importers.

I totally agree with that: just write a (simple) exporter to a format (e.g. xml based) that fits your requirements, and extend it when new ones come up. Actually, it is not very complicated, since most DCCs have nice scripting languages (Maya: MEL, 3DS MAX: MAXScript, Blender: Python, ...) and a relatively clear API.

Instead of having a common xml format, I think what you need is a common API, much like the SAX API that can read any xml document easily. I'll try to give an example and give you the similarities with SAX:

SAX knows events like "tags", "attributes", and "characters". That's basically all there is in an XML file. In a 3D model file, we'd have events like "vertices", "coordinates", "groups", "animation", ...

For every file format, you'll need a parser. SAX readers are basically different implementations that read the same kind of documents; when reading 3D files, however, we would need one parser for every format (for example, an "OBJParser", a "3DSParser", a "COLLADAParser", etc.). Every time this parser encounters a vertex in the 3D file, it will fire a "vertex" event. The parser logic does not need to know who is interested in this vertex, and it does not need to know how to store it. It just says: "hey, I've found a vertex!"

On the receiving end of this pipeline are the handlers. In SAX, if you want to parse xml documents, you simply write a small handler that looks only for the xml tags you're interested in. For our 3D models, you'll need to write a handler for every scenegraph notation into which you wish to store the files. A game engine like Xith would have a "XithHandler", for example, and JME would have a single "JMEHandler". And when one of the parsers shouts "I've found a vertex!", the XithHandler replies: "So you've found a vertex? Here, let me handle it and store it in my proprietary format..."

So when somebody invents a new 3D file format, they would only have to write the "parser" part. This parser would fire a bunch of events, just like all the other parsers. By writing one parser class, the file can now be parsed by all game engines that have a Handler!

If Java3D wants to have a model loader, they'll have to write a handler, and for every kind of event fired, they would have to decide how to store that internally. By writing one Java3DHandler, they can read all 3D formats someone bothered to write a parser for!

The Java3D team need not know about the newly invented 3D format, just like the parser for the new format does not need to know about Java3D. Any programmer who wishes to read the new format inside a Java3D scene, simply downloads the parser for the new format, and the handler for Java3D. Their code would look like this:
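A minimal self-contained guess at what that usage could look like. Every name here (ModelHandler, ModelParser, FakeObjParser, CountingHandler) is invented for this sketch; the fake parser just fires a few canned events instead of reading a real file:

```java
public class LoaderUsageSketch {
    /** Receiving side: one implementation per engine (hypothetical API). */
    interface ModelHandler {
        void startModel();
        void vertex(float x, float y, float z);
        void endModel();
    }

    /** Producing side: one implementation per file format (hypothetical API). */
    interface ModelParser {
        void setHandler(ModelHandler handler);
        void parse(String fileName);
    }

    /** Stand-in for a real OBJ parser: fires canned events. */
    static class FakeObjParser implements ModelParser {
        private ModelHandler handler;
        public void setHandler(ModelHandler h) { handler = h; }
        public void parse(String fileName) {
            handler.startModel();
            handler.vertex(0, 0, 0);
            handler.vertex(1, 0, 0);
            handler.vertex(0, 1, 0);
            handler.endModel();
        }
    }

    /** Stand-in for e.g. a Java3DHandler: just counts vertices. */
    static class CountingHandler implements ModelHandler {
        int vertices;
        public void startModel() { }
        public void vertex(float x, float y, float z) { vertices++; }
        public void endModel() { }
    }

    public static void main(String[] args) {
        ModelParser parser = new FakeObjParser();   // chosen per file format
        CountingHandler handler = new CountingHandler(); // chosen per engine
        parser.setHandler(handler);
        parser.parse("model.obj");
        System.out.println(handler.vertices);       // prints "3"
    }
}
```

Swapping the engine means swapping only the handler; swapping the file format means swapping only the parser.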

I've listed the methods in the order they would typically be executed. If your program is interested in triangles, you can allocate memory each time you get a startTriangle() call, store the vertices as they fly by, and save the triangle into your own format every time you get an endTriangle() call.

On the other hand, if your program is not interested in triangles, but only in vertices (far-fetched example, but the same would hold for smoothing groups, animation frames, etc), the startTriangle() and endTriangle() methods would do nothing, and you would only code the start/endVertex() methods.
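The selective-handling idea could be sketched like this (names invented, not from the original post): this handler assembles triangles from start/endTriangle events; a vertices-only handler would simply leave those two methods empty.

```java
import java.util.ArrayList;
import java.util.List;

public class TriangleHandlerSketch {
    static class TriangleCollector {
        final List<float[]> triangles = new ArrayList<>();
        private List<Float> current;

        void startTriangle() {            // allocate for the incoming triangle
            current = new ArrayList<>();
        }
        void vertex(float x, float y, float z) {
            if (current != null) {        // store vertices as they fly by
                current.add(x); current.add(y); current.add(z);
            }
        }
        void endTriangle() {              // save into our own format
            float[] tri = new float[current.size()];
            for (int i = 0; i < tri.length; i++) tri[i] = current.get(i);
            triangles.add(tri);
            current = null;
        }
    }

    public static void main(String[] args) {
        TriangleCollector c = new TriangleCollector();
        c.startTriangle();
        c.vertex(0, 0, 0); c.vertex(1, 0, 0); c.vertex(0, 1, 0);
        c.endTriangle();
        System.out.println(c.triangles.size() + " triangle(s), "
                + c.triangles.get(0).length + " floats"); // "1 triangle(s), 9 floats"
    }
}
```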

EDIT2: You could also easily have handlers that write to a file, like an "OBJHandler", a "3DSHandler" and a "COLLADAHandler"... This way, you could read a 3D file in ANY format and write it to ANY format. Theoretically, you could read a 3DS file and store it as an OBJ file, if you constructed a 3DSParser and coupled it with an OBJHandler.
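Such a writing handler is the same shape as the reading ones, just with output instead of storage. A tiny invented sketch (OBJ's real "v x y z" vertex lines are the only format detail assumed here):

```java
public class ObjWriterSketch {
    /** Handler that renders vertex events as OBJ text. */
    static class ObjWritingHandler {
        final StringBuilder out = new StringBuilder();
        void vertex(float x, float y, float z) {
            out.append("v ").append(x).append(' ')
               .append(y).append(' ').append(z).append('\n');
        }
    }

    public static void main(String[] args) {
        ObjWritingHandler handler = new ObjWritingHandler();
        handler.vertex(0f, 0f, 0f); // a parser for any format would drive these
        handler.vertex(1f, 0f, 0f);
        System.out.print(handler.out);
    }
}
```

Plug this handler into a hypothetical 3DSParser and you have a 3DS-to-OBJ converter without either side knowing about the other.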

Unfortunately, I don't have NEARLY enough experience with 3D data formats to distill a good, common API. One thing is for certain: the API would require a great deal more methods than the SAX API :-)
