Opinions vary, but I would like to get experienced ones on which 3D object file formats (including those for terrain, vehicles, trees, and so forth) result in the best performance, all other things being equal. This is the general question.

The more specific question is: which file formats for 3D objects yield better performance in the C#-based games I will be creating?

Why in the world would any game developer use the 3ds file format? Is there some kind of performance advantage because it conforms to the old naming conventions?

The 3D model game file formats I have used are 3ds, obj, x, and a proprietary one not yet published, so I have some experience.

Any and all comments, discussion, and criticism are welcome as long as they are somehow related.

Clinton

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

Most exchange formats are not good for real-time 3D; common exchange formats include fbx, 3ds, Collada, etc. You will want to use a format made specifically for real time, such as x or mesh (Ogre3D). The difference is mostly in load times, but depending on how the program stores the data internally, you may also get better performance.

If this post or signature was helpful and/or constructive please give rep.

Uh, the file format will have zero impact on actual rendering performance, as the model will be converted to the program's internal format regardless of the original format. As for loading performance, it really depends: binary formats are usually the fastest and take the least space on disk, but are comparatively much harder to parse.

For instance, many people use .obj not because it is fast, but because it is very easy to parse, readily editable by any text editor, and is usually good enough for most models even if it is quite wasteful in terms of storage. But loading the same model from a .obj and from a .3ds will yield the exact same model representation in your game's memory and there will be no performance difference after loading is complete.
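To illustrate how little machinery .obj parsing needs, here is a minimal sketch in C++ (a hypothetical helper for triangulated meshes only, ignoring normals, texture coordinates, and the v/vt/vn index forms a real loader would handle):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Minimal .obj reader: handles only 'v' (vertex) and 'f' (triangular face)
// records, which is what makes the format so approachable. Real .obj files
// also carry normals, texture coordinates, and v/vt/vn index triplets.
void parseObj(std::istream& in,
              std::vector<Vec3>& vertices,
              std::vector<unsigned>& indices) {
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 v;
            ls >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        } else if (tag == "f") {
            unsigned a, b, c;
            ls >> a >> b >> c;          // .obj indices are 1-based
            indices.push_back(a - 1);
            indices.push_back(b - 1);
            indices.push_back(c - 1);
        }                               // everything else is ignored
    }
}
```

Note how much of the work is string handling and text-to-number conversion, which is exactly the cost being discussed in this thread.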

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

binary formats are usually the fastest, and also take the least space on disk, but are comparatively much harder to parse.

Hm, I would say the exact opposite (talking about the parsing part).

Text formats are easier for a human to read and understand, but reading them from code is harder because you must do some string processing, find keywords, etc., and, last but not least, convert textual representations of numbers into actual numbers.

Binary formats are nearly impossible for a human to read, but for code they're usually very easy. Specifically for 3D mesh formats, there isn't much surplus information in the binary files; all you need to know is that, for example, the first 4 bytes represent the number of faces, then you have 3*4 bytes for a normal vector, then 3*3*4 bytes for the 3 vertices of a face, and so on (a simplified example of a binary stl). And when reading those bytes, for example the 12 bytes of a normal vector, you can use the data directly to fill your vector structure in the code; no processing needed, just file.read or memcpy.
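The simplified binary-stl layout described above can be sketched like this (the struct and function names are made up for illustration, and the code assumes a little-endian machine, which matches the format's own byte order):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

struct Vec3 { float x, y, z; };

// One binary-STL triangle record: a 12-byte normal, three 12-byte vertices,
// and a 2-byte attribute count -- 50 bytes per triangle in the file.
struct StlTriangle {
    Vec3 normal;
    Vec3 v[3];
    uint16_t attribute;
};

// Parse a binary STL already loaded into memory: skip the 80-byte header,
// read the uint32 triangle count, then memcpy each 50-byte record straight
// into the in-memory structure. No text processing at all.
std::vector<StlTriangle> parseBinaryStl(const uint8_t* data) {
    uint32_t count;
    std::memcpy(&count, data + 80, 4);
    std::vector<StlTriangle> tris(count);
    const uint8_t* p = data + 84;
    for (uint32_t i = 0; i < count; ++i, p += 50) {
        std::memcpy(&tris[i].normal, p, 12);
        std::memcpy(tris[i].v, p + 12, 36);
        std::memcpy(&tris[i].attribute, p + 48, 2);
    }
    return tris;
}
```

Every field lands in the target structure with a fixed-size copy, which is exactly the "just file.read or memcpy" point above.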

The best format for a game that is loading models on the fly will be a custom one that maps directly to its internal data structures. Otherwise, it doesn't matter.

My vote for this. There's nothing better than being able to fill your whole vertex buffer with a single file.read call, reading (vertexsize * vertexcount) bytes.

binary formats are usually the fastest, and also take the least space on disk, but are comparatively much harder to parse.

Hm, I would say the exact opposite (talking about the parsing part).

Text formats are easier for a human to read and understand, but reading them from code is harder because you must do some string processing, find keywords, etc., and, last but not least, convert textual representations of numbers into actual numbers.

Binary formats are nearly impossible for a human to read, but for code they're usually very easy. Specifically for 3D mesh formats, there isn't much surplus information in the binary files; all you need to know is that, for example, the first 4 bytes represent the number of faces, then you have 3*4 bytes for a normal vector, then 3*3*4 bytes for the 3 vertices of a face, and so on (a simplified example of a binary stl). And when reading those bytes, for example the 12 bytes of a normal vector, you can use the data directly to fill your vector structure in the code; no processing needed, just file.read or memcpy.

Well, theoretically I would agree with you (and your explanation is correct), but usually textual formats are kept as simple as possible because optimizing something that isn't meant to be fast is contradictory, whereas binary formats are often much more complex, as they feature things like binary compression, recursive model structures, special constructs, etc., to make them even more efficient. They also often contain tons of metadata.

In short, I correct my statement: binary formats often contain much more stuff to parse, but are indeed easier to parse by a computer.


Just another note on the latest posts: if you decide to make your own mesh file format, you really should make it binary and, as Daaark said, map it directly to your internal structures. I personally wouldn't see any point in creating a custom text mesh file format.

I did specifically what Daaark mentions. I work with .dae (COLLADA) files in the 3d editor, but then use a custom tool to load those up (with AssImp.NET), convert the data to my proprietary mesh object, and then serialize that out into a binary file. That way I only ever store exactly what the library needs to build the mesh at runtime.

In my current engine, I use in-place binary formats wherever possible. Most of these don't require any kind of parsing step, or OnLoad/Init type functions whatsoever. You just read the file from disk into memory, cast the binary blob to some specific type of structure, and your game is ready to go immediately. Loading times are completely bound by I/O speed.

The main thing that OnLoad/Parse functions do is patch up pointer values, because you obviously can't serialize pointers to disk directly -- so most of my resource structures don't use pointers, which means they can be read/written to disk as-is, without any kind of serialization layer. Instead of pointers, I use offsets and integer addresses.

The shader compiler tool takes .hlsl/.cg files, parses/compiles them, and uses a C# binary writer to write out the above structures. The C++ engine can then just load the whole binary file into memory, cast it to a ShaderPackBlob, and the game can access any of the sub-structures without having to parse the file. [edit] Just realised you're asking about a C# engine... I'm not sure how to implement in-place loading of binary structures like this in C#, but it would probably involve unsafe and StructLayout(LayoutKind.Explicit)...
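For what it's worth, the offsets-instead-of-pointers idea described above can be sketched in C++ roughly as follows (all names are invented, and alignment and strict-aliasing concerns are glossed over for brevity):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// A pointer-free resource header: 'nameOffset' and 'dataOffset' are byte
// offsets relative to the start of the blob, so the structure can be
// written to disk and cast back verbatim -- no pointer patching on load.
struct ResourceBlob {
    uint32_t nameOffset;   // offset of a null-terminated name string
    uint32_t dataOffset;   // offset of the payload bytes
    uint32_t dataSize;

    const char* name() const {
        return reinterpret_cast<const char*>(this) + nameOffset;
    }
    const uint8_t* data() const {
        return reinterpret_cast<const uint8_t*>(this) + dataOffset;
    }
};

// "Loading" is just interpreting the file bytes in place.
const ResourceBlob* loadInPlace(const uint8_t* fileBytes) {
    return reinterpret_cast<const ResourceBlob*>(fileBytes);
}
```

Because every reference inside the blob is relative to the blob's own start address, the file can be mapped or read anywhere in memory and used immediately, which is what makes load times purely I/O-bound.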

Can I ask how you handle writing out objects that are runtime-dependent? (I may be making a false assumption here, as well). I was under the impression that things like buffers and Texture objects that require a device context to create are volatile "memory-only" elements. That's one thing I still do in my onLoad methods from the binary format: create the device-contextual buffers via the game's DX11 device before handing off the object reference.

Am I completely off-base in my understanding of these objects? Or is the data valid, and I just need to do what you mention and correct the buffer's internal relation to the device that will render it?

Can I ask how you handle writing out objects that are runtime-dependent? I was under the impression that things like buffers and Texture objects that require a device context to create are volatile "memory-only" elements. That's one thing I still do in my onLoad methods from the binary format: create the device-contextual buffers via the game's DX11 device before handing off the object reference.

Yes, sorry, most of these structures don't require a parsing step. The above structure does perform the pseudocode of: for each program, program.handle = device.Create(code[program.offset]); free(code). i.e. yes, D3D objects must be created and destroyed. However, this is from the D3D9 version of my shader format, so CBuffers specifically don't have to be created via D3D ;) Side note: on consoles with nicer graphics APIs, this isn't required (all that's required is pointer-patching to where you streamed the VRAM data instead).

Having written a loader for the 3ds format I heartily recommend avoiding it if possible. It's an old format with limitations that might cause you problems over a newer format.

For example: 1) All filenames are in 8.3 format, and saving a .max file into .3ds normally truncates the filenames if they are longer than 8.3, which can lead to loss of data if you have a prefix. (Learnt that one the hard way).
2) The maximum number of vertices per object is 65536, which might be a problem depending on how high resolution your meshes are.

My current project uses .obj files because they are easy to parse, pretty much every modeller can save them, and I'm still early in development. Pretty soon I'll have to start using a format with more features such as collada, since .obj doesn't support bones or keyframes etc. But my plan is to convert the collada files into my own format during a resource build phase as others have suggested.

Currently working on an open world survival RPG - For info check out my Development blog:ByteWrangler

Uh, the file format will have zero impact on actual rendering performance, as the model will be converted to the program's internal format regardless of the original format. As for loading performance, it really depends: binary formats are usually the fastest and take the least space on disk, but are comparatively much harder to parse.

For instance, many people use .obj not because it is fast, but because it is very easy to parse, readily editable by any text editor, and is usually good enough for most models even if it is quite wasteful in terms of storage. But loading the same model from a .obj and from a .3ds will yield the exact same model representation in your game's memory and there will be no performance difference after loading is complete.

This is not always the case. For example, a file may need to be loaded during run time. If you are running low on resources, your cache may not load some resources until they are needed at run time, or it may unload something to make room for a different resource. This causes a load during run time and can affect performance. It can be avoided most of the time in a well-written cache system, but not always.

Even triple A games suffer cache misses occasionally.

So in this scenario a 3D binary format will affect performance far less than a slower-to-load exchange format.

So like I said it depends on the internals of what your program is doing under the hood with resources.

Edited by EddieV223, 03 October 2012 - 11:41 PM.


Uh, the file format will have zero impact on actual rendering performance, as the model will be converted to the program's internal format regardless of the original format. As for loading performance, it really depends: binary formats are usually the fastest and take the least space on disk, but are comparatively much harder to parse.

For instance, many people use .obj not because it is fast, but because it is very easy to parse, readily editable by any text editor, and is usually good enough for most models even if it is quite wasteful in terms of storage. But loading the same model from a .obj and from a .3ds will yield the exact same model representation in your game's memory and there will be no performance difference after loading is complete.

This is not always the case. For example, a file may need to be loaded during run time. If you are running low on resources, your cache may not load some resources until they are needed at run time, or it may unload something to make room for a different resource. This causes a load during run time and can affect performance. It can be avoided most of the time in a well-written cache system, but not always.

Even triple A games suffer cache misses occasionally.

So in this scenario a 3D binary format will affect performance far less than a slower-to-load exchange format.

So like I said it depends on the internals of what your program is doing under the hood with resources.

I don't see what cache misses have to do with the storage format; you don't reload data from disk on a cache miss. The CPU cache pulls data from RAM on cache misses (this is automatic), and the only thing that matters for this is the internal (in-memory) format.

If you are streaming data in and out of the system at runtime then yes, you need an efficient storage format, and it might even be worth storing the data in a compressed form and decompressing it as it loads (depending on how much free CPU you have). Runtime streaming of data from disk, however, is only really needed for large open-world games (or for consoles such as the Xbox 360, since it has almost no RAM).

I don't suffer from insanity, I'm enjoying every minute of it.The voices in my head may not be real, but they have some good ideas!

After you've streamed your stored bytes into memory, you've got to do some work with them. Depending on what that work is, and how the file is laid out, you'll get a different number of cache misses during that work. [edit] Ah, I see he's talking about a software resource cache, not the CPU's RAM cache... In that case, smaller, more compact file formats would allow you to fit more files in your 'resource cache' at a time, whereas large, bloated formats would waste memory and fill up your resource budget (requiring fewer resources to be loaded at once, and therefore more streaming).

After you've streamed your stored bytes into memory, you've got to do some work with them. Depending on what that work is, and how the file is laid out, you'll get a different amount of cache misses during that work.

The post I replied to, though, seemed to imply that you'd reload the data from disk on a cache miss. (Maybe I just misunderstood it.)


After you've streamed your stored bytes into memory, you've got to do some work with them. Depending on what that work is, and how the file is laid out, you'll get a different amount of cache misses during that work.

The post I replied to, though, seemed to imply that you'd reload the data from disk on a cache miss. (Maybe I just misunderstood it.)

This is the way it works. You can't keep everything in memory at a single time for large games, especially open-world games; their resource managers (the caches I've been calling them) work over time to keep what you need in memory and remove what you don't. But they are never perfect, in that they often "miss" and have to wait to load from disk.

Having another thread load uncached resources can help, but then you deal with pop-in and things like that.
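A toy sketch of the kind of resource cache being discussed (all names hypothetical): a capacity-limited LRU map where a hit is cheap, and a miss stands in for the slow disk load and may evict the least recently used entry.

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// A toy LRU resource cache: a hit is cheap, a miss simulates a disk load
// and may evict the least-recently-used resource to stay under capacity.
class ResourceCache {
public:
    explicit ResourceCache(size_t capacity) : capacity_(capacity) {}

    // Returns the resource data, "loading it from disk" on a miss.
    const std::string& get(const std::string& key) {
        auto it = index_.find(key);
        if (it != index_.end()) {                  // hit: move to front
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        ++misses_;                                 // miss: slow path
        if (lru_.size() == capacity_) {            // evict the LRU entry
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(key, loadFromDisk(key));
        index_[key] = lru_.begin();
        return lru_.front().second;
    }

    size_t misses() const { return misses_; }

private:
    // Stand-in for the expensive disk read that stalls the frame.
    static std::string loadFromDisk(const std::string& key) {
        return "data:" + key;
    }

    size_t capacity_;
    size_t misses_ = 0;
    std::list<std::pair<std::string, std::string>> lru_;
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::string>>::iterator> index_;
};
```

In a real engine the miss path is where the file format matters: a compact binary format shortens the stall (or the background-thread load causing pop-in), while a bloated exchange format makes it worse.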

Edited by EddieV223, 05 October 2012 - 12:56 PM.
