PrinceC, it is only an awful travesty because you made the mistake of actually looking at the code. If this is auto-generated code from metadata info etc., then don't look at it. Pay no attention to the man behind the curtain. The public API is decent, and that's all you need to care about, so long as the performance is acceptable.

But you want to be able to read a huge number of Vector3f values from a single chunk of memory.. that's why you added the offset, right? But why worry about constructing new Vector3fs to deal with this? They are small and likely short-lived in this context. I would profile first.. then add the offset idea if it was required.
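To make the offset idea concrete, here is a minimal sketch (the Vector3f here is a hypothetical stand-in, not any particular library's class): packed x,y,z triples live in one FloatBuffer, and a single reusable view object is loaded from successive offsets instead of allocating a fresh Vector3f per vertex.

```java
import java.nio.FloatBuffer;

public class PackedVectors {
    // One reusable view object instead of one Vector3f per vertex.
    public static final class Vector3f {
        public float x, y, z;
        void load(FloatBuffer buf, int index) {
            int base = index * 3;           // 3 floats per vertex
            x = buf.get(base);
            y = buf.get(base + 1);
            z = buf.get(base + 2);
        }
    }

    public static float sumY(FloatBuffer packed, int count) {
        Vector3f v = new Vector3f();        // allocated once, reused
        float sum = 0f;
        for (int i = 0; i < count; i++) {
            v.load(packed, i);
            sum += v.y;
        }
        return sum;
    }

    public static void main(String[] args) {
        FloatBuffer buf = FloatBuffer.wrap(new float[] {
            0f, 1f, 2f,   // vertex 0
            3f, 4f, 5f    // vertex 1
        });
        System.out.println(sumY(buf, 2));   // 1.0 + 4.0 = 5.0
    }
}
```

Whether the reuse is worth it is exactly the kind of thing profiling should decide, as suggested above.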

If the man behind the curtain is constantly doing 3x as much work as the C++ man, then sadly certain operations are going to be a lot slower than in C++, and this particular kind of operation is of special significance to people performing intensive geometry processing.

Creating tons of little objects is sadly still a killer in a rendering loop, and the memory is better used for something else.

Consider my age-old BSP conundrum. You have a BSP file, containing data for vertices, triangles, nodes, etc. If you tried to represent this in Java as an actual graph of Vector3fs and so on you'd end up with 50mb of object header bloat before you even got round to storing the data. This is another problem that the sliding struct solves really neatly.

I think it would help get attention if the different advantages of structs were better categorized, and it was made clear that they relate to completely separate abstract use-cases.

e.g., Structs:
1. Provide OO access to "raw" data from an external source (usually either a network protocol or a file format)
2. Provide higher-speed access to large data structures containing many small fixed-size data structures

Sliding Structs:
3. Significantly reduce memory requirements and increase speed for apps that have a huge number of very small objects that cannot effectively be represented as arrays
4. Enable portions of the OO universe to be constrained to a sequential portion of native memory so that the *application* can manually dump and restore them as needed.
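As an illustration of use-case 1, this is roughly what you can hand-write today against a ByteBuffer; a struct declaration would presumably generate something equivalent (the 8-byte header layout here is invented for the example):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hand-written equivalent of a hypothetical:
//   struct Header { int magic; short version; short flags; }
public class Header {
    private final ByteBuffer buf;

    public Header(ByteBuffer buf) {
        // External formats usually fix an endianness; assume little-endian.
        this.buf = buf.order(ByteOrder.LITTLE_ENDIAN);
    }

    public int magic()     { return buf.getInt(0); }
    public short version() { return buf.getShort(4); }
    public short flags()   { return buf.getShort(6); }

    public static void main(String[] args) {
        ByteBuffer raw = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        raw.putInt(0, 0xCAFE);
        raw.putShort(4, (short) 2);
        raw.putShort(6, (short) 0);
        Header h = new Header(raw);
        System.out.println(h.magic() == 0xCAFE && h.version() == 2);  // true
    }
}
```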

NB: I've never looked at using mem-mapped BB's for use-case 4; the last time I was doing that was pre-1.4.x (c.f. below).

I've no idea if these are good categorizations/use-cases, but the current descriptions tend to be different depending upon who you ask, with lots of "Oh, and BTW there's also another good reason", instead of a clear, easy-to-read overview.

Quote

Consider my age-old BSP conundrum. You have a BSP file, containing data for vertices, triangles, nodes, etc. If you tried to represent this in Java as an actual graph of Vector3fs and so on you'd end up with 50mb of object header bloat before you even got round to storing the data.

I've faced the same problem when dealing with massive parse trees / ASTs (of the order of 10^6 - 10^7 (or more) nodes), in the days before BB's existed, and structs would have been a great help (in the end I just borrowed RAM and did partial evaluations instead). In this example, you also want to "checkpoint" frequently (losing the partial results of calculations that have generated many millions of nodes is not something you want to do!), and BB-contained sliding structs (IIRC your definition of the "sliding" struct...) provide a very convenient way of doing this: (temporarily) I don't care about the file format, just let me do a straight-through dump-to-disk at maximum speed. If the system crashes, I at least know I can get the data back...

Quote

This is another problem that the sliding struct solves really neatly.

Indeed: it is "another problem".

I'm not saying it's not a worthy problem to solve, but in the current state of things I think the structs issue comes across in a very confused manner to people who don't already know all the advantages.

Describing it as separate issues may also make it easier for Sun to evaluate in the light of other activities - e.g. if they are separately spending considerable effort elsewhere trying to give objects a smaller memory footprint, then part of the use-cases may already be improved from that different direction.

Sliding Structs: 3. Significantly reduce memory requirements and increase speed for apps that have a huge number of very small objects that cannot effectively be represented as arrays

The archetypal case for this is probably arrays of Complex values. In my opinion this case should not be dealt with via structs, but rather by one of the immutable-object proposals. The structs for external communication and the need for efficient classes for things like Complex are two separate issues that do not deserve a common solution.

I don't care about the file format, just let me do a straight-through dump-to-disk at maximum speed. If the system crashes, I at least know I can get the data back...

Even before the advent of nio, I found dumping data to a file in a binary format was often limited by the disk speed. Depending on the data, piping the stream through a gzip compression would sometimes improve matters further.
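A minimal sketch of that dump-through-gzip pattern (the trivial length-prefixed int format here is invented for the example):

```java
import java.io.*;
import java.util.zip.*;

public class GzipDump {
    // Dump an int array to disk through gzip compression.
    public static void dump(int[] data, File f) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new GZIPOutputStream(new BufferedOutputStream(new FileOutputStream(f))))) {
            out.writeInt(data.length);
            for (int v : data) out.writeInt(v);
        }
    }

    // Read it back to verify the round trip.
    public static int[] restore(File f) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new GZIPInputStream(new BufferedInputStream(new FileInputStream(f))))) {
            int[] data = new int[in.readInt()];
            for (int i = 0; i < data.length; i++) data[i] = in.readInt();
            return data;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("dump", ".gz");
        f.deleteOnExit();
        int[] data = { 1, 2, 3, 4, 5 };
        dump(data, f);
        System.out.println(java.util.Arrays.equals(data, restore(f)));  // true
    }
}
```

Whether the compression helps depends on the data: highly redundant records compress enough to reduce disk traffic, while already-dense binary data may just cost CPU.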

Although in the past I have used 'structs' memory mapped in C++, in most cases I now think this is a mistake. For all but the most trivial objects (and objects which must be guaranteed to remain trivial), the lost capability relative to 'proper' objects eventually becomes a problem.

Why is the BSP example pointless? I'd simply map my BSP file directly from disk into memory and then when I needed to walk it I'd just have a sliding Node struct to follow the tree, and sliding Vector3fs to find coordinates in it, etc.
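For illustration, a sketch of the sliding-node idea written against today's ByteBuffer API (the 12-byte node layout is invented; a real BSP format differs, and the buffer could equally be a MappedByteBuffer straight off disk):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// A "sliding struct" sketch: one Node view re-pointed at different
// offsets of the buffer, instead of one heap object per tree node.
public class SlidingNode {
    static final int NODE_SIZE = 12;  // invented: splitAxis, frontChild, backChild

    private ByteBuffer buf;
    private int base;

    SlidingNode slideTo(ByteBuffer buf, int index) {
        this.buf = buf;
        this.base = index * NODE_SIZE;
        return this;
    }
    int splitAxis()  { return buf.getInt(base); }
    int frontChild() { return buf.getInt(base + 4); }   // -1 = no child
    int backChild()  { return buf.getInt(base + 8); }

    // Walk the whole tree with a single reused view and an explicit stack.
    static int countNodes(ByteBuffer buf) {
        SlidingNode n = new SlidingNode();
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(0);
        int count = 0;
        while (!stack.isEmpty()) {
            int idx = stack.pop();
            if (idx < 0) continue;
            n.slideTo(buf, idx);
            count++;
            stack.push(n.frontChild());
            stack.push(n.backChild());
        }
        return count;
    }

    public static void main(String[] args) {
        // Three nodes: root (0) with children 1 and 2, both leaves.
        ByteBuffer buf = ByteBuffer.allocate(3 * NODE_SIZE);
        buf.putInt(0, 0);  buf.putInt(4, 1);   buf.putInt(8, 2);
        buf.putInt(12, 1); buf.putInt(16, -1); buf.putInt(20, -1);
        buf.putInt(24, 2); buf.putInt(28, -1); buf.putInt(32, -1);
        System.out.println(countNodes(buf));  // 3
    }
}
```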

Holy crap. That code is so close in style and structure to my own that somebody would swear you copied it from me or vice versa. (A good argument against these ridiculous software patents.) The largest difference is that I've got some more complicated indexing to do in some places, like:

Yes, that's exactly the point of structs - directly modifying data in Buffers by mapping normal Java fields over them.

I was looking closer at your proposal Cas, and one of the problems I have is that the Struct class can't have non-buffer data members. Wouldn't we be better off adding some kind of field access specifier or field metadata attribute - like @structmember - that indicates that a field is actually read from the buffer? The structmember attribute could be a little more flexible too, e.g. specifying the number of bytes that get read from the buffer for that data member.
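To sketch what such an attribute might look like, here is a hypothetical @StructMember annotation (the name, its bytes element, and the size rules are all invented for illustration), together with the kind of reflective size calculation a generator or tuned VM might perform:

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

// Hypothetical marker: only annotated fields are mapped into the buffer,
// optionally overriding how many bytes they occupy.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface StructMember {
    int bytes() default -1;   // -1 = derive from the field's type
}

public class StructMemberDemo {
    static class Vertex {
        @StructMember float x;
        @StructMember float y;
        @StructMember(bytes = 2) int packedColor;  // stored in 2 bytes
        int scratch;                                // NOT in the buffer
    }

    // Compute the on-buffer size of the annotated fields only.
    static int structSize(Class<?> c) {
        int size = 0;
        for (Field f : c.getDeclaredFields()) {
            StructMember m = f.getAnnotation(StructMember.class);
            if (m == null) continue;   // plain Java field, lives on the heap
            size += m.bytes() >= 0 ? m.bytes()
                  : f.getType() == float.class ? 4
                  : f.getType() == int.class   ? 4 : 0;
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println(structSize(Vertex.class));  // 4 + 4 + 2 = 10
    }
}
```

This would let a struct class carry ordinary non-buffer fields (like scratch above) alongside its mapped members.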

I reckon that the whole struct thing can be done with metadata and a specially tuned VM now. I don't quite know exactly how metadata works yet, but I have a hunch it does what we want, i.e. mark a class as having its fields laid out in memory in order, etc., and the VM can generate special-case code when it encounters the metadata tags.

Is there anywhere that explains 1.5 metadata and how to use it on the web that's easy to get at?

...but I would still expect the JVM to deal with it and produce correct results by generating code that manipulates non-native-endian mapped fields according to the JLS, instead of just getting it wrong.

I'm not sure I understand you. From what I THINK I understand, this IS what Java does today. If you use DataInput/DataOutput then it twiddles the bytes into a standard form that any Java VM can correctly read in.
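That behaviour is easy to demonstrate: DataOutput is specified to write big-endian ("network order") regardless of platform, so the bytes round-trip identically on any VM.

```java
import java.io.*;

public class EndianDemo {
    // DataOutput always writes big-endian, independent of the host CPU.
    static byte[] intBytes(int v) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeInt(v);
        return bytes.toByteArray();
    }

    static int readBack(byte[] b) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(b)).readInt();
    }

    public static void main(String[] args) throws IOException {
        byte[] b = intBytes(0x01020304);
        // Most significant byte first on every platform:
        System.out.println(b[0] + " " + b[1] + " " + b[2] + " " + b[3]);  // 1 2 3 4
        System.out.println(readBack(b) == 0x01020304);                     // true
    }
}
```

Mapped buffers are different: a ByteBuffer's order() defaults to big-endian too, but you can set it to nativeOrder(), which is where the endianness question for structs comes from.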


I reckon that the whole struct thing can be done with metadata and a specially tuned VM now. I don't quite know exactly how metadata works yet but I have a hunch it does what we want, ie. mark a class as having its fields laid out in memory in order, etc. and the VM can generate special case code when it encounters the metadata tags.

Is there anywhere that explains 1.5 metadata and how to use it on the web that's easy to get at?

Sun is very reluctant to attach any semantics to metadata in the Java compiler and VM beyond emitting compiler warnings or errors:

Why is the BSP example pointless? I'd simply map my BSP file directly from disk into memory and then when I needed to walk it I'd just have a sliding Node struct to follow the tree, and sliding Vector3fs to find coordinates in it, etc.

Cas

You said structs were not to be used for lw objects. I assumed you would want the whole BSP in memory at once, in which case I'd expect you to be using lw objects.

But it hadn't occurred to me that you might want to only selectively/lazily load the file.

How tricky is it to implement deformable maps in this scenario? (And I don't mean just a single elevator, or an on/off hole-in-the-wall; I mean interesting changes.)

You said structs were not to be used for lw objects. I assumed you would want the whole BSP in memory at once, in which case I'd expect you to be using lw objects.

But it hadn't occurred to me that you might want to only selectively/lazily load the file.

I think Cas wants to reuse a single 'struct' to give some "structure" to different areas of the ByteBuffer on the fly. This isn't about many objects or lazy loading; it's just about making the data accessible in an efficient way. It is simply the "view" of the data in the ByteBuffer that we want control over. (I'm sure Cas will correct me if I got that wrong.)

Yes, that's what I, and many games developers, need to be able to do. There are so many overheads to doing it with Java objects it's just not feasible.

There are plenty of other uses too, such as a vertex buffer processor. Say you needed to write interleaved data out to an OpenGL vertex buffer. You might have tons and tons of different data to deal with, and you can't very well store it all in objects, constantly reading, writing, and constructing them and waiting for GC as they're discarded every frame for a completely fresh set. Far better just to point a struct at the buffer and slide it along to do your processing.
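A sketch of that pattern with plain ByteBuffer calls (the stride and attribute layout are invented for the example; a struct would let you write named fields instead of computing offsets by hand):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Interleave position (3 floats) + color (4 bytes RGBA) per vertex into
// one direct buffer, the layout OpenGL interleaved arrays expect, with
// absolute offsets instead of per-vertex objects.
public class InterleavedWriter {
    static final int STRIDE = 3 * 4 + 4;  // 12 bytes position + 4 bytes color

    static void putVertex(ByteBuffer buf, int i,
                          float x, float y, float z, int rgba) {
        int base = i * STRIDE;
        buf.putFloat(base, x);
        buf.putFloat(base + 4, y);
        buf.putFloat(base + 8, z);
        buf.putInt(base + 12, rgba);
    }

    public static void main(String[] args) {
        // allocateDirect + nativeOrder is what you'd hand to OpenGL.
        ByteBuffer buf = ByteBuffer.allocateDirect(2 * STRIDE)
                                   .order(ByteOrder.nativeOrder());
        putVertex(buf, 0, 0f, 0f, 0f, 0xFFFFFFFF);
        putVertex(buf, 1, 1f, 0f, 0f, 0xFF0000FF);
        System.out.println(buf.getFloat(STRIDE) == 1f);  // true
    }
}
```

Nothing is allocated per vertex, so there is no garbage for the collector to chase each frame.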

Because we're talking about data that's so mutable you process megabytes of it 85 times a second. Or data that's so large and complex that you'd need to double the memory storage to use it in Java and pointlessly get the garbage collector to traverse the whole thing periodically only to discover it's all still referenced. Or data that takes ages and ages to load in using the Serializable interface because it's so big but is mapped in the blink of an eye with a MappedByteBuffer.

Or data that takes ages and ages to load in using the Serializable interface because it's so big but is mapped in the blink of an eye with a MappedByteBuffer.

The mapping may not take long, but if you then go and read that data sequentially from beginning to end, it can be slower than using ordinary file read operations. This happens when the mapped pages remain in memory after you have been past them and cause parts of your heap to be kicked out to the swap file. So memory mapping is great when you randomly read only a small part of the data, but if you actually read the whole lot it can be quite bad (at least on Windows, in my experience). In my case I have a 600MB heap and a 1GB data file. That brings me quite close to the address space available to ordinary applications on 32-bit Windows. You may also have noticed my RFE relating to some of the problems with memory mapping: http://developer.java.sun.com/developer/bugParade/bugs/4724038.html
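For reference, the mapping pattern under discussion looks like this (the file contents are made up for the demo; note that map() itself is cheap because pages are only faulted in when touched, which is exactly why a full sequential pass can behave badly):

```java
import java.io.*;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    // Map the whole file and read one int at a random position;
    // only the pages actually touched get faulted in.
    static int readIntAt(File f, int index) throws IOException {
        try (FileChannel ch = new RandomAccessFile(f, "r").getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            return map.getInt(index * 4);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("mapdemo", ".bin");
        f.deleteOnExit();
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            for (int i = 0; i < 1000; i++) out.writeInt(i);
        }
        System.out.println(readIntAt(f, 500));  // 500
    }
}
```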
