I've been playing around with my terrain rendering code recently and came across some odd behaviour.

Quick summary: the application loads a grey-scale image, generates a mesh, uploads it to a VBO, and renders with multiple textures to blend grass, rock, snow, etc. Nothing amazingly complex, and it's been working solidly for quite some time. Recently I've been extending the code to handle multiple terrain 'chunks', dynamically loading and discarding them as the camera moves. Again, it's all working nicely (or at least seems to be).

Now I've downloaded some massive grey-scale images (of parts of Italy, because I went there recently) which will be cut up into smaller chunks, but I thought I'd test the software with a much larger single image first just to see how well it coped (or not). Normally the chunks are 256x256; the image I tried was 2100x2100 pixels. The software took some time to load and process this, as you can imagine, and at the end I got - nothing.

Debugged the code but it seemed to be working correctly. So I bodged the terrain generator so that it only processed 500x500 (i.e. the top-left part) and it worked fine; ditto 1000x1000, ditto 1500x1500, but the full image still fails! No errors, just nothing being rendered. The frame rate indicates that it's basically rendering nothing. Very odd.

Is there some limit on the size of NIO buffers and/or VBOs? Or has anyone come across similar behaviour? By my calculations the 2100x2100 image should produce about 70MB worth of vertex data.
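For what it's worth, the ~70MB figure checks out if you assume about 16 bytes per vertex (e.g. four floats: x, y, z plus one extra channel) - that per-vertex size is a guess on my part, since the post doesn't state the actual vertex layout:

```java
// Back-of-the-envelope estimate of heightmap vertex data size.
// The bytes-per-vertex value is an assumption, not taken from the post.
class TerrainSize {
    static long vertexDataBytes(int width, int height, int bytesPerVertex) {
        return (long) width * height * bytesPerVertex;
    }
}
```

`TerrainSize.vertexDataBytes(2100, 2100, 16)` gives 70,560,000 bytes, i.e. roughly 70MB.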

Note that the same terrain image, broken down into the normal 256x256 chunks with multiple VBOs, also works fine. Obviously this isn't a big deal since the 'proper' chunked solution works, but I'm just surprised that the big image failed without any errors.

I usually just track the number of vertices that are about to be drawn, and if it goes over some cap, say 1000, I render all of the accumulated vertices, clear the list, and then add the extra vertices that triggered the flush. That way the buffers never grow too large.
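The flush-when-full idea above can be sketched roughly like this - note that `flush()` here is a hypothetical stand-in for whatever actually issues the draw call, and the class name and cap are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of batching vertices and flushing a draw once a cap is hit.
class VertexBatch {
    static final int MAX_VERTICES = 1000; // cap before forcing a draw
    final List<float[]> vertices = new ArrayList<>();
    int flushCount = 0;

    void add(float x, float y, float z) {
        if (vertices.size() >= MAX_VERTICES) {
            flush(); // draw what we have, then start a fresh batch
        }
        vertices.add(new float[] { x, y, z });
    }

    void flush() {
        // real code would upload and draw 'vertices' here
        flushCount++;
        vertices.clear();
    }
}
```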

When my buffers get too big, I get some kind of core dump from Java - not just an exception, the JVM itself crashes (it still works afterwards, but the buffers need to be smaller). So don't push the buffers too hard. I didn't realise it was just my code, panicked (reinstalled Java, then my Linux login, then the OS), and ended up wiping my drive and losing all of my work. (See the MERCury thread.) Major failure, all because I overloaded my buffers.

Yeah, I was using Vertex Array Objects. A little slower, but if you can get 60K vertices, then I must have done something wrong.

So I did a little stress test and pushed mine to 50K. It did well, so I guess I introduced a problem, fixed it, then fixed the core issue, and it passes testing now? Whatever. I remember testing VAOs before, and anything over 1000 lagged it up a bit. But now it works. No idea how, but it works.

Well, vertex arrays are CPU-centred. If you need to process large amounts of static data, use a VBO, as the data is stored on the GPU.

Even with CPU-sourced data you have to create a direct NIO buffer, and you can run out of memory on the native heap. In any case you have to pass the data to the GPU data store even when using a VBO, for example when calling glBufferData.
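To make the native-heap point concrete, here's a minimal sketch of the CPU-side step: vertex data destined for a VBO has to sit in a *direct* NIO buffer before glBufferData can copy it to the GPU, and a large `allocateDirect` fails with an `OutOfMemoryError` on the native heap rather than any GL error. The class and method names are mine, for illustration only:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Allocates a direct (native-heap) buffer sized for vertex data.
class VboUpload {
    static FloatBuffer directVertexBuffer(int vertexCount, int floatsPerVertex) {
        return ByteBuffer
                .allocateDirect(vertexCount * floatsPerVertex * Float.BYTES)
                .order(ByteOrder.nativeOrder()) // OpenGL expects native byte order
                .asFloatBuffer();
    }
}
```

Usage would be: fill the buffer with `put()`, `flip()` it, then hand it to the GL, e.g. with LWJGL something like `glBufferData(GL_ARRAY_BUFFER, buf, GL_STATIC_DRAW)`.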

Did a bit more research on this but didn't come up with anything definitive. There are limits on the size of a VBO, but they appear to be vendor-specific; anecdotally the limit is 32MB on most cards. Presumably I went over that limit when I tried to render the large terrain.

In any case the 'correct' way to do this is to split the mesh into multiple VBOs, which is what I was doing anyway; a chunk size of around 4MB seems to be the general recommendation.
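As a rough sizing sketch of that recommendation: with a ~4MB target per VBO and an assumed 32 bytes per vertex (position + normal + texture coordinates - a guess, not a figure from the thread), each chunk holds around 131K vertices, so the 2100x2100 terrain would need around 34 VBOs:

```java
// Sizing VBO chunks against a per-VBO byte budget.
// The bytes-per-vertex value is an assumption for illustration.
class ChunkSizing {
    static int verticesPerChunk(int targetBytes, int bytesPerVertex) {
        return targetBytes / bytesPerVertex;
    }

    static long chunksNeeded(long totalVertices, int verticesPerChunk) {
        // round up so a partial final chunk still gets its own VBO
        return (totalVertices + verticesPerChunk - 1) / verticesPerChunk;
    }
}
```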
