Program setup:
- FPSAnimator is supposed to call display() as often as possible (up to 1000 FPS).
- The game uses an octree; both occlusion culling and frustum culling are implemented on it.
- All to-be-rendered triangles are stored in a VBO.

What I'd like to do: render as many triangles as possible (just for starters). Right now I render cubes.

Your card "only" has 2GB of memory. You're probably not gonna get realtime performance with over 1GB of vertices. >_>

The whole point of VBOs is that the data should be stored in VRAM, unless GL_STREAM_DRAW is passed when the memory is allocated/uploaded with glBufferData, in which case the driver is allowed to not keep the data in VRAM. I experimented with this value on my NVidia GPU and there was no difference at all, regardless of what I chose, so it seems like at least NVidia ignores this hint.
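For reference, here's roughly what that allocation call looks like in JOGL 2 (a minimal sketch; the method name uploadVbo and the vertexData buffer are made up for illustration):

    import java.nio.FloatBuffer;
    import javax.media.opengl.GL;
    import javax.media.opengl.GL2;

    // Allocates and fills a VBO. The last argument is only a *hint*:
    // GL_STATIC_DRAW suggests keeping the data in VRAM, GL_STREAM_DRAW
    // marks it as short-lived -- but as noted above, drivers may ignore it.
    void uploadVbo(GL2 gl, int vboHandle, FloatBuffer vertexData) {
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboHandle);
        gl.glBufferData(GL.GL_ARRAY_BUFFER,
                        vertexData.capacity() * 4L, // size in bytes, 4 bytes per float
                        vertexData,
                        GL.GL_STATIC_DRAW);         // or GL2ES2.GL_STREAM_DRAW
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
    }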

I'm pretty sure your problem is not a vertex bottleneck but a fragment bottleneck. Try making your objects cover a smaller area (scale them down, for example), disable anti-aliasing, disable lighting, etc., to reduce the per-pixel cost. My laptop's GPU can draw around 1.4 million triangles per frame at 60 FPS without any texturing or anything. Considering you have a desktop computer and a desktop GPU, I'd estimate yours to be 3-4x as fast, so around 5 million triangles per frame at 60 FPS would make sense. This again points at a fragment bottleneck.
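To make that concrete, the kind of state changes meant here look something like this in JOGL (a sketch, fixed-function pipeline assumed):

    import javax.media.opengl.GL;
    import javax.media.opengl.GL2;

    // Strips per-fragment work down to flat color fills: if the frame
    // rate jumps after this, the bottleneck was on the fragment side.
    void disablePerPixelWork(GL2 gl) {
        gl.glDisable(GL2.GL_LIGHTING);   // no lighting calculations
        gl.glDisable(GL.GL_TEXTURE_2D);  // no texture fetches
        gl.glDisable(GL.GL_MULTISAMPLE); // no anti-aliasing cost
        gl.glDisable(GL.GL_BLEND);       // no framebuffer read-modify-write
    }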

Okay, so we agree that the data is/should be residing in VRAM, that's good. I used GPU-Z 0.5.8 to look at my VRAM usage: the correct amount is used. But this doesn't show whether the data actually resides there (and the problem is a fragment bottleneck), or whether it is retransmitted every frame (and there is a bug in my implementation)...

Now, I don't really know anything about a "fragment bottleneck". Could you give me some link?

Currently I use textured triangles, without any lighting, transparency, normals, etc. With textures deactivated, the 24 MB scenario improves from 60 to 63 FPS. Not really what I hoped for. ^^

No triangles overlap (they do touch, though), but there are many triangles hidden behind other triangles. Could this cause problems? I always imagined not rendering triangles would be faster than rendering them. ^^

Maybe you could also give me some tips on how to implement your suggestions?
- Cover a smaller area: you mean a smaller area on the screen? The cubes have the same measurements as in Minecraft, and I run Minecraft without stutters.
- I never enabled VSync or anything else. I just did:

The count argument to glDrawArrays() represents the number of elements to be combined into primitives, not the number of bytes.

I think since you've packed 2-tuple tex coords and 3-tuple vertices, the number of elements to render is Buffer.capacity() / 5.
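In code, the wrong and right versions look roughly like this (a sketch; assumes the interleaved VBO is already bound to GL_ARRAY_BUFFER):

    import java.nio.FloatBuffer;
    import javax.media.opengl.GL;
    import javax.media.opengl.GL2;

    void drawInterleaved(GL2 gl, FloatBuffer buffer) {
        final int floatsPerVertex = 2 + 3;           // 2 tex coord + 3 position floats
        final int strideBytes = floatsPerVertex * 4; // 4 bytes per float
        final int vertexCount = buffer.capacity() / floatsPerVertex;

        gl.glEnableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
        gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
        gl.glTexCoordPointer(2, GL.GL_FLOAT, strideBytes, 0);
        gl.glVertexPointer(3, GL.GL_FLOAT, strideBytes, 2 * 4); // positions follow the tex coords

        // WRONG: gl.glDrawArrays(GL.GL_TRIANGLES, 0, buffer.capacity());
        gl.glDrawArrays(GL.GL_TRIANGLES, 0, vertexCount); // count = vertices, not floats or bytes
    }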

If this is indeed an error, it is likely the cause of your underwhelming performance. I have often experienced undefined and non-deterministic performance when there are errors like this. Some examples include weird slow downs, missing vertices, segfaults, etc. It all depends on where the memory is, and what the GPU tries to do to prevent invalid accesses, and how it recovers.

Years ago I remember running into a problem where pushing 1 million overlapping triangles (i.e. triangles hidden behind other triangles) was much slower than when the triangles were more evenly distributed around the screen. I think GPUs have a fast path for performing quick depth checks when there aren't that many triangles on the same pixel, or when the depths are far apart. If all of your triangles are packed together, it might have to fall back to slower, more accurate floating point checks.

Well, just make sure that whatever special algorithm you're using isn't more expensive than relying on the GPU. In my story about hidden triangles causing slowdowns, it was 1 million triangles packed into a 200x200 pixel area of a larger window.

That situation is pretty contrived and probably wouldn't show up in a real game.

Okay, approximating the cost of drawing fragments is pretty easy:
- "Fragments" that are outside the screen cost nothing, since the triangle is clipped to the screen edges.
- Triangles that do not cover any pixels (or MSAA sample positions) do not cost anything.
- Triangles that pass the depth test have a cost depending on what shader/fixed functionality you're running.
- Triangles that do NOT pass the depth test still have a cost:
  - This cost mostly depends on whether Early-Z was used. Shaders that output a custom depth value per fragment have Early-Z disabled, meaning the shader has to run before the depth test. In this case the cost is almost the same as if the triangle had passed.
  - With Early-Z the cost is lower, but still not free; I'd estimate it at about half the cost of simple shading.

Your GPU can fill a huge number of pixels per frame. My little laptop can handle around 79 million colored pixels per frame at 60 FPS, but this number drops insanely fast if you add texturing, lighting, etc. For reference, a 1920x1080 screen is approximately 2 million pixels, so with 4 million triangles each one should cover just a few pixels for the total number of fragments to stay low enough not to be a bottleneck. Overdraw is therefore something you want to avoid. And again, your GPU is around 3-4 times faster than mine. xd
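If you want to measure your own fill rate, a crude probe works (a hypothetical helper, not from this thread): draw N untextured fullscreen quads per frame and raise N until the frame rate drops below 60; filled pixels per frame is then N x width x height.

    import javax.media.opengl.GL;
    import javax.media.opengl.GL2;

    // Draws 'layers' untextured fullscreen quads with depth testing off,
    // so every quad is fully shaded. Call from display() each frame.
    void fillRateProbe(GL2 gl, int layers) {
        gl.glDisable(GL.GL_DEPTH_TEST);
        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity(); // identity projection: a quad at +-1 covers the screen
        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
        for (int i = 0; i < layers; i++) {
            gl.glBegin(GL2.GL_QUADS);
            gl.glVertex2f(-1, -1);
            gl.glVertex2f( 1, -1);
            gl.glVertex2f( 1,  1);
            gl.glVertex2f(-1,  1);
            gl.glEnd();
        }
    }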

So far everything runs great and fast, too. Unless it crashes, that is.

The error report is here: http://pastebin.de/23611
According to this report, the error happens in renderBuffer(). Here's the source:

void renderBuffer() {
    glBindTexture(GL2.GL_TEXTURE_2D, WWV.tex_handles[LODLevel]); // It's not the texture, that's almost guaranteed
    glBindBuffer(GL.GL_ARRAY_BUFFER, vbo_handle); // The VBO handle is good as well (see glGetBufferSubData below)
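    // The remaining ~20 lines of the pastebin listing were lost from the
    // thread. Judging from the calls discussed below (the pointer setup,
    // glDrawArrays, and the glGetBufferSubData read-back), they presumably
    // looked roughly like this -- a reconstruction, NOT the original code:
    glEnableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL2.GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL.GL_FLOAT, 5 * 4, 0);   // interleaved: 2 tex coord floats...
    glVertexPointer(3, GL.GL_FLOAT, 5 * 4, 2 * 4); // ...then 3 position floats per vertex
    glDrawArrays(GL.GL_TRIANGLES, 0, vertexCount); // the call the JVM crash points at
}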

Does anyone have an idea what could cause this?
- It isn't the texture; that works on other VBOs (I draw about 1800 VBOs each frame, and about 1500 of them use this texture).
- It isn't the VBO itself; the data I grabbed seems okay (or do you see something wrong?).
- It shouldn't be the draw command, right?

Fun fact: as long as I just look around in my world, the game never crashes. Only when I move does the program sometimes crash (as I said, it is random; it's not movement = crash).

If that could be the problem: I use a KeyListener, and when a key is pressed a boolean is set to true; when it is released, it is set to false. There is no actual movement during the rendering of the scene. It is strictly one thread performing movement and view, and THEN starting to render the world... This should not lead to any problems, right?

Any ideas? Maybe I'm doing something completely wrong? Thanks again for any hints and tips.
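One thing worth ruling out with that key-handling pattern: a plain boolean field written from the AWT event thread is not guaranteed to be visible to the render thread. A sketch of the same pattern with volatile (class and field names made up):

    import java.awt.event.KeyAdapter;
    import java.awt.event.KeyEvent;

    class Input extends KeyAdapter {
        // volatile makes writes from the AWT event-dispatch thread
        // immediately visible to the render thread, without locking.
        volatile boolean forward;

        @Override public void keyPressed(KeyEvent e) {
            if (e.getKeyCode() == KeyEvent.VK_W) forward = true;
        }

        @Override public void keyReleased(KeyEvent e) {
            if (e.getKeyCode() == KeyEvent.VK_W) forward = false;
        }
    }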

From the JVM crash, it looks like the call to glDrawArrays() is what is causing the problem, not glGetBufferSubData.

Usually a JVM crash during a call to glDrawArrays or glDrawElements means you are attempting to reference a vertex outside the valid range of the vertex attribute VBOs or arrays.

The arguments in your pointer setup look fine, so the only thing I can think of is that the VBO in vbo_handle was allocated with a size smaller than 4 * 180 bytes (the size you'd expect if you called glBufferData with Buffer). Looking at your original code, it appears as though you are packing data from multiple octree leaves into one large VBO, so there is a chance that this is screwed up.

I would also recommend using a DebugGL wrapper around your GL to check for errors.
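Installing the wrapper is a one-liner in JOGL 2 (javax.media.opengl.DebugGL2), typically done in init() (a sketch):

    // In your GLEventListener.init(GLAutoDrawable drawable):
    // wraps the pipeline so every GL call is followed by glGetError();
    // errors then throw a GLException instead of failing silently.
    drawable.setGL(new DebugGL2(drawable.getGL().getGL2()));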

But I followed your advice with DebugGL, and the call to glGetBufferSubData caused an error: GL_INVALID_OPERATION. If I read the specs right, this is only thrown if "the reserved buffer object name 0 is bound to target":
http://www.opengl.org/sdk/docs/man/xhtml/glGetBufferSubData.xml

It's already late in the evening, so my brain is half asleep - but I still can't figure out why the reserved buffer object name 0 should be bound to GL.GL_ARRAY_BUFFER!?
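For comparison, the read-back only works while a non-zero buffer is actually bound at the moment of the call; a sketch of the pattern (the helper name readBack is made up):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import javax.media.opengl.GL;
    import javax.media.opengl.GL2;

    // Reads the first sizeBytes bytes back from a VBO. If buffer object 0
    // is bound when glGetBufferSubData runs (e.g. the bind was reset
    // somewhere in between), GL_INVALID_OPERATION is generated.
    ByteBuffer readBack(GL2 gl, int vboHandle, int sizeBytes) {
        ByteBuffer out = ByteBuffer.allocateDirect(sizeBytes).order(ByteOrder.nativeOrder());
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboHandle);
        gl.glGetBufferSubData(GL.GL_ARRAY_BUFFER, 0, sizeBytes, out);
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
        return out;
    }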

Hm, on second thought I removed all the "debug" code, and this is what remained:

Clearly, this will no longer throw an error on glGetBufferSubData - but it should still fail (this was my original code before debugging, btw). And: it still fails, but WITHOUT an error message from DebugGL. Any ideas why OpenGL would fail so hard that not even the debugger gets a glimpse of what went wrong?

As always, thanks for any tips

EDIT: @theagentd: Even if I remove the *4 in glGetBufferSubData I still get the same error. The error seems to be caused by the state of the VBO or something like that, not the call itself...

Offsets, strides, and the contents of VBOs are not checked by the debugger. As a contrived example, I can have a texture VBO with half the elements of the vertex VBO. If both are configured as vertex attributes and I attempt to render all of the vertices, the driver will start pulling in "texture" information from past the end of the shorter texture VBO.

Depending on how the VBOs are laid out, you can walk into garbage VBO data or get access violations, which crash the JVM. That is the case you're seeing, and why it tends to be unpredictable. Although that's the cause, I unfortunately don't have much advice for solving it except to very carefully walk through the rendering and make sure the values passed to OpenGL are what you'd expect.
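One part of that walkthrough can be automated: compare what the draw call is about to read against the size the driver actually allocated (a hypothetical helper, not from this thread):

    import javax.media.opengl.GL;
    import javax.media.opengl.GL2;

    // Checks that vertexCount vertices of strideBytes each fit inside the
    // VBO currently bound to GL_ARRAY_BUFFER. GL_BUFFER_SIZE reports the
    // size glBufferData really allocated.
    void checkDrawRange(GL2 gl, int vertexCount, int strideBytes) {
        int[] size = new int[1];
        gl.glGetBufferParameteriv(GL.GL_ARRAY_BUFFER, GL.GL_BUFFER_SIZE, size, 0);
        long needed = (long) vertexCount * strideBytes;
        if (needed > size[0]) {
            throw new IllegalStateException("glDrawArrays would read " + needed
                    + " bytes from a " + size[0] + "-byte VBO");
        }
    }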
