Scenario: you have a cube. It's made of 8 vertices referenced by 24 indices (drawn with GL_QUADS, not GL_TRIANGLES). To texture the cube, you create a texture-coordinate buffer. This is where it confuses me:

Texcoords are defined per vertex?

If you look at the front of your cube, the top-left vertex is the top-right vertex of the face that is 90º clockwise from it. This means that if texcoords are defined per vertex, every second face's texture will be mirrored.

It makes more sense to me to define texcoords per index element, rather than per vertex. But a couple of Stack Overflow questions had answers saying that's not possible - and if it is, I can't work out how to do it.

What am I missing? How would you define texcoords for a VBO cube with 8 vertices, 24 indices, drawn as quads, with the texture keeping the same orientation on each face?
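To make the conflict concrete, here's a small sketch (the corner layout, face windings and UV assignments below are my own illustration, not code from this thread). It collects, for each of the 8 shared corners, every UV that some face wants that corner to have; each corner belongs to 3 faces, and those faces disagree:

```java
import java.util.*;

public class UvConflictDemo {
    // The 8 corner positions of a unit cube, indexed 0-7 (assumed layout).
    static final int[][] POS = {
        {0,0,0},{1,0,0},{1,1,0},{0,1,0}, // back corners
        {0,0,1},{1,0,1},{1,1,1},{0,1,1}  // front corners
    };
    // Each face lists 4 corner indices; corner c of every face gets UVS[c]
    // so the texture stays upright on that face.
    static final int[][] FACES = {
        {4,5,6,7}, // front  (+z)
        {1,0,3,2}, // back   (-z)
        {5,1,2,6}, // right  (+x)
        {0,4,7,3}, // left   (-x)
        {7,6,2,3}, // top    (+y)
        {0,1,5,4}  // bottom (-y)
    };
    static final int[][] UVS = {{0,0},{1,0},{1,1},{0,1}};

    /** For each shared vertex, the set of distinct UVs its faces demand. */
    public static Map<Integer, Set<String>> uvsPerVertex() {
        Map<Integer, Set<String>> m = new HashMap<>();
        for (int[] face : FACES)
            for (int c = 0; c < 4; c++)
                m.computeIfAbsent(face[c], k -> new HashSet<>())
                 .add(UVS[c][0] + "," + UVS[c][1]);
        return m;
    }

    public static void main(String[] args) {
        uvsPerVertex().forEach((v, uvs) ->
            System.out.println("vertex " + v + " needs UVs " + uvs));
    }
}
```

Every shared vertex ends up needing at least two different UVs, which is exactly why one texcoord per vertex can't keep the texture oriented on all six faces.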

Oh, sorry. I meant 8 vertices. I'm just so used to looking at them labelled 0-7. (edited the original post)

I don't actually have code to construct a cube at this point. I have code that draws thousands of them in a minecraft-styled world, with only faces that are exposed being added to the index buffer. I'm just trying to figure out the logic behind texturing them. It was vexing me, but I think I see it now.

This uses a random intensity for the green colour component. But it appears to be applied per physical vertex.

So two cubes next to each other would have a total of 12 vertices and 48 indices - but then I would remove the adjoining face from each cube, leaving 40 indices. I just need to work out how to define texcoords per index element rather than per vertex.

The whole idea of using an index buffer full of indices is that you only need to define each vertex (point in space) once, then reference its position in the vertex buffer with your indices.

[EDIT: Code that builds the indices and texcoords is below]

About this code:
- The draw method is called every frame.
- buildIndices is called when the current chunk is edited (perhaps once every ten seconds).
- buildColours is called instead when I'm debugging.

When using buildColours, the colours are defined per vertex in physical space.

It's really hard to get texturing working on cubes that share vertices between surfaces, as you need multiple texture coordinates for each point. The easiest way to get texturing working is to just stop reusing the vertices and have 24 of them (3 copies of each), and you won't have a problem with it. If you want working normals, sharing vertices is impossible, as you simply need 3 different normals for each vertex. If you were to share vertices, you'd end up with a cube that is lit like a sphere! xD

I see you also want to reuse vertices between blocks connected to each other. This has the same problem. Consider a dirt block next to a stone block: the vertices on the border between the two blocks would have to have 2 texture coordinates, one for dirt and one for stone. I seriously doubt any horrible hack would be able to outperform just duplicating those vertices.

This is basically the same problem as drawing an old SNES-style tile map. Preferably one would want to draw it line by line using quad strips, but that doesn't really work, due to the same problem you're having. Now that I think about it, it would be possible to accomplish it by having a texture of mapWidth x mapHeight size holding the tile index of each tile, and then in a shader first look up which tile you should draw, then generate the tile texture coordinates from this index and the supplied (0 to 1) texture coordinates to do another "dependent" texture lookup. Yeah, doable, but once again, I seriously doubt it will be faster.

Theoretically, you could do the same with a 3D texture for a block. However, all I see in such a case is a shader full of if-statements, a ridiculously huge 3D texture and slideshow-like performance.

In short: you're overdoing it. Just go with the basic approach with duplicated vertices, and optimize it later if needed (but probably not by sharing vertices =P). "Premature optimization is the root of all evil", after all, and I'm the biggest hypocrite on Earth for telling you that. xD
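As a minimal sketch of that duplicate-vertex approach (the corner layout, face windings and interleaved x,y,z,u,v format are my own assumptions, not the poster's actual code), each cube emits 24 vertices, four per face, each carrying the UV that keeps the texture upright on its own face:

```java
public class CubeMesh {
    // 8 corner positions of a unit cube (assumed layout).
    static final float[][] POS = {
        {0,0,0},{1,0,0},{1,1,0},{0,1,0},
        {0,0,1},{1,0,1},{1,1,1},{0,1,1}
    };
    // 6 faces, each as 4 corner indices: front, back, right, left, top, bottom.
    static final int[][] FACES = {
        {4,5,6,7},{1,0,3,2},{5,1,2,6},{0,4,7,3},{7,6,2,3},{0,1,5,4}
    };
    // Corner c of every face gets this UV, so all faces are oriented alike.
    static final float[][] UVS = {{0,0},{1,0},{1,1},{0,1}};

    /** Interleaved x,y,z,u,v for the 24 vertices of one cube at (bx,by,bz). */
    public static float[] build(float bx, float by, float bz) {
        float[] data = new float[24 * 5];
        int i = 0;
        for (int[] face : FACES) {
            for (int c = 0; c < 4; c++) {
                float[] p = POS[face[c]];
                data[i++] = bx + p[0];
                data[i++] = by + p[1];
                data[i++] = bz + p[2];
                data[i++] = UVS[c][0];
                data[i++] = UVS[c][1];
            }
        }
        return data;
    }

    /** Quad indices for GL_QUADS: 24 entries, one per duplicated vertex. */
    public static int[] indices() {
        int[] idx = new int[24];
        for (int k = 0; k < 24; k++) idx[k] = k;
        return idx;
    }
}
```

With duplication the index buffer becomes trivial (0..23 per cube), and per-face normals could be added to the same interleaved stream in exactly the same way.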


Thanks for that response. I've been doing a lot of googling on the matter since I made my post. I was actually working towards implementing a shader to do it. :-)

Duplicating vertices like that just seems too inefficient and too simple to implement. ;-) I actually already had that working a week ago - but I was trying to increase performance by eliminating as many vertices as possible.

Just for discussion purposes: wouldn't duplicating vertices create seams between faces? Especially with floating-point accuracy taken into account for vertices that are generated algorithmically?


Reducing vertices might be the wrong approach. In my own (very limited) experience with OpenGL I stumbled across the following:
- Vertex arrays vs. immediate mode gave me a massive performance increase (I think 10x).
- 4 bytes for colour instead of 3 bytes in my vertex arrays gave me a 2x performance increase.
- Calling glBindTexture just once for a spritesheet instead of once per quad also gave me a massive performance increase.

I'm sure there's a lot more... considering that you're making cube-world, the number of vertices is not going to be much of a performance factor imho.

I agree with Loom_weaver, the number of vertices shouldn't be a problem. And if what he says about the number of colour bytes is true, try to reduce the size of each vertex as much as possible.
- For a cube world, every coordinate is an integer, so 3 shorts should be enough for the position. If you want a map larger than 65536^2, just use local coordinates relative to the player, as you're not going to have that long a draw distance anyway.
- Texture coordinates will always be 0 or 1, so just use bytes for them. If you don't have any lighting or other special effect affecting the cube's colour, you shouldn't even need any colour data. That means every vertex only needs 2*3 + 2 = 8 bytes of data. Even if you want colour, it'll only be 11 bytes (12 with padding).
- The cube data shouldn't change very often, so keep it in a vertex buffer object. When a chunk becomes outdated because of a change (a cube added, a cube removed, etc.), mark that chunk as outdated. Note that if a single cube changes, the neighbouring chunks may be affected too. Just mark all of these as outdated, and update them when you want to draw them. With frustum culling this can increase performance a lot, as changes far away or behind the player aren't updated immediately, and that might save you an update if multiple changes happen before you actually look at that chunk.
- You should definitely draw each chunk with a single draw call. CPU performance should be your main limitation, not GPU performance, if you do things right.
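A sketch of that 8-byte vertex layout (the packing scheme is my own illustration of the suggestion above; the matching attribute setup would use GL_SHORT for position and GL_UNSIGNED_BYTE for UVs in glVertexAttribPointer, with an 8-byte stride):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CompactVertex {
    // 3 shorts for position + 2 bytes for UV = 8 bytes per vertex.
    public static final int STRIDE = 3 * 2 + 2;

    /** Append one vertex: integer block-space position, 0/1 texcoords. */
    public static ByteBuffer pack(short x, short y, short z,
                                  byte u, byte v, ByteBuffer buf) {
        buf.putShort(x).putShort(y).putShort(z);
        buf.put(u).put(v);
        return buf;
    }

    public static void main(String[] args) {
        // Room for one cube's 24 duplicated vertices.
        ByteBuffer buf = ByteBuffer.allocate(24 * STRIDE)
                                   .order(ByteOrder.nativeOrder());
        // One face corner at block-space (3, 64, 3), UV (1, 0).
        pack((short)3, (short)64, (short)3, (byte)1, (byte)0, buf);
        System.out.println(buf.position()); // bytes written so far
    }
}
```

The same buffer would then be uploaded to a VBO once per chunk rebuild, not per frame.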

Finally: frustum culling saves you insane amounts of CPU and GPU cycles. I have a pretty good idea how to do this: use a pathfinding-like flood algorithm in 3D. Start at the chunk the player is standing in and check all 6 neighbours of the start chunk using a quick sphere-to-plane distance check (basic frustum culling; I have very easy-to-use code for this). Keep "flooding" the world, only adding the chunks that pass to the drawing list. You'll end up with a list of chunks that are inside the view-frustum volume. Looping through this list should be very CPU-efficient (just bind the VBO, bind the VAO and call drawElements for each chunk).
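The flood idea above can be sketched as a plain breadth-first search over chunk coordinates; the `Frustum` interface here is a placeholder for the sphere-to-plane test the poster describes, not real code from the thread:

```java
import java.util.*;

public class ChunkCuller {
    /** Stand-in for the sphere-to-plane frustum check described above. */
    interface Frustum { boolean chunkVisible(int cx, int cy, int cz); }

    static final int[][] NEIGHBORS = {
        {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}
    };

    /** Flood outward from the player's chunk, keeping only visible chunks. */
    public static List<int[]> visibleChunks(int sx, int sy, int sz, Frustum f) {
        List<int[]> result = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        Deque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{sx, sy, sz});
        seen.add(sx + "," + sy + "," + sz);
        while (!queue.isEmpty()) {
            int[] c = queue.poll();
            // Stop flooding through chunks that fail the frustum test.
            if (!f.chunkVisible(c[0], c[1], c[2])) continue;
            result.add(c);
            for (int[] d : NEIGHBORS) {
                int nx = c[0] + d[0], ny = c[1] + d[1], nz = c[2] + d[2];
                if (seen.add(nx + "," + ny + "," + nz))
                    queue.add(new int[]{nx, ny, nz});
            }
        }
        return result;
    }
}
```

The resulting list is exactly the drawing list: bind each chunk's VBO/VAO and issue one drawElements call per entry.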

Sharing vertices like that is basically the worst thing you can do for memory access patterns. I even ran a test on this long ago but completely forgot about it. If the vertices needed for a cube are spread out all over a vertex buffer, things are going to be slower.

I'm building quads from a grid of vertices. One quad patch is made of 9 evenly placed vertices forming 8 triangles, all of which connect to the middle vertex (I hope you get the picture). In my first test I just placed all the vertices line by line and then constructed my quads using 24 (3*8) indices. In my second test I placed 9 vertices for each quad (the 4 corners being duplicated 4 times, the 4 edges being duplicated twice, the middle one being unique), and then 24 indices to form the quad.

I've rewritten most of my world-handling code to use 24 vertices per cube (six faces, four verts each). I get roughly the same framerate now as I had before, which is always good. All I'm wondering now is, what's the equivalent to this:

No difference? You could use samplers instead, though. They are a lot more convenient than setting texture state like that, but they limit you to DX10-compatible cards (which I don't think is a problem).

Uh, how are you getting seams between identical vertices? That doesn't make any sense. Duplicated vertices should obviously have identical positions after the matrix multiplications... I have a program doing this and it doesn't get any seams at all. Am I missing something? >_> And why do you need floating-point positions in the first place? Do you have PI-sized blocks?

Random thoughts: - Texture bleeding, maybe because of antialiasing? - Blending enabled? - Is there any reason the colors are white and magenta?
