Instanced drawing with a texture atlas results in weird output

Hello,

In short: I am trying to draw a tile map with instanced drawing, and the different tiles are all within one texture atlas.

My original idea was to create a tile object, instance-draw it, and then move each tile by manipulating its position with a vector array.
The different tiles would then result from manipulating the texture coordinates.
As each tile I want to draw right now is directly to the right of the original, I just manipulate the x-coordinate of the texture.

Here is the source code. I first create the translation vectors & then create the texture offset vectors (to manipulate the texture coordinates):

My thinking was that this would add my translation vectors (m_posVBO) to the position vector, and the texture offsets (m_texVBO) to the texture coordinates that are passed to the fragment shader.

However this leads to this result:

I experimented a little bit with the translation vectors, but they have to be normalized vectors, don't they?
Each of the tiles is 64x64px and I am trying to render a 128*128 tile map.
In my head that should lead to the vector-creating loop I coded, but obviously there seems to be something wrong in my understanding.

The offset between the tiles should be 64px - the length/height of a tile. With 1 being equal to 128*64 (128 tiles, each 64px?), the offset would be 0.015625.

I also tried to increase the offset, which led to three "stuffed" blocks of tiles, each being roughly the value of the added offset away from the others.

So my question:
Where am I wrong here? Which part of the instancing concept do I get wrong?

(and if you aren't using a perspective projection, you only need one matrix, not three).

Right now I use an ortho matrix for projection. The view matrix scales and translates my "original" tile (the one I am instancing).
In my thinking I need to translate it to move around via keyboard, since that way I can simply move the map rather than the camera.

I am still new to this, so sorry if it may sound dumb. Why is it better to stuff the model matrix, the ortho matrix and the "camera/view" matrix (it is basically a glm::lookAt) into one?

PPS:
I found out that my quad was a rectangle because my viewport was 4:3. Is it better to adapt the viewport or to adapt the vertex data?
I'd say it's better to keep the viewport bound to the size of the window overall, but since I am already asking nooby questions - why not one more?

Why is it better to stuff the model matrix, ortho and the "camera/view" matrix (it is basically a glm::lookAt) matrix into one?

It boils down to what you need in your shader. If you only need to transform things to clip coordinates, just pass in a single composite MVP matrix. Then transforming each point or vector in the shader is cheaper because you only have one matrix-vector multiply instead of 3 (i.e. 3-4 assembly language dot product instructions instead of 9-12), and you pass less uniform data into the shader as well (1 matrix instead of 3).

I found out that my quad was a rectangle because my viewport was 4:3. Is it better to adapt the viewport or to adapt the vertex data?

Not the latter. And the former is only one option. It's easy to change the projection transform so that square objects end up square (or any aspect ratio you'd prefer), regardless of the aspect ratio you choose for your viewport.

The weird thing is that the translation for the outer "line" of the tile map seems to be different from the rest of it:

I'd suspect an issue with the loops which populate the vertex arrays.

Originally Posted by Labidusa

Why is it better to stuff the model matrix, ortho and the "camera/view" matrix (it is basically a glm::lookAt) matrix into one?

Because it avoids performing two additional matrix multiplies for every vertex.

Your vertex shader is only using one matrix: projection * view * model. So you can multiply those matrices together in the client code and send the final product to the shader as a single matrix.

Legacy (fixed-function) OpenGL used two matrices: model-view and projection (the model and view matrices are combined). The reason that the projection is kept separate is because the lighting calculations need to be performed in a space which is affine to "reality", and a perspective projection breaks that.

If you look at shaders which implement the Phong lighting model (either per-vertex or per-fragment), you'll note that they produce two transformed vertex positions (one transformed into eye space by the model-view matrix, another further transformed into clip space by the projection matrix) and a transformed vertex normal (transformed into eye-space by the inverse-transpose of the model-view matrix). The lighting calculations only use the eye-space vectors; the clip-space position is only used by the hardware for rasterisation.
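A sketch of the kind of vertex shader described above, with illustrative names (not taken from any particular tutorial):

```glsl
uniform mat4 u_modelView;
uniform mat4 u_projection;
uniform mat3 u_normalMatrix;   // inverse-transpose of the model-view 3x3

in vec3 a_position;
in vec3 a_normal;

out vec3 v_eyePos;     // eye-space position, used by the lighting maths
out vec3 v_eyeNormal;  // eye-space normal, used by the lighting maths

void main() {
    vec4 eyePos = u_modelView * vec4(a_position, 1.0);
    v_eyePos    = eyePos.xyz;
    v_eyeNormal = u_normalMatrix * a_normal;
    gl_Position = u_projection * eyePos;  // clip space, rasterisation only
}
```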

If you aren't performing lighting (or similar) calculations, or if the projection is orthographic, there's no need to separate the projection transformation. You can just go directly from object space to clip space in one step (and perform lighting calculations, if needed, in clip space).

Originally Posted by Labidusa

I found out that my quad was a rectangle because my viewport was 4:3. Is it better to adapt the viewport or to adapt the vertex data?

First off thanks again!
I got it working with only one matrix, and I manipulate the position in a way that it is a square now, without touching the vertex data or the viewport.

However my coordinates are still broken.
I logged the result of my translation-filling loop to see if it gives strange results, but it does not.

It coherently adds the offset in the exact way I want it.

But there are two problems:

I fill the translations y-dominated (don't know another way to say this), so for every y I give it 128 x coordinates to get a 128*128 map.
Starting with y(0), I assign the x-offset by adding +1 each. This returns 128 tiles in a perfect row.
The same goes for x(0). Every tile is instanced exactly at the offset of one tile.

I then reset xoffset to 0 and start the loop for y(1). The first tile (0/1) is drawn exactly where I want it to be drawn. However, every following tile now has an offset of two tiles - meaning there are blank spots in the map:

I checked the logged translation vectors and they are fine, meaning for each 128 vectors their x-coordinate increments by 1. Then the y coordinate increments by one, and so on.

If I set up my loop so that the first row has an x-offset of 1 and the following ones only 0.5f, everything fits perfectly except the x(0) column.
The first tile is drawn at (0/0) but the second one at (0/0.5), giving each following tile a y coordinate of (y-0.5):

In each scenario I tried, it also seems that the 0 row/column ((0/y) & (x/0)) has more tiles than the others, and additionally their offset increases the higher the index is:

As I said, in every case I tried (I tried some more variations) I logged the translation vectors, and every vector shows exactly the wanted increment of the x & y value.
That means for every row there are 128 translation vectors for the corresponding x value, always with the same offset.
But still it is instancing them differently - especially the 0 column/row.

Here is what my translation loop looks like (I just experimented a little with the offset values in the ways stated above; otherwise it stays the same):

My shaders are the same, except that I only have one matrix in the vertex shader now, so instead of three uniforms it's just one.

I know that in pretty much all cases it is a flaw in my logic, but I really don't get what I am doing wrong with the translation vectors here.
As far as I understood it, it takes the next vector out of the array for each new instanced tile, as long as I set my divisor to 1.
So in theory every tile row should be drawn like the first one, or am I wrong?