Tearing when Scaling a Tilemap

I have a 2D platform game in development that uses a tilemap. I'm currently trying to implement scaling, so that the user can pinch-to-zoom to view more of the tilemap at once. However, I'm running into a strange problem: while the tilemap is scrolling, adjusting the scale of the map produces tearing between the tiles, revealing the background clear colour beneath them.

I've tried two methods of scaling: using glScale, and scaling each image on screen individually. Either way, I see the same problem. My tiles are rendered at floating-point positions, but the tearing still occurs.

Angry Birds shows that this is definitely possible, so I'm wondering what the issue causing the tearing could be. When the background is not scrolling, the tearing does not occur...

You might want to consider rendering all your tiles first to an off-screen render target (a framebuffer object, in OpenGL terms) at 100% scale, which you can then render to the screen back buffer at whatever scale you like.
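For what it's worth, here is a rough C sketch of that idea, not code from this thread: `screen_w`, `screen_h`, `draw_tiles_unscaled()` and `draw_scaled_quad()` are hypothetical stand-ins for your own code, and on OpenGL ES 1.x these calls carry an OES suffix (glGenFramebuffersOES, and so on).

```c
/* One-time setup: a screen-sized colour texture attached to an FBO. */
GLuint fbo, colour_tex;
glGenTextures(1, &colour_tex);
glBindTexture(GL_TEXTURE_2D, colour_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, screen_w, screen_h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colour_tex, 0);

/* Each frame: draw the tiles into the FBO at 100% scale... */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, screen_w, screen_h);
draw_tiles_unscaled();               /* hypothetical: your existing tile pass */

/* ...then draw ONE textured quad, scaled, to the real back buffer.  A single
 * quad has no interior edges, so there is nothing to tear between. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, colour_tex);
draw_scaled_quad(zoom);              /* hypothetical: quad at the pinch zoom */
```

One caveat: with GL_LINEAR on the final quad the image goes slightly soft when zoomed in, and on hardware that only accepts power-of-two textures the off-screen texture size may need rounding up.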

Or, you could try clearing the back buffer to some neutral colour before rendering your tiles.

That would typically be caused by vertices that are supposed to be coincident not being exactly the same. Any time they're not exactly identical, they can end up being rasterized to slightly different pixels, leaving gaps.

T-junctions have the same effect for similar reasons, but those seem less likely in a tile game.

To further what OSC said: if you want things to line up exactly, you have to provide exactly the same vertices AND exactly the same transformation matrix. Floating-point numbers have finite precision, so you can't just assume that 2.3*3.0 will evaluate to exactly 6.9, for instance.

From an irb session using double precision floats:

Code:

>> 2.3*3.0
=> 6.9
>> 2.3*3.0 == 6.9
=> false

For floating point results to be equal, you not only need exactly the same input, but exactly the same order of operations as well.

Are you generating one big array with the vertices for all the tiles in it, or are you reusing the same geometry for a single tile and just translating it? If the latter, that could be what is causing your problem.

Also true. If you are using linear filtering with an unpadded tileset, or blending, that is also going to cause problems. This can largely be fixed, at the cost of some extra texture memory, by padding your tiles.

Skorche Wrote:Are you generating one big array with the vertices for all the tiles in it, or are you reusing the same geometry for a single tile and just translating it? If the latter, that could be what is causing your problem.

I use one big array with all the vertices in it and draw that.

I think this is likely to be the problem, but I'm not sure how to get around it. I've looked at all of the blending options, and blending does not appear to be the cause.