Hi, I have a map that displays real world property boundaries for a district on android. Like a basic GIS/GPS system.

The zoom and panning work up to a point, but past a certain level, say about 500m from ground level, the panning starts to get noticeably jumpy, and it gets worse the closer you zoom. Panning is via touch drag and zoom is via pinch. I need smooth panning down to a level of about 1-2m. The bounding area is 65km wide and 125km high, just to give an idea of how much zoom is required; this may vary between districts.

The zoom is done by scaling the model and translating to the correct spot; the panning is done through a translate. I have my data in VBOs and keep all my shifts and scale factors in double-precision variables for accuracy, casting them to floats at the last second for OpenGL ES 2. When the screen loads for the first time it displays the map to its extents via Matrix.frustumM().

I know the reason this is happening, unfortunately I'm stuck for solutions.

The reason is that when you zoom in close, the float data type cannot represent the small shifts in movement. I found this by comparing the double and float values and the difference between them.

Before scaling and translating the model, I tried manipulating the projection matrix to provide zoom and moving the eye to pan. This logically seemed to have a more natural feel to it; however, I ran into exactly the same issue.

I have exhausted my knowledge of how to improve this functionality. Can anyone out there provide a possible solution or some advice?
Kind regards, Hank

tonyo_au

08-06-2013, 08:09 PM

You might like to read this on the precision of floating-point numbers: http://stackoverflow.com/questions/872544/precision-of-floating-point. It will help explain the limits of what you can measure, too. Numbers close to zero have more precision than those further from it.

If precision is your problem, you can trade speed for accuracy by dynamically changing your origin. For example, if you use the camera as your origin, things close to your camera can be drawn more accurately, since their coordinates are now close to zero. Of course this means each object has to have its own local coordinates and be translated to its location relative to the new origin every frame. If you have a large map, this may mean holding the map as a set of smaller maps stitched together.
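To make the precision limit concrete, here is a small pure-Java sketch (runnable off the device; the longitude value is the map centre quoted later in this thread, and the step size is my rough figure for a ~1 m pan in degrees). It compares a tiny pan step applied to an absolute coordinate near 152 with the same step applied camera-relative, near zero:

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        double worldX = 152.4150315;   // absolute longitude, decimal degrees
        double step   = 0.00001;       // roughly a 1 m pan step, in degrees

        // Absolute coordinates: cast to float, then look at the difference.
        float absBefore = (float) worldX;
        float absAfter  = (float) (worldX + step);

        // Camera-relative: subtract the origin in double first, then cast.
        double cameraX  = 152.4150315;
        float relBefore = (float) (worldX - cameraX);
        float relAfter  = (float) (worldX + step - cameraX);

        // Near 152, one float ulp is about 1.5e-5, bigger than the step
        // itself, so the absolute version either doesn't move or over-jumps.
        System.out.println("absolute delta: " + (absAfter - absBefore));
        System.out.println("relative delta: " + (relAfter - relBefore));
    }
}
```

The absolute delta comes out as either zero or a whole ulp (the visible "jump"), while the relative delta matches the intended step almost exactly.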

Hank Finley

08-07-2013, 12:42 AM

Hi tonyo_au,

Thank you for the link; it will come in handy.

So my map is in decimal degrees:
mbrMinX: 152.073393, mbrMaxX: 152.75667, mbrMinY: -27.570736999700003, mbrMaxY: -26.452339000100004
mLeft: -0.3522513384566926, mRight: 0.3522513384566926, mBottom: -0.5591989997999995, mTop: 0.5591989997999995
The centre, where I currently have my eye, is eyeX: 152.4150315, eyeY: -27.011537999900003

http://i.stack.imgur.com/VJDtT.png is a screenshot

Just so I understand this: I reset my eyeX and eyeY to zero (the origin (0,0)), and I translate my map (model) so its centre is at zero as well. So in my case the map gets translated by (-152.4150315, 27.011537999900003), or thereabouts once cast to float.
From there, when I need to pan, I translate the model, however much, around the origin. But I'm thinking my logic can't be quite right: as I translate the model, it would just end up being -152.4150315 + shift, which would lead to it jumping again. I'm definitely getting myself confused; could you perhaps explain a bit further?

// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);

//vertex and fragment shader code...

// Set program handles. These will later be used to pass in values to the program.
mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");

// Tell OpenGL to use this program when rendering.
GLES20.glUseProgram(programHandle);
}

// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

GClements
Herein lies the problem. You're wasting 8 bits of the X coordinate and 5 bits of the Y coordinate.

Single-precision floating-point has the equivalent of just under 7 decimal digits of precision, regardless of scale. For the range of your X coordinates, you're wasting just over 2 of those digits on the leading "152.".

To avoid this, you should offset the vertex data by (152.4150315, -27.011538), so that your X values are in the range -0.3416385 to +0.3416385 and your Y values are in the range -0.5592 to +0.5592. This will ensure that they use the full 24 bits of precision.

You need to offset any other coordinates used (e.g. the eye position) by the same amount so that the offset cancels out.
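A minimal sketch of that offset step, assuming the source vertices arrive as interleaved x,y doubles (the class and constant names here are illustrative, not from Hank's code): subtract the map centre while the values are still doubles, and only then cast to float for the VBO.

```java
import java.nio.FloatBuffer;

public class VertexOffsetter {
    // Map centre from this thread, kept in double precision.
    static final double CENTER_X = 152.4150315;
    static final double CENTER_Y = -27.011538;

    // verts: interleaved x,y pairs in decimal degrees, double precision.
    public static FloatBuffer toLocalFloats(double[] verts) {
        FloatBuffer fb = FloatBuffer.allocate(verts.length);
        for (int i = 0; i < verts.length; i += 2) {
            fb.put((float) (verts[i]     - CENTER_X)); // x, now near zero
            fb.put((float) (verts[i + 1] - CENTER_Y)); // y, now near zero
        }
        fb.flip();
        return fb;
    }

    public static void main(String[] args) {
        FloatBuffer fb = toLocalFloats(new double[] {152.073393, -27.570737});
        System.out.println(fb.get(0) + ", " + fb.get(1));
    }
}
```

The resulting buffer is what would be uploaded into the VBO; the same CENTER_X/CENTER_Y then has to be subtracted from the eye position so the offset cancels out.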

Hank Finley

08-07-2013, 04:23 PM

Hi GClements,
thank you I understand now.
Just following up on tonyo_au's comments about the smaller maps stitched together. I don't think I'll need this straight away, however for future reference, I'm not sure how to implement this or the logic behind it helping, would appreciate some clarification.

tonyo_au

08-07-2013, 08:33 PM

In a static environment you find the centre of your world and subtract this from all your coordinates, including the camera's. This is often enough to solve your precision problems.

If it is not, each frame you choose a new world centre; the best choice is the camera.

Let's think about how we would draw an object in this environment.

First we create the object and store it in a vertex buffer; but rather than storing the object with its world coordinates, we store it with, say, its centre as the origin (this would be normal for, say, a .obj model).

When we draw this object, we move it to its place in the world by adding its world coordinate to each vertex. With a floating origin, we instead add its place in the world minus the camera's place in the world.

This works well for ordinary objects, but what about the map? If the map extents aren't too big, this will work for it as well; but if the extents are very large, precision at the extremes might be a problem. If this is the case, we can break it into several smaller non-overlapping maps and draw each one separately.
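The per-frame arithmetic described above can be sketched like this (plain Java, names hypothetical): object and camera positions stay in double for the life of the app, and only their small difference is handed to OpenGL as floats.

```java
public class FloatingOrigin {
    // objX/objY and camX/camY are absolute world positions in double.
    public static float[] relativeTranslation(double objX, double objY,
                                              double camX, double camY) {
        // The subtraction happens in double; the result is small, so the
        // final cast to float keeps its full 24 bits of precision.
        return new float[] {
            (float) (objX - camX),
            (float) (objY - camY)
        };
    }

    public static void main(String[] args) {
        // A ~1 m separation between object and camera survives the cast.
        float[] t = relativeTranslation(152.4150415, -27.011548,
                                        152.4150315, -27.011538);
        System.out.println(t[0] + ", " + t[1]);
    }
}
```

The returned pair would then feed something like Matrix.translateM on the object's model matrix each frame.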

Hank Finley

08-08-2013, 01:47 AM

Hi Guys, when I said before that I understood, I may have been a bit too fast.

Now that I'm coding my zoom and panning, I am unsure how to proceed. Should I do the zoom by:
1. Changing the scale of the model, or
2. Manipulating the projection so the area gets bigger or smaller

I will also need to shift either the model or eye as there is a zoom point that this should focus on.

What is recommended here?

I just tried the projection way and it still jumps. At a very close zoom level, about 2m, I checked whether the x and y float values were increasing and decreasing, and they are. I can't figure out the issue, and I'm still not sure whether this is the recommended way either.

GClements

08-08-2013, 09:46 PM

> Hi Guys, when I said before that I understood, I may have been a bit too fast.
>
> From what I understood I set the eye to (0,0) initially with:
> Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 1.5f, 0, 0, 0, 0, 1, 0);
> and translate the model to:
> Matrix.translateM(mModelMatrix, 0, -152.4150315f, 27.011538f, 0f);
First, the (-152,27) translation needs to be applied to the vertex coordinates before they're converted from double to float and uploaded to OpenGL (if you're getting the data as "float" to start with, that's going to be a significant problem).

Second, all intermediate values should use "double" rather than "float"; there's no point in trying to save 4 bytes here and 8 bytes there. If your matrices use "float", you need to apply the (-152,27) translation to the eye position while it's stored as "double", then convert the offset version to float and use that to construct a translation matrix. Trying to perform the offset using a matrix is just going to run into problems with the limited precision of a "float".

IOW, as much of the calculation as possible needs to be done as "double", and the parts which must be done as "float" need to have any constant offset removed first.
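One way to sketch that pattern (the class and method names are mine, not from Hank's code): the pan state accumulates in double, the constant map-centre offset is removed while still in double, and only the final small values are cast to float for Matrix.setLookAtM.

```java
public class PanState {
    // Constant map centre in decimal degrees (values from this thread).
    static final double CENTER_X = 152.4150315;
    static final double CENTER_Y = -27.011538;

    // Absolute eye position, accumulated in double precision.
    private double eyeX = CENTER_X;
    private double eyeY = CENTER_Y;

    public void pan(double dxDegrees, double dyDegrees) {
        eyeX += dxDegrees;
        eyeY += dyDegrees;
    }

    // Offset removed in double, then cast: these would feed setLookAtM.
    public float eyeXf() { return (float) (eyeX - CENTER_X); }
    public float eyeYf() { return (float) (eyeY - CENTER_Y); }

    public static void main(String[] args) {
        PanState p = new PanState();
        p.pan(0.00001, -0.00001);   // a ~1 m pan step
        System.out.println(p.eyeXf() + ", " + p.eyeYf());
    }
}
```

Because the subtraction cancels the constant 152/27 before the cast, the float values stay tiny and the ~1 m step survives intact instead of rounding away.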

Hank Finley

08-22-2013, 08:31 AM

Hi guys, apologies for late reply! Your advice has been fantastic. Panning while zoomed right in works a treat, thank you.