The rounding problem also stems from the hardware. Both NVIDIA and ATI cards employ custom floating-point formats of various bit sizes. I believe most modern graphics cards now support 32-bit floats that are almost IEEE 754 compliant, but we can't count on a consistent rounding/floating-point scheme across all of our supported platforms and hardware.

I'm not sure what an acceptable solution to this problem would be. My best guess right now is to keep the floating-point representation in the video engine API, but have the video engine internally convert those floats to ints before making the OpenGL draw calls. I don't feel I know enough about computer graphics to comment further, and I don't want to make any presumptions without an expert around.

That's not going to work - most GPUs don't understand ints. Your int values would be converted straight back to floats, but now with the added overhead of having rounded every float to an int and cast it back to a float along the way.

Ok, but we are already using floats, and since we're not making this a networked multiplayer game, I don't see the problem even if the float implementations differ.

However, subtexel accuracy (subpixel accuracy is the equivalent for polygons) is one way to work around the rounding problems (the int-to-float cast). Anyway, I'll start working on it as soon as I've finished this post, so we can see what the best way to fix it is.