
I'm writing a 2D graphics library (that uses OpenGL) in C++. Some functions perform calculations on arrays of (x,y) points that represent polygon vertices. The question is: which type should I use for the x and y of the points? In 2D there isn't much sense in specifying non-integer numbers for positions on the screen. It's possible but usually useless, so in many cases programmers will want x and y of type short, since it's enough for all screen sizes and allows negative values (which are useful sometimes). But sometimes the vertices are processed by physics simulation code, in which case their type is float or double. The best solution is probably using templates or simply including three versions of each function: short, float and double. But I'd like to know what experienced programmers usually do. So what would you do?


All of our front-end code, which is generally menus and 2D work, uses float for coordinates. That's largely because we use the same rendering engine in our menus as we do in the 3D portion of the games.

There may be some advantages to using float, specifically the ability to use an orthographic projection in a 3D pipeline, which lets you offload more of the work, such as rendering through Direct3D or OpenGL, to the video card.

I'm not going to say that using float is a requirement for efficient code, but given the homogeneous approach to rendering systems (especially if any aspect of the game will be in 3D) I think you would find advantages.


Original post by Tom Backton: "In 2D there isn't much sense in specifying non-integer numbers for positions on the screen."

A lot of people use normalised 2D coordinates stored in non-integer types (e.g. top left of the screen is x=0.0, y=0.0 and the bottom right of the screen is x=1.0, y=1.0). One reason is it makes resolution independence easy to deal with.

E.g. if I want to draw a rectangle which occupies the top-left quarter of the screen, the coordinates of its two extremities are x=0.0, y=0.0 and x=0.5, y=0.5. To translate that to real pixel coordinates I just multiply all coordinates by the pixel dimensions of the screen inside the graphics library.

To change the resolution of a game using the library, I then only need to change the screen-size multiplier values rather than changing all the 2D coordinates in my game. If your 2D graphics are ultimately going to go through a 3D pipeline, then normalised coordinates are also closer to what you'd feed into a homogeneous projection matrix.

Another resolution independent alternative that some people prefer (that also fits with using integers for coordinates) is virtual coordinates where you pick a maximum resolution you're going to support (e.g. 4096x4096) then all 2D coordinates are passed to your library assuming they're going to be displayed at that resolution. Your library then does the conversion to the real resolution in one place in the same way as you would with normalised coordinates.

Caveats with either form of resolution independence:

1) A coordinate the user passes to your library might not be exactly representable in the final resolution, e.g. x = 0.0137 at a 1280-pixel-wide resolution ends up at pixel location 17.536. If your library is being used for something like a UI, the visual result is slightly mis-aligned pixels on some things.

2) You need to define what the aspect ratio is for your normalised or virtualised coordinate system and also adjust for that when converting to pixel coordinates (or in your projection matrix for a 3D pipeline).

Asides:

1) Fixed point. You can have values which are stored in integer types but represent non-integer values. If your library is used on low-end platforms like mobile phones, you might want to look into it.

2) 2D becomes something more than 2D when viewed on a 3D TV or with 3D glasses. It looks like 3D viewing technologies might have a chance of taking off this time around. You might want to consider exposing a depth value for your 2D because mis-placed 2D on a 3D display can look/feel very weird (but if you do that you may as well consider your 2D as just another type of 3D [smile]).


In 2D there isn't much sense in specifying non-integer numbers for positions on the screen. It's possible but usually useless, so in many cases programmers will want to use x and y of type short, since it's enough for all screen sizes and allows negative values (which are useful sometimes).

Actually, I've personally found that dealing with coordinates in terms of integers is the biggest shortcoming of all those 2D libraries. Non-integer coordinates are not useless; they're highly useful.

Dealing with pixels is just very low-level. Dealing with a real-valued (i.e. non-integer) scale, you don't have to think of the world in terms of pixels, which makes the game resolution- and aspect-ratio-agnostic. Sometimes you also want things that don't land exactly on pixels: it can make sense for an object to be at position (0.2356, -0.8546), whether or not there is an exact pixel at that position.

Obvious examples are when you model physics. Even for something like Pong, if you want a speed that doesn't depend on the trajectory, you can't always move the ball one pixel per frame. It moves some fractional distance each frame, and you then need to round (or anti-alias) that position to display it on the screen as pixels.


About normalized coordinates: the library will not use them because it (at least the first version) is intended for use in 2D games. The programmer chooses a window size (which does not depend on resolution or screen size) and specifies coordinates as pixels in that window. The resolution can be changed independently by scaling the viewport (and the library will scale the final window size accordingly).

But it seems the only advantage of integer coordinates is saving memory, so I'll probably use them only for long lists of vertices.


Original post by S1CA: "A lot of people use normalised 2D coordinates stored in non-integer types (e.g. top left of the screen is x=0.0, y=0.0 and the bottom right of the screen is x=1.0, y=1.0). One reason is it makes resolution independence easy to deal with."

The range could be anything, not just 0.0 to 1.0; it's better to choose a range that's easier to handle from the developer's side. I find 20.0 a much easier value to deal with than 0.03125, for instance. Notice I'm still using floats, though.

But that's mostly about making our lives easier. There's another detail: taking different aspect ratios into account. If you always use the same range, either you have to adapt the coordinates of what you render, or things will appear stretched or shrunk when the ratio isn't the one they were made for. The workaround I'm using is to pick a minimum virtual resolution for a given ratio, increase it when the ratio changes (only the dimension that is altered, that is), and position graphics relative to either the centre or the edges of the screen.

Though that's game-dependent already, not the job of a 2D library, I guess. EDIT: with the exception of being able to set the virtual resolution, of course. The library should provide that functionality if you go this way.


Personally, I wrote a small 3D math library that is templated on, among other parameters, a scalar type. While it might seem a bit clumsy, just a handful of using-declarations and typedefs configure what I'd like to use in my real project. E.g., I use fixed-point position vectors but floating-point direction vectors, and I can switch to floating-point position vectors in an instant.

While writing such a library can be a cumbersome task (e.g. I use a lot of hand-written type traits, for instance to extract the number of fractional bits from a fixed-point type), it moves in the direction of write-once-use-everywhere, and I don't have to struggle with manually written overloads everywhere.