Since pixels come in 'whole numbers' why do programmers always write code using floating point numbers for positions on the screen?

My point is: why say new Ellipse2D.Float(100.0f, 100.0f, 100.0f, 100.0f) when it HAS to end up being drawn at a whole-number point on the screen? Well, at least it seems that way to me. I must be wrong, or else everyone would use integers and save memory!

It's also important for smooth movement: if you use ints, you can only move a whole pixel at a time. That might not seem like much of a problem, but then you realise something as simple as a jump isn't going to look right if you use int speeds only. Obviously you need to snap to the nearest pixel when drawing, but the 'true' float value will accumulate the fractions over time.
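For instance (a made-up sketch, not code from the thread - all names are illustrative): keep the position in a float, add a fractional speed each frame, and round only at draw time. With int speeds the -4.5 would have to become -4 or -5 and the arc of the jump changes shape.

```java
public class JumpDemo {
    // 'True' vertical position after the given number of frames,
    // starting a jump at y=100 with a fractional per-frame speed.
    static float positionAfter(int frames) {
        float y = 100.0f;
        float velocity = -4.5f;     // fractional pixels per frame
        final float gravity = 0.3f; // also fractional
        for (int i = 0; i < frames; i++) {
            y += velocity;          // the float keeps the accumulated fractions
            velocity += gravity;
        }
        return y;
    }

    public static void main(String[] args) {
        for (int frame = 1; frame <= 8; frame++) {
            float trueY = positionAfter(frame);
            int drawY = Math.round(trueY); // snap to a pixel only when drawing
            System.out.println("frame " + frame + ": true y=" + trueY
                               + ", drawn at " + drawY);
        }
    }
}
```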

Pixels come in 'whole numbers', but the shapes that you draw do not. Moving a circle by 0.1 will probably change how it looks on screen, because some pixels will fall outside the circle and others inside it. It's called sub-pixel accuracy.

In addition you can draw at fractional positions (with antialiasing and the KEY_STROKE_CONTROL = VALUE_STROKE_PURE rendering hint). E.g. if you draw a black line from 0.1/0 to 0.1/100 it isn't 1 pixel wide and black - instead it's two pixels wide: the left one almost black and the right one almost white (if your background is white).
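A minimal sketch of that effect with Java2D (the rendering-hint names are the real Java2D ones; the coordinates and everything else are illustrative, and the exact coverage values depend on the rasterizer): a 1-pixel black vertical line at a fractional x spills partial coverage into two adjacent pixel columns.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.geom.Line2D;
import java.awt.image.BufferedImage;

public class SubPixelLine {
    // Renders a black vertical line at a fractional x onto a white
    // 10x10 image and returns the RGB of the pixel at (px, py).
    static int pixelAt(float lineX, int px, int py) {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 10, 10);
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);
        g.setRenderingHint(RenderingHints.KEY_STROKE_CONTROL,
                           RenderingHints.VALUE_STROKE_PURE); // honour fractional coords
        g.setColor(Color.BLACK);
        g.draw(new Line2D.Float(lineX, 0f, lineX, 10f));
        g.dispose();
        return img.getRGB(px, py);
    }

    public static void main(String[] args) {
        // A 1-pixel-wide line at x = 1.3 covers [0.8, 1.8]: about 20% of
        // column 0 and 80% of column 1, so both come out as shades of grey.
        System.out.printf("col 0: %08x, col 1: %08x%n",
                          pixelAt(1.3f, 0, 5), pixelAt(1.3f, 1, 5));
    }
}
```

Without VALUE_STROKE_PURE the default stroke normalization snaps the line to pixel boundaries and the effect disappears.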

Quote
if you draw a black line from 0.1/0 to 0.1/100 it isn't 1 pixel wide and black - instead it's two pixels wide: the left one almost black and the right one almost white (if your background is white).

I think the infinity of 0.1/0 will likely cause problems before the line is drawn.

But yeah - the whole point is image quality: sub-pixel rendering with antialiasing yields really nice results. Especially good for slow-moving things in games, though games usually don't go to the trouble.

Yep, the integer multiplies and divides are actually performed as: int->double convert, FPU operation, double->int convert. Unsurprisingly, this is a fair bit slower than using floats. Additions & subtractions are 1 cycle for ints though, and slightly slower for floats.

- Dom

Are you sure about that? That would mean integer multiplies and divides don't use the integer ALU, which seems like a waste, since an integer multiply is easier to do than a floating-point multiply. It COULD be done faster if the integer ALU supported the operation. And integer operations are generally more common, since they are needed to calculate array offsets etc., so it just doesn't seem to make sense to jump through all those hoops.

I know in many cases you can get better performance if you convert floating-point ops to integer ops with fixed-point math... maybe that was only if you could avoid division. For example, to divide by a constant, multiply by the reciprocal; if possible use power-of-two fractions so it ends up being an integer multiply followed by a shift. Sometimes with a little extra fudging you can get exactly the same answer as if you did the math with floats, yet the result is computed in half the time. I know of someone who used this technique to get dramatic speedups in a video codec, and that was on at least a Pentium 3.
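The reciprocal trick can be sketched like this (the divisor 10, the constant 6554 = ceil(65536/10) and the valid input range are my own worked example, not from the post): dividing by 10 becomes an integer multiply followed by a 16-bit shift.

```java
public class FixedPointDiv {
    // Divide a non-negative int by 10 via a fixed-point reciprocal:
    // 6554 / 65536 ≈ 0.100006, so (x * 6554) >> 16 ≈ x / 10.
    // The tiny excess in the reciprocal accumulates, so the result first
    // disagrees with x / 10 around x ≈ 16400; below that it is exact.
    static int divBy10(int x) {
        return (x * 6554) >> 16;
    }

    public static void main(String[] args) {
        for (int x = 0; x < 16384; x++) {
            if (divBy10(x) != x / 10)
                throw new AssertionError("mismatch at " + x);
        }
        System.out.println("matches x/10 for all 0 <= x < 16384");
    }
}
```

Worth noting that modern compilers and JITs perform exactly this strength reduction automatically for division by a constant, so today the win is mostly historical.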

Also, ever since the Pentium, floating point has actually been nominally faster than integer in the x86 world (and on similar processors).

I've not checked since Java 1.1.8 and 1.2.x (that's the last time I had a real app that spent nearly all of its time in ALU-limited calculations), but in Sun's JVMs IME int mul and div were much faster than float mul/div, on Pentium 2 upwards.

OTOH, float add/sub were (effectively) the same speed as int add/sub.

This was when I was writing 3D fractal renderers, where it's pretty easy to see the effect of changing your datatypes on performance. However, it's long enough ago that I cannot recall what differences there were with/without HotSpot, and perhaps 1.1.x just wasn't utilising the P2 fully? (PS I was only ever working with basic datatypes)

<fx: prepares to be told he did something stupid with his app design >
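The kind of datatype comparison being described can be sketched as a rough microbenchmark (all names are made up; timings from a loop like this are only indicative, since the JIT can transform the loops - the checksums are there to stop the work being optimised away):

```java
public class MulBench {
    // Chain of dependent int multiply-adds; returns a checksum.
    static long intMuls(int n) {
        long sum = 0;
        int x = 3;
        for (int i = 0; i < n; i++) { x = x * 31 + 7; sum += x; }
        return sum;
    }

    // Chain of dependent float multiply-adds; the 0.9999 factor keeps
    // the value bounded instead of running off to infinity.
    static float floatMuls(int n) {
        float x = 3f, sum = 0f;
        for (int i = 0; i < n; i++) { x = x * 0.9999f + 0.5f; sum += x; }
        return sum;
    }

    public static void main(String[] args) {
        final int N = 10_000_000;
        long t0 = System.nanoTime();
        long s1 = intMuls(N);
        long t1 = System.nanoTime();
        float s2 = floatMuls(N);
        long t2 = System.nanoTime();
        System.out.println("int mul:   " + (t1 - t0) / 1_000_000 + " ms (checksum " + s1 + ")");
        System.out.println("float mul: " + (t2 - t1) / 1_000_000 + " ms (checksum " + s2 + ")");
    }
}
```

A proper measurement would use a harness with warm-up iterations; a single pass like this mixes interpreted and compiled execution.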

Quote
I've not checked since Java 1.1.8 and 1.2.x (that's the last time I had a real app that spent nearly all of its time in ALU-limited calculations), but in Sun's JVMs IME int mul and div were much faster than float mul/div, on Pentium 2 upwards.

Likely limits in the VM. These days I'm pretty sure you are getting platform speed in the Intel environment.

Quote
Are you sure about that? That would mean integer multiplies and divides don't use the integer ALU, which seems like a waste, since an integer multiply is easier to do than a floating-point multiply. It COULD be done faster if the integer ALU supported the operation. And integer operations are generally more common, since they are needed to calculate array offsets etc., so it just doesn't seem to make sense to jump through all those hoops.

It makes more sense to save silicon and only have a single divide unit. Int & float divides take the same time on PII+ (39 cycles double precision, 23 single), and cannot be done in parallel (as they could on the plain Pentium, which had two units). The int conversion is done 'transparently' in these cases, and the FPU can perform multiplications & additions while waiting for the result. However, only the last 2 cycles of the divide can overlap with integer instructions, so by trying to use integer math you would lose 36 cycles' worth of computation - that's 36 multiplies!

Int multiplies are 4 cycles (PII+, 9 cycles before) and 3 cycles for floats. The float instructions can also be pipelined to allow 3 multiplies to be in flight at any one time. Int*float is even worse - 6 cycles with no concurrency.

Quote

I know in many cases you can get better performance if you convert floating-point ops to integer ops with fixed-point math... maybe that was only if you could avoid division. For example, to divide by a constant, multiply by the reciprocal; if possible use power-of-two fractions so it ends up being an integer multiply followed by a shift. Sometimes with a little extra fudging you can get exactly the same answer as if you did the math with floats, yet the result is computed in half the time. I know of someone who used this technique to get dramatic speedups in a video codec, and that was on at least a Pentium 3.

Multiplying/dividing by simple powers of 2 is a great speed-up, though not something easily achieved in Java except in small cases. In C++, we use a union between an int and a float (called a 'floint') that lets you treat the float's bits as an int for some great speed-ups. Take clamping to zero. The plain float version:

if (f < 0.0f) f = 0.0f;

looks easy enough, but in fact it takes 15-30 cycles to work out.

The floint version takes around 5 cycles to compute with no chance of a branch-predict error - 3 to 6 times faster. You can use this when doing clamping operations on colours, so it is a good thing for video codecs & software renderers.
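Java has no unions, but Float.floatToRawIntBits/intBitsToFloat give the same bit-level access. Here is a sketch of the branchless clamp being described; the sign-mask formulation is my assumption of how the floint trick works, not code from the post:

```java
public class Floint {
    // Branchless clamp-to-zero: an IEEE 754 negative float has its sign bit
    // set, so an arithmetic shift of the bits by 31 yields all-ones exactly
    // when the value is negative; ANDing with the complement zeroes it out.
    static float clampToZero(float f) {
        int bits = Float.floatToRawIntBits(f);
        bits &= ~(bits >> 31); // negatives: mask is 0x00000000 -> +0.0f
        return Float.intBitsToFloat(bits);
    }

    public static void main(String[] args) {
        System.out.println(clampToZero(-3.5f)); // 0.0
        System.out.println(clampToZero(2.5f));  // 2.5
        System.out.println(clampToZero(-0.0f)); // 0.0
    }
}
```

Note the edge case: a NaN with its sign bit set is also flushed to zero. In Java the JIT may already compile Math.max(f, 0f) to a branchless instruction, so measure before preferring the bit trick.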
