(I really hope I didn't screw up the FPS calc here, since I'm doing this from memory. The point is that I'm very confident in the way the FPS is calculated. I'm pretty sure the strangeness I'm seeing is related to the computer, or DirectX, or something not related to code logic...)

The really weird thing is, about half the time I launch the game it runs between roughly 90 and 120 frames per second, and the other half of the time it runs between 650 and 750 frames per second. Yup, you read that right. It's like the game randomly runs in either "fast mode" or "slow mode" and I have no control over it. I certainly don't have any code that messes with the frame rate like that. I don't have enough randomization in the game to account for that huge range of frame rates, and I'm not watching movies or calculating pi to the trillionth digit at the same time either. I could literally run the game right now and get 90-120 FPS, then immediately quit and run it again and get 650-750 FPS. And the really weird part is that I NEVER see frame rates between 120 and 650. It's always entirely in the lower range or the upper range, with no crossing over.


First: I don't know why you see what you're seeing. Likely has to do with using integers and how you determine "milliseconds."

Second: why do you care? Your updates should take place at fixed intervals (for various reasons, you should fix your timestep) and you should be controlling your framerate. Anything over 60 fps (actual rendering frequency, not loop delta-time) isn't needed visually, likely isn't compatible with some monitors, can be hard on hardware (GPU and cooling fans, in particular), and is a waste of energy.
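For illustration, here is roughly what a fixed-timestep loop looks like in C#. This is my own sketch with placeholder names (FixedStepLoop, Update, Render), not code from this thread:

using System.Diagnostics;

class FixedStepLoop
{
    const double UpdateStep = 1.0 / 60.0;      // fixed simulation step, in seconds

    static void Update(double dt) { /* game logic; always sees the same dt */ }
    static void Render()          { /* draw the current state */ }

    static void Main()
    {
        var clock = Stopwatch.StartNew();
        double previous = clock.Elapsed.TotalSeconds;
        double accumulator = 0.0;

        while (true)
        {
            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - previous;     // real time since the last iteration
            previous = now;

            while (accumulator >= UpdateStep)  // consume elapsed time in fixed chunks
            {
                Update(UpdateStep);
                accumulator -= UpdateStep;
            }

            Render();                          // rendering can run at whatever rate it likes
        }
    }
}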

Third: you should be determining delta-times in microseconds, not milliseconds. If you're really interested in fine differences in time, you shouldn't be using integers to total up floating point values such as 8.33 (1/120), 1.5 (1/650) and 1.3 (1/750). The integer sum = 8 + 1 + 1 = 10. The floating point sum = 11.13. That's just over 3 possible time intervals and you already have a 10% error.

I'm not very good at math, but there are several things that seem odd to me. First, what if your frameTime is greater than two seconds, but you only subtract one second's worth from it? That means that, from then on, your framerate would appear doubled.

It should be "frameTime %= 1000" (i.e. get the remainder of a division by 1000, which is basically the same as saying, "keep subtracting until it's below 1000").

My guess is that sometimes your program happens to take a few extra seconds to start up, or lags for a second, or is minimized for a second (perhaps behind your IDE after starting up), and gains 5000 or so milliseconds of frametime on the first frame... but you never get rid of the extra time, so it sticks around and accidentally inflates your measurements.
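To make the %= suggestion concrete, here is a rough sketch of an FPS counter that carries the remainder over. The variable names and Stopwatch usage are my guesses at the shape of the original code, not the actual code:

using System;
using System.Diagnostics;

class FpsCounter
{
    readonly Stopwatch timer = Stopwatch.StartNew();
    long lastMillis;
    long frameTime;    // accumulated milliseconds since the last FPS report
    long frameCount;   // frames rendered since the last FPS report

    public void EndOfFrame()
    {
        long now = timer.ElapsedMilliseconds;
        frameTime += now - lastMillis;
        lastMillis = now;
        frameCount++;

        if (frameTime >= 1000)
        {
            int fps = Convert.ToInt32(frameCount * 1000.0 / frameTime);
            Console.WriteLine("FPS: " + fps);
            frameCount = 0;
            frameTime %= 1000;   // keep only the remainder, even if several seconds piled up
        }
    }
}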

Also, 'FPS' isn't the best measurement to use, because it doesn't scale linearly. One extra FPS when you're running at 100 FPS is not the same gain as one FPS when you are running at 10 FPS. You ought to measure your average frametime, not just the number of frames per second.

Now for my comments based on the responses so far (thanks by the way!):

First: I don't know why you see what you're seeing. Likely has to do with using integers and how you determine "milliseconds."

Can you check my updated code for any glaring errors? I've been staring at it forever and I don't see anything that would randomly increase the FPS by a factor of 7ish.

Second: why do you care? Your updates should take place at fixed intervals (for various reasons, you should fix your timestep) and you should be controlling your framerate. Anything over 60 fps (actual rendering frequency, not loop delta-time) isn't needed visually, likely isn't compatible with some monitors, can be hard on hardware (GPU and cooling fans, in particular), and is a waste of energy.

There's something extremely weird going on here; any decent programmer would care and want to know what and why. Fixed timesteps are an alternative, sure, but this game isn't using them currently. I also know that my eyes cannot detect 650 FPS, and that my current code might be unnecessarily burdening the video card... however, none of that really has to do with the issue. I still want to solve the issue. I might make some changes to all of the above later, but first I want to know what's happening.

Third: you should be determining delta-times in microseconds, not milliseconds. If you're really interested in fine differences in time, you shouldn't be using integers to total up floating point values such as 8.33 (1/120), 1.5 (1/650) and 1.3 (1/750). The integer sum = 8 + 1 + 1 = 10. The floating point sum = 11.13. That's just over 3 possible time intervals and you already have a 10% error.

I've never heard of people using microseconds instead of milliseconds. I'll look into that, but I'm not really convinced that milliseconds aren't good enough. You're definitely right about not using integers to add up float values. However, now that I posted my real code you can see that the millisecond time values are indeed longs, not floats, so there are no rounding/truncation errors going on.

Also, about using microseconds: I don't see how lack of accuracy is my current problem. Let's say that my current code is determining that 1 frame takes 12 milliseconds. Maybe if I replaced my Stopwatch with a microsecond timer, I would find out that the frame was in fact 12,183 microseconds (or 12.183 milliseconds). So what? If you follow the logic and math you can see that microseconds would not make much of a difference. Maybe my FPS calculation would be more accurate by 1-5 FPS. It does not explain why it varies between 90 FPS and 650 FPS between executions.

I'm not very good at math, but there are several things that seem odd to me. First, what if your frameTime is greater than two seconds, but you only subtract one second's worth from it? That means that, from then on, your framerate would appear doubled.

It should be "frameTime %= 1000" (i.e. get the remainder of a division by 1000, which is basically the same as saying, "keep subtracting until it's below 1000").

My guess is that sometimes your program happens to take a few extra seconds to start up, or lags for a second, or is minimized for a second (perhaps behind your IDE after starting up), and gains 5000 or so milliseconds of frametime on the first frame... but you never get rid of the extra time, so it sticks around and accidentally inflates your measurements.

I see your point, but the framerate wouldn't be doubled "from then on", it would only be doubled for 1 or 2 iterations through the loop. So only for a few milliseconds, i.e., not even noticeable.

Also, 'FPS' isn't the best measurement to use, because it doesn't scale linearly. One extra FPS when you're running at 100 FPS is not the same gain as one FPS when you are running at 10 FPS. You ought to measure your average frametime, not just the number of frames per second.

Yeah, I know - I'm not going to put too much stock in the FPS of my game; I'm more interested in how it looks and feels. But now that I have this very strange problem happening, I just have to get to the bottom of it. Saying "oh well, I didn't really need FPS anyway" is not really my style :) I gotta figure this out :)

Your counter and your time-accumulator are not synchronized after the first entry into this area. If you set frameCount to 0, you should set frameMillis to 0, otherwise it’s like giving frameMillis a head-start over the frame counter. If, after frameMillis -= 1000;, the result is 13 (for example), then you’ve carried milliseconds over from the previous 1000 milliseconds but you didn’t carry over any fractions of the frame counter, which, to be fair and synchronized, should be something like 0.4 (just picking randomly). Another way of thinking about it is that you included those 13 milliseconds in your divide here in “Convert.ToInt32(frameCount * 1000.0 / frameMillis);,” so you shouldn’t be including them the next time around.

The code should set both frameCount and frameMillis to 0 for “accurate” readings.

After that, always use microseconds, not milliseconds. You should especially know why, given that you reach up to 750 frames per second. Once you go over 1,000 FPS, your integer millisecond delta is 0 and you do a whole update with no change from the previous update. When you approach 1,000 FPS, you get updates with deltas of 1 and 2 milliseconds, which means some of your updates are literally twice as long as others. This too can cause visual artifacts.
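A sketch of how both of these suggestions could look together, using Stopwatch ticks for sub-millisecond resolution (again my own illustration with guessed names, not the poster's actual code):

using System;
using System.Diagnostics;

class MicrosecondFpsCounter
{
    readonly Stopwatch timer = Stopwatch.StartNew();
    long lastTicks;
    long frameTicks;   // accumulated Stopwatch ticks since the last report
    long frameCount;

    public void EndOfFrame()
    {
        long now = timer.ElapsedTicks;
        frameTicks += now - lastTicks;
        lastTicks = now;
        frameCount++;

        if (frameTicks >= Stopwatch.Frequency)   // roughly one second has passed
        {
            double seconds = (double)frameTicks / Stopwatch.Frequency;
            double fps = frameCount / seconds;
            double avgMicros = seconds * 1000000.0 / frameCount;
            Console.WriteLine("FPS: " + fps.ToString("F1") + "  avg frame: " + avgMicros.ToString("F0") + " us");

            // Reset both together so the counter and the accumulator stay synchronized.
            frameCount = 0;
            frameTicks = 0;
        }
    }
}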

As for the large differences between your runs, this is somewhat normal. Your debugging environment can sometimes cause it. When your “game” is running at extremely high FPS, any small thing, including certain processes also being open, can cause a significant drop in frame-rate even though they are really only taking away a small amount of time from your frames. I got this exact problem many times on older computers (4 or 5 years old). It won’t happen on a more mature project because slowing down a game by the same amount of time has no noticeable impact if the game is running at 60 or especially 30 FPS.


I'm not really convinced that milliseconds aren't good enough. You're definitely right about not using integers to add up float values. However, now that I posted my real code you can see that the millisecond time values are indeed longs, not floats, so there are no rounding/truncation errors going on.

That should read "millisecond time values are indeed longs, not floats, so there are rounding/truncation errors."

You're reporting frame rates of 650 to 750. 1/650 of a second = 1.54 milliseconds. Because you're using integers, that's either 1 ms or 2 ms. Either way, that's roughly a 25%-50% error for one frame time. Using a long just means you can represent a large number of milliseconds. If each of those counted milliseconds is really 1.5 milliseconds, storing the count as a byte, an int, a long, or a long long makes no difference in the accuracy.

If you're coding in .NET, you would, at a minimum, be better off using ElapsedMilliseconds once for the entire 1000 frames, with perhaps an error of one frame time, rather than timing each frame and accumulating some error every frame. I.e., why not start the timer, count 1000 frames, stop the timer and divide the time by 1000?
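In code, that approach would be something along these lines (a sketch with illustrative names, not your actual code):

using System;
using System.Diagnostics;

class BatchFpsCounter
{
    const int BatchSize = 1000;                 // frames per measurement
    readonly Stopwatch timer = Stopwatch.StartNew();
    int frameCount;

    public void EndOfFrame()
    {
        frameCount++;
        if (frameCount == BatchSize)
        {
            long elapsedMs = timer.ElapsedMilliseconds;   // one reading for the whole batch
            double fps = BatchSize * 1000.0 / elapsedMs;
            Console.WriteLine("FPS over last " + BatchSize + " frames: " + fps.ToString("F1"));
            frameCount = 0;
            timer.Restart();                              // start timing the next batch
        }
    }
}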


Even with microseconds, your clock may lose a second per 4 hours. I wouldn't be satisfied with a watch that did that :lol:
It might not seem like much, but it may be enough to lead to bugs in long play sessions.

IMHO, absolute time values should either be:
* in a 64-bit float (i.e. a double), in seconds, which provides the convenience of making all your blah-per-second math easy, and has the necessary precision to remain accurate even if the user leaves the game running for months.
* in a 64-bit integer, in the CPU's native timer frequency (whatever QueryPerformanceCounter/etc is in), which is likely a fraction of a nanosecond. This is simpler in a lot of ways, but requires dividing by the CPU timer's frequency to convert from arbitrary ticks into time before using it for any calculations.

Delta time variables can almost always be 32-bit: either the difference of two absolute-time doubles with the result truncated to float, or the difference between two int64s.
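In C#, both representations can come from the same source. As far as I know, Stopwatch.GetTimestamp() is backed by QueryPerformanceCounter on Windows when a high-resolution timer is available, and Stopwatch.Frequency gives the ticks-per-second to divide by. A sketch of the idea (my own illustration):

using System.Diagnostics;

class GameClock
{
    readonly long startTicks = Stopwatch.GetTimestamp();

    // Option 1: absolute time as a 64-bit double, in seconds.
    public double NowSeconds
    {
        get { return (Stopwatch.GetTimestamp() - startTicks) / (double)Stopwatch.Frequency; }
    }

    // Option 2: absolute time as a 64-bit integer, in raw timer ticks.
    public long NowTicks
    {
        get { return Stopwatch.GetTimestamp() - startTicks; }
    }

    // Deltas can drop to 32 bits: the difference of two doubles truncated to float,
    // or the difference of two tick counts converted to seconds at the end.
    public static float DeltaSeconds(double earlier, double later)
    {
        return (float)(later - earlier);
    }

    public static float DeltaSeconds(long earlierTicks, long laterTicks)
    {
        return (float)((laterTicks - earlierTicks) / (double)Stopwatch.Frequency);
    }
}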


Also, the math won't work as you expected. Some people fixed it in their code samples but nobody explicitly called it out:

long frameCount;
long frameTime;
...
1000 * frameCount / frameTime;

This will not give the result you seem to expect from your description.

Since both values are an integral type (int, long, short, byte, char, whatever), the result will also be that same integer type. You won't get anything on the decimal side. As an example, 99/100 does not equal 0.99; it equals 0 because of integer math. 3/2 = 1, 4/5 = 0, 49/5 = 9, and so on.

Since it looks like you are expecting a number like "17.231", you need to have a floating point value in at least one spot, probably in all the spots.
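A quick illustration with made-up numbers:

using System;

class IntegerDivisionDemo
{
    static void Main()
    {
        long frameCount = 537;
        long frameTime = 31217;                          // milliseconds

        long wrong = 1000 * frameCount / frameTime;      // integer math: 537000 / 31217 = 17
        double right = 1000.0 * frameCount / frameTime;  // ~17.202, because 1000.0 promotes everything to double

        Console.WriteLine(wrong + " vs " + right.ToString("F3"));   // prints "17 vs 17.202"
    }
}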


Also, the math won't work as you expected. Some people fixed it in their code samples but nobody explicitly called it out:

long frameCount;
long frameTime;
... 1000 * frameCount / frameTime;

This will not give the result you seem to expect from your description.

Since both values are an integral type (int, long, short, byte, char, whatever), the result will also be that same integer type. You won't get anything on the decimal side. As an example, 99/100 does not equal 0.99; it equals 0 because of integer math. 3/2 = 1, 4/5 = 0, 49/5 = 9, and so on.

Since it looks like you are expecting a number like "17.231", you need to have a floating point value in at least one spot, probably in all the spots.

(1000.0f * (float)frameCount) / (float)frameTime;

He fixed this in his “real” code: Convert.ToInt32(frameCount * 1000.0 / frameMillis);.

The result of (frameCount * 1000.0) is a double, and that causes the division to be a double, so the math will work correctly.


He fixed this in his “real” code: Convert.ToInt32(frameCount * 1000.0 / frameMillis);.
The result of (frameCount * 1000.0) is a double, and that causes the division to be a double, so the math will work correctly.

That makes a few assumptions about the order in which the operations take place. It is not guaranteed with the code above.

The operations can be reordered since multiplication and division are associative and commutative and have the same precedence. Unless he adds parenthesis around a pair of them, such as the two you wrote, the C# compiler has the option to reorder them so the integer division takes place first in order to avoid a conversion.

In other words, the compiler is free to reorder that code into the ordering caused by ((double)(frameCount/frameMillis))*1000.0.


That makes a few assumptions about the order in which the operations take place. It is not guaranteed with the code above.

The compiler is not allowed to make assumptions that would change the result.
The specifications clearly state the order of operations in math and also promotion priorities. While the compiler may be allowed to do tricky business under the hood, it is not allowed to violate the assumptions the programmer can make based on these two things.

It is guaranteed that all operations will be of double precision.
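A quick way to convince yourself: operators of the same precedence in C# group left to right, so the multiplication produces a double before the division ever happens. With made-up numbers:

using System;

class OrderDemo
{
    static void Main()
    {
        long frameCount = 500;
        long frameMillis = 1000;

        // Left-to-right grouping: (frameCount * 1000.0) / frameMillis
        double a = frameCount * 1000.0 / frameMillis;            // 500.0

        // The feared "reordering" would have to be written as different code entirely:
        double b = (double)(frameCount / frameMillis) * 1000.0;  // 0.0, because 500 / 1000 == 0 in long math

        Console.WriteLine(a + " vs " + b);                       // prints "500 vs 0"
    }
}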

L. Spiro


The operations can be reordered since multiplication and division are associative and commutative and have the same precedence. Unless he adds parenthesis around a pair of them, such as the two you wrote, the C# compiler has the option to reorder them so the integer division takes place first in order to avoid a conversion.

Do you have any reference to support the statement that the C# compiler will do that? I've never heard of a compiler moving operations around to avoid a cast, and then not doing so because a programmer used parentheses. If it was going to optimize something a certain way, I don't see why a pair of parentheses that do not actually change the order of operations would affect anything.


Whether the math is right or wrong doesn't answer the question of why the game sometimes launches and runs fast and sometimes slow. I had a similar issue and it turned out to be that it ran slow when my laptop wasn't plugged into the power socket! :-)


Just wanted to thank everyone for taking the time to respond. I got a lot of good information, although RobMaddison is right that nobody directly answered my exact question. I realize I had some minor math bugs, but they were nothing that explained the massive variance in FPS between different launches of the game. That's my fault for not providing the full picture, though.

I did end up switching from milliseconds to microseconds, which should make things smoother and more accurate (I haven't actually noticed a difference, but mathematically I know this was a good change to make).

I believe my FPS fluctuation has to do with a separate call to Application.DoEvents() that happens elsewhere in the code. I'm not sure yet exactly how it's related, but I did some profiling, and the time it takes for Application.DoEvents() to run is exactly the time that's "missing" from my FPS calculation and causing the FPS to vary so wildly.
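Profiling that sort of thing can be as simple as wrapping the call in a Stopwatch (a rough sketch; the surrounding structure is simplified, not my actual game loop):

using System;
using System.Diagnostics;
using System.Windows.Forms;

class DoEventsProfiler
{
    readonly Stopwatch doEventsTimer = new Stopwatch();

    public void Frame()
    {
        doEventsTimer.Restart();
        Application.DoEvents();                 // pump pending Windows messages
        doEventsTimer.Stop();

        long micros = doEventsTimer.ElapsedTicks * 1000000 / Stopwatch.Frequency;
        Console.WriteLine("DoEvents took " + micros + " us this frame");

        // ... update and render would go here ...
    }
}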