It's my understanding that checking that the event queue is empty before drawing is important so that rendering doesn't "fall behind". But this poses a problem when the update portion of the loop takes longer than 1/60th of a second to complete: the event queue is never empty, because (I think) the update took long enough that another timer event was triggered. Thus, nothing is ever rendered.

Could someone offer me some advice for dealing with this problem? If I render after every update without checking if the event queue is empty, would I be at risk for any major problems? Would it just decrease the "actual" framerate when under load, and then recover afterwards? Is there something important that I'm not considering?

One potential issue I see is that the timer would generate more events than could be processed, since the update is slower than 60 FPS. Is this a valid concern?

Removing the emptiness check seems to work fine (I end up with around 50 FPS) but I'm worried there might be something I'm not seeing that will come back to haunt me...

Update: It seems to be as I feared; the timer events block other events from being processed!

It is best to empty the queue every time. You can drop rendering frames as you need to. But if you're getting behind on logic processing you have other problems.

I have a wrapper around the Allegro event queue that lets me do things like take all of one type of event off of the queue. This allows me to take as many logic ticks off the queue as I want. I can then process them all or drop as many as needed.

That's what I thought... If the logic portion takes longer to complete than it takes the timer to tick, how can that be helped? Short of decreasing the level of processing, is the best solution here just to clear the event queue of any extra ticks after each logic update?

This problem came about as a side-effect of stress-testing my collision system. Rendering stops because the logic cannot be processed at 60 FPS (with thousands of bodies; the system itself is fine at a reasonable scale), so the event queue fills up with timer events as a result and is never empty to allow rendering to occur.

Obviously, it's better to have the game start lagging than to have it stop rendering altogether. So, dropping the extra timer events should do it without any hidden consequences?

That only works if the next event is a timer event. You want to drain the queue completely, count the number of logic ticks, and drop them as necessary. You might not want to ignore them completely; rather, multiply them by some ratio less than one according to the strain on the system, and interpolate.

I want to avoid interpolation and other fancy things for now-- My only goal at the moment is just to have the game continue rendering, even if it's stuttering.

I get what you're saying-- Count the logic ticks, and if they're above a certain threshold, drop all remaining timer events so it can render, and also prevent the timer events from continually accumulating (which would probably cause an overflow eventually). Right?

At this rate, it seems like I'm going to have to use a wrapper myself to have more control over the events. I was hoping there would be some way to do it using Allegro directly, but it doesn't look like there's an easy way to drop the remaining timer events.

Your fixed-step timer events are clogging up and causing a bottleneck in your queue. You can fix this by dropping the backed-up events, but you don't want to drop all events. Rather, you should drop any sequential timer events coming from that source.

The function you need is al_peek_next_event(), which fetches the next event in the queue without removing it. If it's an ALLEGRO_EVENT_TIMER and it comes from the same timer source, then you can remove it with al_drop_next_event().

If even the logic part alone takes 100% CPU, there is no way you can additionally DRAW something. You need to either:
- consider better hardware
- locate a mistake in your code which makes something very expensive for no good reason (ex: file I/O during logic)
- locate expensive work and make less of it (typically: path-finding)
- reconsider the size of your "time-steps"

If it's an ALLEGRO_EVENT_TIMER and it comes from the same timer source, then you can remove it with al_drop_next_event().

That's what I'm trying to do now (mentioned in this post), but since doing it that way will only clear sequential ALLEGRO_EVENT_TIMER events, it introduces another issue: if I do something that also generates frequent events, such as moving the mouse and introducing new ALLEGRO_EVENT_MOUSE_AXES events, the queue starts alternating between timer events and mouse events.

That goes on for as long as the mouse moves, so rendering remains blocked until I stop moving it. Since the event following a timer event is always ALLEGRO_EVENT_MOUSE_AXES, the timer events can't be cleared properly. Your loop in Allegro Flare is practically identical to what I'm using, so how do you manage to avoid that, if at all?

Update: Maybe I should make it ignore all following timer events after a logic update until it gets a chance to render the last frame? I'm going to try that when I get the chance later today. I was so focused on the idea of dropping them that I forgot I could just ignore them...

If even the logic part alone takes 100% CPU, there is no way you can additionally DRAW something. You need to either:
- consider better hardware
- locate a mistake in your code which makes something very expensive for no good reason (ex: file I/O during logic)
- locate expensive work and make less of it (typically: path-finding)
- reconsider the size of your "time-steps"

It's supposed to be ridiculously expensive; like I said, this came about because I wanted to stress-test my collision system. These conditions would likely never exist in the finished product (i.e., thousands of objects colliding at once), but I want to make sure that it stutters while still being able to render, albeit at a slower rate. It's not using 100% of the CPU; it's just that there's so much going on in the logic update that more timer events are generated than can be satisfied.

Dropping them seems to be the right idea, but it only works if they're sequential and there's nothing in-between...

IMO, it's not a good idea to modify the system to perform better in these cases which never happen if it then performs worse in the cases that do happen.

For example, the typical frame-dropping system performs very well in the case of a tiny hiccup (i.e., an antivirus hogs resources for 1/10th of a second), because it catches up on the missed ticks: over 10 seconds, you keep exactly 600 logic steps. Dropping "timer" events means there will be slowdowns instead (fewer than 600 logic steps over 10 seconds).

This has the advantage of dropping all extra timer events that may have piled up without losing any input. And because you're processing every single event in the queue, you can't get behind. It should slow down somewhat gracefully, without destroying your fixed-step determinism. I'm curious to see how this loop would perform in your collision stress test.

EDIT: If it takes a long time to process your logic, and input events are received in the meantime, this method could fail, because there would be timer events in between the input events.

At some point, you're just going to have to slow down your timer. Ideally you want your logic to take up somewhere around your timer rate minus a few milliseconds to allow for drawing.

EDIT2: What I meant to say was, you want your timer to tick at a rate of your logic duration plus a few milliseconds for drawing. That way, it won't ever get behind.

Alright, I think I've solved it. The input I received from you guys was extremely helpful; it turns out I was waaaaaay overthinking it. In case it helps someone else, here's what I did:

I added a constant to represent the maximum number of frames that rendering can skip (e.g., 10 frames). I also kept track of the number of frames skipped, the value of which is incremented after each logic update.

If the maximum allowed number of frames is skipped, then all following ALLEGRO_EVENT_TIMER events are ignored. This allows the event queue to empty so things can be rendered while preserving all other events.

The value is then reset back to 0 after rendering.

This results in the slowdown/stuttering I was looking for without ceasing to render altogether. I can't believe it took all of this to realize I could just skip the superfluous events.

For the record: This was important to me because while more intensive portions of the game may run fine for me, I can't guarantee that they'll be perfect on anyone else's PC. It's much better to have it slow down from 60 FPS to 40 FPS than to risk losing rendering, imo.

If you are willing to, you could add a freestanding version of your new game loop to the allegro wiki's Timer tutorial, as a second example. That way you could help new users starting to use the API (and veterans as well) have a better way to implement a game loop with Allegro.

Or you can post the code here and either I or someone else can then add it to the tutorial page. The last time it was updated was way back in 2013. Countless Allegro applications have been built using that code as a starting point.

As an aside to this particular problem: if you are doing something that really takes more than a frame, say, complex AI, then you will need to program an interruptible algorithm that keeps its state and intermediate results and only does a bit of work every frame. Or use a separate thread to run that long work.

For the record: This was important to me because while more intensive portions of the game may run fine for me, I can't guarantee that they'll be perfect on anyone else's PC. It's much better to have it slow down from 60 FPS to 40 FPS than to risk losing rendering, imo.

If you drop up to 10 frames, your game could end up running at around 5-6 FPS.

Personally, I think you would be better off with either an adaptive timer rate, or simply dropping all timer events after the first. That way, your game would run as fast as possible, while still keeping your fixed step determinism.

You're right, and that's what I ended up doing. Skipping 10 frames was far too much (and unnecessary for my purposes). Again, another case of overthinking it...

Eventually I just made it so all extra timer events are ignored until rendering has occurred. The frame rate was much better and it did exactly what I wanted. I can see the value of an adaptive timer, and maybe I'll try seeing how that goes at some point, but I'm happy with the current results for the time being.

If you are willing to, you could add a freestanding version of your new game loop to the allegro wiki's Timer tutorial, as a second example. That way you could help new users starting to use the API (and veterans as well) have a better way to implement a game loop with Allegro.

To have the same results as my current loop logic, only one line of the existing example would need to be altered:

if(ev.type == ALLEGRO_EVENT_TIMER){

Would become

if(ev.type == ALLEGRO_EVENT_TIMER && !redraw){

to make sure every frame is drawn and extra timer events are ignored.

Because I prepared it anyway, here's another modified version of the example loop that allows you to set a max frame skip value:

How about not using a timer at all, and just redrawing whenever a redraw is needed? The effect is that the game will slow down if processing the events takes longer than it should, but at least you'll get 1 frame per iteration and it won't clog:

Right, right, I forgot... but you fix that by adding an al_rest() somewhere. You can check the difference between new_time - old_time and 1.0/FPS to get an idea of how much to rest, or something like that. I still much prefer this method to using a timer that potentially clogs the queue.

Right, the granularity of al_rest() (or sleep() or whatever you use) may or may not be a problem, but the point is you can sleep (using whatever method you want) for the required time. To me, using an Allegro timer seems to be a solution to a non-issue, and then there's the potential trouble they may cause (clogging, as you said).

but the point is you can sleep (using whatever method you want) for the required time

Well, the advantage of events is that you will wake up on input events while sleeping. Mouse-moved events, for example, tend to be quite numerous. Maybe you could use al_wait_for_event_timed() for that, though.

Another advantage of timers is being able to have multiple timers active. Trying to simulate that with al_rest would be a nightmare to do yourself, always trying to figure out which "timer" would tick next. Say you had different animation rates or different logic and drawing rates. These would be easier with a timer than with al_rest.