One new anomaly concerns performance and computational load. One indication of this, I take it, is how "fast" one can fast-forward a particular game. Upon loading, a game which could previously be fast-forwarded at 50x is now stuck at 10-20x. However, this does not persist: after a few in-game hours, 30-40x is possible again. The slow period does not coincide with a month end, nor does it recur later. Do you have any idea what might cause this (admittedly non-serious) anomaly? I've tested multiple maps and the same behaviour is observable on each. It's excruciating on a map which usually fast-forwards at 16x but can only manage 4-5x upon loading, since it takes a long time to get past the slow phase.

Perhaps more seriously, it's worth noting also that the eventual plateau for fast-forwarding is significantly lower than what was possible in 11.9004 (the most recent version I have) -- performance is, overall, significantly worse. Has there been a significant increase in computational load since that release?

Thank you for your report. There have been quite a number of changes, although none that I should expect would have a large negative impact on computational load. Indeed, I had rather hoped that some of the changes would reduce computational load, particularly with respect to passenger generation, although there are one or two places in the code relating to distributing goods from industries where there is extra work to be done to take account of the separate coverage radius for factories.

Can you upload examples of the saved games with which you have difficulties so that I can carry out a comparison?

I recommend checking these out in 10.27, RC11.9004 and RC11.9006 respectively and playing with the fast-forward function. I expect your precise figures will vary from those I've quoted above, but hopefully you'll at least be able to see the variation in performance between the different versions -- and, particularly, the large difference between RC11.9004 and RC11.9006.

The UK map is especially large and complex so is a good barometer for this sort of thing.

Thank you for that. Looking at the UK map, I get 38 or so with 10.27, about 42 with 11.9005, and about 12 with 11.9006, so there is indeed a performance reduction in some respects compared with the previous build.

Interestingly, the results are not the same with the Bridgewater-Brunel saved game: in 10.27, I get about 3; in 11.9005, around 7; and in 11.9006, around 5. In this case, unlike that of the UK map, the performance improvements since 10.27 outweigh whatever has reduced performance in the latest builds. This is somewhat curious.

I've tested some more maps, though. Here's another archive with two savegames (and one pak folder, which applies to both). Note that the "Third.sve" map seems to have been saved under a different pak name, so it won't appear at the top of the savegame list. These are both very old saves, but they should suffice for this kind of comparative testing.

'Third.sve' exhibits the same ratios you described. It is faster under RC11.9006 than 10.27. However, 'UK July.sve' exhibits the ratios from the two maps I uploaded -- i.e. it's much slower in RC11.9006.

Perhaps this will help to diagnose the relevant difference here. The only difference I can pin down is that all three of the affected maps (balkans, new uk, old uk) involve extensive use of the convoy spacing feature, while Third.sve does not. I'm not sure whether this is true of the Brunel map?

Interesting - what are your system specifications that you can't satisfactorily load the Bridgewater-Brunel game?

That game was one in which, I think, quite a few people used the convoy spacing feature; but it seems unlikely that this would have made much of a difference, as that code does not touch anything performance-critical.

So the performance issue here has to be related to some aspect of the convoy spacing system -- or something on which it depends. There are still many candidates, I guess. A map with heavy convoy spacing has lots of convoys loading at stations at any one time (since they are waiting for spacing), so it could just as easily be related to the goods-loading system.

Could you check whether the convoys are mostly doing *factory goods* loading or *passenger/mail* loading? I have thoughts about how to clean up and speed up both, but the code paths have diverged somewhat.

All passengers. The upshot being that *something* related to convoy spacing for passenger transport (and everything it entails and depends upon) has changed since 11.9004, leading to a degradation of performance.

Thank you all for your reports and feedback. Currently having some trouble tracking this one down, but I can definitely reproduce it (as between 11.9005 and 11.9006). The odd thing is, as Carl reports, the performance in the later versions slowly gets worse, whereas it remains largely steady in 11.9005; I am not sure what might cause this sort of behaviour. I have specifically checked, and there are no significant memory leaks (a few minor memory leaks are found in the image reading code and in the translator code, but the total amount of memory leaked from these sources is very low, and these were all present in earlier versions in any event).

James, could you give me the tags, or the magic commit numbers, for the specific git commits which represent 11.9005 and 11.9006? That should give a finite list of commits to work through to figure out where it broke. (I can use the github "network" tool to go through them.)

If it's not obvious from the list of commits in between the two, then the next thing is to do a bisection: you try the commit halfway in between, and see if the bug is there.

You repeat until you find the commit which introduced the bug.

If there was a major merge of another branch in between those two, the commits just before and just after the merge are the first thing to check (because if the merge introduced it, then you have to go chasing back through the other branch).

Otherwise, it's pretty easy -- there's actually a command-line git tool designed precisely to help with bisection, "git bisect". If you don't know how to do a bisection, I might be able to do it for you (but no promises, not sure how much time I have). I'd need the test file, of course.
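To make the workflow concrete, here is a self-contained toy demonstration of `git bisect run` (the repository, file names, and commit messages are invented for the example; in the real case the "test" is loading the save and checking the fast-forward rate by hand, so you'd use plain `git bisect good`/`git bisect bad` at each step instead of `run`):

```shell
# Toy repo: six commits; a "bug" file appears at commit c3 and persists.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "c0"
for i in 1 2 3 4 5; do
  [ "$i" -eq 3 ] && touch bug          # the regression lands here
  echo "$i" > file
  git add -A
  git commit -q -m "c$i"
done

git bisect start HEAD HEAD~5           # bad = HEAD, good = oldest commit
# "git bisect run" checks out each midpoint and runs the given command:
# exit 0 marks that commit good, non-zero marks it bad.
git bisect run sh -c 'test ! -e bug' > /dev/null
first_bad=$(git bisect log | tail -n 1)
git bisect reset -q
echo "$first_bad"
```

With five candidate commits this takes two or three checkouts rather than five, and the saving grows logarithmically with the number of commits between the good and bad tags.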

Hmm, it might come to bisection -- I was hoping to be able to deduce what sort of thing might cause this first, but I have had no luck so far. The tags are 11.9005 and RC-11.9006 (I forgot to add "RC" for 11.9005). Thank you for looking into this.

The first commit to check in the bisection is ffc7286c55badf825c8f8cc36272cad95fa20753. This is the one before the (seemingly OK) merge from standard. If the bug is NOT present here, then the bug came from standard. Since I don't have the test file, I suggest you test this one.

If the bug IS present here, I have another suggestion, but I'd like to rule out the possibility that the bug came from standard first.

I suspect that the bug was introduced in the commit after that (commit f24ebfd1a5be1a44fca81c308e98c1296dfba18b) but this is just a guess, and with performance bugs like this there is no substitute for bisection.

Thank you for the suggestions on where to start. Will bisect when I return home and have access to my development environment some time next week, and keep everyone posted on what commit caused the problems so that we can consider how best to fix it.

The intention was to make sure that the pakset-dependent waiting times were properly enforced rather than rounded up to whatever was defined as "WTT_LOADING". It is not immediately clear to me how having a minimum waiting time would reduce computational intensity; how do you see the two interacting?

The problem I see here is that wait_lock doesn't get set any more. As a result, I believe a whole lot of vehicles are trying to process movement (checking whether they need to move) when they should just be chilling out. Try this patch, which restores the setting of wait_lock but leaves out the rest of the WTT_LOADING stuff: