
I know that Fusion's had lots of optimisation improvements over the years, which is why I was surprised by some recent tests I did using the A/B tester in VACCiNE (which, as far as I can tell, is reasonably accurate).

See these two comparisons. Since the events in the one on the left are adjacent and have identical conditions, I assumed that the Fusion compiler would consolidate them, resulting essentially in the single event on the right. It seems like an obvious optimisation to make (and I'm sure the compiler contains much cleverer and more intricate optimisations than that). Yet, as you can see in the test results, the right version performs 32% better (I ran the test multiple times to make sure).

I sometimes split up events (as on the left) because it makes the code easier to read (and you can place comments in between particular sections), and I've just assumed that it didn't have any performance impact. Do you think the 32% above is because these use fastloops, or do you think that all types of events would suffer a similar performance decrease when split up?

Now, some of you may have noticed that these tests used a fastloop that was run 700000 times per frame, with the left events causing a performance drop of 24ms, and you might ask: "so what? Isn't that such a tiny drop that it's not worth even considering?" The answer to that is, of course: yes it is :-) ...though perhaps not quite as microscopic as you might think. Humour me:

24ms per 700000 loops is 0.00003428571ms per single loop. That is indeed tiny! But my CPU is pretty fast (Core i7 4770K). What if we ran this code on an older laptop with a weaker CPU? We could feasibly imagine it taking twice as long to do these operations. Assuming it still had a 32% discrepancy between both events, we might then get a 48ms drop, or 0.00007ms per single loop. Now, let's imagine that we're running a full game that has many of these 'split events' throughout its code, including inside fastloops and foreach loops. Without too much stretching of plausibility, we could imagine that the game had 100 such 'split events', which would accumulate to a 0.007ms drop per frame.
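For anyone who wants to follow along, here's the arithmetic as a quick Python sketch. The 24ms and 700000 figures are the measurements above; the 2x slower laptop and the 100 split events are purely hypothetical numbers for the thought experiment:

```python
# Back-of-envelope arithmetic for the split-event cost.
# Measured: a 24 ms drop over 700000 fastloop iterations per frame.
drop_ms = 24.0
loops = 700_000

per_loop_ms = drop_ms / loops
print(f"{per_loop_ms:.11f} ms per loop")      # ~0.00003428571 ms

# Hypothetical older laptop, assumed twice as slow:
per_loop_slow_ms = per_loop_ms * 2            # ~0.00007 ms

# Hypothetical game with 100 such split events:
accumulated_ms = per_loop_slow_ms * 100
print(f"{accumulated_ms:.4f} ms per frame")   # ~0.0069 ms
```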

Now, let's say we're running this game connected to a 144Hz monitor, at 144 frames per second. There are 1000ms in a second, which means that for a smooth 144fps, each frame must be completed in 6.9ms or less. But, because of our performance drop of 0.007ms, we now have a diminished time limit: only 6.893ms to complete each frame. This is... still really tiny. OK, you win :P
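The frame-budget maths in the same sketch form (1000/144 is actually ~6.944 ms; I rounded it down to 6.9 above):

```python
# Frame budget at 144 fps, and what the accumulated drop leaves us.
frame_budget_ms = 1000 / 144          # ~6.944 ms per frame
drop_ms = 0.007                       # accumulated cost of the split events
remaining_ms = frame_budget_ms - drop_ms
print(f"{frame_budget_ms:.3f} ms -> {remaining_ms:.3f} ms")  # 6.944 -> 6.937
```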

I think that kind of optimization job falls to the developer. The compiler would have a hard time knowing why you put those events in different branches within the same loop. You might have your reasons, like one of those phases changing the loop index, which would affect the next phase; combining all of those with the loop-index change into a single loop iteration wouldn't produce the same result. (Well, the compiler could know this, but tracking back all of these things might be very difficult and risky, prone to mistakes. Particularly in low-performance-gain areas, it could result in more hassle than advantage.)
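To illustrate the general hazard in plain Python (a toy model, not Fusion events): if the second phase reads state that the first phase mutates, running the phases as two complete passes is not the same as merging them into one loop. The neighbour-reading phase here is an invented example, but the principle is why a compiler can't blindly merge:

```python
def two_passes(data):
    """Phase 1 runs over the whole list before phase 2 starts."""
    data = list(data)
    for i in range(len(data)):
        data[i] *= 2                           # phase 1: mutate everything
    out = []
    for i in range(len(data)):
        out.append(data[(i + 1) % len(data)])  # phase 2: read a neighbour
    return out

def merged_pass(data):
    """Same actions merged into one loop: phase 2 now reads
    neighbours that phase 1 hasn't touched yet."""
    data = list(data)
    out = []
    for i in range(len(data)):
        data[i] *= 2
        out.append(data[(i + 1) % len(data)])
    return out

print(two_passes([1, 2, 3]))   # [4, 6, 2]
print(merged_pass([1, 2, 3]))  # [2, 3, 2] -- same actions, different result
```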

As for performance, I think every "check" is a little bit of work to do, regardless of whether it's a loop or not. So if you had that split into six "FLAG 0 = ON" events (instead of "on loop >> do this"), I think that would have added a tiny bit as well (I don't know if finding a loop/iterating indices is slower than checking a flag, but probably yes).

Optimizations are worth looking for anywhere, no matter how tiny. Your loops might grow deeper and longer, and added to the zillion other things you do during a frame, this could be that one significant little drop in the sea... who knows.

(I had tested this too and can confirm the same result, btw: fewer "on loop" events with the same actions = faster.)

OK, well, your unpleasant feeling was justified. I just inadvertently found a case where merging two adjacent events with identical conditions makes a big difference. I have no idea why, but the version on the left works (the moving platform pushes the player, and rebounds off walls), while the version on the right completely breaks (the platform goes through the player, and fails to rebound off walls). By isolating bits of the code I can tell that it has something to do with the "destroy" in 2358 and/or the "create object" in 2359, but other than that I don't really understand what's happening.

I can't put together an example right now (at work), but I clearly remember at least two situations (one was in P3D) where I had to fight the same issue: firing two foreach loops in the same event, with the second one going nuts. It was solved by splitting the event in two, one with one foreach and one with the other (like your left image).

I seem to remember thinking something was going crazy with scoping in the second foreach, after the first one had run. One difference from your case is that I was cycling through the same object (qualifier) with both foreach loops, and another difference is that I was not creating/destroying anything.

** warning - very speculative paragraph follows **
I suspect this might have to do with the fact that creating an object is a "scoping" action, and ***perhaps*** this scoping is not "closed" at the end of the actions list after the last iteration of a foreach, thus leaving the object scoped for the next foreach, which is then called for just that one object?

This could be tested, and could be wrong.
Anyway, I'm sure I had issues in at least two cases with two foreach loops fired from the same condition.
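Just to make the hypothesis concrete, here's a tiny Python model of it. To be clear, this is pure speculation about Fusion's internals; the Frame class and its behaviour are invented for illustration of the "scope not closed" idea, nothing more:

```python
class Frame:
    """Toy model: 'scope' is the current object selection; None = all."""
    def __init__(self):
        self.objects = ["a", "b", "c"]
        self.scope = None

    def create(self, name):
        """A 'create object' action narrows the scope to the new object."""
        self.objects.append(name)
        self.scope = [name]

    def foreach(self, visit):
        """Iterate the current selection. Hypothetically, a scope left open
        by an earlier action is never 'closed' before we start."""
        selected = self.scope if self.scope is not None else self.objects
        for obj in list(selected):
            visit(obj)

frame = Frame()

first = []
frame.foreach(first.append)   # first foreach sees all objects
frame.create("d")             # creation leaves scope = ["d"], never closed

second = []
frame.foreach(second.append)  # second foreach inherits the leaked scope
print(first)                  # ['a', 'b', 'c']
print(second)                 # ['d'] -- "called for just one object"
```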

True, and the last image at least makes some sense to me, because the foreach fires "scoped" to the diamond created within the fastloop.

It seems like foreach does something not really intelligible with scoping when there's a creation action involved, with the second one completely ignoring the just-created objects. Same result here (I was making a test very similar to your first one; reported here just for reference):

This might even have to do with the timing of event execution, since there's nothing scoped here, but the global value is not updated at the end of the loop (on the left), while it is if you move the action into a new event (on the right).