What architecture did you run on? Did you compile with good optimization settings? I just tried your code, with and without the sort (the C++ variant), and did not find any runtime difference. Having a look at the assembler output (gcc.godbolt.org is handy for that), I could also see that no branch is generated for the if; a cmovge is used instead. With -O2 I see a difference in speed, but not with -O3...
–
PlasmaHHJun 27 '12 at 14:10

194

@GManNickG: I did investigate a bit further, and things are "funny". With O3, both versions (sort/non-sort) are the same speed (4.5), but with O2 they differ (3.1/15.7), so I looked at the O2 version. There is a branch, so GCC seems to optimize for "random data" here. To further test whether it is branch prediction, I tested the O2 code not with sort, but in the creation phase I set/removed the top bit of the byte for one half but not the other. The result is the same, so it really has nothing to do with the data being sorted, but with the if condition being true/false for one half.
–
PlasmaHHJun 27 '12 at 14:16

127

Just to add more fun, on my CPU, when alternating the bits in the input, the branch predictor seems to be able to recognize the pattern. The same for some other alternating bit patterns.
–
PlasmaHHJun 27 '12 at 14:37

34

Instead of doing a complete sort and then summing, in this particular case try a partial sort (i.e. a partition) with a pivot of 128, then sum from the pivot to the end without any branch statements or unreadable bit twiddling.
–
Lie RyanJun 28 '12 at 2:57

43

I think what's most interesting is that the Java VM executes the same code faster than the native C++ version in the original case.
–
dcowJun 30 '12 at 1:56

What is Branch Prediction?

Now for the sake of argument, suppose this is back in the 1800s - before long distance or radio communication.

You are the operator of a junction and you hear a train coming. You have no idea which way it is supposed to go. You stop the train to ask the captain which direction he wants. And then you set the switch appropriately.

Trains are heavy and have a lot of inertia. So they take forever to start up and slow down.

Is there a better way? You guess which direction the train will go!

If you guessed right, it continues on.

If you guessed wrong, the captain will stop, back up, and yell at you to flip the switch. Then it can restart down the other path.

If you guess right every time, the train will never have to stop.
If you guess wrong too often, the train will spend a lot of time stopping, backing up, and restarting.

Consider an if-statement: at the processor level, it is a branch instruction.

You are a processor and you see a branch. You have no idea which way it will go. What do you do? You halt execution and wait until the previous instructions are complete. Then you continue down the correct path.

Modern processors are complicated and have long pipelines. So they take forever to "warm up" and "slow down".

Is there a better way? You guess which direction the branch will go!

If you guessed right, you continue executing.

If you guessed wrong, you need to flush the pipeline and roll back to the branch. Then you can restart down the other path.

If you guess right every time, the execution will never have to stop.
If you guess wrong too often, you spend a lot of time stalling, rolling back, and restarting.

This is branch prediction. I admit it's not the best analogy since the train could just signal the direction with a flag. But in computers, the processor doesn't know which direction a branch will go until the last moment.

So how would you strategically guess to minimize the number of times that the train must back up and go down the other path? You look at the past history! If the train goes left 99% of the time, then you guess left. If it alternates, then you alternate your guesses. If it goes one way every 3 times, you guess the same...

In other words, you try to identify a pattern and follow it. This is more or less how branch predictors work.

Most applications have well-behaved branches. So modern branch predictors will typically achieve >90% hit rates. But when faced with unpredictable branches with no recognizable patterns, branch predictors are virtually useless.
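To make the "look at past history" idea concrete, here is a toy sketch of a 2-bit saturating counter predictor - a deliberately simplified model, not how real hardware is built - run against a sorted-style pattern:

#include <cstdio>

// Toy 2-bit saturating counter: states 0..3, predict "taken" when state >= 2.
struct SaturatingCounter {
    int state = 2;  // start at "weakly taken"
    bool predict() const { return state >= 2; }
    void update(bool taken) {
        if (taken) { if (state < 3) ++state; }
        else       { if (state > 0) --state; }
    }
};

int main() {
    SaturatingCounter bp;
    int correct = 0;
    // A branch that is not-taken 100 times, then taken 100 times,
    // mimicking the sorted-array case: only a handful of mispredictions
    // occur around the switchover.
    for (int i = 0; i < 200; ++i) {
        bool taken = (i >= 100);
        correct += (bp.predict() == taken);
        bp.update(taken);
    }
    std::printf("predicted %d of 200 correctly\n", correct);
}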

As hinted from above, the culprit is this if-statement:

if (data[c] >= 128)
sum += data[c];

Notice that the data is evenly distributed between 0 and 255.
When the data is sorted, roughly the first half of the iterations will not enter the if-statement. After that, they will all enter the if-statement.

This is very friendly to the branch predictor since the branch consecutively goes the same direction many times.
Even a simple saturating counter will correctly predict the branch except for the few iterations after it switches direction.

However, when the data is completely random, the branch predictor is rendered useless because it can't predict random data. Thus there will probably be around 50% misprediction (no better than random guessing).

With the Branch: There is a huge difference between the sorted and unsorted data.

With the Hack: There is no difference between sorted and unsorted data.

In the C++ case, the hack is actually a tad slower than with the branch when the data is sorted.

A general rule of thumb is to avoid data-dependent branching in critical loops. (such as in this example)
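For reference, the "hack" referred to above replaces the if-statement with bit arithmetic. Here is a minimal sketch of the usual form of such a rewrite, assuming the question's data/arraySize setup (it also assumes an arithmetic right shift on signed ints, which is implementation-defined in C++ but what mainstream compilers do):

// Branchless replacement for: if (data[c] >= 128) sum += data[c];
long long sum = 0;
for (unsigned c = 0; c < arraySize; ++c) {
    int t = (data[c] - 128) >> 31;  // t = -1 if data[c] < 128, else 0
    sum += ~t & data[c];            // ~t = 0 drops the value, ~t = -1 keeps it
}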

Update:

GCC 4.6.1 with -O3 or -ftree-vectorize on x64 is able to generate a conditional move. So there is no difference between the sorted and unsorted data - both are fast.

VC++ 2010 is unable to generate conditional moves for this branch even under /Ox.

Intel Compiler 11 does something miraculous. It interchanges the two loops, thereby hoisting the unpredictable branch to the outer loop. So not only is it immune to the mispredictions, it is also twice as fast as whatever VC++ and GCC can generate! In other words, ICC took advantage of the test loop to defeat the benchmark...

If you give the Intel Compiler the branchless code, it just out-right vectorizes it... and is just as fast as with the branch (with the loop interchange).

This goes to show that even mature modern compilers can vary wildly in their ability to optimize code...

Note that with the "hack" (which is equivalent to the cmovge optimization GCC does with -O3, as noted in my comment on the question), the speed may be a bit slower than in the case where branch prediction works "perfectly". So this is once more a case where you might want to optimize your code not only for the data structure, but also for its contents.
–
PlasmaHHJun 27 '12 at 14:19

363

One way to make the train analogy better: say the only way the operator can know whether the switch is correct is if the captain gives him a thumbs up or thumbs down, and the captain sits at the back of the train, so the operator only sees him as the train passes by. That way, if the switch is incorrect, the train has to stop, back up, and then take the correct route.
–
ThomasJun 27 '12 at 16:30

162

I'm amazed at both the question and the answer - this has explained something I only barely knew about. But it raises a question for me. Should you optimize your code to take into account things like branch prediction? Or would that be a case of premature optimization? Knowing the data you are processing would seem to drive the implementation.
–
Peter MJun 29 '12 at 12:50

476

I would like to point out that the reason the Intel compiler does the loop swap is actually far more impressive than just helping branch prediction. The loops as written are almost 100% guaranteed to cause millions of cache misses on every iteration of the outer loop. If you flip the two loops, you get at most 32768 cache misses (disregarding OS preemption). A mispredicted branch here costs nanoseconds. A cache miss can cost milliseconds if it has to fetch from memory. That's a 10000000x improvement.
–
Michael GraczykJul 7 '12 at 5:45

80

@MichaelGraczyk I agree with most of what you say. But that 10000000x is extremely exaggerated. My tests show that the loop-interchange is "only" 2-3x improvement over the sorted cases for GCC and VC++. Note that the data fits entirely into the L2 cache. So no memory access is required beyond the initial data generation. Even when the data doesn't fit in cache, I'd seriously doubt there would be a 10000000x speedup. 100 - 1000x would be more realistic.
–
MysticialJul 8 '12 at 3:27

Branch prediction. With a sorted array, the condition data[c] >= 128 is first false for a streak of values, then becomes true for all later values. That's easy to predict. With an unsorted array, you pay the branching cost.

That is because the body of the loop is small (a single statement). If it had been a larger block, then the cost of the wrong path would be less.
–
VSOverFlowJun 27 '12 at 23:38

11

@Shubham: Because the branch is eventually executed (and then the surrounding context is completely known)? It might be possible to determine this earlier, but when the branch is executed is at least a lower bound in some sense.
–
cicAug 8 '12 at 13:00

1

Does branch prediction work better on sorted arrays than on arrays with other patterns? For example, for the array { 10, 5, 20, 10, 40, 20, ... }, the next element following the pattern would be 80. Would this kind of array be sped up by branch prediction, with the next element being 80 here if the pattern holds? Or does it usually only help with sorted arrays?
–
Adam FreemanSep 23 '14 at 18:58

So basically everything I conventionally learned about big-O is out of the window? Better to incur a sorting cost than a branching cost?
–
Agrim PathakOct 30 '14 at 7:51

7

@AgrimPathak That depends. For input that is not too large, an algorithm with higher complexity can be faster than one with lower complexity if its constants are smaller. Where the break-even point lies can be hard to predict. Also, locality is important. Big-O matters, but it is not the sole criterion for performance.
–
Daniel FischerOct 30 '14 at 10:14

The reason why the performance improves drastically when the data is sorted is that the branch prediction penalty is removed, as explained beautifully in Mysticial's answer.

Now, if we look at the code

if (data[c] >= 128)
sum += data[c];

we can see that the meaning of this particular if... else... branch is to add something when a condition is satisfied. This type of branch can easily be transformed into a conditional move statement, which compiles to a conditional move instruction (cmovl) on x86. The branch, and thus the potential branch prediction penalty, is removed.

In C, and thus C++, the statement that compiles directly (without any optimization) into the conditional move instruction on x86 is the ternary operator ... ? ... : .... So we can rewrite the above statement into an equivalent one:

sum += data[c] >= 128 ? data[c] : 0;

While maintaining readability, we can check the speedup factor.

On an Intel Core i7-2600K @ 3.4GHz and Visual Studio 2010 Release Mode,
the benchmark is (format copied from Mysticial):

The result is robust in multiple tests. We get great speedup when the branch result is unpredictable, but we suffer a little bit when it is predictable. In fact, when using a conditional move, the performance is the same regardless of the data pattern.

Now let's look more closely by investigating the x86 assembly they generate. For simplicity, we use two functions, max1 and max2.
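The two functions aren't reproduced here; from the description, they presumably look something like this sketch (max1 with an explicit if/else branch, max2 with the ternary operator):

int max1(int a, int b) {
    if (a > b)
        return a;
    else
        return b;
}

int max2(int a, int b) {
    return a > b ? a : b;
}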

max2 uses much less code due to its use of the cmovge instruction. But the real gain is that max2 does not involve a branch jump (jmp), which would carry a significant performance penalty if the prediction is wrong.

So why can a conditional move perform better?

In a typical x86 processor, the execution of an instruction is divided into several stages. Roughly, we have different hardware dealing with different stages, so we do not have to wait for one instruction to finish before starting a new one. This is called pipelining.

In a branch case, the following instruction is determined by the preceding one, so we cannot pipeline. We have to either wait or predict.

In a conditional move case, the execution of the conditional move instruction is also divided into several stages, but the earlier stages like Fetch and Decode do not depend on the result of the previous instruction; only the later stages need the result. Thus, we wait a fraction of one instruction's execution time. This is why the conditional move version is slower than the branch when prediction is easy.

Sometimes modern compilers can optimize our code into assembly with better performance, and sometimes they can't (the code in question uses Visual Studio's native compiler). Knowing the performance difference between a branch and a conditional move when the branch is unpredictable can help us write better-performing code when the scenario gets so complex that the compiler cannot optimize it automatically.

I'm confused as to how you got those results. Isn't the ternary operator just an inline branch?
–
TulloJun 28 '12 at 3:12

32

In your Edit section, you forgot to ask the compiler to optimize the code. By default, GCC doesn't perform any optimization. Adding -O2 will give the same assembler code for max1() and max2().
–
ydroneaudJun 28 '12 at 12:58

22

There's no default optimization level unless you add -O to your GCC command lines. (And you can't have a worst english than mine ;)
–
ydroneaudJun 28 '12 at 14:04

42

Please please please please don't benchmark unoptimized code. If GCC compiles your two examples to the same assembly with -O2, then the two pieces of code are equivalent, end of story.
–
Justin L.Oct 10 '12 at 19:38

7

@WiSaGaN The code demonstrates nothing, because your two pieces of code compile to the same machine code. It's critically important that people don't get the idea that somehow the if statement in your example is different from the ternary in your example. It's true that you own up to the similarity in your last paragraph, but that doesn't erase the fact that the rest of the example is harmful.
–
Justin L.Oct 11 '12 at 3:12

+1 for commenting on the loop swap. See my comment on Mysticial's answer. People reading this thread should note that thinking about memory layout and caching is almost always WAY more important than optimizing for branch prediction. (100000x improvement versus 3x improvement)
–
Michael GraczykJul 7 '12 at 5:49

159

Yes, but the 100,000 loop was just to make the benchmark long enough that the timings would be significant. In a real application, this kind of opportunity is rare, and the branch prediction remains a significant factor.
–
Adrian McCarthyJul 11 '12 at 17:22

59

@JasonWilliams: I think you misunderstood the point I was trying to make. The loop to 100,000 is part of the benchmarking framework--it's not part of the code we're trying to optimize.
–
Adrian McCarthyJul 12 '12 at 17:59

73

If you want to cheat, you might as well take the multiplication outside the loop and do sum*=100000 after the loop.
–
JyaifOct 11 '12 at 1:48

3

@Michael - I believe that this is actually an example of loop-invariant hoisting (LIH) optimization, and NOT loop swap. In this case, the entire inner loop is independent of the outer loop and can therefore be hoisted out of it, whereupon the result is simply multiplied by the outer iteration count (1e5). It makes no difference to the end result, but I just wanted to set the record straight since this is such a frequented page.
–
Yair AltmanMar 4 '13 at 12:59

No doubt some of us would be interested in ways of identifying code that is problematic for the CPU's branch-predictor. The Valgrind tool cachegrind has a branch-predictor simulator, enabled by using the --branch-sim=yes flag. Running it over the examples in this question, with the number of outer loops reduced to 10000 and compiled with g++, gives these results:

This lets you easily identify the problematic line - in the unsorted version, the if (data[c] >= 128) line causes 164,050,007 mispredicted conditional branches (Bcm) under cachegrind's branch-predictor model, whereas it causes only 10,006 in the sorted version.

Alternatively, on Linux you can use the performance counters subsystem to accomplish the same task, but with native performance using CPU counters.

Perhaps, but it doesn't explain why the sorted array is faster if you don't know anything about branch prediction. It is, however, a very inspiring post.
–
Pierre ArlaudNov 20 '13 at 8:47

14

@ArlaudAgbePierre: I didn't see any point in reiterating what several other answers had said about that.
–
cafNov 21 '13 at 0:09

2

Of course not, your answer is very interesting. I'm merely explaining why the answer that nurettin found "less inspiring" got chosen in the first place instead of yours. But we're both totally fine with that.
–
Pierre ArlaudNov 21 '13 at 9:20

9

This is scary: in the unsorted list there should be a 50% chance of hitting the add. Somehow the branch prediction only has a 25% miss rate - how can it do better than a 50% miss?
–
tall.b.loDec 9 '13 at 4:00

27

@tall.b.lo: The 25% is of all branches - there are two branches in the loop, one for data[c] >= 128 (which has a 50% miss rate as you suggest) and one for the loop condition c < arraySize which has ~0% miss rate.
–
cafDec 9 '13 at 4:29

Just read through the thread and I feel an answer is missing. A common way to eliminate the branch, which I've found to work particularly well in managed languages, is a table lookup instead of a branch (although I haven't tested it in this case).

This approach works in general if:

It's a small table and is likely to be cached in the processor

You are running things in a quite tight loop and/or the processor can pre-load the data

Background and why

Pfew, so what the hell is that supposed to mean?

From a processor's perspective, your memory is slow. To compensate for the difference in speed, a couple of caches are built into your processor (the L1/L2 caches). So imagine you're doing your nice calculations and figure out that you need a piece of memory. The processor will issue a 'load' operation and load the piece of memory into cache - and then use the cache for the rest of the calculations. Because memory is relatively slow, this 'load' will slow down your program.

Like branch prediction, this was optimized in the Pentium processors: the processor predicts that it needs to load a piece of data and attempts to load that into the cache before the operation actually hits the cache. As we've already seen, branch prediction sometimes goes horribly wrong -- in the worst case scenario you need to go back and actually wait for a memory load, which will take forever (in other words: failing branch prediction is bad, a memory load after a branch prediction fail is just horrible!).

Fortunately for us, if the memory access pattern is predictable, the processor will load it in its fast cache and all is well.

The first thing we need to know is: what counts as small? While smaller is generally better, a rule of thumb is to stick to lookup tables that are <= 4096 bytes in size. As an upper limit: if your lookup table is larger than 64K, it's probably worth reconsidering.

Constructing a table

So we've figured out that we can create a small table. The next thing to do is get a lookup function in place. Lookup functions are usually small functions that use a couple of basic integer operations (and, or, xor, shift, add, subtract and perhaps multiply). You want your input translated by the lookup function into some kind of 'unique key' in your table, which then simply gives you the answer for all the work you wanted done.

In this case: >= 128 means we can keep the value, < 128 means we get rid of it. The easiest way to do that is with an 'AND': if we keep the value, we AND it with 7FFFFFFF; if we want to get rid of it, we AND it with 0. Notice also that 128 is a power of 2 - so we can go ahead and make a table of 32768/128 integers and fill it with one zero and a lot of 7FFFFFFFs.
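A minimal sketch of that idea, assuming the question's data and arraySize; with values in 0..255, a two-entry mask table indexed by the top bit is enough to express it:

// mask[0] clears the value, mask[1] keeps it; data[c] >> 7 picks the entry.
static const int mask[2] = { 0, 0x7FFFFFFF };
long long sum = 0;
for (unsigned c = 0; c < arraySize; ++c)
    sum += data[c] & mask[data[c] >> 7];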

Managed languages

You might wonder why this works well in managed languages. After all, managed languages check the boundaries of the arrays with a branch to ensure you don't mess up...

Well, not exactly... :-)

There has been quite some work on eliminating this branch for managed languages. For example:

for (int i=0; i<array.Length; ++i)
// use array[i]

In this case, it's obvious to the compiler that the boundary condition will never be hit. At least the Microsoft JIT compiler (and I expect Java does similar things) will notice this and remove the check altogether. WOW - that means no branch. Similarly, it will deal with other obvious cases.

If you run into trouble with lookups in managed languages, the key is to add a & 0x[something]FFF to your lookup function to make the boundary check predictable - and watch it go faster.

Because no branch is better than a branch :-) In a lot of situations this is simply a lot faster... if you're optimizing, it's definitely worth a try. They also use it quite a bit in f.ex. graphics.stanford.edu/~seander/bithacks.html
–
atlasteApr 24 '13 at 21:57

In general, lookup tables can be fast, but have you run the tests for this particular condition? You'll still have a branch condition in your code, only now it's moved to the lookup table generation part. You still wouldn't get your perf boost.
–
Zain RizviDec 19 '13 at 21:45

@Zain if you really want to know... Yes: 15 seconds with the branch and 10 with my version. Regardless, it's a useful technique to know either way.
–
atlasteDec 20 '13 at 18:57

5

Why not sum += lookup[data[j]] where lookup is an array with 256 entries, the first ones being zero and the last ones being equal to the index?
–
Kris VandermottenMar 12 '14 at 12:17

As the data is distributed between 0 and 255, when the array is sorted roughly the first half of the iterations will not enter the if-statement (shared below).

if (data[c] >= 128)
sum += data[c];

The question is: what makes the above statement not execute in certain cases, as with sorted data? Here comes the "branch predictor". A branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high effective performance!

Let's do some benchmarking to understand it better.

The performance of an if-statement depends on whether its condition has a predictable pattern. If the condition is always true or always false, the branch prediction logic in the processor will pick up the pattern. On the other hand, if the pattern is unpredictable, the if-statement will be much more expensive.

A “bad” true-false pattern can make an if-statement up to six times slower than a “good” pattern! Of course, which pattern is good and which is bad depends on the exact instructions generated by the compiler and on the specific processor.

So there is no doubt about impact of branch prediction on performance!

One way to avoid branch prediction errors is to build a lookup table, and index it using the data. Stefan de Bruijn discussed that in his answer.

But in this case, we know values are in the range [0, 255] and we only care about values >= 128. That means we can easily extract a single bit that will tell us whether we want a value or not: by shifting the data to the right 7 bits, we are left with a 0 bit or a 1 bit, and we only want to add the value when we have a 1 bit. Let's call this bit the "decision bit".

By using the 0/1 value of the decision bit as an index into an array, we can make code that will be equally fast whether the data is sorted or not. Our code will always add a value, but when the decision bit is 0, we will add the value somewhere we don't care about. Here's a sketch of the code (assuming the question's data and arraySize):
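int a[2] = { 0, 0 };          // a[0] absorbs the adds we don't care about
long long sum = 0;
for (unsigned c = 0; c < arraySize; ++c) {
    int j = data[c] >> 7;     // values are 0..255, so bit 7 is set iff >= 128
    a[j] += data[c];          // always add; only a[1] accumulates what we want
}
sum = a[1];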

This code wastes half of the adds, but never has a branch prediction failure. It's tremendously faster on random data than the version with an actual if statement.

But in my testing, an explicit lookup table was slightly faster than this, probably because indexing into a lookup table was slightly faster than bit shifting. This shows how my code sets up and uses the lookup table (unimaginatively called lut for "LookUp Table" in the code). Here's a sketch of the C++ code (again assuming the question's setup):
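// declare and then fill in the lookup table
int lut[256];
for (unsigned c = 0; c < 256; ++c)
    lut[c] = (c >= 128) ? c : 0;

// use the lookup table after it is built: no branch depends on the data
long long sum = 0;
for (unsigned i = 0; i < 100000; ++i)
    for (unsigned c = 0; c < arraySize; ++c)
        sum += lut[data[c]];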

In this case, the lookup table was only 256 entries, so it fits nicely in cache and all was fast. This technique wouldn't work well if the data were 24-bit values and we only wanted half of them... the lookup table would be far too big to be practical. On the other hand, we can combine the two techniques shown above: first shift the bits over, then index a lookup table. For a 24-bit value where we only want the top half, we could shift the data right by 12 bits and be left with a 12-bit value for a table index. A 12-bit table index implies a table of 4096 values, which might be practical.

EDIT: One thing I forgot to put in.

The technique of indexing into an array, instead of using an if statement, can also be used to decide which pointer to use. I saw a library that implemented binary trees, and instead of having two named pointers (pLeft and pRight, or whatever) it had a length-2 array of pointers and used the "decision bit" technique to decide which one to follow, along the lines of the sketch below (names are illustrative):
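struct Node {
    int value;
    Node* link[2];   // link[1] plays the role of pLeft, link[0] of pRight
};

// the 0/1 result of the comparison indexes the pointer array, replacing
//     if (x < node->value) node = node->pLeft; else node = node->pRight;
node = node->link[x < node->value];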

Right, you can also just use the bit directly and multiply (data[c]>>7 - which is discussed somewhere here as well); I intentionally left this solution out, but of course you are correct. Just a small note: The rule of thumb for lookup tables is that if it fits in 4KB (because of caching), it'll work - preferably make the table as small as possible. For managed languages I'd push that to 64KB, for low-level languages like C++ and C, I'd probably reconsider (that's just my experience). Since typeof(int) = 4, I'd try to stick to max 10 bits.
–
atlasteJul 29 '13 at 12:05

I think indexing with the 0/1 value will probably be faster than an integer multiply, but I guess if performance is really critical you should profile it. I agree that small lookup tables are essential to avoid cache pressure, but clearly if you have a bigger cache you can get away with a bigger lookup table, so 4KB is more a rule of thumb than a hard rule. I think you meant sizeof(int) == 4? That would be true for 32-bit. My two-year-old cell phone has a 32KB L1 cache, so even a 4K lookup table might work, especially if the lookup values were a byte instead of an int.
–
stevehaJul 29 '13 at 22:02

2

Umm...no, you have an if condition still hidden inside your lookup table generation code. No cookie for you
–
Zain RizviDec 19 '13 at 21:41

18

@Zain, try actually benchmarking my code and then decide whether to award me a cookie or not. There is a world of difference between an if in the code to generate a short lookup table, and an if in the main loop processing a large data set. If you really want, you can make a static initializer for the lookup table, but the cost of setting it up is trivial. The loop that fills in the lookup table acts like the sorted list: the if always branches one way on the first half of the table, and always branches the other way on the second half, so there is only one mispredicted branch.
–
stevehaDec 19 '13 at 22:07

4

@Petter I asked Zain to benchmark the code. Now I ask you to do it. Don't believe me, don't trust me, run the code for yourself and time it. You will find that the lookup table code and the "decision bit" code are both faster than the code that has an if while processing the random data. If you find that the "decision bit" version is faster than the lookup table, please tell us all about it; particularly which processor your computer uses. But my goodness, please actually run the code and time it before telling me that "there is nothing to gain".
–
stevehaJul 4 '14 at 17:48

In the sorted case, you can do better than relying on successful branch prediction or any branchless comparison trick: completely remove the branch.

Indeed, the array is partitioned into a contiguous zone with data < 128 and another with data >= 128. So you should find the partition point with a dichotomic search (using Lg(arraySize) = 15 comparisons), then do a straight accumulation from that point.
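A sketch of that approach, assuming the question's data/arraySize setup; std::lower_bound performs the dichotomic search for the first element >= 128:

#include <algorithm>

// Find the partition point (~15 comparisons for arraySize = 32768), then
// accumulate the tail without ever branching on the data values.
int* first = std::lower_bound(data, data + arraySize, 128);
long long sum = 0;
for (int* p = first; p != data + arraySize; ++p)
    sum += *p;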

sum= 3137536 - clever. That's kinda obviously not the point of the question. The question is clearly about explaining surprising performance characteristics. I'm inclined to say that the addition of doing std::partition instead of std::sort is valuable. Though the actual question extends to more than just the synthetic benchmark given.
–
seheJul 24 '13 at 16:31

@DeadMG: this is indeed not the standard dichotomic search for a given key, but a search for the partitioning index; it requires a single compare per iteration. But don't rely on this code, I have not checked it. If you are interested in a guaranteed correct implementation, let me know.
–
Yves DaoustJul 24 '13 at 20:37

1

If the list is already sorted, then it should be possible to divide it in two, and then divide again and again until you find what you are looking for.
–
Doug HaufFeb 3 '14 at 13:51

It's a good thing to do to find gems! Top answer is excellent and correct, further answers (yours included) give some real clues on other ways to address the original problem.
–
Doug LuceApr 22 at 22:15

To execute instruction B or instruction C, the processor will have to wait until instruction A reaches the EX stage of the pipeline, since the decision to go to instruction B or instruction C depends on the result of instruction A. So the pipeline will look like this.

When the if condition returns true:

When the if condition returns false:

As a result of waiting for the result of instruction A, the total CPU cycles spent in the above case (without branch prediction, for both true and false) is 7.

So what is branch prediction?

The branch predictor tries to guess which way a branch (an if-then-else structure) will go before this is known for sure. It does not wait for instruction A to reach the EX stage of the pipeline; instead it guesses the decision and moves on to that instruction (B or C in our example).

In case of a correct guess, the pipeline looks something like this:

If it is later detected that the guess was wrong, then the partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay.
The time wasted in the case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines, so the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline, the greater the need for a good branch predictor. (https://en.wikipedia.org/wiki/Branch_predictor)

In the OP's code, the first time the conditional is evaluated, the branch predictor has no information on which to base a prediction, so it will pick the next instruction arbitrarily. Later in the for loop, it can base its predictions on the history.
For an array sorted in ascending order, there are three possibilities:

All the elements are less than 128

All the elements are greater than or equal to 128

The first elements are less than 128, and the later elements are greater than or equal to 128

Let us assume that the predictor will always assume the true branch on the first run.

So in the first case, the condition is always false: after the initial wrong guess, the predictor learns this and predicts correctly for the rest of the run.
In the 2nd case, the condition is always true, so the predictions are correct from the start.
In the 3rd case, it will initially predict correctly while the elements are less than 128. It will then fail for a short stretch at the transition, and correct itself once it sees the branch prediction failures in its history.

In all these cases, the number of failures is small; only a few times will the pipeline need to discard the partially executed instructions and start over with the correct branch, wasting few CPU cycles.

But with a random, unsorted array, the prediction is wrong roughly half the time; the pipeline must discard the partially executed instructions and start over with the correct branch over and over, costing far more CPU cycles than with the sorted array.