Tutorial: What is faster, (x < 0) or (x == -1)?

Question:

Language: C/C++, but I suppose the answer is the same in other languages.

P.S. I personally think that answer is (x < 0).

More broadly, for the gurus: what if x ranges from -1 to 2^30?

Solution:1

That depends entirely on the ISA you're compiling for, and the quality of your compiler's optimizer. Don't optimize prematurely: profile first to find your bottlenecks.

That said, on x86 you'll find that both are equally fast in most cases. In both cases, you'll have a comparison (cmp) and a conditional jump (jCC) instruction. However, for (x < 0), there may be some instances where the compiler can elide the cmp instruction, speeding up your code by one whole cycle.

Specifically, if the value x is stored in a register and was recently the result of an arithmetic operation (such as add or sub, but there are many more possibilities) that sets the sign flag SF in the EFLAGS register, then there's no need for the cmp instruction, and the compiler can emit just a js instruction. By contrast, there is no simple jCC instruction that jumps when the input was -1.

Solution:2

Why? Whichever you write, the compiler will optimize it for whatever platform you are compiling on.

If you need to check whether it's -1, use (x == -1); if you want to know whether it's less than zero, use (x < 0). Write what you would read out loud.

Tiny things like this won't make anything faster, and you should be worried about readability and clean design rather than which tiny operation is faster.

And even if the compiler makes no changes at all, chances are that on your platform both will execute in one CPU cycle.

Solution:3

Try it and see! Run a million, or better, a billion of each and time them. I bet there is no statistically significant difference in your results, but who knows -- maybe on your platform and compiler you'll find one.

Solution:4

Both operations can be done in a single CPU step, so they should be the same performance wise.

Solution:5

x < 0 will be faster. If nothing else, it avoids fetching the constant -1 as an operand. Most architectures have special instructions for comparing against zero, so that will help too.

Solution:6

It could depend on what operations precede or follow the comparison. For example, if you assign a value to x just before doing the comparison, then it might be faster to check the sign flag than to compare against a specific value. Or the CPU's branch-prediction performance could be affected by which comparison you choose.

But, as others have said, this is dependent upon CPU architecture, memory architecture, compiler, and a lot of other things, so there is no general answer.

Solution:7

The important consideration, anyway, is which comparison actually expresses your program's intent, and which just happens to produce the same result.

If x is actually an index or an enum value, will -1 always be what you want, or will any negative value work? Right now -1 may be the only negative value, but that could change.

Solution:8

You can't even answer this question out of context. If you try for a trivial microbenchmark, it's entirely possible that the optimizer will waft your code into the ether:

    // Get time
    int x = -1;
    for (int i = 0; i < ONE_JILLION; i++) {
        int dummy = (x < 0); // Poof! Dummy is ignored.
    }
    // Compute time difference - in the presence of good optimization
    // expect this time difference to be close to useless.

Solution:9

Same: both operations are usually done in one clock cycle.

Solution:10

It depends on the architecture, but x == -1 is more error-prone; x < 0 is the way to go.

Solution:11

As others have said there probably isn't any difference. Comparisons are such fundamental operations in a CPU that chip designers want to make them as fast as possible.

But there is something else you could consider. Analyze the frequency of each value and test for the most common cases first; this could save you quite a few cycles. Of course, you still need to inspect the generated asm to verify this.

Solution:12

I'm sure you're confident this is a real time-taker.

I would suppose asking the machine would give a more reliable answer than any of us could give.

I've found, even in code like you're describing, that my assumptions about where the time was going were not quite correct. For example, if this comparison sits in an inner loop and there is any sort of function call there, even an invisible one inserted by the compiler, the cost of that call will dominate by far.

Solution:13

Nikolay, you write:

It's actually the bottleneck operation in a high-load program. Performance of these 1-2 lines is much more valuable than readability...

All bottlenecks are usually this small, even in a perfect design with perfect algorithms (though there is no such thing). I do high-load DNA processing and know my field and my algorithms quite well.

If so, why not do the following:

1. Get a timer and set it to 0.

2. Compile your high-load program with (x < 0).

3. Start your program and the timer.

4. When the program ends, read the timer and record result1.

5. Reset the timer (same as step 1).

6. Compile your high-load program with (x == -1).

7. Start your program and the timer (same as step 3).

8. When the program ends, read the timer and record result2.

9. Compare result1 and result2.

You'll get the Answer.
