If you really want to assign the result back to x, neither form as written is ideal: it should be either x >>= 1 or x /= 2, depending on what you intend to express with the operation. Not because it's faster (any modern compiler will compile all the equivalent variants to identical, fast assembly anyway) but because it's less confusing.
–
leftaroundaboutMay 21 '12 at 13:46

31

I disagree with leftaroundabout. But I think it's noteworthy that many programming languages have an operation called arithmetic shift that keeps the sign bit in place and therefore works for signed values as expected. The syntax may be like x = x >> 1 (in Java, for instance, where x >>> 1 is the logical shift). Also note that depending on the platform and compiler it may be quite reasonable to manually optimize divisions and multiplications using shifts. Think of microcontrollers, for instance, without direct ALU support for multiplication.
–
Hanno BinderMay 21 '12 at 19:59

@HannoBinder: certainly subjective, in particular a lot of habit. IMO, in a language where all arithmetic operators have the ⬜= combinations, these should be used whenever possible. It removes noise and puts emphasis on the fact that x is modified, while the general = operator rather suggests that it takes on a completely new value independent of the old one. — Always avoiding the combined operators (so that it's readable to someone who only knows mathematical operators) may have its point as well, but then you'd need to relinquish the extremely useful ++, --, and +=, too.
–
leftaroundaboutMay 23 '12 at 15:20

This is the most correct answer. The negative-number part is very important to understand.
–
PubbyMay 21 '12 at 7:59

16

The original question was also vague about the term 'best'. 'Best' in terms of speed, readability, exam question to trick students, etc... In the absence of an explanation of what 'best' means, this seems to be the most correct answer.
–
RayMay 21 '12 at 8:01

42

In C++03, both are implementation defined for negative numbers, and might give the same results. In C++11, division is well defined for negative numbers, but shifting is still implementation defined.
–
James KanzeMay 21 '12 at 8:02

6

"Implementation defined" means that the implementer of the compiler got to choose among several implementation choices, usually with substantial constraints. Here, one constraint is that the % and / operators must be consistent for both positive and negative operands so that (a/b)*b+(a%b)==a is true regardless of the signs of a and b. Usually, the implementer will make choices that get the best possible performance out of the CPU.
–
RBerteigMay 21 '12 at 20:35

4

So everyone who says "the compiler will convert it to a shift anyway" is wrong, right? Unless the compiler can guarantee that you are dealing with a non-negative integer (either it is a constant or it is an unsigned int), it can't change it to a shift.
–
KipMay 25 '12 at 19:39

Does the first one look like dividing? No. If you want to divide, use x / 2. The compiler can optimise it to use a bit-shift if possible (it's called strength reduction), which makes it a useless micro-optimisation if you do it on your own.

Many compilers will not turn division by a power of two into a bitshift. That would be an incorrect optimization for signed integers. You should look at the assembly output from your compiler and see for yourself.
–
exDM69Jun 8 '12 at 9:52

1

IIRC I used that to make parallel reduction faster on CUDA (avoid integer div). However this was more than a year ago, I wonder how smart the CUDA compilers are nowadays.
–
NilsJun 8 '12 at 10:43

7

@exDM69: Many compilers will do that even for signed integers, and just adjust them according to the signedness. A nice tool to play with these things is this: tinyurl.com/6uww253
–
PlasmaHHJun 8 '12 at 11:57

15

@exDM69: And that's relevant, how? I said "if possible", not "always". If optimisation is incorrect, then doing it manually doesn't make it correct (plus as mentioned, GCC is smart enough to figure out proper replacement for signed integers).
–
Cat Plus PlusJun 8 '12 at 15:26

1

Looking at the Wikipedia page, this is apparently controversial, but I wouldn't call it a strength reduction. A strength reduction is when, in a loop, you reduce, for instance, a multiplication to an addition by adding to previous values in the loop. This is more of a peephole optimization, which compilers can do pretty reliably.
–
SomeCallMeTimJun 8 '12 at 16:37

even if the compiler didn't reduce it and chose a slower operation than the shift, the likelihood that this ends up affecting your program's performance in a measurable way is itself vanishingly small (and if it does affect it measurably, then you have an actual reason to use a shift)

if the division is going to be part of a larger expression, you're more likely to get the precedence right if you use the division operator:

x = x / 2 + 5;
x = x >> 1 + 5; // not the same as above

signed arithmetic might complicate things even more than the precedence problem mentioned above

to reiterate - the compiler will already do this for you anyway. In fact, it'll convert division by a constant to a series of shifts, adds, and multiplies for all sorts of numbers, not just powers of two. See this question for links to even more information about this.

In short, you buy nothing by coding a shift when you really mean to multiply or divide, except maybe an increased possibility of introducing a bug. Compilers have been smart enough to optimize this kind of thing to a shift, when appropriate, for a lifetime.

It's also worth adding that while there are rules of precedence, there isn't a damn thing wrong with using parentheses. While revamping some production code, I actually saw something of the form a/b/c*d (where a..d denoted numeric variables) instead of the far more readable (a*d)/(b*c).
–
Jack ManeyMay 23 '12 at 20:16

1

The performance and optimizations depend on the compiler and target. For example, I do some work for a microcontroller where anything higher than -O0 is disabled unless you buy the commercial compiler, so the compiler will definitely not turn divides into bitshifts. Furthermore, bitshifts take one cycle and divides take 18 cycles on this target, and since the microcontroller's clock speed is pretty low, this may indeed be a noticeable performance hit (but it depends on your code - you should definitely use / until profiling tells you it's a problem!)
–
DanJun 8 '12 at 10:21

3

@JackManey, if there's any possibility that a*d or b*c would produce an overflow, the less readable form is not equivalent and has an obvious advantage. P.S. I agree that parentheses are your best friend.
–
Mark RansomJun 8 '12 at 16:36

@MarkRansom - A fair point (even though I ran into a/b/c*d in R code--in a context where overflow would mean that something was seriously wrong with the data--and not in, say, a performance-critical block of C code).
–
Jack ManeyJun 8 '12 at 18:10

The code x=x/2; is only "clearer" than x>>=1 if x will never be an odd negative number or one doesn't care about off-by-one errors. Otherwise x=x/2; and x>>=1; have different meanings. If what one needs is the value computed by x>>=1, I would regard that as clearer than x = (x & ~1)/2 or x = (x < 0) ? (x-1)/2 : x/2, or any other formulation I can think of using division by two. Likewise if one needs the value computed by x/=2, that is clearer than (x + ((unsigned)x>>31)) >> 1.
–
supercatNov 1 '13 at 19:10

If the compiler doesn't optimize accordingly, you should use a better compiler.
–
David StoneAug 13 '12 at 0:34

1

@DavidStone: On what processors can a compiler optimize division of a possibly-negative signed integer by any constant other than 1 to be as efficient as a shift?
–
supercatNov 1 '13 at 19:19

1

@supercat: That is a good point. You could of course store the value in an unsigned integer (unsigned types have, I feel, a much worse reputation than they deserve because of signed/unsigned mismatch warnings), and most compilers also have a way of telling them to assume something is true when optimizing. I would prefer wrapping that in a compatibility macro and having something like ASSUME(x >= 0); x /= 2; over x >>= 1;, but that is still an important point to bring up.
–
David StoneNov 2 '13 at 1:17

I agree with other answers that you should favor x / 2 because its intent is clearer, and the compiler should optimize it for you.

However, another reason for preferring x / 2 over x >> 1 is that the behavior of >> is implementation-defined if x is a signed int and is negative.

From section 6.5.7, bullet 5 of the ISO C99 standard:

The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has
an unsigned type or if E1 has a signed type and a nonnegative value,
the value of the result is the integral part of the quotient of
E1 / 2^E2. If E1 has a signed type and a negative value, the resulting
value is implementation-defined.

It's worthwhile to note that the behavior that many implementations define for x>>scalepower on negative numbers will be precisely what is needed when dividing a value by a power of two for purposes such as screen rendering, while using x/scalefactor will be wrong unless one applies corrections to negative values.
–
supercatNov 1 '13 at 19:17

x / 2 is clearer, and x >> 1 is not much faster (according to a micro-benchmark, about 30% faster for a Java JVM). As others have noted, for negative numbers the rounding is slightly different, so you have to consider this when you want to process negative numbers. Some compilers may automatically convert x / 2 to x >> 1 if they know the number cannot be negative (even though I could not verify this).

For Java there is also the unsigned right shift, x >>> 1, which is different again. It allows you to correctly calculate the mean (average) of two values, so that (a + b) >>> 1 will return the mean even for very large values of a and b. This is required, for example, for binary search if the array indices can get very large. There was a bug in many versions of binary search because they used (a + b) / 2 to calculate the average; this overflows and doesn't work correctly for large a and b. The correct solution is to use (a + b) >>> 1 instead.

Compilers cannot convert x/2 to x>>1 in cases where x may be negative. If what one wants is the value which x>>1 would compute, that will almost certainly be faster than any expression involving x/2 which computes the same value.
–
supercatNov 1 '13 at 19:26

You are right. A compiler may only convert x/2 to x>>1 if it knows the value is not negative. I will try to update my answer.
–
Thomas MuellerNov 2 '13 at 7:40

What would you consider the preferred method of scaling a number down by a power of two if one wants integers to uphold the axiom (which applies to natural numbers and real numbers) that (n+d)/d = (n/d)+1? Violating that axiom when scaling graphics will cause visible "seams" in the result. If one wants something which is uniform and almost symmetric about zero, (n+8)>>4 works nicely. Can you offer any approach which is as clear or as efficient without using a signed right shift?
–
supercatNov 20 '13 at 17:29

Take a look at the compiler output to help you decide. I ran this test with
gcc (GCC) 4.2.1 20070719 [FreeBSD]

What you see is that the compiler uses a shift instruction (shrl) in both cases, so it recognizes the similarity between the two expressions. If you use the divide, the compiler also needs to adjust for negative numbers. To do that it shifts the sign bit down to the lowest-order bit and adds that to the result, which fixes the off-by-one issue that shifting negative numbers would otherwise introduce compared to a divide.
Since the divide case does two shifts, while the explicit shift case does only one, we can now explain some of the performance differences measured by other answers here.

Depending upon what one is doing, it may fix the off-by-one error, or it may cause an off-by-one error (compared with what's actually needed) which will necessitate the use of further code to fix it. If what one wants is a floored result, a right-shift is a faster and easier than any alternative I know of.
–
supercatNov 20 '13 at 0:15

If you need a floor, it is unlikely you would describe what you want as "dividing by 2".
–
Michael DonohueNov 20 '13 at 7:03

Division of both natural numbers and real numbers upholds the axiom that (n+d)/d = (n/d)+1. Division of real numbers also upholds (-n)/d = -(n/d), an axiom which is meaningless with natural numbers. It is not possible to have a division operator which is closed on integers and upholds both axioms. To my mind, saying that the first axiom should hold for all numbers and the second only for reals seems more natural than saying that the first should hold for whole numbers or reals but not for integers. Further, I'm curious in what cases the second axiom is actually useful.
–
supercatNov 20 '13 at 16:58

An integer division method which satisfies the first axiom will partition the number line into regions of size d. Such partitioning is useful for many purposes. Even if one would rather have the breakpoint somewhere other than between 0 and -1, adding an offset will move it. An integer division which satisfies the second axiom will partition the number line into regions which are mostly of size d, but one of which is of size 2*d-1. Not exactly "equal" divisions. Can you offer suggestions of when the oddball partition is actually useful?
–
supercatNov 20 '13 at 17:03

@minitech: That's such a bad test. All the code in the test is constant. Before the code is even JITed, it'll eliminate all the constants.
–
M28Jan 27 '14 at 3:44

@M28: I was pretty sure jsPerf’s internals (i.e. eval) made that happen afresh each time. Regardless, yes, it is a pretty bad test, because it’s a very silly optimization.
–
minitech♦Jan 27 '14 at 4:10

I am speaking about programming competitions. They generally have very large inputs where division by 2 takes place many times, and the input is known to be positive or negative.

x>>1 will be better than x/2. I checked on ideone.com by running a program where more than 10^10 division-by-2 operations took place. x/2 took nearly 5.5 s whereas x>>1 took nearly 2.6 s for the same program.

For unsigned values, a compiler should optimize x/2 to x>>1. For signed values, nearly all implementations define x>>1 to have a meaning which is equivalent to x/2 but can be computed faster when x is positive, and is usefully different from x/2 when x is negative.
–
supercatNov 20 '13 at 0:22

Use x = x / 2; or x /= 2;, because it is possible that a new programmer will work on it in the future, and it will be easier for them to see what is going on in that line of code. Not everyone is aware of such optimizations.

Bitshift should be faster, as no special computation is really needed to shift the bits; however, as pointed out, there are potential issues with negative numbers. If you are guaranteed to have positive numbers and are looking for speed, then I would recommend the bitshift.

The division operator is very easy for humans to read.
So if you are looking for code readability, you could use this. Note
that the field of compiler optimization has come a long way, so making code easy
to read and understand is good practice.

Depending on the underlying hardware, operations may have different speeds. Amdahl's law says to make the common case fast, so you may have hardware that can perform some operations faster than others. For example, multiplying by 0.5 may be faster than dividing by 2. (Granted, you may need to take the floor of the multiplication if you wish to enforce integer division.)

If you are after pure performance, I would recommend creating some tests that could do the operations millions of times. Sample the execution several times (your sample size) to determine which one is statistically best with your OS/Hardware/Compiler/Code.

I hope they would, but unless you have access to the source of the compiler, you can't be sure :)
–
VenomFangsJul 31 '13 at 14:57

I would also recommend bitshift if one's implementation handles it in the most common fashion and the way one wants to handle negative numbers matches with what >> does and doesn't match what / does.
–
supercatNov 1 '13 at 19:34

As far as the CPU is concerned, bit-shift operations are faster than division operations.
However, the compiler knows this and will optimize appropriately to the extent that it can,
so you can code in the way that makes the most sense and rest easy knowing that your code is
running efficiently. But remember that an unsigned int can (in some cases) be optimized better than an int for reasons previously pointed out.
If you don't need signed arithmetic, then don't include the sign bit.

The answer to this will depend on the environment you're working under.

If you're working on an 8-bit microcontroller or anything without hardware support for multiplication, bit shifting is expected and commonplace, and while the compiler will almost certainly turn x /= 2 into x >>= 1, the presence of a division symbol will raise more eyebrows in that environment than using a shift to effect a division.

If you're working in a performance-critical environment or section of code, or your code could be compiled with compiler optimization off, x >>= 1 with a comment explaining its reasoning is probably best just for clarity of purpose.

If you're not under one of the above conditions, make your code more readable by simply using x /= 2. Better to save the next programmer who happens to look at your code the 10 second double-take on your shift operation than to needlessly prove you knew the shift was more efficient sans compiler optimization.

All these assume unsigned integers. The simple shift is probably not what you want for signed. Also, DanielH brings up a good point about using x *= 0.5 for certain languages like ActionScript.

This is sometimes used to speed up programs at the cost of clarity. I don't think you should do it. The compiler is smart enough to perform the speedup automatically, which means that putting in a shift gains you nothing at the expense of clarity.

If one wants to compute the value that e.g. (x+128)>>8 would compute for values of x not near the maximum, how could one concisely do it without a shift? Note that (x+128)/256 won't work. Do you know of any nice expression that will?
–
supercatNov 1 '13 at 19:28

Obviously, if you are writing your code for the next guy who reads it, go for the clarity of "x/2".

However, if speed is your goal, try it both ways and time the results. A few months ago I worked on a bitmap convolution routine which involved stepping through an array of integers and dividing each element by 2. I did all kinds of things to optimize it including the old trick of substituting "x>>1" for "x/2".

When I actually timed both ways, I discovered to my surprise that x/2 was faster than x>>1.

This was using Microsoft VS2008 C++ with the default optimizations turned on.