ZigZag encoding maps signed integers to unsigned integers so that numbers with a small absolute value (for instance, -1) have a small varint encoded value too. It does this in a way that "zig-zags" back and forth through the positive and negative integers, so that -1 is encoded as 1, 1 is encoded as 2, -2 is encoded as 3, and so on, as you can see in the following table:

| Signed Original | Encoded As |
| --- | --- |
| 0 | 0 |
| -1 | 1 |
| 1 | 2 |
| -2 | 3 |
| 2147483647 | 4294967294 |
| -2147483648 | 4294967295 |
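The mapping quoted above can be sketched in Java, whose `>>` is an arithmetic (sign-extending) shift, as the formula requires. The class and method names here are illustrative, not taken from any protobuf library:

```java
public class ZigZag {
    // Encode: (n << 1) moves the magnitude up one bit; (n >> 31) is an
    // arithmetic shift, so it is all-ones for negative n and all-zeros
    // otherwise. XORing with it flips the bits of negative values,
    // producing 0, 1, 2, 3, ... for 0, -1, 1, -2, ...
    static int encode(int n) {
        return (n << 1) ^ (n >> 31);
    }

    // Decode: the logical shift (z >>> 1) recovers the magnitude, and
    // -(z & 1) is all-ones exactly when the original number was negative.
    static int decode(int z) {
        return (z >>> 1) ^ -(z & 1);
    }

    public static void main(String[] args) {
        int[] samples = {0, -1, 1, -2, 2};
        for (int n : samples) {
            int z = encode(n);
            System.out.println(n + " -> " + z + " -> " + decode(z));
        }
    }
}
```

Running this prints each sample mapped to its encoded value and back, matching the table (0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4).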

How does (n << 1) ^ (n >> 31) equal what's in the table? I understand that would work for positives, but how does that work for, say, -1? Wouldn't -1 be 1111 1111, and (n << 1) be 1111 1110? (Is bit-shifting on negatives well-defined in any language?)

Nonetheless, using the formula and doing (-1 << 1) ^ (-1 >> 31), assuming a 32-bit int, I get 1111 1111, which is 4 billion, whereas the table thinks I should have 1.

Ah, which is in fact what the next paragraph that I was misreading says. Thanks very much!
– Thanatos, Dec 26 '10 at 8:55
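The arithmetic in the comment above can be checked directly. With Java's arithmetic `>>`, the formula does come out to 1 for -1, not 4 billion; getting all-ones would require a logical shift for the `>> 31` step. A minimal check:

```java
public class ShiftCheck {
    public static void main(String[] args) {
        int n = -1;
        int left = n << 1;    // 0xFFFFFFFE, i.e. -2
        int right = n >> 31;  // arithmetic shift sign-extends: 0xFFFFFFFF, i.e. -1
        System.out.println(Integer.toBinaryString(left));   // 31 ones then a 0
        System.out.println(Integer.toBinaryString(right));  // 32 ones
        System.out.println(left ^ right);                   // prints 1
    }
}
```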


I gave a +1. However, it should be pointed out that the meanings of >> and >>> differ by language/implementation (see Shift Operator). In the case of the protocol-buffers document, it explicitly says an Arithmetic Shift (aka "Signed Shift"), which semantically does as described.
– user166390, Mar 18 '11 at 18:34
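The distinction the comment above draws can be demonstrated in a few lines of Java, where both operators exist side by side (a minimal illustration, not tied to any protobuf code):

```java
public class ShiftSemantics {
    public static void main(String[] args) {
        int n = -8;
        // Arithmetic (signed) shift: copies the sign bit in from the left,
        // so negative values stay negative.
        System.out.println(n >> 1);   // prints -4
        // Logical (unsigned) shift: shifts zeros in from the left,
        // so -8 becomes a large positive value.
        System.out.println(n >>> 1);  // prints 2147483644
    }
}
```

In languages with only one right-shift operator (e.g. C), the behavior of `>>` on negative values is implementation-defined, which is why the protobuf document spells out that an arithmetic shift is intended.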