numerical range of int and short



The range for short is: -32,768 to 32,767
The range for int is: -2,147,483,648 to 2,147,483,647

(The size of int is system dependent, which means that on a 16-bit system it would occupy 2 bytes, which would result in a smaller numerical range.)

A simple way to work out the range of a short (the same reasoning applies to int): a short occupies 2 bytes, and 2 bytes hold 16 bits, so there are 2^16 = 65536 possible values. Divide 65536 by 2 to split the values between the negative and non-negative sides: 65536/2 = 32768. But zero must also be included in the range, and it takes its slot from the positive side, so the negative range reaches -32,768 while the positive range only reaches 32,767. The question that comes to mind is why the positive limit is 32,767 instead of 32,768. Let's check a simple case to understand this.

When 30 is divided by 2, the result is 15 on each side, but that split doesn't account for zero itself; zero is only used as the reference point. If zero must also be one of the 30 values, the range becomes -15 to +14: one of the positive slots is given up to represent zero.



Think about an 8-bit number. There are 256 possible bit patterns. If the type is unsigned, those map to values in [0, 255]. But if it's signed, then the "unsigned part" only has 7 bits to work with, so 128 possible combinations ranging over [0, 127]. The sign bit, if zero, is really just an extension of the plain old unsigned zero; otherwise the pattern must necessarily represent some nonzero value, giving 128 possible negative combinations ranging over [-128, -1]. The same principle applies to wider integers.

Even char has an implementation-defined size. sizeof(char) is always 1, but the number of bits in a char (CHAR_BIT) does not have to be 8. Eight bits is the norm, though, so this bit of trivia is really only good for looking smart. Like I hope I am doing now.

Well, it's like this: the last (most significant) bit is treated as the sign bit of a number. With that in mind, and because it makes addition and subtraction quick and easy, most computers nowadays use two's complement notation.

Now, the reason a signed 16-bit integer ranges from -32,768 to 32,767 is the two's complement representation we're using. If you keep incrementing a signed 16-bit integer, it goes from 0 up to 32,767, then wraps to -32,768 and counts back up to -1. In hex, 32,767 is 0x7FFF, -32,768 is 0x8000, and -1 is 0xFFFF.

Positive and negative are defined in terms of their relative distance from zero, so zero itself is neither.

Positive and negative are not defined in terms of distance from zero. Mathematicians typically define positive and negative in terms of direction from zero, where positive and negative are opposite directions from each other. Let's say we have a straight line going from our left to our right, and some point on that line is designated as zero. If points to the right of zero are deemed to represent positive values, points to the left represent negative values.

The designation of which point is marked as zero, and which direction is positive, depends on frame of reference. For example, if we have a ruler with a set of values marked, we might see negative values to the left of zero and positive values to the right of zero. But someone else, looking at the same ruler from the other side, would see the negative values on his right and the positive values on his left.



So, from this perspective, zero is neither positive nor negative.

Yes, direction... not distance! But your argument that the distinction between the two is merely dependent on the frame of reference isn't exactly accurate. Multiplying two negative numbers produces a number of the opposite sign, taking the square root of a negative number produces an imaginary number, etc., so there are some important differences.

Originally Posted by gardhr

But your argument that the distinction between the two is merely dependent on the frame of reference isn't exactly accurate.

True. I left out plenty of conditions and caveats. In a discussion of the meaning of positive versus negative for integral values, delving into complex number theory is a little excessive.

Originally Posted by gardhr

Multiplying two negative numbers produces one of a different sign, taking the square root of one produces an imaginary number, etc, so there are some important differences.

You're delving further into complex number theory than I consider necessary for this discussion.

But, to follow your diversion slightly... the differences you are highlighting are trivial enough that, in complex number theory, they are not really differences at all. The frame of reference remains important, though (in the sense of which directions along the real and imaginary axes are deemed to be "positive").

Multiplying two real values of the same sign always gives a positive value.

In the complex number space, there are always two square roots of any value; for real values, that is true regardless of whether the value is positive or negative. For any complex value, the two square roots are related by a rotation through 180 degrees about the origin. The relationship between a square root of a positive real value and one of the corresponding negative real value is a rotation of 90 degrees about the origin. The sign of the rotational angle (which direction of rotation is positive) still depends on which directions on the real and imaginary axes are stipulated to represent positive.
