20

The question is flawed. 32-bit machines can handle numbers much larger than 2^32. They do it all the time, with 'long' and so on. They can only store up to 2^32 in one register, but the software is written to bypass this problem. Some modern languages don't even have a problem with the length of a given number.
–
JFA Jan 11 '14 at 17:46

16

Please keep comments on-topic, polite, and relevant to the technical aspects of the question. Nearly 50 joke comments already had to be removed, and we'd like to avoid having to lock the post. Thank you.
–
nhinkle♦ Jan 11 '14 at 22:05

6

This question has been written in a way that is a bit sloppy. What do you mean by "write" and "display" the number 1000000000000? When you wrote the question you wrote the number 1000000000000, and your web browser displays it just fine, I assume, but this should be nothing strange to anyone that has ever used a computer before. The question asks for free interpretation.
–
HelloGoodbye Jan 12 '14 at 1:39

2

The human consciousness is estimated to hold about 50 bits (I read somewhere). So the question is not "How can I write 10^9 without my PC crashing?" but rather "How can I write 10^(18) without my brain crashing?"
–
Hagen von Eitzen Jan 18 '14 at 9:38

18 Answers
18

You likely count up to the largest possible number with one hand, and then you move on to your second hand when you run out of fingers. Computers do the same thing: if they need to represent a value larger than a single register can hold, they will use multiple 32-bit blocks to work with the data.
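As a rough sketch of that "second hand" idea (illustrative Python, not any particular CPU's mechanism; `to_limbs` and `from_limbs` are made-up helper names):

```python
# Split a large value into 32-bit "limbs", least significant first,
# the way a multi-word integer is stored (illustrative sketch only).
def to_limbs(n, width=32):
    mask = (1 << width) - 1
    limbs = []
    while True:
        limbs.append(n & mask)   # keep the low 32 bits in this "hand"
        n >>= width              # move on to the next hand
        if n == 0:
            return limbs

def from_limbs(limbs, width=32):
    return sum(limb << (i * width) for i, limb in enumerate(limbs))

value = 1000000000000            # needs more than 32 bits (2^32 is about 4.29e9)
limbs = to_limbs(value)
print(limbs)                     # two 32-bit words are enough: [3567587328, 232]
print(from_limbs(limbs) == value)
```

The value 1000000000000 equals 232 * 2^32 + 3567587328, so two 32-bit words suffice.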


203

How do you count on your fingers to 6? - I hold 2nd and 3rd finger up, others down (00110) :)
–
codename- Jan 10 '14 at 15:16

16

Funny, @codename. How then do you count on your fingers to 32 or more (i.e. once 2^5 is exhausted)? ;) The analogy of moving to one's other hand is good...even if binary delays the need to move to one's other hand. What I would like to see is counting to 1,024 or more with the pedial dexterity to move to one's toes for further counting in binary - up to 1,048,575! :) That's potentially 20-bits of daughterboard power. :P
–
J0e3gan Jan 10 '14 at 16:17

14

Please keep comments on-topic and relevant to discussing the technical aspects of this answer. Over 60 joke comments have already been deleted from this answer, and we'd like to avoid having to lock the post.
–
nhinkle♦ Jan 11 '14 at 22:03

2

I think this is not the answer to the relevant question. Answer by @Bigbio2002 is the correct one. Here "1000000000000" is not a number but a text, just like "adsfjhekgnoregrebgoregnkevnregj". What you are saying is true, but I strongly feel this is not the correct answer. And to see so many upvotes...
–
Master Chief Jan 18 '14 at 4:40

You are correct that a 32-bit integer cannot hold a value greater than 2^32-1. However, the value of this 32-bit integer and how it appears on your screen are two completely different things. The printed string "1000000000000" is not represented by a 32-bit integer in memory.

To literally display the number "1000000000000" requires 13 bytes of memory. Each individual byte can hold a value of up to 255. None of them can hold the entire, numerical value, but interpreted individually as ASCII characters (for example, the character '0' is represented by decimal value 48, binary value 00110000), they can be strung together into a format that makes sense for you, a human.
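You can see this byte-per-character layout directly (a Python sketch; any language behaves the same way):

```python
text = "1000000000000"            # the printed number is just characters
data = text.encode("ascii")       # one byte per character
print(len(data))                  # 13 bytes, as described above
print(data[0])                    # '1' is stored as ASCII code 49
print(format(data[1], "08b"))     # '0' is 48, i.e. binary 00110000
```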

A related concept in programming is typecasting, which is how a computer will interpret a particular stream of 0s and 1s. As in the above example, it can be interpreted as a numerical value, a character, or even something else entirely. While a 32-bit integer may not be able to hold a value of 1000000000000, a 32-bit floating-point number will be able to, using an entirely different interpretation.
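A small illustration of that idea (Python's struct module stands in for a C-style typecast here; the byte pattern is arbitrary):

```python
import struct

# One and the same bit pattern, read through two different "types":
raw = b"\x00\x00\x80\x3f"                      # four arbitrary bytes

as_int = struct.unpack("<I", raw)[0]           # read as a 32-bit unsigned integer
as_float = struct.unpack("<f", raw)[0]         # read as a 32-bit IEEE 754 float
print(as_int)                                  # 1065353216
print(as_float)                                # 1.0

# A 32-bit float can hold something *near* 10^12, but not exactly:
approx = struct.unpack("<f", struct.pack("<f", 1000000000000))[0]
print(approx)                                  # 999999995904.0
```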

As for how computers can work with and process large numbers internally, there exist 64-bit integers (which can accommodate values of up to about 18 billion billion, i.e. 2^64-1), floating-point values, as well as specialized libraries that can work with arbitrarily large numbers.

Actually that is mostly correct, but not quite. A 32-bit floating-point number is unlikely to be able to accurately represent 1000000000000. It will represent a number very, very close to the desired number, but not exactly it.
–
Tim B Jan 9 '14 at 10:54

6

@TimB: Have you heard about the decimal32 format? It's part of the IEEE 754-2008 standard. This format is capable of correctly representing this number :)
–
V-X Jan 9 '14 at 12:54

15

True, it can. However, that is not the format people mean when they say "float", which usually refers to a 32-bit floating-point number as stored and used by the standard floating-point processors in current computers.
–
Tim B Jan 9 '14 at 13:06

1

@TimB indeed. The closest number to that which can be represented as a float32 is 999999995904
–
greggo Jan 9 '14 at 18:03

First and foremost, 32-bit computers can store numbers up to 2^32-1 in a single machine word. A machine word is the amount of data the CPU can process in a natural way (i.e. operations on data of that size are implemented in hardware and are generally fastest to perform). 32-bit CPUs use words consisting of 32 bits, thus they can store numbers from 0 to 2^32-1 in one word.

Second, 1 trillion and 1000000000000 are two different things.

1 trillion is the abstract concept of a number

1000000000000 is text

By pressing 1 once and then 0 12 times you're typing text. 1 inputs 1, 0 inputs 0. See? You're typing characters. Characters aren't numbers. Typewriters had no CPU or memory at all and they were handling such "numbers" pretty well, because it's just text.

Proof that 1000000000000 isn't a number, but text: it can mean 1 trillion (in decimal), 4096 (in binary) or 281474976710656 (in hexadecimal). It has even more meanings in different systems. The meaning of 1000000000000 is a number, and storing it is a different story (we'll get back to that in a moment).
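That base-dependence is easy to check (a quick Python illustration):

```python
digits = "1000000000000"           # the same string of characters...
print(int(digits, 10))             # 1000000000000  (one trillion, decimal)
print(int(digits, 2))              # 4096            (2^12, read as binary)
print(int(digits, 16))             # 281474976710656 (16^12, read as hexadecimal)
```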

To store the text (in programming it's called a string) 1000000000000 you need 14 bytes (one for each character plus a terminating NULL byte that basically means "the string ends here"). That's 4 machine words. 3 and a half would be enough, but as I said, operations on machine words are fastest. Let's assume ASCII is used for text storage, so in memory it will look like this (converting ASCII codes corresponding to 0 and 1 to binary, each word on a separate line):

00110001 00110000 00110000 00110000
00110000 00110000 00110000 00110000
00110000 00110000 00110000 00110000
00110000 00000000

Four characters fit in one word; the rest is moved to the next word, and so on until everything (including the terminating NULL byte) fits.

Now, back to storing numbers. It works just like with overflowing text, but they are fitted from right to left. It may sound complicated, so here's an example. For the sake of simplicity let's assume that:

our imaginary computer uses decimal instead of binary

one byte can hold numbers 0..9

one word consists of two bytes

Here's an empty 2-word memory:

0 0
0 0

Let's store the number 4:

0 4
0 0

Now let's add 9:

1 3
0 0

Notice that both operands would fit in one byte, but not the result. But we have another one ready to use. Now let's store 99:

9 9
0 0

Again, we have used second byte to store the number. Let's add 1:

0 0
0 0

Whoops... That's called integer overflow and is a cause of many serious problems, sometimes very expensive ones.

But if we expect that overflow will happen, we can do this:

0 0
9 9

And now add 1:

0 1
0 0

It becomes clearer if you remove byte-separating spaces and newlines:

0099 | +1
0100

We have predicted that overflow may happen and we may need additional memory. Handling numbers this way isn't as fast as with numbers that fit in single words and it has to be implemented in software. Adding support for two-32-bit-word numbers to a 32-bit CPU doesn't make it a 64-bit CPU (it still can't operate on 64-bit numbers natively; it just works around the limit in software).
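Sticking with the imaginary decimal machine, the carry handling can be sketched as follows (illustrative Python; digits are stored least significant first, and `add_with_carry` is a made-up helper):

```python
# Multi-byte addition on the imaginary decimal machine above:
# each "byte" holds a single digit 0..9, least significant digit first.
def add_with_carry(a, b):
    result, carry = [], 0
    for x, y in zip(a, b):
        total = x + y + carry
        result.append(total % 10)   # what fits in this byte
        carry = total // 10         # what spills into the next one
    if carry:
        result.append(carry)        # grow into extra memory if needed
    return result

# 99 + 1, stored least-significant-digit first: [9, 9] + [1, 0]
print(add_with_carry([9, 9], [1, 0]))   # [0, 0, 1], i.e. 100
```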

Everything I have described above applies to binary memory with 8-bit bytes and 4-byte words too; it works pretty much the same way.

Your answer reads rather condescendingly. OP is clearly talking about the number, not the text: "...large as the number 1 trillion (1000000000000)". Also, you are almost talking about arbitrary-precision arithmetic, but you never really mention any of the terms for what you are saying...
–
MirroredFate Jan 8 '14 at 1:22

@ElzoValugi It is. I had to find some way to present the concept of abstract number, as opposed to string representing a number. I believe "1 trillion" is a better and less ambiguous way to do it (see the proof in answer).
–
gronostaj Jan 8 '14 at 11:09

19

@MirroredFate I disagree with 'is clearly talking about the number'. OP says 'displayed fine', which clearly is talking about the text '1000000000000' to me...
–
Joe Jan 8 '14 at 20:49

4

@yannbane 'A' is a character and not a number. '?' is a character and not a number. '1' is a character and not a number too. Characters are just symbols. They can represent digits or numbers, but definitely they aren't numbers. '1' can stand for one, ten, hundred, thousand and so on, it's just a symbol that stands for a digit that can be a number or its part. '10' (string of characters) can mean two or eight or ten or sixteen etc. but when you say you have ten apples, you're using a number ten and everybody knows what you mean. There's a huge difference between characters and numbers.
–
gronostaj Jan 9 '14 at 15:52

You are also able to write "THIS STATEMENT IS FALSE" without your computer crashing :) @Scott's answer is spot-on for certain calculation frameworks, but your question of "writing" a large number implies that it's just plain text, at least until it's interpreted.

Edit: now with less sarcasm and more useful information on different ways a number can be stored in memory. I'll be describing these with higher abstraction, i.e. in terms that a modern programmer may be writing code in before it's translated to machine code for execution.

Data on a computer has to be restricted to a certain type, and a computer definition of such a type describes what operations can be performed on this data and how (i.e. compare numbers, concatenate text or XOR a boolean). You can't simply add text to a number, just like you can't multiply a number by text, but some of these values can be converted between types.

Let's start with unsigned integers. In these value types, all bits are used to store information about digits; yours is an example of a 32-bit unsigned integer where any value from 0 to 2^32-1 can be stored. And yes, depending on the language or architecture of the platform used you could have 16-bit integers or 256-bit integers.

What if you want to go negative? Intuitively, signed integers are the name of the game. The convention is to allocate all values from -2^(n-1) to 2^(n-1)-1; this way we avoid the confusion of having to deal with two ways to write +0 and -0. So a 32-bit signed integer would hold a value from -2147483648 to 2147483647. Neat, isn't it?

Ok, we've covered integers, which are numbers without a decimal component. Expressing numbers with a fractional component is trickier: the non-integer part can sensibly only be somewhere between 0 and 1, so every extra bit used to describe it would increase its precision: 1/2, 1/4, 1/8... The problem is, you can't precisely express a simple decimal like 0.1 as a finite sum of fractions whose denominators are powers of two! Wouldn't it be much easier to store the number as an integer, but agree on where to put the radix (decimal) point instead? These are called fixed-point numbers, where we store 1234100 but agree on a convention to read it as 1234.100 instead.
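A minimal fixed-point sketch of that convention (the SCALE of 1000, i.e. three decimal places, is just the convention from the 1234.100 example; the helper names are made up):

```python
SCALE = 1000   # fixed convention: the stored integer means value / 1000

def to_fixed(x):
    return round(x * SCALE)          # store as a plain integer

def fixed_to_str(n):
    return f"{n // SCALE}.{n % SCALE:03d}"   # re-insert the agreed radix point

stored = to_fixed(1234.1)
print(stored)                # 1234100 -- just an integer in memory
print(fixed_to_str(stored))  # read back as 1234.100 by convention
```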

A more common type used for calculations is floating point. The way it works is really neat: it uses one bit to store the sign value, then some bits to store the exponent and the significand. There are standards that define such allocations, but for a 32-bit float the maximum number you would be able to store is an overwhelming

(2 - 2^-23) * 2^(2^7 - 1) ≈ 3.4 * 10^38

This, however, comes at the cost of precision. JavaScript available in browsers uses 64-bit floats, and it still can't get things right. Just copy this into the address bar and press Enter. Spoiler alert: the result is not going to be 0.3.

javascript:alert(0.1+0.2);

There are more alternative types like Microsoft .NET 4.5's BigInteger, which theoretically has no upper or lower bounds and has to be calculated in "batches"; but perhaps the more fascinating technologies are those that understand math, like Wolfram Mathematica engine, which can precisely work with abstract values like infinity.

True, if a computer insists on storing numbers using a simple binary representation of the number in a single word (4 bytes on a 32-bit system), then a 32-bit computer can only store numbers up to 2^32. But there are plenty of other ways to encode numbers depending on what it is you want to achieve with them.

One example is how computers store floating-point numbers. Computers can use a whole bunch of different ways to encode them. The standard IEEE 754 defines rules for encoding numbers larger than 2^32. Crudely, computers can implement this by dividing the 32 bits into different parts, some bits representing the digits of the number and other bits representing the size of the number (i.e. the exponent, 10^x). This allows a much larger range of numbers in size terms, but compromises the precision (which is OK for many purposes). Of course the computer can also use more than one word for this encoding, increasing the precision of the magnitude of the available encoded numbers. The decimal32 version of the IEEE standard allows numbers with about 7 decimal digits of precision and magnitudes of up to about 10^96.
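That "crude division" of bits can be seen directly. This sketch unpacks a binary32 float into its three fields (the field widths, 1/8/23, are the IEEE 754 binary32 layout; `float32_fields` is just an illustrative helper):

```python
import struct

# Pull apart the three fields of a 32-bit IEEE 754 float.
def float32_fields(x):
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign = bits >> 31                 # 1 bit: the sign
    exponent = (bits >> 23) & 0xFF    # 8 bits: the "size" of the number
    fraction = bits & 0x7FFFFF        # 23 bits: the digits (significand)
    return sign, exponent, fraction

print(float32_fields(1.0))    # (0, 127, 0): the exponent is stored biased by 127
print(float32_fields(-2.0))   # (1, 128, 0): sign bit set, exponent 127 + 1
```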

But there are many other options if you need the extra precision. Obviously you can use more words in your encoding without limit (though with a performance penalty to convert into and out of the encoded format). If you want to explore one way this can be done there is a great open-source add-in for Excel that uses an encoding scheme allowing hundreds of digits of precision in calculation. The add-in is called Xnumbers and is available here. The code is in Visual Basic which isn't the fastest possible but has the advantage that it is easy to understand and modify. It is a great way to learn how computers achieve encoding of longer numbers. And you can play around with the results within Excel without having to install any programming tools.

You can write any number you like on paper. Try writing a trillion dots on a white sheet of paper. It's slow and inefficient. That's why we have a ten-digit system to represent those big numbers. We even have names for big numbers like "million", "trillion" and more, so you don't say one one one one one one one one one one one... out loud.

32-bit processors are designed to work most quickly and efficiently with blocks of memory that are exactly 32 binary digits long. But we people commonly use the ten-digit numeric system, and computers, being electronic, use the two-digit system (binary). The numbers 32 and 64 just happen to be powers of 2, just as a million and a trillion are powers of 10. It's easier for us to operate with these numbers than with multiples of 65536, for example.

We break big numbers into digits when we write them on paper. Computers break down numbers into a greater number of digits. We can write down any number we like, and so can computers, if we design them to.

32bit and 64bit refer to memory addresses. Your computer memory is like post office boxes, each one has a different address. The CPU (Central Processing Unit) uses those addresses to address memory locations on your RAM (Random Access Memory). When the CPU could only handle 16bit addresses, you could only use 32mb of RAM (which seemed huge at the time). With 32bit it went to 4+gb (which seemed huge at the time). Now that we have 64bit addresses the RAM goes into terabytes (which seems huge).
However the program is able to allocate multiple blocks of memory for things like storing numbers and text, that is up to the program and not related to the size of each address. So a program can tell the CPU, I'm going to use 10 address blocks of storage and then store a very large number, or a 10 letter string or whatever.
Side note: Memory addresses are pointed to by "pointers", so the 32- and 64-bit value means the size of the pointer used to access the memory.

Good answer except for the details - 16bits of address space gave you 64kb, not 32mb, and machines like the 286 had 24-bit addresses (for 16mb). Also, with 64-bit addresses, you go well beyond terabytes - more like 16 exabytes - terabytes is around the kind of limits motherboards/CPUs of the present generation are imposing - not the size of the addresses.
–
Phil Jan 8 '14 at 7:58

4

32-bit refers to machine word size, not memory addresses. As Phil mentioned, 286 was 16-bit CPU but used 24 bits for addressing through memory segmentation. x86 CPUs are 32-bit, but use 36-bit addressing. See PAE.
–
gronostaj Jan 8 '14 at 8:30

Because displaying the number is done using individual characters, not integers. Each digit in the number is represented with a separate character literal, whose integer value is defined by the encoding being used; for example, 'a' is represented with ASCII value 97, while '1' is represented with 49. Check the ASCII table here.
For display purposes, both 'a' and '1' are the same: they are character literals, not integers. Each character literal can have a maximum value of 255 on a 32-bit platform, with the value stored in 8 bits or 1 byte (that's platform dependent, but 8 bits is the most common character size), so they can be grouped together and displayed. How many separate characters can be displayed depends on the RAM you have: with just 1 byte of RAM you could hold just one character, while with 1 GB of RAM you could hold 1024*1024*1024 characters.

This limitation applies to calculations too. However, I guess you're interested in the IPv4 standard. Although it's not entirely related to a computer's bit size, it somehow affected the standards. When the IPv4 standard was created, the IP values were stored in 32-bit integers. Once the size was given, it became the standard. Everything we know about the internet depended on that, and then we ran out of IP addresses to assign. So if the IPv4 standard were revised to 64 bits, everything would just stop working, including your router (I assume this to be correct) and other networking devices. So a new standard had to be created, which just swapped the 32-bit integer for a 128-bit one, and adjusted the rest of the standard. Hardware manufacturers just need to declare that they support the new standard and it'll go viral. Although it's not that simple, I guess you get the point here.

Disclaimer: Most of the points mentioned here are true to my assumption. I may have missed important points here to make it simpler. I am not good with numbers, so I must have missed some digits, but my point here is to answer the OP's question about why it won't crash the PC.

I haven't downvoted, but there are a number of problems with your answer. 1 is 0x31 in ASCII, not 0x1. 1 GB = 1024^3 B. IPv4 was invented before 32-bit CPUs were introduced, so saying that addresses were stored in 32-bit integers is in conflict with OP's question. And finally, IPv6 is using 128-bit addresses, not 64-bit.
–
gronostaj Jan 8 '14 at 7:37

In processors, there are "words".
There are different kinds of words. When people say "32-bit processor", they mostly mean "memory bus width". This word consists of different "fields", which refer to subsystems of the computer corresponding to transmitting (24 bits) and control (the other bits). I may be wrong about the exact numbers; make sure of them through the manuals.

A completely different aspect is computation. The SSE and MMX instruction sets can store long integers. The maximum length without loss of performance depends on the current SSE version, but it is always a multiple of 64 bits.

Current Opteron processors can handle 256-bit wide numbers (I'm not sure about integers, but floating point is for sure).

Summary: (1) bus width is not directly connected to computation width, (2) even the different words (memory word, register word, bus word, etc.) are not connected to each other, other than sharing a common divisor of 8, 16, or 24. Many processors even used 6-bit words (but that's history).

The purpose of a computing device, generally, is to accept, process, store, and emit data. The underlying hardware is merely a machine that helps perform those four functions. It can do none of those without software.

Software is the code that tells the machine how to accept data, how to process it, how to store it, and how to provide it to others.

The underlying hardware will always have limitations. In the case of a 32 bit machine, most of the registers that process data are only 32 bits wide. This doesn't mean, though, that the machine can't handle numbers beyond 2^32, it means that if you want to deal with larger numbers, it may take the machine more than one cycle to accept it, process it, store it, or emit it.

The software tells the machine how to handle numbers. If the software is designed to handle large numbers, it sends a series of instructions to the CPU that tell it how to handle the larger numbers. For instance, your number can be represented by two 32 bit registers. If you wanted to add 1,234 to your number, the software would tell the CPU to first add 1,234 to the lower register, then check the overflow bit to see if that addition resulted in a number too big for the lower register. If it did, then it adds 1 to the upper register.
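That lower/upper register bookkeeping can be sketched like this (Python stands in for the CPU here; `add_small` is a made-up helper, not a real instruction, and the values are hypothetical):

```python
MASK32 = 0xFFFFFFFF   # one 32-bit register holds 0 .. 2^32 - 1

def add_small(low, high, amount):
    """Add `amount` to a 64-bit value split across two 32-bit registers."""
    total = low + amount
    low = total & MASK32              # what fits in the lower register
    if total > MASK32:                # the overflow bit fired...
        high = (high + 1) & MASK32    # ...so add 1 to the upper register
    return low, high

# Store 2^32 + 100 as (low=100, high=1), then add 1234:
print(add_small(100, 1, 1234))        # (1334, 1): no carry this time

# A case where the lower register does overflow:
print(add_small(0xFFFFFFFF, 0, 1))    # (0, 1): the carry moved up a register
```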

In the same way elementary schoolchildren are taught to add with carry, the CPU can be told to handle numbers larger than it can hold in a single register. This is true for most generic math operations, for numbers of any practical size.

You are correct that for a theoretical 8-bit machine, we are only able to store 2^8 values in a single processor register or memory address. (Please keep in mind this varies from "machine" to "machine" based on processor used, memory architecture, etc. But for now, let's stick with a hypothetical 'stereotype' machine.)

For a theoretical 16-bit machine, the max value in a register/memory location would be 2^16, for a 32-bit machine, 2^32, etc.

Over the years, programmers have devised all sorts of chicanery to store and handle numbers greater than can be stored in a single processor register or memory address. Many methods exist, but they all involve using more than one register/memory address to store values larger than their "native" register/memory location width.

All of these methods are beneficial in that the machine can store/process values larger than their native capacity. The downside is almost all approaches require multiple machine instructions/reads/etc. to handle these numbers. For the occasional large number, this isn't a problem. When dealing with lots of large numbers (large memory addresses in particular) the overhead involved slows things down.

Hence the general desire to make registers, memory locations and memory address hardware "wider" and wider in order to handle large numbers "natively" so such numbers can be handled with the minimum number of operations.

Since number size is infinite, processor register/memory size/addressing is always a balance of native number size and the costs involved in implementing larger and larger widths.

32 bit computers can only store numbers up to 2^32 in a single machine word, but that doesn't mean that they can't handle larger entities of data.

The meaning of a 32-bit computer is generally that the data bus and address bus are 32 bits wide, which means that the computer can handle 4 GB of memory address space at once, and send four bytes of data at a time over the data bus.

That however doesn't limit the computer from handling more data; it just has to divide the data into four-byte chunks when it's sent over the data bus.

The regular Intel 32-bit processor can handle 128-bit numbers internally, which would let you handle numbers like 100000000000000000000000000000000000000 without any problem.

You can handle much bigger numbers than that in a computer, but then the calculations have to be done by software; the CPU doesn't have instructions for handling numbers larger than 128 bits. (It can handle much larger numbers in the form of floating-point numbers, but then you only have 15 digits of precision.)

If you type 1000000000000 into a calculator, for example, the computer will treat it as a real-type number with a decimal point. The 32-bit limit you mentioned applies more to integer-type numbers without a decimal point. Different data types use different methods of fitting into bits/bytes.

Real type numbers:
Real type numbers contain a value with a floating point and an exponent, and you can enter much larger numbers, but with limited accuracy/precision. (http://msdn.microsoft.com/en-us/library/6bs3y5ya.aspx) For example, LDBL (long double) in C++ has a maximum exponent of 308, so you could theoretically enter or obtain as a result the number 9.999 x 10^308, i.e. up to 309 digits, but only the 15 most significant digits will be used to represent it; the rest will be lost because of the limited precision.
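The figures above are for C++ doubles, but Python's float is the same 64-bit IEEE 754 double, so it can illustrate the same limits:

```python
import sys

# Python floats are the same 64-bit IEEE 754 doubles discussed above.
print(sys.float_info.max)   # about 1.8 * 10^308, the exponent-308 ceiling
print(sys.float_info.dig)   # 15: decimal digits that reliably survive

# Past that precision, distinct written numbers collapse into one value:
print(1e16 + 1 == 1e16)     # True: the extra 1 is below the precision floor
```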

Additionally, there are different programming languages and they could have different implementations of number limits. So you can imagine that specialized applications could handle much larger (and/or more exact/precise) numbers than C++.

Just adding a note to the many other answers, because this is a pretty important fact in this question that has been missed.

"32 bit" refers to the memory address width. It has nothing to do with the register size. Many 32-bit CPUs likely have 64 or even 128-bit registers. In particular, referring to the x86 product line, the recent consumer CPUs, which are all 64-bit, possess up to 256-bit registers for special purposes.

This difference between the register width and the address width has existed since ancient times, when we had 4 bit registers and 8 bit addresses, or vice versa.

It is simple to see that storing a large number is no problem regardless of the register size, as explained in other answers.

The reason why the registers, of whatever size they may happen to be, can additionally also calculate with larger numbers, is that too large calculations can be broken down into several smaller ones that do fit into the registers (it's just a tiny bit more complicated in reality).

So basically it's the same as using the normal + - * / operators, just with a library to break the numbers up and store them internally as multiple machine-word-sized (i.e. 32-bit) numbers. There are also scanf()-type functions for handling converting text input to integer types.

The structure of mpz_t is exactly like Scott Chamberlain's example of counting to 6 using two hands. It's basically an array of machine-word-sized mp_limb_t values, and when a number is too large to fit in a machine word, GMP uses multiple mp_limb_t limbs to store the high/low parts of the number.

The answers already given are actually pretty good, but they tend to address the issue from different sides and thus present an incomplete picture. They're also a bit overly technical, in my opinion.

So, just to clarify something that's hinted at but not explicitly expressed in any of the other answers, and which I think is the crux of the matter:

You're mixing up several concepts in your question, and one of them ("32 bit") can actually refer to a variety of different things (and different answers have assumed different interpretations). These concepts all have something to do with the number of bits (1's and 0's) used (or available) in various computing contexts (what I mean by this will hopefully be clarified by the examples below), but the concepts are otherwise unrelated.

Explicitly:

"IPv4/6" refers to internet protocol, a set of rules defining how information is to be packaged and interpreted on the internet. The primary (or at least the most well-known) distinction between IPv4 and IPv6 is that the address space (i.e. the set of addresses that can be used to distinguish between different locations on the network) is larger in IPv6. This has to do with how many bits in each packet of data sent across the network are allocated for (i.e. set aside for the purpose of) identifying the packet's sender and intended recipient.

Non-computing analogy: Each packet is like a letter sent via snail-mail, and the address space is like the amount of characters you're "allowed" to use when writing the address and return-address on the envelope.

I don't see this mentioned in any of the other answers so far.

Computer-memory "words" (32-bit and 64-bit) can generally be thought of as the smallest piece of data that a computer uses, or "thinks" in. These bits of data come together to make up other bits of data, such as chunks of text or larger integers.

Non-computing analogy: words can be thought of a bit like letters making up words on paper, or even as individual words in a train of thought.

32-bit pointers may or may not be words, but they are nevertheless treated atomically (i.e. as individual units that can't be broken down into smaller components). Pointers are the lowest-level way in which a computer can record the location in memory of some arbitrary chunk of data. Note that the pointer size used by the computer (or, really, by the operating system) limits the range of memory that can be accessed by a single pointer, since there are only as many possible memory locations that a pointer can "point" to as there are possible values for the pointer itself. This is analogous to the way in which IPv4 limits the range of possible internet addresses, but does not limit the amount of data that can be present in, for instance, a particular web page. However, pointer size does not limit the size of the data itself to which the pointer can point. (For an example of a scheme for allowing data size to exceed pointer range, check out Linux's inode pointer structure. Note that this is a slightly different use of the word "pointer" than is typical, since pointer usually refers to a pointer in to random access memory, not hard drive space.)

Non-computing analogy: hmmmm....this one's a bit tricky. Perhaps the Dewey decimal system for indexing library materials is a bit similar? Or any indexing system, really.

In your mind you only know 10 different digits. 0 to 9. Internally in your brain, this is certainly encoded differently than in a computer.

A computer uses bits to encode numbers, but that is not important. That's just the way engineers chose to encode stuff, but you should ignore that. You can think of it as a 32 bit computer has a unique representation of more than 4 billion different values, while us humans have a unique representation for 10 different values.

Whenever we must comprehend a larger number, we use a system. The leftmost digit is the most significant: it is worth ten times as much as the next.

A computer able to differentiate between four billion different values will similarly have to make the leftmost value in a set of values four billion times as significant as the next value in that set. Actually, a computer does not care at all. It does not assign "importance" to numbers. Programmers must write special code to take care of that.

Whenever a value becomes greater than the largest unique symbol (9 in a human's mind), you add one to the digit to the left.

3+3=6

In this case, the number still fits within a single "slot"

5+5=10. This situation is called an overflow.

So humans always deal with the problem of not having enough unique symbols. Unless the computer has a system to deal with this, it would simply write 0, forgetting that there was an extra 1 to carry. Luckily, computers have an "overflow flag" that is raised in this case.

987+321 is more difficult.

You may have learned a method in school. An algorithm. The algorithm is quite simple. Start by adding the two rightmost symbols.

7+1=8, we now have ...8 as the result so far

Then you move to the next slot and perform the same addition.

8+2=10, the overflow flag is raised. We now have ...08, plus overflow.

Since we had an overflow, it means that we have to add 1 to the next number.

9+3=12, and then we add one due to overflow. ...308, and we had another overflow.

There are no more digits to add, so we simply create a slot and insert 1 because the overflow flag was raised.

1308
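The school method above can be written out as a short routine (a Python sketch of the same carry algorithm; `add_digit_strings` is a made-up name):

```python
def add_digit_strings(a, b):
    result = []
    carry = 0                      # the "overflow flag"
    # Walk both numbers right to left, one digit slot at a time.
    for da, db in zip(reversed(a.zfill(len(b))), reversed(b.zfill(len(a)))):
        total = int(da) + int(db) + carry
        result.append(str(total % 10))   # the digit that fits in this slot
        carry = total // 10              # raise the flag if it overflowed
    if carry:                      # flag still raised: create a new slot
        result.append("1")
    return "".join(reversed(result))

print(add_digit_strings("987", "321"))   # 1308, as in the worked example
```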

A computer does it exactly the same way, except it has 2^32, or even better 2^64, different symbols, instead of only 10 like humans.

On a hardware level, the computer works on single bits using exactly the same method. Luckily, that is abstracted away for programmers. A bit has only two values, because that is easy to represent electrically: either the power is on, or it is off.

Finally, a computer could display any number as a simple sequence of characters. That's what computers are best at. The algorithm for converting between a sequence of characters and an internal representation is quite complex.
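The production-grade routines are complex indeed, but the core idea is plain: repeated division by ten, emitting one character per digit (a simplified Python sketch; `int_to_string` is a made-up name):

```python
def int_to_string(n):
    if n == 0:
        return "0"
    chars = []
    while n > 0:
        digit = n % 10                       # peel off the lowest decimal digit
        chars.append(chr(ord("0") + digit))  # map the digit to its ASCII character
        n //= 10
    return "".join(reversed(chars))          # digits came out backwards

print(int_to_string(1000000000000))   # "1000000000000"
```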

Because you are not displaying a number (as far as the computer is concerned), but a string, i.e. a sequence of digit characters. Sure, some apps (like the calculator, I guess) which deal with numbers can handle such a number. I don't know what tricks they use... I'm sure some of the other, more elaborate answers cover that.