Why does multiplying by 0 equal 0?

I know why it would equal 0 if it were (0*0). But what about an actual number? Why does (100*0) equal 0? You're not multiplying it by anything, but shouldn't it still equal 100? If I have 100 cookies on the table, and I don't multiply them by anything, why do I suddenly have zero cookies on the table?

I'm just trying to gain a conceptual understanding of the zero-factor algebraic property.

One possible conceptual way to think about it: multiplying 2 * 100 is like having 2 groups of 100 cookies on the table. Multiplying 1 * 100 is like having 1 group of 100 cookies on the table. Multiplying 0 * 100 is like having no groups of 100 cookies on the table, hence no cookies at all.

However, I agree that this may not be very intuitive. When dealing with 0, it is sometimes more difficult to match mathematical situations to real-life situations. It is probably better to understand 0 * any number = 0 simply as a consequence of several properties of numbers that we take for granted.
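
If it helps, the "groups" picture can be made concrete by treating multiplication as repeated addition. Here is a minimal sketch in Python (the helper name `groups_of` is just for this example):

```python
def groups_of(count, size):
    """Model count * size as "count groups of size cookies each"."""
    total = 0
    for _ in range(count):  # add one group of `size` cookies, `count` times
        total += size
    return total

print(groups_of(2, 100))  # 200: two groups of 100 cookies
print(groups_of(1, 100))  # 100: one group of 100 cookies
print(groups_of(0, 100))  # 0: the loop never runs, so no cookies at all
```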

The problem is that natural language sort of pigeon-holes us into thinking "none" and "at least one" are conceptually different. As soon as you break through this barrier and become comfortable working with degenerate cases, stuff like this becomes easy.

For example, how many pennies do you have if you have zero rows of N pennies each? (Or, as one would generally say in natural language, if you don't have any rows of N pennies each.)
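
To make that concrete, here is a small Python sketch that models the layout as a list of rows; with zero rows, the count is an empty sum, which is 0 (the helper name `count_pennies` is just illustrative):

```python
def count_pennies(rows, per_row):
    """Lay out `rows` rows of `per_row` pennies each, then count them."""
    layout = [[1] * per_row for _ in range(rows)]  # zero rows -> empty layout
    return sum(len(row) for row in layout)         # empty sum -> 0

print(count_pennies(3, 5))  # 15: three rows of five pennies
print(count_pennies(0, 5))  # 0: there are no rows to count over
```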

I know why it would equal 0 if it were (0*0). But what about an actual number? Why does (100*0) equal 0?

Because it turns out to be simple and useful.

Multiplication, like any operation, is something defined by the mathematician. We could imagine a world where 0 * n = n, but it would break a lot of useful theorems. For instance, 0 * 1 + 1 = 1 + 1 = 2. However, since 1 = 1 * 1, we also have 0 * 1 + 1 = 0 * 1 + 1 * 1 = (0 + 1) * 1 = 1 * 1 = 1 (by distributivity), and so 1 = 2. We would have to conclude that multiplication no longer distributes over addition (disastrous!!).
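
Written out step by step, the contradiction runs as follows (a sketch, using the fact that 1 = 1 * 1):

```latex
\begin{align*}
  0 \cdot 1 + 1 &= 1 + 1 = 2 && \text{(using the assumption } 0 \cdot n = n\text{)} \\
  0 \cdot 1 + 1 &= 0 \cdot 1 + 1 \cdot 1 && \text{(since } 1 = 1 \cdot 1\text{)} \\
                &= (0 + 1) \cdot 1 && \text{(distributivity)} \\
                &= 1 \cdot 1 = 1
\end{align*}
```

Comparing the two computations gives 2 = 1, so something has to give, and the casualty would be distributivity.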

In many definitions, when you get to the lowest possible value, the definition loses its literal intuitive meaning. One example is the factorial function, where 0! = 1. Factorial is often defined as the product: 1 * 2 * ... * n, but when n = 0, this definition doesn't make sense.

(Though there are other definitions that do make sense at n = 0; this is just one example.)
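
In code, the convention shows up as the base case of the recursion, and it agrees with the "empty product is 1" convention; a small Python sketch:

```python
import math

def factorial(n):
    """n! via the recursive definition; the base case encodes 0! = 1."""
    if n == 0:
        return 1  # an empty product of no factors is 1 by convention
    return n * factorial(n - 1)

print(factorial(5))            # 120
print(factorial(0))            # 1
print(math.prod(range(1, 1)))  # 1: math.prod over an empty range agrees
```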

If by 0 you mean the element of the system such that a + 0 = a for all elements a in the system, then yes: the property follows directly from this statement, the distributive property of multiplication over addition, and multiplicative commutativity. Hint: consider the equation a * a = (a + 0) * (a + 0).
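
For what it's worth, here is one streamlined version of that derivation, additionally assuming additive cancellation (which holds whenever additive inverses exist):

```latex
\begin{align*}
  a \cdot a &= a \cdot (a + 0) && \text{(since } a + 0 = a\text{)} \\
            &= a \cdot a + a \cdot 0 && \text{(distributivity)}
\end{align*}
```

Cancelling a * a from both sides gives a * 0 = 0, and multiplicative commutativity then gives 0 * a = 0 as well.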
