I've just started studying programming, and one assignment was to create a program that would return the factorial of any positive integer. So n!=x. Here's where I ran into what I consider dogma. The test of the program included 0. And supposedly, 0!=1... This makes no sense. 1!=1... Are you trying to say, through the law of equivalency, that 0!=1! ?

I can find no logical reason for this, other than laziness. n/0 = error, so why do we give special privilege to 0! ? Why not simply make 0!=( ) ?

Is it not time for us to evolve away from viewing zero as a number? It seems like we have to bend over backwards to make zero as a number fit with our worldview, and what are we really getting in return?

My ultimate argument: For the set of all the LEGO bricks you've inserted into your anus, how many different ways can we stack those bricks?

Personally I'd rather stick with exactly zero bricks, for which there are exactly 0! results to insert them (hint: not at all = 1).

On a more serious note: Look up the Wikipedia article on binomial coefficients, and look at the interpretation in terms of combinatorics. You'll find something along the lines of "how many ways you can choose k elements out of n" being n!/(k! (n-k)!). Then do some examples. There's exactly one way to choose six out of six (you take all of them), and indeed 6!/(6! (6-6)!) = 1, if you define 0! = 1.

That looks complex, but it looks like you're just switching the blame to x^0 = 1. Is that the case?

I also call bullshit on 0^0 = 1. It's the sort of something-out-of-nothing prestidigitation I'd expect from a member of the Cult of Zero. 0^0 = ( ). If it's good enough for n/0, it's good enough for everything else.

On topic, 0! is the empty product and can be defined unambiguously as 1. It is very much a logical choice. (And yes, there is exactly one function from the empty set to any other set: the empty map.)

Also compare with the empty sum. Google these terms (empty sum/product) if you haven't heard of them before.

What is "( )"? What's 1+0!? 1+( )? What's that? Are you just putting parentheses around nothing? And then treating that like some kind of number? That's what a zero, '0', is, isn't it? Just a ring around nothing, to show there's nothing there. And then treating that like some kind of number, x+0=x, 0x=0, etc. Are you proposing to define 0!=0?

frog42 wrote:Is it not time for us to evolve away from viewing zero as a number? It seems like we have to bend over backwards to make zero as a number fit with our worldview, and what are we really getting in return?

Are you trolling?

frog42 wrote:My ultimate argument: For the set of all the LEGO bricks you've inserted into your anus, how many different ways can we stack those bricks?

I'll use ordered pairs for stacks, where the first item in a pair, which I'll call the head, is an item on the stack, and the second item, which I'll call the tail, is an ordered pair that constitutes the rest of the stack. An empty stack is a special ordered pair, which I'll call nil, where nil = (nil, nil).

How many ways can we stack all the elements of the set {A, B, C}?

(A, (B, (C, nil)))
(A, (C, (B, nil)))
(B, (A, (C, nil)))
(B, (C, (A, nil)))
(C, (A, (B, nil)))
(C, (B, (A, nil)))

Six. What about the set {A}?

(A, nil)

One. What about the empty set, {}?

nil

One.

How many different strings of three 8-bit characters can there be? 2^24. What about one-character strings? 2^8. What about empty strings? How many distinct empty strings can there be? Just the one.

The answer to your Lego-brick-stacking question is one. There is exactly one way of stacking zero bricks.

Now accepting any logical argument from the gallery of LEGOphiles.

Is that supposed to be some sort of pre-emptive insult?

If you reject 0 as a number, then tell us, how many Lego bricks have you inserted into your anus? Your answer has to be what you consider to be a proper number.

I'm using parentheses around nothing to represent an empty set, which I believe (and could be mistaken) is unique from the "number" zero.

I suppose I'm proposing that we stop treating zero like a number. They say one of the evolutionary steps of a civilization is defining zero. I propose that, as is frequent in evolution, zero eventually becomes vestigial as well. It seems like this thread is full of posts proving the need to perform unnatural contortions in order to make zero work like the rest of our numbers.

As for your final confrontational statement, I'll answer the way most people would likely answer: None. As I've never put any bricks up my ass, there's no way to stack nothing. If I asked you how many elephants you have at home, would you say "I am currently harboring zero elephants" or "I don't have any elephants at home"? If you accept zero as a number, you accept an infinite nothingness that is infinitely larger than our infinite universe.

You're saying it's better to have zero magically convert itself to 1 than to just remove it when it comes up.

But really, I'm just pissed off that the program I wrote returned the "wrong" answer based on a semantic technicality.

frog42 wrote:That looks complex, but it looks like you're just switching the blame to x^0 = 1. Is that the case?

I also call bullshit on 0^0 = 1. It's the sort of something-out-of-nothing prestidigitation I'd expect from a member of the Cult of Zero. 0^0 = ( ). If it's good enough for n/0, it's good enough for everything else.

0^0 does not always equal one. It is an indeterminate form, like 0/0, infinity/infinity, 1^infinity, etc. Note that indeterminate forms are not the same as undefined; 1/0 is not the same as 0/0. However, x^0 = 1 for all x not equal to 0. In certain limits, 0^0 does in fact equal one, and most useful applications of this indeterminate form do have limits that converge to 0^0 = 1. But if you want, you can make an expression that would tell you 0^0 = pi.

frog42 wrote:Is it not time for us to evolve away from viewing zero as a number? It seems like we have to bend over backwards to make zero as a number fit with our worldview, and what are we really getting in return?

Zero is probably one of the most important inventions in mathematics. I'm not sure why you say we're getting nothing in return. Having to deal with a few weird cases is a small price to pay for the innumerable benefits we get from it.

frog42 wrote:I'm using parentheses around nothing to represent an empty set, which I believe (and could be mistaken) is unique from the "number" zero.

Curly braces are usually used for sets. The empty set is {}.

frog42 wrote:It seems like this thread is full of posts proving the need to perform unnatural contortions in order to make zero work like the rest of our numbers.

I haven't seen any "unnatural contortions" in this thread.

frog42 wrote:As for your final confrontational statement, I'll answer the way most people would likely answer: None. As I've never put any bricks up my ass, there's no way to stack nothing.

Most people aren't very good at mathematics. While they'd be right in the everyday sense that you can't stack bricks you don't have, they'd be wrong in the mathematical sense of how many distinct stacks of zero bricks you could have.

In your original post, you said you'd just started studying programming. Well, if you continue, you should find that there's a real, significant difference between having an empty stack - a stack of zero items - and not having a stack at all. There's a difference between an empty string, and not even having a string. Likewise with arrays (more generally), lists, etc. For example, in C:

Thanks for the comment. Some interesting bits in there that I'll have to look into. I used to love math so much. I fondly remember a freshman class where I determined a much simpler formula for determining the area of an ellipse than what the book listed. The teacher (a grad student from MSU) let me teach it to the class and promised to check with her professor to make sure it was correct. Unfortunately, she never really got back to me, and the internet was much too young back then for me to research it. I suppose it probably wasn't anything groundbreaking. Thereafter, I spent every math class reading novels, knowing the teachers and books weren't terribly reliable.

Fast forward a couple decades and I find myself interested again. In the intro to CS class at Udacity, they briefly discussed the Collatz Conjecture. It seems, instinctually, that it should be solvable. I now regret never going beyond calculus. However, I have found something very interesting about it. If you convert all your numbers from decimal to ternary, you can observe some remarkable patterns. For instance:

- After the first consecutive reductions, it appears you will never encounter a number ending in 0 (in decimal, any number divisible by 3).
- The only numbers that aren't divisible by 2 in ternary are those with an odd number of 1's.
- To perform 3n+1 in ternary, all you have to do is scooch the number over and add a 1.
- For any ternary number that is only 0's and 2's, dividing it by 2 results in the same number with the 2's replaced by 1's.

There's more, but those are the immediately apparent ones. I could link my program if anyone else wanted to play around with it.

As for whether zero is worthwhile? I think we're asking too much of it. It's currently a placeholder AND a "number" representing "nothing". I think we'd benefit from separating out its jobs. Zero is a symbol, as is everything in math, but I'm not aware of any other symbol we've applied multiple meanings to.

Our current concept of zero makes us susceptible to false thoughts. We can imagine "empty space" between the stars but it appears that's a fallacy simply propped up by our belief in the possible existence of "nothing". In our universe, it seems impossible for there to be "nothing". At the very least, gravity is omnipresent. Gravity (to my knowledge) has no minimal unit that (upon becoming small enough) would cease to exert its presence.

I think I've argued intelligently against zero as a number. And considering I knew nothing (well, nothing I remembered) about factorials until I got pissed about their not fitting into my program, I'd say I've taken this opportunity to learn an immense amount across a number of topics.

I'm starting by learning Python right now. It seems like they use parentheses for an empty set. Please correct me if I'm wrong.

I think 0!=1! is a pretty unnatural contortion, no? And having x^0 = 1 for any number except x=0 feels pretty unnatural. If you keep having to write exceptions to your rules, it seems like it might not be the rules that are problematic.

As for zero creating an infinity of nothing infinitely greater than our universe's infinity of things, I think it's pretty simple. Add up all the places you can't find elephants, then add up all the places you can find them. Or add up how many things you don't have, then add up everything you do have.

frog42 wrote:As for whether zero is worthwhile? I think we're asking too much of it. It's currently a placeholder AND a "number" representing "nothing". I think we'd benefit from separating out its jobs. Zero is a symbol, as is everything in math, but I'm not aware of any other symbol we've applied multiple meanings to.

You're not aware of any other symbol with multiple meanings attached? What about negative numbers? Rational numbers, real numbers, complex numbers? Do you believe the only purpose of mathematics is to count sheep? To be able to do anything more interesting, it's necessary to have some level of abstraction.

No, for a variety of reasons already listed, it is an extremely convenient choice. There's no ambiguity. n! is a function (of sorts) that happens to have the property that f(0) = f(1). Why is that a problem? (x-0.5)^2 has the same property.

frog42 wrote:And having x^0 = 1 for any number except x=0 feels pretty unnatural. If you keep having to write exceptions to your rules, it seems like it might not be the rules that are problematic.

There are lots of functions that are undefined for certain domains. ln(x) is undefined for all x <= 0 (in the reals, at least); arcsin(x) is undefined for |x| > 1.

Hi brenok. Your first post was kind of rude, suggesting that I was trolling. I forgave FancyHat, because s/he actually made valid points, but you contributed nothing to the conversation. Not appreciated. I'm not trolling. I'm contentious and stubborn, certainly, but I haven't actually been swayed to the benefit of 0!=1 yet, so that should be expected. (I'm still digesting LaserGuy's info, so I'll admit he may have already hit on something that invalidates my arguments.)

As for your comment, I'm having trouble parsing your meaning. I'm arguing that zero is being shoe-horned into places it doesn't belong, specifically 0!. You don't have a problem with x/0 being undefined, so what would be wrong with 0! getting the same treatment?

frog42 wrote:@tooyoo:I actually looked at that n!/(k! (n-k)!) business, and it works just as well if 0! = ( ).

Nope. Doesn't.

As you wrote somewhere else, you'd rather define 0! to be the empty set, which you denote "()". However, multiplication with and division by a set - least of all the empty one - is usually not defined. Of course you could give a definition for that, thus making sure that the statement of yours I quoted is true (i.e. that it works as well), but that just amounts to taking one definition you don't like (0! = 1) and substituting another definition that you like, but nobody else is using (one for dividing by empty sets).

Note: Since you wrote that you're studying programming, you might intuitively think of multiplication with a set by iterating the multiplication over the whole set, which might be how you came to your above statement. I didn't say that multiplication by a set cannot be defined in some sensible way. Just that nobody does it. Some parts of mathematics are simply convention.

Cauchy wrote:If 0 doesn't exist, then what is 1 + -1 supposed to equal? Do negative numbers exist? Or is 2 - 3 just not a thing? Where do you draw the line?

I'm not saying that "nothing" (zero) isn't useful, just that it's unique and shouldn't be expected to mimic the behavior of "something" (integers). You can't give the factorial of a negative number, so why make a special case for zero?

frog42 wrote:I'm starting by learning Python right now. It seems like they use parentheses for an empty set. Please correct me if I'm wrong.

You're wrong. '()' is the empty tuple. 'set()' is the empty set. ('{}' is the empty dictionary, not the empty set, for historical reasons.)

frog42 wrote:I think 0!=1! is a pretty unnatural contortion, no?

How so? The basic definition of a factorial is that n! = n (n-1)!. If we decide to define 0!, it should satisfy 1! = 1 x 0! => 0! = 1. And as others have pointed out, having 0! defined is more convenient than not having it defined.

frog42 wrote:And having x^0 = 1 for any number except x=0 feels pretty unnatural. If you keep having to write exceptions to your rules, it seems like it might not be the rules that are problematic.

How so? The basic rule for exponents is x^m x^n = x^(m+n). Do you propose to add an exception for n = -m?

Cauchy wrote:If 0 doesn't exist, then what is 1 + -1 supposed to equal? Do negative numbers exist? Or is 2 - 3 just not a thing? Where do you draw the line?

0^0 is, in general, an indeterminate form, and if it appears in an equation you usually have to work out where it came from and hence what value it should take. It just so happens that in most of the applications that you're likely to see it (such as in a definition of the gamma function), it turns out that it should take the value 1. That doesn't make 0 any less of a number, it just means that certain operations don't behave well with it - and that's fine, because there are operations that don't behave well with negative numbers, or fractions, or odd integers, and there's no reason to stop calling them numbers.

If you want to get really technical, I'd say that there's no point saying that "0 is a placeholder for something" because *every* number is a placeholder for a broader concept - for example, depending on how you're defining your numbers, "1" is just a placeholder for the multiplicative identity. Or the positive unit. Or the application of the successor function to 0. Or the set containing the empty set. But I repeat myself. The tricky part is showing that when you construct different sets of numbers with different properties, that the thing that you're using 1 to represent happens to be equivalent in all of them, allowing you to identify these sets with each other (or parts thereof). And similarly, you can prove that in a field (like the real numbers) your additive identity (i.e. 0) has some unusual properties like being its own additive inverse (that is, 0 = -0), and that anything multiplied by it is also 0, and that you aren't allowed to divide by it. But that holds in any construct that obeys the field axioms, even if it otherwise holds no relationship to the real numbers.

ConMan wrote:0^0 is, in general, an indeterminate form, and if it appears in an equation you usually have to work out where it came from and hence what value it should take. It just so happens that in most of the applications that you're likely to see it (such as in a definition of the gamma function), it turns out that it should take the value 1. That doesn't make 0 any less of a number, it just means that certain operations don't behave well with it - and that's fine, because there are operations that don't behave well with negative numbers, or fractions, or odd integers, and there's no reason to stop calling them numbers.

If you want to get really technical, I'd say that there's no point saying that "0 is a placeholder for something" because *every* number is a placeholder for a broader concept - for example, depending on how you're defining your numbers, "1" is just a placeholder for the multiplicative identity. Or the positive unit. Or the application of the successor function to 0. Or the set containing the empty set. But I repeat myself. The tricky part is showing that when you construct different sets of numbers with different properties, that the thing that you're using 1 to represent happens to be equivalent in all of them, allowing you to identify these sets with each other (or parts thereof). And similarly, you can prove that in a field (like the real numbers) your additive identity (i.e. 0) has some unusual properties like being its own additive inverse (that is, 0 = -0), and that anything multiplied by it is also 0, and that you aren't allowed to divide by it. But that holds in any construct that obeys the field axioms, even if it otherwise holds no relationship to the real numbers.

I like this comment. Lots of ringing true going on. I don't suppose you care to weigh in on the seemingly arbitrary 0!=1?

Your main complaint here is that 0 introduces a lot of exceptions or "problems", and certainly this is true. But can you suggest an alternative that introduces fewer problems? If you simply discard 0, you don't fix any problems, but you introduce many more, like "what is 1-1?". So if you don't like 0, propose something better.

Or if you want to keep your scope narrow, what is a better definition of 0! ? Can you give us a definition of 0! that will allow us to use it in addition and multiplication, but is better than 0! = 1?

BTW, Wikipedia has some discussion on this topic, though it has mostly all been said in this thread already.

dudiobugtron wrote:As flownt said, 0! is obviously just the empty product. 1 is the multiplicative identity. This is definitely a sensible way to define it.

There definitely are 'dogma' discussions in maths (1 isn't a prime number because it would be inconvenient. 0 is or isn't a natural number. dy/dx vs f'(x). etc...), but this isn't one of them.

Sizik wrote:n!/n = (n-1)!

3!/3 = 6/3 = 2 = 2!

2!/2 = 2/2 = 1 = 1!

1!/1 = 1/1 = 1 = 0!

Thus, 0! = 1.

0!/0 = 1/0 = (-1)! ?

To add to the list of dogmas: Axiom of choice or not, law of the excluded middle or not.

Some words that will hopefully trigger a penny-drop regarding the "empty product" idea (at least, they did for me, when I was learning this stuff)...

Say you had a sequence of numbers, and you wanted to find their sum, by going through them one by one, and keeping a running total. At each step, we have a sequence of numbers we've already seen, and a running total of those numbers, and then we get a new number, which we can append to our sequence, and add to our total. Now, we want it to be that when we add our first number, call it x, we end up with the running total set to x (because obviously the sum of one thing is itself)... so when we start, with an empty sequence of "things we've seen so far", we need to start our running total at x - x = 0. This makes sense because zero is the "do-nothing" number when it comes to addition - the additive identity - any number plus zero gives you that same number.

So, for instance, if we wanted to add up 2 + 3 + 4:
Initially, our sequence is {}, and our running total is 0.
Then, our sequence is {2}, and our running total is 0+2 = 2.
Then, our sequence is {2,3}, and our running total is 2+3 = 5.
Finally, our sequence is {2,3,4}, and our running total is 5+4 = 9.
So we get our final total as 9.

But notice that, by design, on every line, our running total is equal to the sum of the numbers in our sequence of seen numbers. And that includes the first line. The empty sum, i.e. the sum of no numbers, is zero. If you take no numbers, and add them all together, you get zero.

So far, so intuitive. Now, take the same idea and apply it to multiplication.

Again, we want to start our running product at a "do-nothing" number, a number such that when we multiply our first number x into the product, we end up with x. And the multiplicative identity is 1 - any number times 1 is that same number. And so, for instance, if we wanted to multiply 2 × 3 × 4:
Initially, our sequence is {}, and our running product is 1.
Then, our sequence is {2}, and our running product is 1×2 = 2.
Then, our sequence is {2,3}, and our running product is 2×3 = 6.
Finally, our sequence is {2,3,4}, and our running product is 6×4 = 24.
So we get our final product as 24.

And so, by the same reasoning, our "empty product", our product of no numbers, should be 1, the multiplicative identity. Because anything else would make this whole thing not work. If the empty product was anything else, then the idea that "if we append a new number onto a sequence, then the product of the new sequence is equal to the product of the original sequence times the new number" would break down for no good reason when the empty sequence was used.

It's a little bit counterintuitive at first, but when you actually break it down, it makes perfect sense to be that way.

phlip wrote:Some words that will hopefully trigger a penny-drop regarding the "empty product" idea (at least, they did for me, when I was learning this stuff)...

That is exactly what I was going to say. One thing I will add: it should be the case that if you take the product of one collection of numbers and multiply it by the product of another collection of numbers, the result is the product of the two collections combined. In order for this to work with empty products, empty products must equal 1.

So what did your program do for n=0? Seems like you should be more upset that you didn't think about error checking, rather than about what the community of mathematicians has generally agreed is a convenient thing to mean when we write 0!.

And if your program did indeed error check and didn't spit out 1, then I definitely won't disagree with the fact that 0! = 1 is kind of weird, and it would definitely make sense to define it as 0. Or make it undefined. But over the years, people noticed formulas work out nicer if it's just 1, so that's what we call it. So now you know and won't make the mistake again!

phlip wrote:Some words that will hopefully trigger a penny-drop regarding the "empty product" idea (at least, they did for me, when I was learning this stuff)...

You forgot the "/thread" at the end of your post. I see your "Restorer of Worlds" title is not honorary.

I apologize to those I've antagonized, but this was a much more entertaining and intensive method to cram a bunch of information in my head than reading boring wiki articles. And the next time someone complains about 0!=1, phlip has prepared a concise, fairly exhaustive, and incredibly simple explanation. The only improvement I could imagine, for people who don't quite catch on, is a third sequence run-through showing how it applies directly to factorials. Remember, finding ways to express your ideas as concisely as possible also improves your own understanding (unless phlip's explanation was rote, in which case I also apologize for all those who came before me).

z4lis wrote:So what did your program do for n=0? Seems like you should be more upset that you didn't think about error checking, rather than about what the community of mathematicians has generally agreed is a convenient thing to mean when we write 0!.

And if your program did indeed error check and didn't spit out 1, then I definitely won't disagree with the fact that 0! = 1 is kind of weird, and it would definitely make sense to define it as 0. Or make it undefined. But over the years, people noticed formulas work out nicer if it's just 1, so that's what we call it. So now you know and won't make the mistake again!

My program just returned zero, as it wasn't designed to incorporate it. The problem was defined as attempting to find the possible combinations of any positive integer number of lego bricks. It said nothing about factorials or including zero.