0^0 shouldn't be undefined. It should be 1.
– Moor Xu, Jul 20 '10 at 21:38


@Coltin: WolframAlpha doesn't say that 0^0 is 1. If you enter 0^0 it says "indeterminate". It only says that lim x^x as x->0 is 1, which is perfectly true, but entirely beside the point.
– sepp2k, Jul 20 '10 at 21:48


If you take the more general case of lim x^y as x,y -> 0 then the result depends on exactly how x and y both -> 0. Defining 0^0 as lim x^x is an arbitrary choice. There are unavoidable discontinuities in f(x,y) = x^y around (0,0).
– Neil Mayhew, Jul 21 '10 at 14:14


0⁰ = 1 if you want to write polynomials like 2x + 3 as 2x¹ + 3x⁰ (when x = 0...). There are probably other applications where having 0⁰ = 0 or having 0⁰ undefined is as useful, but I'm not aware of one.
– badp, Jul 23 '10 at 11:44


By definition, $0^0 = |\operatorname{Hom}_{\mathbf{Sets}}(\emptyset,\emptyset)|=1$. It is also true that the function $f(x,y) = x^y$ is discontinuous at $(0,0)$, but that is immaterial to the fact that $0^0=1$.
– Yuri Sulyma, Mar 9 '13 at 8:17

9 Answers

For non-zero bases and exponents, the relation $ x^a x^b = x^{a+b} $ holds. For this to make sense with an exponent of $ 0 $, $ x^0 $ needs to equal one. This gives you:

$ x^a \cdot 1 = x^a\cdot x^0 = x^{a+0} = x^a $

When the base is also zero, it's not possible to define a value for $0^0$ because there is no value that is consistent with all the necessary constraints. For example, $0^x = 0$ and $x^0 = 1$ for all positive $x$, and $0^0$ can't be consistent with both of these.

Another way to see that $0^0$ can't have a reasonable definition is to look at the graph of $f(x,y) = x^y$ which is discontinuous around $(0,0)$. No chosen value for $0^0$ will avoid this discontinuity.
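This path dependence can be illustrated numerically. The following Python sketch (with an arbitrarily chosen small parameter `t`) approaches $(0,0)$ along several curves; each yields a different limiting value of $x^y$:

```python
import math

t = 1e-9  # small parameter; each path below sends (x, y) toward (0, 0)

print(t ** t)                     # along x=t, y=t:        close to 1
print(t ** 0.0)                   # along x=t, y=0:        1.0
print(0.0 ** t)                   # along x=0, y=t:        0.0
print(t ** (-1.0 / math.log(t)))  # along x=t, y=-1/ln t:  e^(-1), about 0.368
```

Since different paths give different values, no single choice for $0^0$ makes $f(x,y)=x^y$ continuous at the origin.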

@bryn, changed "non-zero" to "positive" and added a paragraph about graph discontinuity based on my comment in the discussion about the question.
– Neil Mayhew, Jul 22 '10 at 13:29


@Neil: Sorry, I have not made myself clear. You say 0^0 can't have a reasonable definition because of a continuity argument, which is what I call the analytical reason. But it is quite possible to assign a value here and leave the function just as discontinuous as before. You've talked about necessary constraints on x^y for positive x and y, but there is a reason to extend x^0 to all x, and no reason to extend 0^y to cover all y >= 0. So these constraints don't look equally compelling.
– Charles Stewart, Jul 23 '10 at 20:34

This is a question of definition; the question is really "why does it make sense to define $x^0=1$ except when $x=0$?" or "how is this definition better than other definitions?"

The answer is that $x^a \cdot x^b = x^{a+b}$ is an excellent formula that makes a lot of sense (multiplying $a$ times and then multiplying $b$ times is the same as multiplying $a+b$ times) and which you can prove for $a$ and $b$ positive integers. So any sensible definition of $x^a$ for numbers $a$ which aren't positive integers should still satisfy this identity. In particular, $x^0 \cdot x^b = x^{0+b} = x^b$; now if $x$ is not zero then you can cancel $x^b$ from both sides and get that $x^0 = 1$. But if $x=0$ then $x^b$ is zero and so this argument doesn't tell you anything about what you should define $x^0$ to be.

A similar argument should convince you that when $x$ is not zero then $x^{-a}$ should be defined as $1/x^a$.

An argument using the related identity $(x^a)^b = x^{ab}$ should convince you that $x^{1/n}$ is taking the $n$th root.
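A quick numeric check of that identity in Python (floating point, so the root computation is only approximate):

```python
x, n = 5.0, 3

# x**(1/n) acts as the n-th root: raising it back to the n recovers x,
# consistent with the identity (x**a)**b == x**(a*b)
root = x ** (1.0 / n)
print(root ** n)              # approximately 5.0
print((x ** 2) ** 3, x ** 6)  # both 15625.0, since (x^2)^3 = x^6
```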

If $a$ and $b$ are natural numbers, then $a^b$ is the number of ways you can make a sequence of length $b$ where each element in the sequence is chosen from a set of size $a$. Repetition is allowed. For example, $2^3$ is the number of 3-digit sequences where each digit is $0$ or $1$: $000, 001, 010, \ldots, 111$.

There is precisely one sequence of length zero: the empty sequence. So you'd expect $0^0=1$.
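This count can be checked directly in Python: `itertools.product` enumerates exactly these sequences (the helper name here is illustrative).

```python
from itertools import product

def count_sequences(a, b):
    """Number of length-b sequences drawn (with replacement) from a set of size a."""
    return sum(1 for _ in product(range(a), repeat=b))

print(count_sequences(2, 3))  # 8: the sequences 000, 001, ..., 111
print(count_sequences(0, 3))  # 0: no length-3 sequences over an empty set
print(count_sequences(0, 0))  # 1: just the empty sequence, so 0^0 = 1
```

`product(..., repeat=0)` yields exactly one item, the empty tuple, regardless of the input set, which is the combinatorial content of $0^0 = 1$.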

"undefined, because there is no way to chose one definition over the other" --- except that many people do define it, so it is not a matter of simply being 'undefined'; and it seems to me that the majority of those who define it, define it to be equal to 1 for the reasons outlined by Noah Snyder above. Definitions are chosen for purposes of utility.
– Niel de Beaudrap, Sep 15 '10 at 11:26

It seems to be the number of length-$b$ strings, where each character is drawn from an $a$-letter alphabet. Given $a$ and $b$ you may form $a^b$ strings. The empty alphabet ($a=0$) cannot produce any string other than the empty string ($b=0$). So $0^b = 0$ everywhere except at $b=0$, where you have $0^0=1$.
– Val, Sep 11 '13 at 14:20

One of the definitions of the power $a^b$ is $e^{b \log a}$, where $$e^x=1+x+\frac{x^2}{2}+\dots $$

If $a$ is nonzero, and $b=0$, then $a^b=e^0=1$.

If $a=b=0$, the expression $b \log a=0 \log 0$ is an indeterminate form, thus $a^b$ is undefined. Nonetheless, in many applications we assign the indeterminate form $0 \log 0$ the limiting value $\lim_{x \to 0}x \log x=0$, so that $0^0$ is defined to be $1$ as well.
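This limiting convention is what common floating-point libraries adopt: the C99 / IEEE 754 `pow` function returns 1 for `pow(0, 0)`, and Python follows suit:

```python
import math

print(0 ** 0)              # 1   (Python integer exponentiation)
print(0.0 ** 0.0)          # 1.0
print(math.pow(0.0, 0.0))  # 1.0 (the C99 / IEEE 754 pow convention)
```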

The key point is to understand the meaning of multiplication. $a \times n$ means that you add $n$ $a$-items to zero, rather than "together", because "together" leaves you with uncertainty when no items are taken. If we understand that 0 must be the answer when $n=0$, we understand what counting means: 0 is the "constant of integration" that lives in any empty set into which we add the items in order to count them. You start counting with 0. If you embed that into the definition, a lot of confusion is resolved right away. So, instead of defining $$a \times n = \underbrace{a + a + \cdots + a}_n,$$ you define $$a \times n = \mathbf{0} \underbrace{+ a + a + \cdots + a}_n.$$ Note that 0 is also the additive identity. So counting starts with the additive identity.

Similarly, when you multiply $n$ identical $a$-items together, what you actually do is take the product of those items with 1: $a ^ n = \mathbf{1} \underbrace{\times a \times a \times \cdots \times a}_n.$

Now, if $n = 0$, everything but the identities vanishes. You will have $$a\times n = 0 \;\;\;\text{and}\;\;\; a^0 = 1.$$ Note that "$\times$ nothing" in "$1\ (\times$ nothing$)$" does not mean scaling 1 by 0. It means that you take 1 alone and do not scale it by anything. When $n=0$, I do not "add" anything to my "constants of integration". Even if the items that I decline to count are of size $a=0$ or anything else, I still have
$$a\times n = 0 \;\;\;\text{and}\;\;\; a^0 = 1.$$ Making the base zero does not change anything, if you know what is behind $a^n$ rather than blindly drilling the rule $0^{\text{anything}} = 0$. This is what my programmer's mindset tells me. Maybe I am too bold in viewing the product as a computation.

Yet, it is intuitive: this is exactly what a product (exponentiation) means. That is why I say that injecting the identity in front of sums and products is a natural (i.e. true) thing to do, and this should be reflected in the definition. I am happy that it meshes well with the combinatorial argument.
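This "start from the identity" convention is exactly how a fold over an empty collection behaves. A small Python sketch of the definitions above (the helper names are illustrative):

```python
from functools import reduce
from operator import add, mul

def times(a, n):
    # a x n: start at the additive identity 0, then add a, n times
    return reduce(add, [a] * n, 0)

def power(a, n):
    # a^n: start at the multiplicative identity 1, then multiply by a, n times
    return reduce(mul, [a] * n, 1)

print(times(0, 0))  # 0: the empty sum is the additive identity
print(power(0, 0))  # 1: the empty product is the multiplicative identity
print(power(0, 3))  # 0: once a factor of 0 enters, the product is 0
```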

If you can explain how the two definitions underpin each other, or if you have any objection, please comment. I believe that all the problems that arise in continuous analysis come about because you do not have pure zeroes there. That is, formally, you may consider $\lim_{t \to 0^+} \left(e^{-\frac{1}{t^2}}\right)^t$ as a $0^0$ form. But I believe that what you actually have there are infinitesimals, which are not exactly zero. Consider the total contribution as a product of $a$ items, where each contributes $1/a$. If you start reducing the item contribution, $1/a \to 0$, the total contribution is unchanged. However, if you really manage to ignore the contributions completely, then you get $0 \times \infty = 0$: you just do not care about the sizes of contributions that you do not count.

It is similar with $1 \times a \times a \cdots$. If you cut out all the contributions, you are left with 1, regardless of the magnitude of $a$. If you get a result different from 1, then you have failed to zero out all the contributions.