One possibility is the integration by substitution formula. But this doesn't explain why $\int_1^{\infty}x^{-3}dx$ is easy to evaluate (and rational) whilst $\sum_1^{\infty}n^{-3}$ is unknown (and seemingly not a simple combination of well known numbers). It is true that $x^{-3}$ has a simple inverse derivative whilst $n^{-3}$ does not have a known simple inverse difference, but is there more to it than that?

I am not sure I find the assertion in the title obviously true.
– Yemon Choi, Feb 18 '11 at 1:00

It seems that many of your questions have a title "Why is X?" where the premise X involves subjective judgment ("easy"; "important"; "rigorous") and in some cases seems debatable rather than self-evident...
– Yemon Choi, Feb 18 '11 at 1:05

I am open to suggestions. Would questions of this nature be welcome at mathstackexchange? Do you think it best if I not ask questions here again?
– teil, Feb 18 '11 at 5:08

To continue with Yemon's point. Why is $\sum_{m=0}^{\infty} \frac{1}{\Gamma(m+1)}$ easier to evaluate than $\int_0^{\infty} \frac{1}{\Gamma(t+1)} dt$? In particular is anything known about my integral?
– Helge, Feb 18 '11 at 5:32

What's in a name? That which we call Negative refraction by any other word would smell as sweet.
– Yemon Choi, Feb 18 '11 at 6:39

7 Answers

Well, some things are easier in one place, and other things are easier in the other. The many assertions in answers that integrals of $x^n$ are easier than the corresponding sums are misleading. In the continuous world, $x^n$ is a very natural object. In the discrete world, however, the natural object is the falling factorial: $x^{\underline{n}}$ is defined as $x(x-1)\dots(x-n+1)$. Compute the derivative of $x^{\underline{n}}$ and you get a mess. Compute the forward difference and you get the beautifully simple $f(x+1)-f(x) = (x+1)^{\underline{n}} - x^{\underline{n}} = n x^{\underline{n-1}}$.
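As a quick sanity check (a Python sketch of my own, not part of the original argument; the helper name `falling` is made up), the difference identity $\Delta\, x^{\underline{n}} = n\, x^{\underline{n-1}}$ can be verified numerically:

```python
from math import prod

def falling(x, n):
    """Falling factorial x^(underline n) = x*(x-1)*...*(x-n+1)."""
    return prod(x - k for k in range(n))

# Check Delta x^(n) = n * x^(n-1), where Delta f(x) = f(x+1) - f(x).
for x in range(10):
    for n in range(1, 6):
        assert falling(x + 1, n) - falling(x, n) == n * falling(x, n - 1)
```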

That is,
$$\sum_{i=1}^n i^9= \frac{1}{20} n^2 (n+1)^2 \left(n^2+n-1\right) \left(2 n^4+4 n^3-n^2-3 n+3\right)$$
is a little messy. Certainly messier than $\int x^9 dx$. On the other hand,
$$\sum_{i=1}^n i^{\underline{9}} = \frac1{10} (n+1)^{\underline{10}},$$
while the integral $\int x^{\underline{9}}\,dx$ is bad enough that I dare you to work it out by hand.
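Both closed forms are easy to check by machine. A Python sketch (the helper `falling` is my own name, not a library function):

```python
from math import prod
from fractions import Fraction

def falling(x, n):
    # falling factorial x*(x-1)*...*(x-n+1)
    return prod(x - k for k in range(n))

for n in range(1, 30):
    # messy closed form for the sum of ordinary 9th powers
    messy = Fraction(n**2 * (n + 1)**2 * (n**2 + n - 1)
                     * (2*n**4 + 4*n**3 - n**2 - 3*n + 3), 20)
    assert sum(i**9 for i in range(1, n + 1)) == messy
    # clean closed form for the sum of falling factorials
    assert 10 * sum(falling(i, 9) for i in range(1, n + 1)) == falling(n + 1, 10)
```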

The truth here is that the discrete world is often easier (and more fundamental, too), but history has prejudiced us to be more familiar with the continuous world. Zeilberger (who else?) has written passionately about this, and if you know of other sources for discrete calculus (online or off) please add them as comments to this answer!

I'm not sure this question is appropriate for MO, but I'll play along. One of the main reasons integrals of elementary functions are easier to evaluate algebraically than sums of elementary functions is that derivatives are much better behaved than differences. For a fairly mellow example, consider the product rule:
$$ \partial(f\cdot g) = (\partial f)\cdot g + f\cdot (\partial g) \quad\quad \Delta(f\cdot g) = (\Delta f)\cdot g + f\cdot (\Delta g) + (\Delta f) \cdot (\Delta g)$$
(Here $\Delta f(n) = f(n+1) - f(n)$, and $\partial$ is the usual smooth derivative; $\cdot$ denotes multiplication.)
So "integration by parts" is slightly simpler than "summation by parts". But since there exists a product rule for differences, there also exists a "summation by parts" formula.
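The discrete product rule, with its extra cross term, is easy to verify numerically (a Python sketch; the names are mine):

```python
def delta(f):
    """Forward difference: (delta f)(n) = f(n+1) - f(n)."""
    return lambda n: f(n + 1) - f(n)

f = lambda n: n * n
g = lambda n: 2 ** n

# Discrete product rule carries the extra term (delta f) * (delta g):
for n in range(10):
    lhs = delta(lambda m: f(m) * g(m))(n)
    rhs = delta(f)(n) * g(n) + f(n) * delta(g)(n) + delta(f)(n) * delta(g)(n)
    assert lhs == rhs
```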

On the other hand, there is no chain rule for differences:
$$ \partial(f\circ g) = ((\partial f) \circ g) \cdot \partial g \quad\quad \Delta(f\circ g) = ?????$$
If you try to write down chain rules, you'll find they're very complicated, and require weird appearances of integrals/sums.
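One can watch the naive chain-rule guess fail on a concrete example (a Python sketch with my own names):

```python
def delta(f):
    return lambda n: f(n + 1) - f(n)

f = lambda n: n * n
g = lambda n: 3 * n

# The naive guess Delta(f o g)(n) = (Delta f)(g(n)) * (Delta g)(n) fails:
n = 2
lhs = delta(lambda m: f(g(m)))(n)   # f(9) - f(6) = 81 - 36 = 45
rhs = delta(f)(g(n)) * delta(g)(n)  # (49 - 36) * 3 = 39
assert lhs != rhs
```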

The chain rule for derivatives is precisely the u-substitution formula for integrals, and as you say in your question, there is no similar formula for sums.

Related to both of these is that power functions $n\mapsto n^\alpha$ have very nice derivatives and not-nice differences. At least the difference of a polynomial is a polynomial, but for a function with non-smooth parts or poles (like $n\mapsto n^\alpha$ for $\alpha \not\in \mathbb N$), taking differences generally creates more poles. So it's extremely hard to imagine anti-differencing a function with only one pole: the antidifference would have to have many poles that magically almost cancel upon differencing.

However, I'd like to emphasize that all of this answer has to do with algebraic methods (finding "closed form expressions"). $\sum_{n=1}^\infty n^{-3}$ is a perfectly good number, and $m \mapsto \sum_{n=1}^m n^{-3}$ is a perfectly good sequence; it's just that we don't know the answer to certain questions about them, and we can't write them as compositions of a certain very restricted set of symbols. One can compute these things very accurately. For any given $m$, I can, with a little time, give you the value of $\sum_{n=1}^m n^{-3}$ as a rational number. I can also compute, given enough time, $\sum_{n=1}^\infty n^{-3}$ to whatever accuracy you want. Indeed, I can compute it more accurately (i.e. faster for a given accuracy) than I can some other functions that you do have in your limited vocabulary of "basic" functions (assuming your vocabulary is what I expect it is).
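For instance (a Python sketch; `zeta3_partial` is a name I made up), both the exact rational partial sums and a numerical approximation are routine:

```python
from fractions import Fraction

def zeta3_partial(m):
    # exact rational value of sum_{n=1}^m 1/n^3
    return sum(Fraction(1, n**3) for n in range(1, m + 1))

# Exact rational partial sum:
print(zeta3_partial(3))           # 251/216
# Numerical approximation of Apery's constant zeta(3) = 1.2020569...
print(float(zeta3_partial(2000)))
```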

Of course, $n\mapsto \binom{n}{k}$ has nice differences but not nice sums, at least if we stick to that basis...
– Harry Altman, Feb 18 '11 at 6:50

Aren't you essentially saying that differentiable functions are better behaved than piecewise linear functions (whose integrals are what we also call sums)?
– Sándor Kovács, Feb 18 '11 at 7:03

So perhaps the idea is: in the context of derivatives we have $\epsilon^2=0$, but not in the context of differences.
– Martin Brandenburg, Feb 18 '11 at 10:29

Can one put this also in terms of the difference between first and second order logic?
– Kaveh Khodjasteh, Feb 18 '11 at 14:24

@Harry: Absolutely. There are guaranteed to be sequences with nice sums, and those sequences are of independent interest. But they're hard to write in the vocabulary of a very basic calculator. @Sandor: I'm trying to say something different. A piecewise-constant function with steps at the integers is the same data as a sequence, but it's not the same thing. A sequence is a function on the integers, and the basic operations you can do to them include shifting, and hence differencing and summing. These are algebraic operations that are worse behaved than their smooth cousins.
– Theo Johnson-Freyd, Feb 18 '11 at 17:05

While I don't really think that this is an answer to your question in the generality that you ask it (indeed, it says nothing of the one example you cite), here is something that comes to mind.

Integrals of $x^n$ for positive $n$ are much simpler than sums of $x^n$. The reason is that in evaluating the former, one first evaluates the latter and then passes to a limit. It is in this limiting process that all the "hard stuff" vanishes, and one is basically left with leading terms. In other words, in this case, the limiting processes of calculus eliminate all but what is, on some level, dominant behavior.
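To illustrate the point that the limiting process keeps only dominant behavior, consider the Riemann sum for $\int_0^1 x^3\,dx$: the closed-form sum $\sum k^3 = (n(n+1)/2)^2$ has several terms, but only the leading one survives the limit. A Python sketch (the helper name is mine):

```python
from fractions import Fraction

def riemann_sum(n):
    # (1/n) * sum_{k=1}^n (k/n)^3; closed form ((n(n+1)/2)^2) / n^4
    return sum(Fraction(k, n)**3 for k in range(1, n + 1)) / n

# Only the leading term survives: the limit is 1/4, the integral of x^3 on [0,1].
print(float(riemann_sum(1000)))  # about 0.2505
```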

The same (vague) argument can be made of derivatives versus differences (as in Theo's answer). Finite differences may as well be hugely large scale differences - there's no objective scale. Derivatives, on the other hand, capture something that is in retrospect much simpler - truly local behavior.

This is a layman's answer. I'm nowhere near an expert on integrals.

My first reaction to this question was that I am not sure that I agree and that since sums are integrals, this does not even make sense. Then I realized that the statement can be rephrased as

Continuous functions are easier to integrate than discontinuous ones

If we put it that way then perhaps the answer is

a) a lot of things are easier to do with a continuous function, or

b) in particular, it is easier to find the indefinite integral of a continuous function than of a discontinuous one, especially since the latter cannot have a nice antiderivative (for one thing, its antiderivative cannot be continuously differentiable).

Disclaimer This is a soft-answer and is not intended to contain mathematically rigorous statements or self-evident truths.

It depends on the object being summed or integrated. There are examples of sums that are handled by the theory of hypergeometric functions whose integrals would probably defeat most computer algebra systems out there.

They are different because the methods to simplify the two processes are different. The focus (in American education) on the integral is because there were more examples of applications in the mid-20th century. With the advent of algorithm design and applied combinatorics, more weight is being placed on the evaluation of sums; even so, I think it is safe to say that integration will continue to be emphasized over summation for the near future.

Riemann integrals are defined to be limits of sums. In this context, integrals are easier since if we can evaluate the sums explicitly then the limit is generally straightforward to compute. For instance, let $\mathcal{P}$ be a convex polytope with integer vertices. Let $i(\mathcal{P},n)$ be the number of integer points in the dilation $n\mathcal{P}$, $n\geq 1$ (the Ehrhart polynomial of $\mathcal{P}$). Often one can express $i(\mathcal{P},n)$ as an iterated sum, and one can express the volume of $\mathcal{P}$ as a completely analogous iterated integral. By the definition of Riemann integral, the leading coefficient of $i(\mathcal{P},n)$ is the volume of $\mathcal{P}$, so the integral is easier to compute. As a specific example, let $\mathcal{P}$ be the triangle with vertices $(0,0), (1,0), (0,1)$. The iterated sum is $\sum_{j=0}^n\sum_{k=0}^j 1 = {n+2\choose 2}$, while the iterated integral is
$\int_{x=0}^1\int_{y=0}^x 1\hspace{.1em}dy\hspace{.1em}dx = 1/2$.
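A quick machine check of the lattice-point count for this triangle (a Python sketch; the helper name is mine):

```python
from math import comb

def lattice_points(n):
    # iterated sum from the answer: sum_{j=0}^n sum_{k=0}^j 1
    return sum(1 for j in range(n + 1) for k in range(j + 1))

# Matches the Ehrhart polynomial C(n+2, 2); its leading coefficient
# is 1/2, the area of the triangle.
for n in range(1, 20):
    assert lattice_points(n) == comb(n + 2, 2)
```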

"integrals are easier since if we can evaluate the sums explicitly then the limit is straightforward" --- that's a non sequitur. The conclusion should be: "integrals are often not much harder". Or perhaps I misunderstand your whole point...
– Kevin O'Bryant, Feb 19 '11 at 5:13

Kevin, as an example of what I mean consider $$ \int_0^1 x^2dx =\lim_{n\to\infty}\frac 1n\sum_{k=1}^n\left( \frac kn \right)^2 = \lim_{n\to\infty}\frac{n(n+1)(2n+1)}{6n^3}. $$ It is trivial to compute that the limit on the right is 1/3. Thus the sum is harder to compute than the integral; once we know the sum, the integral is easy (but not conversely).
– Richard Stanley, Feb 19 '11 at 21:21

It might be more accurate to suggest that we are more familiar with easy integrals than easy sums.

That is, there has been much more work done (and certainly much more practical experience, especially among engineers and scientists) with integrals than sums. The world is (seems) mostly continuous, so we focus primarily on integration.

That might seem like a non-answer but it's the one that comes to mind.

@Robby: The world is mostly continuous... Is that so?
– Did, Feb 18 '11 at 0:33

You may be putting the cart before the horse - it may be that we make believe the world is continuous because we have better tools for dealing with the continuous than with the discrete.
– Gerry Myerson, Feb 18 '11 at 0:34

@Gerry fair enough; but whether the world is actually more continuous or merely seems more continuous, we certainly have more experience with continuous phenomena.
– Robby Slaughter, Feb 18 '11 at 0:50