It is well known that the operations of differentiation and integration are turned into multiplication and division after being transformed by an integral transform (e.g. the Fourier or Laplace transform).

My question: Is there any intuition for why this is so? It can be proved, OK - but can somebody please explain the big picture? (Please not too technical - I might need another intuition to understand that one, too ;-)

Convolution integral representations exist for the appropriate integration operators and for the derivative acting on suitable functions for the Fourier, Laplace, and Mellin transforms. Applying the associated convolution theorems gives products in the reciprocal space. The extremely simple derivations of these convolution theorems provide an accurate and intuitive picture of the separability into products in the reciprocal space as resting largely on the simple group properties $e^{-p(x+y)} = e^{-px}e^{-py}$ and $(x/y)^{s-1} = x^{s-1}y^{1-s}$. The transforms of the Heaviside step function are required for the integration operators.
– Tom Copeland, Feb 23 at 1:58
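As a numerical illustration of the convolution-theorem point above, here is a minimal NumPy sketch in the discrete setting of $Z/n$: the DFT of a circular convolution is the pointwise product of the DFTs. (The choice of $n$ and the random test vectors are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Circular convolution on Z/n: (f * g)(j) = sum_k f(k) g(j - k mod n)
conv = np.array([sum(f[k] * g[(j - k) % n] for k in range(n)) for j in range(n)])

# Convolution theorem: DFT(f * g) = DFT(f) . DFT(g) pointwise
assert np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g))
```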

6 Answers

It might help you to think about a discrete model: consider complex-valued functions on $Z/n$. The discrete Fourier transform takes $f(k)$ to
$g(j) := \sum_{k=1}^{n} f(k) \zeta^{jk}$, where $\zeta = e^{2\pi i/n}$ is a primitive $n$-th root of unity. It is pretty easy to see that, if we change $f(k)$ to $f(k+1)$, we change $g(j)$ to $g(j) \cdot \zeta^{-j}$: substituting $m = k+1$ in the sum pulls out exactly one factor of $\zeta^{-j}$.

Similarly, changing $f(k)$ to $f(k+1) - f(k)$ changes $g(j)$ to $g(j) \cdot (\zeta^{-j} - 1)$. So, in this discrete model, taking a difference becomes multiplication by $(\zeta^{-j} - 1)$. In the same way, in the continuous setting, taking a derivative becomes multiplication by the frequency variable (up to a constant factor).
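This is easy to check numerically; the following NumPy sketch uses the same sum convention as above ($\zeta = e^{2\pi i/n}$), under which the shift $f(k) \to f(k+1)$ pulls out a factor $\zeta^{-j}$:

```python
import numpy as np

n = 8
zeta = np.exp(2j * np.pi / n)
rng = np.random.default_rng(0)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def dft(f):
    """g(j) = sum_k f(k) zeta^(j*k), matching the convention above."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return (f * zeta ** (j * k)).sum(axis=1)

g = dft(f)
g_shifted = dft(np.roll(f, -1))           # index k now holds f(k+1)
factor = zeta ** (-np.arange(n))
assert np.allclose(g_shifted, factor * g)  # shift -> multiplication by zeta^(-j)
```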

Thank you, David! I think this is something to experiment with. Unfortunately it is not clear to me at the moment over what index the sum runs and what a primitive $n$-th root of 1 might be - so perhaps you could elaborate a little on that and give a more concrete example? That would be great! (I am no mathematician, just an amateur fascinated by math!)
– vonjd, Oct 27 '09 at 18:20

Made a few minor edits -- hope they help
– David Speyer, Oct 28 '09 at 11:48

The Fourier and Laplace transforms are defined by testing the given function f by special functions (characters in the case of Fourier, exponentials in the case of Laplace).

These special functions happen to be eigenfunctions of translation: if one translates a character or an exponential, one gets a scalar multiple of that same character or exponential.

As a consequence, the Fourier or Laplace transforms diagonalise the translation operation (formally, at least).

Whenever two linear operations commute, they are simultaneously diagonalisable (in principle, at least). As such, one expects the Fourier or Laplace transforms to also diagonalise other linear, translation-invariant operations.

Differentiation and integration are linear, translation-invariant operations. This is why they are diagonalised by the Fourier and Laplace transforms.

Diagonalisation is an extremely useful tool; it reduces the non-abelian world of operators and matrices to the abelian world of scalars.
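A small NumPy sketch of the simultaneous-diagonalisation point, in the discrete analogue: the DFT matrix diagonalises the cyclic shift (translation on $Z/n$), and hence every linear operator that commutes with it, such as the forward difference. (The size $n = 8$ is an arbitrary choice.)

```python
import numpy as np

n = 8
I = np.eye(n)

# Cyclic shift (translation on Z/n): (T f)(k) = f(k+1 mod n)
T = np.roll(I, -1, axis=0)
# Forward difference D = T - I is linear and translation-invariant: it commutes with T
D = T - I
assert np.allclose(D @ T, T @ D)

# The DFT matrix diagonalises T, hence also D
F = np.fft.fft(I)                    # rows are the characters of Z/n
Dhat = F @ D @ np.linalg.inv(F)
assert np.allclose(Dhat, np.diag(np.diag(Dhat)))   # Dhat is diagonal
```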

...at least for operators on finite-dimensional spaces. Someone else will have to pipe in on the infinite-dimensional vector spaces.
– alekzander, Oct 27 '09 at 16:56

Thank you Terry! Unfortunately this is too complicated for me :-( I am just an amateur fascinated by math! I think you would need to give me an example for nearly every sentence. I know neither abelian worlds nor what translation-invariant operations are... Sorry, but some clarification would be appreciated!
– vonjd, Oct 27 '09 at 18:33


I thought the point of Math Overflow was well-defined questions that might be of interest to, or easily answerable by, other experts. What you seem to be looking for, vonjd, is a place to learn maths that interests you. Not quite the same thing, IMHO.
– Yemon Choi, Oct 31 '09 at 9:55


I love this answer, and to me it really answers vonjd's very interesting question about a big-picture explanation (not too technical). Well, translation, e.g. $f(x) \mapsto f(x+h)$; differentiation with respect to a variable, e.g. $f(x) \mapsto f'(x)$; multiplication by such a variable, e.g. $f(x) \mapsto x\,f(x)$; etc. are all examples of linear operators. They basically take a function $f(x)$ and spit out another function, linearly. As such it makes sense to speak of eigenfunctions. Monomials $x^k$ are eigenfunctions of the Euler operator $\theta = x\,d/dx$ because $\theta x^k = k\, x^k$.
– Samuel Vidal, May 20 '12 at 19:23

You can think of integral transforms as a change of coordinates. One of the key tricks in physics is to pick a coordinate system that makes your problem simpler. For example, you may set your coordinates so that the action you're interested in happens along an axis.

You could think of a Fourier transform as a rotation in a function space. Differentiation is particularly simple in the rotated coordinate system, just as forces are simpler when the coordinate system lines up with the force.

The Fourier transform really is a rotation of sorts (an "orthogonal transformation"). If you apply the Fourier transform four times, you get back your original function, just as you get sine back when you differentiate it four times.
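A quick numerical check of the "four applications give the identity" claim, using the unitary (norm-preserving) normalisation of the DFT in NumPy; the vector length and random test vector are arbitrary choices:

```python
import numpy as np

n = 16
rng = np.random.default_rng(1)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Apply the unitary DFT four times: we should recover f exactly
g = f.copy()
for _ in range(4):
    g = np.fft.fft(g, norm="ortho")
assert np.allclose(g, f)

# "Orthogonal" in the sense of norm preservation (Parseval): ||f|| == ||Ff||
assert np.isclose(np.linalg.norm(np.fft.fft(f, norm="ortho")), np.linalg.norm(f))
```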

Thank you John! First question (hopefully not too stupid): Does "orthogonal transformation" mean that you will always need 4 transformations to get back to where you started? Doesn't it depend on the dimension you are working in? Second question: How does this transformation relate to differentiation/integration being reduced to multiplication and division?
– vonjd, Oct 27 '09 at 18:28

Orthogonal transformation means distances (norms) are preserved. Think of a rotation in 3-D. The axes start out perpendicular ("orthogonal") and stay perpendicular after the rotation. Orthogonal just means things aren't skewed. In terms of Fourier transforms, $\|f\|_2 = \|\hat{f}\|_2$, i.e. the function and its transform have the same length.
– John D. Cook, Oct 27 '09 at 20:20

The fact that four applications of the Fourier transform gets you back where you started is special. Not all orthogonal transforms have this property. But it does mean that a Fourier transform is analogous to rotating the complex plane by a quarter turn (i.e. multiplying by i).
– John D. Cook, Oct 27 '09 at 20:25

The analogy is quite strong, since the quasi-classical limit is a rotation in the cotangent bundle of R.
– S. Carnahan♦, Oct 28 '09 at 2:32

Simply because the exponential function $\exp(xy)$, as a function of $x$, is an eigenfunction of the derivative operator and also of the integration operator, so we have:

$(d/dx)\, \exp(xy) = y \exp(xy)$

If we think of integration as the inverse of the derivative operation, we have:

$(d/dx)^{-1} \exp(xy) = (1/y) \exp(xy)$

The situation is as if we were working in a "continuous" basis $\exp(xy)$ indexed by the continuous parameter $y$, differentiation and integration being the diagonal matrices $\mathrm{diag}(y)$ and $\mathrm{diag}(1/y)$ respectively. So because the family $\exp(xy)$, with $y = -i\omega$ in the case of the Fourier transform, is a "basis" for functions, the operations of differentiation and integration are reduced to multiplication and division for functions admitting such decompositions.
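A minimal numerical sanity check of these two eigenvalue relations, using finite differences for the derivative and the trapezoid rule for the antiderivative; the value of $y$, the grid size, and the tolerances are arbitrary choices:

```python
import numpy as np

y = 2.5
x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
f = np.exp(x * y)

# Central finite differences approximate (d/dx) exp(xy) = y * exp(xy)
df = np.gradient(f, h)
assert np.allclose(df[1:-1], y * f[1:-1], rtol=1e-4)

# Cumulative trapezoid rule approximates the antiderivative exp(xy)/y,
# pinned down by the constant of integration F(0) = 0
F = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * h)])
expected = (np.exp(x * y) - 1.0) / y
assert np.allclose(F, expected, atol=1e-5)
```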

One unifying way to look at many of the transforms is through the eyes of quantum theory. For example the Fourier transform is a change of basis of the quantum Hilbert space between the coordinate and momentum representations. The unitarity of the transform is an expression of the fact that they preserve quantum probabilities and there is no difference in the physics of the problem if you use either representation.

The theory of geometric quantization is actually the rigorous way to express this unified point of view. There are many transforms, for example the Fourier–Wiener and Berezin transforms, which share this property (conservation of quantum probability).

Consider the exponential function $e^{nx}$.

Differentiating it gives $n e^{nx}$ - which is simply multiplication by $n$.

Integrating it gives $\frac{1}{n} e^{nx}$ - which is simply division by $n$.

This holds true for power series, which are the discrete form of integral transforms (sort of), and originally stems from the power rule (for differentiation) - although you would have a pesky division by the base using "ordinary" power terms. This is prevented by using the exponential function.
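A small NumPy sketch of this coefficient bookkeeping: on "ordinary" power terms the power rule shifts and rescales the coefficient list, while on the coefficients of $e^{nx}$ the factorials cancel and differentiation acts as plain multiplication by $n$. (The example polynomial, $n = 2$, and the truncation order are arbitrary choices.)

```python
import math
import numpy as np

# f(x) = 5 + 3x + 2x^2 + 7x^3, stored as its coefficient list
c = np.array([5.0, 3.0, 2.0, 7.0])
# Power rule: the coefficient of x^k in f' is (k+1) * c_{k+1} -- a shift plus a rescale
dc = c[1:] * np.arange(1, len(c))
assert np.allclose(dc, [3.0, 4.0, 21.0])    # f'(x) = 3 + 4x + 21x^2

# For exp(nx) = sum_k (n^k / k!) x^k the shift and the factorial cancel:
# differentiation acts on the coefficients as multiplication by n
n, K = 2.0, 12
e = np.array([n**k / math.factorial(k) for k in range(K)])
de = e[1:] * np.arange(1, K)
assert np.allclose(de, n * e[:-1])
```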