A differentiation identity

Here’s a cute identity I discovered by accident recently. Observe that

$\displaystyle \frac{d}{dx} (1+x^2)^{0/2} = 0$

$\displaystyle \frac{d^2}{dx^2} (1+x^2)^{1/2} = \frac{1}{(1+x^2)^{3/2}}$

$\displaystyle \frac{d^3}{dx^3} (1+x^2)^{2/2} = 0$

$\displaystyle \frac{d^4}{dx^4} (1+x^2)^{3/2} = \frac{9}{(1+x^2)^{5/2}}$

$\displaystyle \frac{d^5}{dx^5} (1+x^2)^{4/2} = 0$

$\displaystyle \frac{d^6}{dx^6} (1+x^2)^{5/2} = \frac{225}{(1+x^2)^{7/2}}$

and so one can conjecture that one has

$\displaystyle \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = 0$

when $k$ is even, and

$\displaystyle \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = \frac{(1 \times 3 \times \dots \times k)^2}{(1+x^2)^{(k+2)/2}}$

when $k$ is odd. This is obvious in the even case since $(1+x^2)^{k/2}$ is then a polynomial of degree $k$, but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that $\frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2}$ was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.
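One can check the conjecture for small $k$ by exact symbolic differentiation: each derivative of $(1+x^2)^{k/2}$ can be written as $P(x)\,(1+x^2)^{m/2}$ for a polynomial $P$ with rational coefficients. The following Python sketch (the helper names are merely illustrative) carries this out and verifies both the even and odd cases:

```python
from fractions import Fraction

def diff_step(p, m):
    """One differentiation: if F(x) = P(x)*(1+x^2)^(m/2), then
    F'(x) = Q(x)*(1+x^2)^((m-2)/2) with Q = P'*(1+x^2) + m*x*P.
    Polynomials are lists of coefficients, p[i] = coefficient of x^i."""
    dp = [i * p[i] for i in range(1, len(p))]  # P'
    q = [Fraction(0)] * (len(p) + 1)
    for i, c in enumerate(dp):                 # P'*(1+x^2)
        q[i] += c
        q[i + 2] += c
    for i, c in enumerate(p):                  # m*x*P
        q[i + 1] += m * c
    return q, m - 2

def nth_derivative(k, n):
    """n-th derivative of (1+x^2)^(k/2), as a pair (P, m) meaning P(x)*(1+x^2)^(m/2)."""
    p, m = [Fraction(1)], k
    for _ in range(n):
        p, m = diff_step(p, m)
    return p, m

for k in (2, 4, 6):                 # even case: the (k+1)-th derivative vanishes
    p, _ = nth_derivative(k, k + 1)
    assert all(c == 0 for c in p)

for k in (1, 3, 5, 7):              # odd case: constant (1*3*...*k)^2 over (1+x^2)^((k+2)/2)
    p, m = nth_derivative(k, k + 1)
    const = 1
    for j in range(1, k + 1, 2):
        const *= j * j
    assert m == -(k + 2)
    assert p[0] == const and all(c == 0 for c in p[1:])
    print(k, p[0])                  # prints 1 1, 3 9, 5 225, 7 11025
```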

45 comments

Here is an approach:
Let , and let be the following operator: it takes the derivative with respect to $x$, then divides the result by , then takes the derivative with respect to $x$ again, and then again divides the result by . Note that . Let , and . All we need to show is that . Note that . Now I think it should not be difficult to iterate this process and get the desired result.

Let , , and , for odd . Using and (here the prime denotes differentiation w.r.t. $x$) we find . It is easy to show the identity for odd : . Using this identity, then follows for odd . Assuming that holds for all , odd, we can write , where we used (*). The second term is computed from (**), and we find that the terms cancel, so that the identity is valid for (odd) as well. There is some sort of duality that appears here, which could be exploited for a short proof.

In the neighborhood of infinity, $(1+x^2)^{k/2}$ can be written as a series of decreasing powers of $x$, starting from $x^k$. If we differentiate $k+1$ times, the nonnegative powers disappear, and the negative powers give powers of exponent at most $-(k+2)$. Hence, the $(k+1)$-th derivative is $O(x^{-(k+2)})$ at infinity. On the other hand, an immediate induction implies that it is a polynomial times $(1+x^2)^{-(k+2)/2}$: the domination implies that the polynomial is constant. This constant can be computed by taking $(k+1)!$ times the coefficient of $x^{k+1}$ in the expansion of $(1+x^2)^{k/2}$ at zero.
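For instance (to spell out the final step of this argument, which is not in the original comment), taking $k = 3$ the constant is

```latex
C_3 \;=\; 4!\,[x^4]\,(1+x^2)^{3/2}
    \;=\; 4!\binom{3/2}{2}
    \;=\; 24\cdot\frac{\frac{3}{2}\cdot\frac{1}{2}}{2!}
    \;=\; 24\cdot\frac{3}{8}
    \;=\; 9 \;=\; (1\times 3)^2,
```

in agreement with the conjectured formula.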

[This is essentially the proof I eventually came up with also :-) – T.]

Thanks Anonymous, and sorry for all these posts; I was very tired, because the idea for the demonstration came to me during the night! So, the most general contour on which the integral can be calculated is any closed curve contained in the strip:

The proof of the identity that I originally had is essentially that found in this previous anonymous comment (the only difference being that I computed the constant through Taylor expansion around infinity, rather than around zero), but I just worked out a Fourier-analytic proof which has the advantage of extending to non-integer values of $k$, and which is somewhat reminiscent of the proof of the functional equation for the zeta function via modularity of the theta function (confirming the suspicion of Ivan Z. above). For any $s > 0$, we rescale the Gamma function identity

$\displaystyle \Gamma(s) = \int_0^\infty e^{-t} t^s\ \frac{dt}{t}$

to express $\frac{1}{(1+x^2)^s}$ as the average of Gaussians:

$\displaystyle \frac{1}{(1+x^2)^s} = \frac{1}{\Gamma(s)} \int_0^\infty e^{-t(1+x^2)} t^s\ \frac{dt}{t}.$
This formula is valid for any $x \in {\bf R}$. Taking (distributional) Fourier transforms in $x$ (with the convention $\hat f(\xi) = \int_{\bf R} f(x) e^{-2\pi i x \xi}\, dx$, so that the Fourier transform of $e^{-tx^2}$ is $\sqrt{\pi/t}\, e^{-\pi^2 \xi^2/t}$), we conclude that

$\displaystyle \widehat{\frac{1}{(1+x^2)^s}}(\xi) = \frac{\sqrt{\pi}}{\Gamma(s)} \int_0^\infty e^{-t - \pi^2 \xi^2/t}\, t^{s-1/2}\ \frac{dt}{t}. \qquad (*)$
Making the change of variables $t \mapsto \pi^2 \xi^2 / t$ we conclude

$\displaystyle \widehat{\frac{1}{(1+x^2)^s}}(\xi) = \frac{\sqrt{\pi}}{\Gamma(s)}\, (\pi |\xi|)^{2s-1} \int_0^\infty e^{-t - \pi^2 \xi^2/t}\, t^{(1-s)-1/2}\ \frac{dt}{t}.$
If we compare this with (*) with $s$ replaced by $1-s$ (assuming temporarily that $0 < s < 1$), we conclude that

$\displaystyle \widehat{\frac{1}{(1+x^2)^s}}(\xi) = \frac{\Gamma(1-s)}{\Gamma(s)}\, (\pi|\xi|)^{2s-1}\, \widehat{\frac{1}{(1+x^2)^{1-s}}}(\xi),$
which on inverting the Fourier transform leads to the distributional identity

$\displaystyle \left(\frac{|D|}{2}\right)^{2s-1} \frac{1}{(1+x^2)^{1-s}} = \frac{\Gamma(s)}{\Gamma(1-s)}\, \frac{1}{(1+x^2)^s},$

where $|D|$ denotes the Fourier multiplier with symbol $2\pi|\xi|$.
This formula was only derived for $0 < s < 1$, but using analytic (or meromorphic) continuation in the space of tempered distributions it is valid for all $s$. Specialising to the case $s = \frac{k+2}{2}$ for odd $k$ (so that $1-s = -\frac{k}{2}$ and $\left(\frac{|D|}{2}\right)^{k+1} = \frac{(-1)^{(k+1)/2}}{2^{k+1}} \frac{d^{k+1}}{dx^{k+1}}$), we obtain

$\displaystyle \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = (-1)^{(k+1)/2}\, 2^{k+1}\, \frac{\Gamma(\frac{k+2}{2})}{\Gamma(-\frac{k}{2})}\, \frac{1}{(1+x^2)^{(k+2)/2}},$

and the constant here simplifies to $(1 \times 3 \times \dots \times k)^2$, recovering the identity.
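As a numerical sanity check on the constant in the last display (a quick Python sketch, under the Fourier convention used above), one can confirm that it agrees with $(1 \times 3 \times \dots \times k)^2$ for small odd $k$:

```python
import math

# Check (-1)^((k+1)/2) * 2^(k+1) * Gamma((k+2)/2) / Gamma(-k/2) == (1*3*...*k)^2
# for small odd k; math.gamma handles negative non-integer arguments.
for k in (1, 3, 5, 7, 9):
    c = (-1) ** ((k + 1) // 2) * 2 ** (k + 1) * math.gamma((k + 2) / 2) / math.gamma(-k / 2)
    target = math.prod(range(1, k + 1, 2)) ** 2
    assert abs(c - target) < 1e-8 * target, (k, c, target)
```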
Paul Nelson (private communication) has informed me that the fact that $\widehat{\frac{1}{(1+x^2)^s}}$ is a scalar multiple of $(\pi|\xi|)^{2s-1}\, \widehat{\frac{1}{(1+x^2)^{1-s}}}$ can be interpreted as a consequence of the isomorphism of the principal series representation $\pi_s$ of $G = SL_2({\bf R})$ with $\pi_{1-s}$, where one can express $\pi_s$ as the space of (suitably nice) functions $f: {\bf R}^2 \to {\bf C}$ obeying the homogeneity $f(\lambda v) = |\lambda|^{-2s} f(v)$ for $\lambda \neq 0$, with the usual action of $G$. One can view this principal series representation as the induced representation of a character $\chi_s$ on the Borel subgroup $B$ of upper triangular matrices (I may have some signs incorrect here).

The homogeneous version of $\frac{1}{(1+x^2)^s}$ can be viewed as generating the spherical functions in this principal series representation, that is to say the functions invariant under the action of the maximal compact subgroup $K = SO(2)$. Because the two representations $\pi_s, \pi_{1-s}$ are isomorphic, their Whittaker models (that is to say, $G$-maps to the representation induced from a character of the nilpotent group $N$ of upper triangular unipotent matrices) must agree up to scalar multiples, and in particular the two spherical functions mentioned above must become scalar multiples of each other after applying their respective Whittaker maps. On computing this map explicitly, this ends up showing that the functions $\widehat{\frac{1}{(1+x^2)^s}}$ and $(\pi|\xi|)^{2s-1}\, \widehat{\frac{1}{(1+x^2)^{1-s}}}$ are scalar multiples of each other, giving the claim after inverting the Fourier transform.

So it seems the “high level” explanation of the identity stems from the isomorphism between the principal series representations $\pi_s$ and $\pi_{1-s}$, but I don’t know of a simple way to establish this isomorphism (it seems to be a medium-length computation involving Frobenius reciprocity and Haar measure integrations). In any event, this seems to indicate that the similarity with the Riemann zeta functional equation is not entirely coincidental.

[Disclaimer: my grasp of the representation theory of Lie groups is a little shaky, and any inaccuracies in the above are due to myself and not to Nelson. -T.]

Just to clarify my point a bit, the fifth display in your Fourier-analytic proof precisely states the symmetry $K_{s-\frac{1}{2}} = K_{\frac{1}{2}-s}$ in the light of 8.432.5 in Gradshteyn-Ryzhik. This symmetry has been known classically for the $K$-Bessel function, while later it has been realized to be a reflection of (or an equivalent form of) the isomorphism of the principal series representations. So in a sense you re-discovered all these symmetries.

which means that the left side (with k replaced by 2*k-1), let’s call it A(k),
satisfies the first-order recurrence

-(2*k+1)^2*A(k)+(1+x^2)*A(k+1)=0,

which, together with the trivial value for A(0), implies the original statement.
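For what it’s worth, the recurrence can also be confirmed independently of the Maple session by exact symbolic differentiation. The Python sketch below (the helpers are my own; I interpret the comment as taking $A(k) = \frac{d^{2k}}{dx^{2k}} (1+x^2)^{(2k-1)/2}$) writes each $A(k)$ as $P_k(x)\,(1+x^2)^{m_k/2}$ and checks the recurrence for small $k$:

```python
from fractions import Fraction

def diff_step(p, m):
    # d/dx [P(x)*(1+x^2)^(m/2)] = Q(x)*(1+x^2)^((m-2)/2), Q = P'*(1+x^2) + m*x*P
    dp = [i * p[i] for i in range(1, len(p))]
    q = [Fraction(0)] * (len(p) + 1)
    for i, c in enumerate(dp):
        q[i] += c
        q[i + 2] += c
    for i, c in enumerate(p):
        q[i + 1] += m * c
    return q, m - 2

def A(k):
    # A(k) = d^(2k)/dx^(2k) (1+x^2)^((2k-1)/2), returned as (P, m): P(x)*(1+x^2)^(m/2)
    p, m = [Fraction(1)], 2 * k - 1
    for _ in range(2 * k):
        p, m = diff_step(p, m)
    return p, m

def strip(p):
    # drop trailing zero coefficients so polynomials can be compared directly
    while p and p[-1] == 0:
        p = p[:-1]
    return p

for k in range(5):
    pk, mk = A(k)
    pk1, mk1 = A(k + 1)
    # multiplying A(k+1) by (1+x^2) raises its exponent by 2, matching A(k)'s,
    assert mk1 + 2 == mk
    # so -(2k+1)^2*A(k) + (1+x^2)*A(k+1) = 0 reduces to (2k+1)^2*P_k == P_{k+1}:
    assert strip([(2 * k + 1) ** 2 * c for c in pk]) == strip(pk1)
```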
For a verbose version, type:
AZpapd((2*k)!*(1+z^2)^(k-1/2)/(z-x)^(2*k+1) ,z,k,K);

Note that the proof is fully rigorous. Procedure AZd is an implementation
of the Almkvist-Zeilberger algorithm, part of WZ theory.

k does not have to be an integer, and the recurrence is valid for all k,
hence one can get Gamma functions.

Of course, AZd can give recurrences for ANY expression of the form

D_x^(k) ( P(x)^k*Q(x)),

for a very wide class of P(x) and Q(x), but, alas, the recurrence is
usually not first-order. All the Rodrigues formulas for the classical
orthogonal polynomials yield the respective three-term recurrences.

A reformulation of the identity (in the more general Gamma function form) is as a two-dimensional identity

$\displaystyle \frac{\partial^{k+1}}{\partial x^{k+1}} r^k = (-1)^{(k+1)/2}\, 2^{k+1}\, \frac{\Gamma(\frac{k+2}{2})}{\Gamma(-\frac{k}{2})}\, \frac{y^{k+1}}{r^{k+2}},$

where $r := (x^2+y^2)^{1/2}$. Amusingly, this identity is essentially its own Fourier transform!

Another amusing consequence of the identity is that the function $(1+x^2)^{k/2}$, for odd $k$, is “very convex” on the positive real axis in the sense that the first $k+1$ derivatives are all non-negative there. (This follows from repeated integration of the differentiation identity, together with a Taylor expansion at the origin.) I don’t know of any other way to prove this “convexity” without going through the differentiation identity.
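This “very convex” assertion is easy to spot-check numerically. The sketch below (assuming, as the context suggests, that the function in question is $(1+x^2)^{k/2}$ for odd $k$) writes the $n$-th derivative as $P(x)\,(1+x^2)^{m/2}$ and verifies that $P$ is non-negative at sampled points $x > 0$, which suffices since the $(1+x^2)^{m/2}$ factor is positive:

```python
from fractions import Fraction

def diff_step(p, m):
    # d/dx [P(x)*(1+x^2)^(m/2)] = Q(x)*(1+x^2)^((m-2)/2), Q = P'*(1+x^2) + m*x*P
    dp = [i * p[i] for i in range(1, len(p))]
    q = [Fraction(0)] * (len(p) + 1)
    for i, c in enumerate(dp):
        q[i] += c
        q[i + 2] += c
    for i, c in enumerate(p):
        q[i + 1] += m * c
    return q, m - 2

k = 5
p, m = [Fraction(1)], k
for n in range(1, k + 2):
    p, m = diff_step(p, m)
    # the n-th derivative is >= 0 wherever its polynomial factor P(x) is >= 0
    for x in (Fraction(1, 10), Fraction(1), Fraction(10)):
        assert sum(c * x ** i for i, c in enumerate(p)) >= 0, (n, x)
```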

Probably a shot in the dark. Notice that the equation for $k = n$ has, on the right side, a function of the left side of the equation for $k = n+2$. Inductively, test that $k = 1$ works. Suppose $k = n$ works; then take reciprocals and multiply by the double factorial squared. Then take the $(k+2)$nd derivative of the resulting equation, looking to simplify it to the equation for $k = n + 2$.

For commenters

To enter LaTeX in comments, use $latex <Your LaTeX code>$ (without the < and > signs, of course; in fact, these signs should be avoided as they can cause formatting errors). See the about page for details and for the rest of the commenting policy.