I have recently come back to try another whack at tetration after such a long time away from the subject, this time going for the bold gold -- an explicit, analytic formula for the tetrational to base \(e\), that is, the Kneser tetrational.

I dusted off the Hermite polynomial "gadget" that was mentioned here:

with a long and wild and crazy idea to ... drumroll ... attempt to actually solve the continuum sum analytically and obtain an explicit series formula for the tetrational function. Unfortunately, the results from this latest endeavor look to be disappointing, but they also raise questions about the nature of the continuum sum formula itself, and in particular about possible (non-)uniqueness of the solution.

To recap: first off, a long time ago, for those who have not heard it, a poster here who went by "Ansus" suggested that tetration could be expressed using the following weird formula:

.

This is called "weird" because the right-hand side has a sum whose upper bound may not be an integer. The key to using the formula, then, is to suitably generalize the summation operator so as to admit a non-integer upper bound, in a suitably "natural" way. The idea here is that it is easier to generalize summation than to generalize iterated functions, as summation is a neater, more well-behaved operation.

Now, this author fell in love with this formula at the drop of a hat. He had experimented with this kind of "generalization of sum" -- which he now calls "continuum sum" -- before, and got results, and so this could be a very interesting method.

As a way to illustrate the possibilities, consider the well-known formula

\(\displaystyle\sum_{k=1}^{n} k = \frac{n(n+1)}{2}\).

Clearly, there is nothing stopping one from plugging a non-integer \(n\) into the right-hand side. This is a simple generalization. For example, if we take \(n = \frac{1}{2}\), we get \(\frac{3}{8}\). One can even plug in complex numbers, e.g. \(n = i\) gives a "sum" of \(\frac{i-1}{2}\).
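As a quick numerical illustration (a minimal Python sketch; the function name is mine), the closed form happily accepts fractional and even complex upper bounds:

```python
# Continuum sum of the identity function: the closed form n(n+1)/2
# extends verbatim to non-integer and complex upper bounds.

def csum_id(n):
    """Continuum "sum" of k for k = 1 .. n, via the closed form."""
    return n * (n + 1) / 2

# Agrees with the ordinary sum at integer arguments:
assert csum_id(10) == sum(range(1, 11))

print(csum_id(0.5))  # 0.375
print(csum_id(1j))   # (-0.5+0.5j)
```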

One can also do the same to generalize sums of powers \(\sum_{k=1}^{n} k^2\), \(\sum_{k=1}^{n} k^3\), and so forth -- one then obtains a classic result known as Faulhaber's formula. One might then be led to try and use this to find continuum sums of highly general functions by applying it term by term to a power series expansion \(f(x) = \sum_{n=0}^{\infty} a_n x^n\), like

\(\displaystyle\sum_{k=0}^{x-1} f(k) = \sum_{n=0}^{\infty} a_n \sum_{k=0}^{x-1} k^n\).

The trouble is, however, that if one does this, for many analytic functions the resulting continuum sums do not converge. In particular, if the analytic function has any singularities in the complex plane, or is entire but grows suitably fast (even much more slowly than tetration), there will be no convergence.
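For concreteness, here is a small Python sketch (function names are mine) of the Faulhaber continuum sum of a single power \(k^p\), built from the Bernoulli numbers; at integer upper bounds it reproduces the ordinary sums, and it evaluates at any rational upper bound:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_0 .. B_m (with the B_1 = -1/2 convention)."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -Fraction(1, n + 1) * sum(comb(n + 1, j) * B[j] for j in range(n))
    return B

def csum_power(p, x):
    """Faulhaber continuum sum of k^p for k = 0 .. x-1; x need not be an integer."""
    B = bernoulli(p)
    x = Fraction(x)
    total = sum(comb(p + 1, j) * B[j] * x ** (p + 1 - j) for j in range(p + 1))
    return total / (p + 1)

# Matches the ordinary sums at integer upper bounds...
assert csum_power(2, 5) == sum(k**2 for k in range(5))   # 30
assert csum_power(3, 7) == sum(k**3 for k in range(7))
# ...and happily evaluates at a fractional one:
print(csum_power(1, Fraction(1, 2)))   # -1/8
```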

This led the author to play around with various "divergent summation" methods, also to not much avail, since many of them do not permit nice, combinatorial formulae for the series, at best allowing only tantalizing numerical results.

But recently, this method of Hermite polynomials was discovered. Instead of representing a function as a power series, it can be represented as a Hermite polynomial series:

if it is suitably bounded on the real axis. Now tetration is unbounded and singular there, so that won't work. But if we take as a hypothesis that the "right" tetrational is the Kneser tetrational, then it stands to reason that it is bounded along the imaginary axis, and the formula we would want to use would be

.

Taking the results from the given link -- which I will not reproduce here for brevity -- we can define a continuum sum for this as

where \(B_n\) are the Bernoulli numbers.

So now we return to the continuum sum tetration formula:

The exponential is a big problem -- it is a nasty operation on a power series, turning the coefficients into Bell polynomials of the original coefficients, and I have no idea how to work it out for a Hermite series. So to avoid this difficulty we take logs:

.

.

Then differentiate once to clear the log:

Now we have to work out what each side is in terms of unknown coefficients for which

.

First, the continuum sum:

.

Now we differentiate once. This is relatively simple -- the formula for the derivative of a Hermite polynomial is \(H_n'(x) = 2n H_{n-1}(x)\), so by the chain rule the derivative of the version with the rescaled argument just picks up an extra constant factor, and we thus get
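That derivative identity is easy to sanity-check. A small Python sketch (helper names are mine), building the physicists' polynomials from the standard recurrence \(H_{n+1}(x) = 2xH_n(x) - 2nH_{n-1}(x)\):

```python
# Verify H_n'(x) = 2n H_{n-1}(x) for the physicists' Hermite polynomials,
# working with coefficient lists (lowest degree first).

def hermite(n):
    """Coefficients of H_n via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    prev, cur = [1], [0, 2]                  # H_0 and H_1
    if n == 0:
        return prev
    for k in range(1, n):
        nxt = [0] + [2 * c for c in cur]     # multiply by 2x
        for i, c in enumerate(prev):
            nxt[i] -= 2 * k * c              # subtract 2k H_{k-1}
        prev, cur = cur, nxt
    return cur

def deriv(p):
    """Coefficients of p'(x)."""
    return [i * c for i, c in enumerate(p)][1:]

for n in range(1, 9):
    assert deriv(hermite(n)) == [2 * n * c for c in hermite(n - 1)]
print(hermite(3))  # [0, -12, 0, 8], i.e. H_3(x) = 8x^3 - 12x
```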

We now form the derivative of the function itself:

.

For what it's worth, we might as well go a step ahead and take the second derivative too while we're at it, crackin' away:

.

Now we need to do the Hermite series multiplication. This gets rather damned nasty and UGLY, so to make this easy let us first define

and

so that

and

.

Now we multiply those two azz-whuppin series together, to get

To simplify this further, however, we find we need a formula for the product of two Hermite polynomials. This necessitated some searching, which turned up this paper:

EEGZ! That was a lot of math. Good thing we've got TeX and cut/paste to write it with!
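I have not reproduced the paper's result above, but the classical linearization identity for the physicists' Hermite polynomials, \(H_m(x)H_n(x) = \sum_{k=0}^{\min(m,n)} \binom{m}{k}\binom{n}{k} 2^k k!\, H_{m+n-2k}(x)\), is presumably the formula needed; here is a Python sketch (helper names mine) checking it by brute-force coefficient arithmetic:

```python
from math import comb, factorial

def hermite(n):
    """Physicists' H_n as a coefficient list (lowest degree first)."""
    prev, cur = [1], [0, 2]
    if n == 0:
        return prev
    for k in range(1, n):
        nxt = [0] + [2 * c for c in cur]
        for i, c in enumerate(prev):
            nxt[i] -= 2 * k * c
        prev, cur = cur, nxt
    return cur

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polyadd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

# Check H_m H_n = sum_k C(m,k) C(n,k) 2^k k! H_{m+n-2k} for small m, n.
for m in range(6):
    for n in range(6):
        lhs = polymul(hermite(m), hermite(n))
        rhs = [0] * (m + n + 1)
        for k in range(min(m, n) + 1):
            coef = comb(m, k) * comb(n, k) * 2**k * factorial(k)
            rhs = polyadd(rhs, [coef * c for c in hermite(m + n - 2 * k)])
        assert lhs == rhs
print("linearization identity verified for m, n < 6")
```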

We now set the second-derivative expression equal to this:

Then by manipulation, we get

We now equate coefficients to get

and plugging in the expressions for \(C_k\) and \(D_n\) gives

... and there, our hope dies, pitifully.

This equation system is a non-linear infinite system of equations (note the quadratic-order terms) for the respective coefficients ... which means that a solution, if any, is likely to be highly non-unique, and there is no easy way to extract a closed form or even a recurrence relation for the coefficients. Now it may be that there is a unique convergent solution, but I would have no idea how to determine whether that is the case (theoretically, there should be at least one, the Kneser tetrational, but is there more than one?). It might also be possible to impose some regularity condition on the coefficients, but again, how we would determine whether that would even work at all ... I have no idea.

One possible thought: I am wondering if the use of the derivative somehow "loses information" about the solution -- for an ordinary differential equation that lost information would just be another constant of integration, but with this continuum-sum thing in there...

Does anyone have any thoughts about all this mess that we just cooked up here?

(Although maybe I made an algebra mistake, which, given all that stuff up there, is possible all right, but I don't think it would change the outcome so drastically as to make this solvable. You have the product for sure, and that screws it all up.)

I only have two ideas that might make all this a bit easier. One is a simple identity I never saw you use, which looked like it could help your equations out, especially when you take the logarithm and the second derivative.

This follows very basically because the forward difference operator \(\Delta\) commutes with the derivative, and then quite obviously its inverse, the indefinite sum, must equally commute with the derivative. I use this idea frequently when I'm tasked with solving indefinite sums. I work a lot with this operator, but mostly I only deal with functions of a nice exponential bound in the right half-plane (something Kneser's tetration definitely is not).
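The commutation is easy to check symbolically on polynomials; a minimal Python sketch (helper names mine) with the forward difference \(\Delta p(x) = p(x+1) - p(x)\):

```python
from math import comb

def deriv(p):
    """Coefficients of p'(x), lowest degree first."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def shift(p):
    """Coefficients of p(x+1), via the binomial theorem."""
    out = [0] * len(p)
    for n, c in enumerate(p):
        for k in range(n + 1):
            out[k] += c * comb(n, k)
    return out

def delta(p):
    """Forward difference: p(x+1) - p(x)."""
    return [a - b for a, b in zip(shift(p), p)]

p = [5, -3, 0, 7, 2]                  # an arbitrary test polynomial
lhs, rhs = deriv(delta(p)), delta(deriv(p))
width = max(len(lhs), len(rhs))
pad = lambda q: q + [0] * (width - len(q))
assert pad(lhs) == pad(rhs)           # (Δp)' = Δ(p')
print("derivative commutes with the forward difference")
```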

The second point I have: a few years ago I worked out a way of representing continuum sums in a vertical strip of the complex plane. I normalized it so that it is written more cleanly. Essentially, if is holomorphic for and for some then

where

and

Now can be Kouznetsov's iteration method, because it tends to a constant as the imaginary argument grows. I'm not sure how Kneser's solution behaves in the complex plane, but maybe you can represent its continuum sum this way.

This leads me to wonder whether, instead of continuum-summing the series term by term, we should try to solve a functional equation that \(f\) satisfies, using the fact that

I can't continue to discuss what I mean at the moment, but I have always had a rough idea of how the equations might work out using this. I'll work more out on paper so that what I say makes more sense; I'm preoccupied right now.

I'm floored that you got so far using Hermite polynomials, though; I would've shied away the moment the power series attempt failed.

Consider the very simple case \(f(x) = x\). The derivative is \(f'(x) = 1\). For this simple polynomial we can use Faulhaber's formula, and that gives us

\(\displaystyle\sum_{k=0}^{x-1} f'(k) = \sum_{k=0}^{x-1} 1 = x\).

Integrating that, which should give \(\sum_{k=0}^{x-1} k\), gives \(\frac{x^2}{2} + C\), yet \(\sum_{k=0}^{x-1} k = \frac{x(x-1)}{2} = \frac{x^2}{2} - \frac{x}{2}\), and these differ by a non-constant amount. Likewise, differentiating the latter expression for the sum gives \(x - \frac{1}{2}\).

But looking at this, the derivative of the sum has just a constant shift (\(x - \frac{1}{2}\) versus \(x\)), so perhaps what we should really say is

\(\displaystyle\frac{d}{dx} \sum_{k=0}^{x-1} f(k) = \sum_{k=0}^{x-1} f'(k) + C\)

up to a constant, which, when integrated, yields a linear term -- where this is just the indefinite continuum sum, not a definite one.
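The whole computation above fits in a few lines of exact arithmetic (a Python sketch; names mine), showing the discrepancy is precisely the constant \(-\tfrac{1}{2}\):

```python
from fractions import Fraction as F

def deriv(p):
    """Coefficients of p'(x), lowest degree first."""
    return [i * c for i, c in enumerate(p)][1:]

# f(x) = x; its continuum sum is S(x) = x(x-1)/2 = -x/2 + x^2/2
S = [F(0), F(-1, 2), F(1, 2)]
dS = deriv(S)                     # x - 1/2
csum_df = [F(0), F(1)]            # continuum sum of f'(x) = 1 is just x

shift = [a - b for a, b in zip(dS, csum_df)]
print(shift)  # [Fraction(-1, 2), Fraction(0, 1)] -> a constant shift only
```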

Yes, yes, yes -- I should've been more explicit; I was being too brief. I always tend to just write it and drop the \(C\) because the solution still works. Using your notation made me forget about that little \(C\), and, nonetheless, as you can see, it still satisfies the difference equation, which was the point I was making. It does give a much simpler form of your equation, though,

granted we know what \(C\) is.

Plus, when I work with it I tend to work with the exponential indefinite sum, if

then the constant is zero.

This is just like how if

then

where there is no constant error. Of course, this definite sum does not really work in this case, though; it's really rather restrictive.