Concerning the iteration of the function f(x) = exp(x) - 1, I came across the remark of Erdős & Jabotinsky that, due to I. N. Baker, a fractional iterate f°h(x) would not exist. This puzzled me, since I had just computed lots of nice values for those iterates... In the article referred to, Baker actually only proves that none but the integer iterates have a power series with positive radius of convergence.

Well - phew - with that I can live better than with the Erdős/Jabotinsky statement.

Yes, there are other posts about this, because it is easy to slip on this point. I think Henryk said it best in this post in particular. I think I have seen something similar in tetration itself, as opposed to iter-dec-exp, but I'm not sure how to phrase it.

I first saw it when I was following Galidakis' research with his Puiseux series expansions about log(b). Galidakis uses series about the base b, whereas natural iteration uses series expansions about the iterator t in . With the linear approximation to tetration, as well as with higher approximations, the table that Galidakis gives for the Puiseux series expansions of shows a discontinuity when the order of differentiation equals t. In other words, the function:

is continuous and differentiable for all . This can only be seen if you already have a continuous extension of tetration to non-integer heights; otherwise it's just a bunch of points. This has been bothering me for quite some time now, but I think it may be related to the iterability of . Since tetration and are topologically conjugate, this may help explain it as well...
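As a small illustration of the kind of non-smoothness involved (my own sketch, not Galidakis' code; the function name `tet_linear` is made up): with the linear critical piece tet(t) = 1 + t on (-1, 0], the one-sided derivatives of the linear approximation at an integer height agree only for base e.

```python
import math

def tet_linear(b, t):
    """Linear approximation to tetration base b: 1 + t on (-1, 0],
    extended upward by tet(t) = b ** tet(t - 1)."""
    if t <= 0:
        return 1.0 + t
    return b ** tet_linear(b, t - 1.0)

def one_sided_derivatives(b, t0=0.0, h=1e-6):
    """Numerical left and right derivatives of tet_linear at t0."""
    left = (tet_linear(b, t0) - tet_linear(b, t0 - h)) / h
    right = (tet_linear(b, t0 + h) - tet_linear(b, t0)) / h
    return left, right

# At t = 0 the left derivative is 1 (from the linear piece) and the right
# derivative is log(b); they match only when b = e.
print(one_sided_derivatives(2.0))
print(one_sided_derivatives(math.e))
```

So for base e the linear approximation happens to be C¹ at the integer heights, while for any other base the first derivative already jumps there.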

andydude Wrote: Yes, there are other posts about this, because it is common to slip on this aspect. [...]

Indeed - I missed that thread (I was away for some days without internet access; also, the discussion took place in August, which was exactly the time when I was completely absorbed by my eigensystem analyses). But now I remember that I had read that thread (and even wrote a reply to Jay...).

Great resource, our forum!

I see two approaches to constructing power series for the iterates f°h(x) of f(x) = b^x - 1 (setting u = log(b)):

1) if u = 1: determine the coefficients as polynomials in h (by recursive insertion of the basic power series of f(x) for x, expanding and collecting terms, then interpolation - or via the matrix logarithm), so that we have
f°h(x) = g(x,h)

2) if u ≠ 1: determine the coefficients as polynomials in u^h via eigensystem decomposition
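To make approach 1) concrete, here is a minimal self-contained sketch for u = 1, i.e. f(x) = e^x - 1 (my own illustration with made-up names `compose` and `coeff`, not the matrix-logarithm route). It uses the known fact that the coefficient of x^k in f°h is a polynomial in h of degree at most k-1, so Lagrange interpolation through the integer iterates h = 0, ..., k-1 recovers it exactly:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order of all power series

def compose(outer, inner):
    """[x^k] outer(inner(x)) for k <= N; both series have zero constant term."""
    res = [Fraction(0)] * (N + 1)
    power = [Fraction(1)] + [Fraction(0)] * N      # inner^0
    for j in range(1, N + 1):
        nxt = [Fraction(0)] * (N + 1)              # power <- power * inner
        for i in range(N + 1):
            if power[i]:
                for k in range(1, N + 1 - i):
                    nxt[i + k] += power[i] * inner[k]
        power = nxt
        for i in range(N + 1):
            res[i] += outer[j] * power[i]
    return res

# f(x) = e^x - 1 has coefficients 1/k!
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]

# integer iterates f°0, f°1, ..., f°(N-1) by repeated composition
iterates = [[Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)]
for _ in range(N - 1):
    iterates.append(compose(f, iterates[-1]))

def coeff(h, k):
    """[x^k] f°h: exact Lagrange interpolation through h = 0, ..., k-1."""
    total = Fraction(0)
    for i in range(k):
        term = iterates[i][k]
        for j in range(k):
            if j != i:
                term *= (h - j) / (i - j)
        total += term
    return total

# the formal half-iterate, e.g. coefficient of x^2 is 1/4, of x^3 is 1/48
half = [Fraction(0)] + [coeff(Fraction(1, 2), k) for k in range(1, N + 1)]
```

Done symbolically in h instead of at h = 1/2, the same interpolation yields the polynomial coefficients in h directly; and composing `half` with itself reproduces f exactly, coefficient by coefficient, up to order N.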

The question was open what the rate of growth of the coefficients actually *is*; I possibly have an answer now from the symbolic eigensystem analysis.

If the power series in x is expressed according to 1), we have polynomials in h, like (a_k·h^k + b_k·h^(k-1) + ... + j_k)·x^k, dominated by some a_k·h^k, where k is the power of x; the (numerical) coefficients a_k also increase - again at an unknown rate.

If it is expressed according to 2), I get that the coefficients at the powers of x grow like ~ A_k·u^(k·h), where again k is the power of x and u^(k·h) is the highest power of u occurring in the coefficient; the A_k also grow, but far more slowly than the a_k in method 1).

Since the eigensystem analysis is impossible if b = exp(1) and u = log(b) = 1 - the vanishing denominators produce singularities - method 2) can there only be useful in the sense of an approximation.
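For approach 2), here is a numeric sketch for base b = 2, so u = log 2 ≠ 1 (my own illustration rather than the matrix eigensystem code itself; the names `sigma`, `tau`, `frac_iterate` are mine). It solves the Schröder equation σ(f(x)) = u·σ(x) term by term, which is the series form of the triangular eigensystem with eigenvalues u^k; the divisions by (u - u^k) below are exactly the denominators that vanish at u = 1:

```python
import math

N = 8
u = math.log(2.0)                  # base b = 2, so u = log(b) != 1

def compose(outer, inner):
    """[x^k] outer(inner(x)) for k <= N; zero constant terms assumed."""
    res = [0.0] * (N + 1)
    power = [1.0] + [0.0] * N
    for j in range(1, N + 1):
        nxt = [0.0] * (N + 1)
        for i in range(N + 1):
            if power[i]:
                for k in range(1, N + 1 - i):
                    nxt[i + k] += power[i] * inner[k]
        power = nxt
        for i in range(N + 1):
            res[i] += outer[j] * power[i]
    return res

# f(x) = 2^x - 1 = exp(u*x) - 1 : coefficients u^k / k!
f = [0.0] + [u**k / math.factorial(k) for k in range(1, N + 1)]

# Schroeder function sigma with sigma(f(x)) = u*sigma(x), solved term by
# term: while s_k is still 0, [x^k] sigma(f(x)) collects only s_1..s_{k-1},
# and the equation forces s_k = C_k / (u - u^k) -- singular exactly at u = 1.
sigma = [0.0, 1.0] + [0.0] * (N - 1)
for k in range(2, N + 1):
    Ck = compose(sigma, f)[k]
    sigma[k] = Ck / (u - u**k)

# inverse series tau with tau(sigma(x)) = x, also term by term
tau = [0.0, 1.0] + [0.0] * (N - 1)
for k in range(2, N + 1):
    tau[k] = -compose(tau, sigma)[k]   # force [x^k] tau(sigma(x)) = 0

def frac_iterate(h):
    """f°h(x) = tau(u**h * sigma(x)); coefficients are polynomials in u**h."""
    lam = u ** h
    return compose(tau, [lam * s for s in sigma])
```

Note that h enters only through λ = u^h, which matches the statement that the coefficients of the iterate are polynomials in u^h; and composing `frac_iterate(0.5)` with itself reproduces the series of f to float precision.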

Anyway - I don't have much more at the moment; I'll try to describe the terms of method 2) soon.

I remember posting graphs of the root test of in this thread, but now I wonder whether it is any different for other bases. For example, I wonder whether the root test is any different for , which should be representative of all bases between 1 and e.
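As a quick numeric companion (a self-contained sketch of my own, base e, using interpolation across the integer iterates; names hypothetical): the root test |c_k|^(1/k) of the half-iterate coefficients can be tabulated exactly. Baker's result implies this sequence is unbounded, although at such low orders the growth is not yet visible - the early coefficients are small, and c_4 even turns out to be 0.

```python
from fractions import Fraction
from math import factorial

N = 12  # a few more terms, still cheap with exact rationals

def compose(outer, inner):
    """[x^k] outer(inner(x)) for k <= N; zero constant terms assumed."""
    res = [Fraction(0)] * (N + 1)
    power = [Fraction(1)] + [Fraction(0)] * N
    for j in range(1, N + 1):
        nxt = [Fraction(0)] * (N + 1)
        for i in range(N + 1):
            if power[i]:
                for k in range(1, N + 1 - i):
                    nxt[i + k] += power[i] * inner[k]
        power = nxt
        for i in range(N + 1):
            res[i] += outer[j] * power[i]
    return res

# f(x) = e^x - 1 and its integer iterates f°0 .. f°(N-1)
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N + 1)]
iterates = [[Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)]
for _ in range(N - 1):
    iterates.append(compose(f, iterates[-1]))

def coeff(h, k):
    """[x^k] f°h by exact Lagrange interpolation through h = 0..k-1."""
    total = Fraction(0)
    for i in range(k):
        term = iterates[i][k]
        for j in range(k):
            if j != i:
                term *= (h - j) / (i - j)
        total += term
    return total

half = [Fraction(0)] + [coeff(Fraction(1, 2), k) for k in range(1, N + 1)]

# root test |c_k|^(1/k) of the half-iterate coefficients
for k in range(2, N + 1):
    print(k, float(abs(half[k])) ** (1.0 / k))
```

The same loop with f built from the coefficients u^k/k! of b^x - 1 would give the corresponding table for other bases, for comparison with the graphs mentioned above.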