Quote: However, perhaps it could be that is real, or generally that is real, which I don't believe. Can someone just compute it?

Hmm, I don't know whether I understand you correctly. If h=1 then all eigenvalues except the first are zero ( = [1,0,0,....]), and the result is always the same, independent of any power of log(h), since the "height" y of the tower occurs only as an exponent of the eigenvalues....
Did I misread something obvious?

As said, is not 1 but . is the power derivation matrix of (I think this is the transpose of your matrix ), and is (though we can also apply the power series directly to ). Hence the power series has as coefficients the first row of (think transposed in your notation).
And now tetration is defined as . We set , not .

Ok, I misread that as x^^t instead of {h,x}^^t, as in Andrew's notational references. I got it now. If there are only two parameters given, as in , I automatically assume that it is {x,1}^^t instead of {<context>,x}^^t. I think I'll have to get used to it now...

bo198214 Wrote:If then we have negative eigenvalues in the power derivation matrix A of . Now we compute . It has the eigenvalues . Take, for example, ; then we see that also has non-real eigenvalues, and hence also has non-real entries. If had only real coefficients, then would have only real coefficients. So it is clear that must have some non-real coefficients and is a non-real function.

However, perhaps it could be that is real, or generally that is real, which I don't believe. Can someone just compute it?

I computed with hl = log(h) = -1/2, h = 0.60653066, b = 0.43851527,
such that h^(1/h) = b.
So b is our usual base parameter, and the above computation ensures that it is in the admissible range.
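These relations are easy to verify numerically; here is a minimal sketch (plain Python, not Gottfried's actual code) checking the quoted numbers and the fixed-point property behind them:

```python
import math

hl = -0.5            # log(h), chosen in the admissible range -1 < log(h) < 0
h = math.exp(hl)     # h = 0.60653066...
b = h ** (1 / h)     # base b = h^(1/h) = 0.43851527...

# h is a fixed point of x -> b^x:  b^h = (h^(1/h))^h = h
print(abs(b ** h - h))                 # ~ 0

# the slope of b^x at that fixed point is log(b)*b^h = h*log(b) = log(h),
# which is why log(h) appears as the eigenvalue of the matrix operator
print(abs(math.log(b) * b ** h - hl))  # ~ 0
```

This also shows why the admissible range -1 < log(h) < 0 matters: it makes the fixed point attracting with a negative multiplier.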

Using my analytic solution:
Since hl is negative, the eigenvalues of the constructed matrix operator are complex for the half-iterate.

The half-iterate (using 16 terms for the result) is b^^(1/2) = 0.58983992+0.24626917*I

Using that result as input, the next half-iterate (again with 16 terms) is b^^(2/2) = 0.43851527-2.1267671E-11*I

Indeed, the result approximates b^^1 = b very well.
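For this base the same half-iterate can be reproduced by regular (Koenigs/Schröder) iteration at the attracting real fixed point h, which is what the matrix diagonalization amounts to in this range; the following is my own sketch of that, not Gottfried's matrix code. The negative multiplier log(h) = -1/2 forces a complex square root, which is exactly where the imaginary part of b^^(1/2) comes from:

```python
import cmath
import math

hl = -0.5                # log(h)
h = math.exp(hl)         # attracting fixed point of f(x) = b^x, since b^h = h
b = h ** (1 / h)         # b = 0.43851527...
lam = hl                 # multiplier f'(h) = h*log(b) = log(h) = -1/2
N = 24                   # depth for the Koenigs limits below

def f(x):
    """f(x) = b^x for complex x."""
    return cmath.exp(x * math.log(b))

def f_inv(x):
    """Principal-branch inverse log_b(x); valid near the fixed point."""
    return cmath.log(x) / math.log(b)

def schroeder(x):
    """Koenigs/Schroeder function: sigma(x) = lim lam^(-n) * (f^n(x) - h)."""
    for _ in range(N):
        x = f(x)
    return (x - h) / lam ** N

def schroeder_inv(y):
    """Inverse: sigma^(-1)(y) = lim f^(-n)(h + lam^n * y)."""
    x = h + lam ** N * y
    for _ in range(N):
        x = f_inv(x)
    return x

def half_iterate(x):
    """f^(1/2)(x) = sigma^(-1)(sqrt(lam) * sigma(x)); sqrt(-1/2) is
    imaginary, so the half-iterate is complex."""
    return schroeder_inv(cmath.sqrt(lam) * schroeder(x))

half = half_iterate(1.0)     # b^^(1/2), complex
again = half_iterate(half)   # applying it twice gives f(1) = b^^1 = b
print(half, again)
```

With the principal square root this reproduces the quoted 0.58983992+0.24626917*I, and composing the half-iterate with itself recovers b, as in the post.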

Gottfried
edit: I adapted the s^^ (1/2) expression to b^^(1/2) for consistency

Here I show some results for -19/20 <= log(h) <= -1/20
which means that the bases b are within the bounds 0.086 < b < 0.95.
As in the previous post, h = h(b) or h^(1/h)=b, and -1<log(h)<0,
so we have complex eigenvalues for the half-iterate.

I used 32 terms; for log(h) < -4/5 the convergence was bad, so I used
Euler summation of hand-selected orders to sum the terms
(see the third table).
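Gottfried's Euler summation "of orders" is, I assume, a parametrized generalization of the classical Euler transform for slowly convergent alternating series; a generic sketch of the classical version (not his routine) shows the kind of acceleration involved:

```python
import math

def euler_transform_sum(terms):
    """Sum the alternating series sum((-1)^k * a_k) via the Euler
    transform: sum over n of (D^n a_0) / 2^(n+1), where D is the
    difference operator (D a)_k = a_k - a_(k+1)."""
    a = list(terms)
    total = 0.0
    for n in range(len(a)):
        total += a[0] / 2 ** (n + 1)                      # (D^n a)_0 term
        a = [a[i] - a[i + 1] for i in range(len(a) - 1)]  # next difference row
    return total

# log(2) = 1 - 1/2 + 1/3 - ... : the raw partial sums converge like 1/n,
# but the transformed terms decay like 2^(-n)
terms = [1.0 / (k + 1) for k in range(20)]
s = euler_transform_sum(terms)
print(s, math.log(2))
```

With only 20 terms the transformed sum already agrees with log(2) to about 8 digits, while the raw partial sum is still off in the second digit.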

The next table shows the first 6 terms [update: reduced from 7] of the evaluation at each log(h).
Let the coefficients in the first row be a_k and those in the second row be b_k (only the first six are documented);
then the result (the value of the half-iterate) is sum(a_k*b_k).

We see that convergence is good for log(h) > -4/5, but the first examples
(smaller log(h)) are "un-nice", and the convergence must be accelerated to get
usable results with only 32 terms.

Hmm, it may be worth noting that these terms are exact (in the sense that logarithms,
powers of logarithms, exp(x), and finite linear combinations of them are taken as exact),
due to my analytic solution.
They don't change when the dimension of the matrices is increased; only their number grows
with the dimension.

Gottfried

[update: I reduced the number of terms because the wide format also affected the readability of the other postings on my screen]

Thank you Gottfried! So the hypothesis of complex values for (and ) is strongly supported by the matrix operator method (where this method quite precisely reflects the intuition behind fractional iteration, as it yields the expected results in all major application cases, as I will show in a subsequent post at some point).

@Jay
I nearly completely agree with you, with the one exception that I don't think there is a direct relation between the development at the fixed points and the different branches of the . Because if this were true, there would have to exist a fixed point such that regular iteration of at this fixed point is real, which, I would guess, does not exist. And even so, the matrix operator method yields real values for .

For variety of views: the multiple branches of (which are spirals in the dependency on a real , as Jay already mentioned: ) can also be explained as follows. If we take the solutions of , we have two solutions: +1 and -1. And generally, if we take , we have n distinct solutions on the unit circle with the arguments , . If we now consider , we have n solutions if the fraction is in lowest terms. So we have defined for all rational as some sort of complex cloud. Now the different branches of are all the possible continuous functions through this cloud!
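The "complex cloud" picture is easy to make concrete; a small sketch (the function names are mine) lists the n solutions of z^n = w and one continuous branch z = w^t passing through points of the cloud:

```python
import cmath
import math

def nth_roots(w, n):
    """All n complex solutions of z^n = w, equally spaced on a circle."""
    r = abs(w) ** (1.0 / n)
    theta = cmath.phase(w)
    return [r * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

def branch(w, t, k=0):
    """Branch k of w^t: continuous in t, and at each rational t = m/n it
    hits one of the n points of the 'cloud' of solutions."""
    return cmath.exp(t * (cmath.log(w) + 2j * math.pi * k))

# the two solutions of z^2 = 1 are +1 and -1 ...
print(nth_roots(1, 2))
# ... and branch k=1 of 1^t = exp(2*pi*i*k*t) passes through -1 at t = 1/2
print(branch(1, 0.5, k=1))
```

Varying t continuously in `branch` traces exactly the spirals through the cloud described above; different k give the different branches.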

In the same way there are n complex solutions for the nth iterative root of a power series of the form , . And we get infinitely many branches of a continuous iteration (i.e. one where the coefficients depend continuously on the iteration exponent).

As it turns out, the idea of exponentiation of a negative base is not just a good analogy, it is in fact directly relevant.

Find a fixed (preferably real) point of the iteration. If you subtract the fixed point from successive iterates, you'll find that each iterate's distance from the fixed point is (asymptotically) a fixed multiple of the previous one's. Iterated multiplication is, of course, exponentiation. The slope of the function at that point is then the base of exponentiation for determining the iterates. Only when the slope is 1 must we fall back on parabolic iteration methods.

Therefore, if the slope is negative, you're effectively dealing with exponentiation of a negative base. This explains why successive values alternate high and low: the iterated multiples of a negative number alternate between positive and negative.
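This behaviour can be checked directly with the base from the earlier posts (b ≈ 0.4385, slope log(h) = -1/2 at the fixed point); a small sketch, assuming that base:

```python
import math

h = math.exp(-0.5)     # real fixed point of f(x) = b^x, slope f'(h) = -1/2
b = h ** (1 / h)       # the corresponding base, b = h^(1/h)

x = 1.0
prev = x - h
signs = []
for n in range(20):
    x = b ** x                  # one iteration of f(x) = b^x
    ratio = (x - h) / prev      # deviation is multiplied by ~f'(h) each step
    prev = x - h
    signs.append(prev > 0)
print(ratio)                    # tends to the slope, -0.5
print(signs[:6])                # alternating signs: above/below the fixed point
```

The ratio of successive deviations converges to the slope -1/2, and the sign flips each step, which is exactly the high/low alternation described above.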

Fractional iteration of a negative number will necessarily be complex. Therefore, we should expect a complex spiral around the fixed point of the tetrational bases between e^-e and 1. We start at the fixed point at positive infinity, create a complex spiral of period 2 and infinitesimal radius, and then take iterative logarithms to get back to the origin. Voila!

For bases less than e^-e, the fixed point is actually repelling, much as the upper asymptote for bases between 1 and eta is repelling. This complicates the matter but does not make it intractable. We can likely start at the fixed point and exponentiate our way to some complex ring asymptote, then use logarithms just outside this ring to recover the tetration. However, this last idea is at best a guess until I can investigate it.

Finally, complex slopes should be similarly solvable, allowing us to solve iterative exponentiation of complex bases, so long as we can find a suitable (preferably real) fixed point.

The main remaining question, of course, is what to do with bases greater than eta. We still lack real fixed points. For bases of the form , with k non-zero, we might actually be able to find complex fixed points. But for the "primary branch", no real fixed points exist. Other methods exist which seem to work, but their inner workings are far more esoteric than the simplicity of hyperbolic iteration from a fixed point.

I like your model of "repelling fixed points", particularly for the upper branch in the interval 1<b<eta and in the case of 0<b<e^(-e). In this second situation, the fixed points seem to me to be real, attracting, but ... "trans-infinite (!?!)". Tomorrow I shall post two simulation plots to clarify what I really mean (... dream?).

I think the idea of iterating from the fixed point at 0.318... + 1.337...i is valid!

Start with a selection of numbers between 0 and 1, and begin taking the natural logarithm over and over. The values will settle on that particular fixed point (depending on which branch of the logarithm your program uses).
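This is easy to reproduce; a sketch using the principal branch of the complex logarithm (which is, presumably, what the xnumbers library also defaults to):

```python
import cmath

z = 0.5                    # any start in the real interval (0, 1)
for _ in range(200):
    z = cmath.log(z)       # iterate the principal-branch natural logarithm
print(z)                   # settles on the fixed point z = log(z)
```

From 0.5 the first step lands on a negative real, the second jumps into the upper half-plane (principal log adds +i*pi), and after that the orbit stays in the upper half-plane and converges to 0.318... + 1.337...i, since the multiplier 1/z there has modulus about 0.73 < 1.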

Therefore, we should be able to come up with a fixed-point hyperbolic solution that yields REAL numbers after enough exponentiations to get back into the range (0,1).

The question is, will it match the inverse of Andrew's slog?

By the way, this was a quick, off the cuff test with the xnumbers library in Excel, so until someone can confirm it in a more reliable library, I reserve the right to be wrong.

Hmm, my initial excitement has been dulled. It seems that, while the iterated natural logarithms of the real interval (0,1) do seem to converge on a fixed point, they do so in a very fascinating fractal manner. Unfortunately, this fractal convergence to the fixed point makes it difficult to identify the "correct" way to fractionally iterate. A straight line or complex spiral would be very straightforward, but this fractal isn't. This isn't to say that I've given up hope of getting some useful information here, but it's not nearly as simple as I'd hoped.

I plotted a curve of regular tetration at base (which is in the range ) at the lower (attracting & real) fixed point here. One can see very well that real values are achieved only at integer iterations.