Obviously, there should be a way to demonstrate the equivalence of both, because they are trying to solve the same problem and looking for the same solution.

But as I understand it, the Carleman matrix A only contains powers of the a_i coefficients, yet if you look at the red side, it cannot be written as a matrix product A*Bb, because it needs to have products of the a_i coefficients. Maybe it is a power of A*Bb, or something like A^Bb?

The Pascal matrix on the blue side is the exponential of a much simpler matrix

Maybe the equation can be greatly simplified by taking a logarithm of both sides.
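That remark about the logarithm can be made concrete with a small numeric check (a Python sketch; the truncation size n = 8 is my own arbitrary choice): the lower-triangular Pascal matrix is exactly the matrix exponential of the nilpotent matrix with 1, 2, 3, ... on its subdiagonal, so its matrix logarithm is that very simple matrix.

```python
import math
from fractions import Fraction

n = 8  # truncation size (arbitrary)

# nilpotent matrix L with k on the subdiagonal: L[k][k-1] = k
L = [[Fraction(0)] * n for _ in range(n)]
for k in range(1, n):
    L[k][k - 1] = Fraction(k)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# exp(L) as a finite sum: L is nilpotent, so L^n = 0 and the series stops
P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # identity
term = [row[:] for row in P]
for k in range(1, n):
    term = matmul(term, L)                       # term is now L^k / (k-1)!
    term = [[x / k for x in row] for row in term]  # ... and now L^k / k!
    P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]

# exp(L) reproduces the binomial coefficients of the Pascal matrix exactly
assert all(P[i][j] == math.comb(i, j) for i in range(n) for j in range(n))
print([int(x) for x in P[4]])  # -> [1, 4, 6, 4, 1, 0, 0, 0]
```

So log(P) indeed has only one nonzero subdiagonal, which is why taking a logarithm of both sides looks attractive.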

marraco Wrote:
Obviously, there should be a way to demonstrate the equivalence of both, because they are trying to solve the same problem and looking for the same solution.

But as I understand it, the Carleman matrix A only contains powers of the a_i coefficients, yet if you look at the red side, it cannot be written as a matrix product A*Bb, because it needs to have products of the a_i coefficients. Maybe it is a power of A*Bb, or something like A^Bb?

No, no ... In your convolution formula, the inner part of the double sum contains powers of the power series with the coefficients of the a()-function (the red-colored formula in your first posting), not powers of its single coefficients, and if I decode this correctly, then this matches perfectly the composition A * Bb.

It is only that, after removing the left V(x)-vector, we do things in a different order:

(V(x)*A) * Bb = V(x)*(A * Bb)

and I discuss the remaining matrix in the parentheses on the rhs. That V(x) can be removed on the rhs and on the lhs of the matrix equation must be justified; if divergent series occur anywhere, this becomes difficult, but as long as all dot-products have nonzero intervals of convergence, this exploitation of associativity can be done / should be possible (as far as I can tell). (The goal of all this is of course to improve the computability of A, for instance by diagonalization of P or Bb and algebraic manipulation of the occurring matrix factors.)
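At finite truncation this is easy to check. The following sketch (plain Python with exact rational arithmetic; the size n and the sample functions f, g are my own arbitrary choices, not the series from the thread) builds truncated Carleman matrices in the convention V(x)*A = V(f(x)) and verifies both that A*Bb carries the composition g(f(x)) and that (V(x)*A)*Bb = V(x)*(A*Bb):

```python
from fractions import Fraction

n = 10  # truncation order (arbitrary)

def mul(p, q):
    """Truncated product of coefficient lists (degree < n)."""
    r = [Fraction(0)] * n
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < n:
                    r[i + j] += a * b
    return r

def carleman(f):
    """Column k holds the coefficients of f(x)^k, so V(x)*A = V(f(x))."""
    A = [[Fraction(0)] * n for _ in range(n)]
    power = [Fraction(1)] + [Fraction(0)] * (n - 1)  # f^0 = 1
    for k in range(n):
        for i in range(n):
            A[i][k] = power[i]
        power = mul(power, f)
    return A

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def compose(g, f):
    """Coefficients of g(f(x)), truncated; assumes f(0) = 0."""
    r = [Fraction(0)] * n
    power = [Fraction(1)] + [Fraction(0)] * (n - 1)
    for c in g:
        r = [ri + c * pi for ri, pi in zip(r, power)]
        power = mul(power, f)
    return r

f = [Fraction(0), Fraction(1), Fraction(1)] + [Fraction(0)] * (n - 3)  # x + x^2
g = [Fraction(0), Fraction(1), Fraction(0), Fraction(2)] + [Fraction(0)] * (n - 4)  # x + 2x^3

A, Bb = carleman(f), carleman(g)
AB = matmul(A, Bb)

# column 1 of A*Bb is exactly the coefficient vector of g(f(x))
assert [AB[i][1] for i in range(n)] == compose(g, f)

# associativity at finite truncation: (V(x)*A)*Bb == V(x)*(A*Bb)
x = Fraction(1, 10)
V = [x**i for i in range(n)]
VA = [sum(V[i] * A[i][k] for i in range(n)) for k in range(n)]
lhs = [sum(VA[k] * Bb[k][j] for k in range(n)) for j in range(n)]
rhs = [sum(V[i] * AB[i][j] for i in range(n)) for j in range(n)]
assert lhs == rhs
```

Since f(0) = 0 here, both matrices are triangular and the truncated product is exact; the convergence caveat above only bites for the non-triangular cases.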

Anyway - I hope I didn't actually misread you (which is always possible given the mass of coefficients...)

I misinterpreted what the Carleman matrix was. I thought that it contained the powers of the derivatives of a function (evaluated at zero), but it contains the derivatives of the powers of a function, so it actually has the products of the aᵢ coefficients (of the bᵢ in your notation).

________________

I tried to use this method to find the coefficients for exponentiation: bˣ = Σ bᵢ·xⁱ

The condition is
b⁽ˣ⁺¹⁾ = b·bˣ, that is, Σ bᵢ·(x+1)ⁱ = b·Σ bᵢ·xⁱ

which translates into
P.[bᵢ]=b.[bᵢ]

or
[P-b.I].[bᵢ]=0

The solution should be bᵢ=ln(b)ⁱ / i!

I found bᵢ = c·(ln(b)ⁱ/i!), where c is an arbitrary constant, because, obviously, c·b⁽ˣ⁺¹⁾ = b·(c·bˣ)
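The eigenvector claim is easy to check numerically. Here is a small sketch (Python; the base b = 2 and the truncation size are my own arbitrary choices) verifying that bᵢ = ln(b)ⁱ/i! satisfies the coefficient equations of the condition above, namely Σᵢ C(i,j)·bᵢ = b·bⱼ for every j (this is the system P·[bᵢ] = b·[bᵢ], up to the orientation of P):

```python
import math

b = 2.0   # any base > 0 (arbitrary choice)
n = 60    # truncation; the series tail must be negligible

# candidate solution b_i = ln(b)^i / i!
bi = [math.log(b)**i / math.factorial(i) for i in range(n)]

# comparing coefficients of x^j in  sum_i b_i*(x+1)^i = b * sum_i b_i*x^i
# gives:  sum_i C(i,j) * b_i  =  b * b_j
for j in range(10):
    lhs = sum(math.comb(i, j) * bi[i] for i in range(n))
    assert math.isclose(lhs, b * bi[j], rel_tol=1e-9)
```

Any scalar multiple c·bᵢ passes the same check, which is exactly the arbitrary constant found above.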

I was bugged by the fact that every equation for solving tetration I tried seems to have at least one degree of freedom. I now think that this should be explained by (at least) one arbitrary constant in the solution.

This looks analogous to the constants found in the solutions of differential equations, so I wonder whether the envelope of the family of curves generated by the constant is also a solution, and what its meaning is.

This is perhaps a good starting point to explain the use of Carleman matrices in my (Pari/GP-supported) matrix toolbox, because you've just applied things analogously to how I do this, only you didn't express it in matrix formulae.
To explain the basic idea of a Carleman matrix:
consider a power series f(x) = a_0 + a_1*x + a_2*x^2 + ...

We express this in terms of the dot-product of two infinite-sized vectors, f(x) = V(x)*A_1, where the column-vector A_1 contains the coefficients [a_0, a_1, a_2, ...] and the row-vector V(x) = [1, x, x^2, x^3, ...] contains the consecutive powers of x.
Now, to make that idea valuable for function composition / iteration, it would be good if the output of such an operation were not simply a scalar, but of the same type ("Vandermonde vector") as the input (V(f(x)) instead of f(x)).

This leads to the idea of Carleman matrices: we just generate the vectors A_0, A_1, A_2, ..., where the vector A_k contains the coefficients for the k'th power of f(x), such that V(x)*A_k = f(x)^k; collecting them in a matrix A we get the operation V(x)*A = V(f(x)), i.e. the Carleman matrix A performs x -> f(x).
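A minimal sketch of this definition (Python; the truncation size, the evaluation point, and the sample function exp(x)-1 are my own choices): build the truncated Carleman matrix column by column from the powers of the series, then check V(x)*A against V(f(x)).

```python
import math

n = 32   # truncation size (arbitrary)
x = 0.1  # small evaluation point inside the interval of convergence

def mul(p, q):
    """Truncated product of coefficient lists (degree < n)."""
    r = [0.0] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < n:
                r[i + j] += a * b
    return r

# f(x) = exp(x) - 1, so f(0) = 0 and the Carleman matrix is triangular
f = [0.0] + [1.0 / math.factorial(i) for i in range(1, n)]

# Carleman matrix: column k = coefficients of f(x)^k
A = [[0.0] * n for _ in range(n)]
power = [1.0] + [0.0] * (n - 1)   # f^0 = 1
for k in range(n):
    for i in range(n):
        A[i][k] = power[i]
    power = mul(power, f)

V = [x**i for i in range(n)]      # V(x) = [1, x, x^2, ...]
VA = [sum(V[i] * A[i][k] for i in range(n)) for k in range(n)]

fx = math.exp(x) - 1
# V(x)*A reproduces V(f(x)) up to truncation error
assert all(abs(VA[k] - fx**k) < 1e-12 for k in range(n))
```

The input V(x) and the output V(f(x)) have the same "Vandermonde vector" shape, which is the whole point.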

Having this general idea, we can fill our toolbox with Carleman matrices for the composition of functions over a fairly wide range of algebra.
For instance the operation INC : x -> x+1 is given by the Pascal matrix P (in the convention V(x)*P = V(x+1)),
and its h'th iteration ADD(h) : x -> x+h
is then only a problem of powers of P: V(x)*P^h = V(x+h)
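A sketch of INC and ADD in this setup (Python; the size, the iteration count h, and the test point are arbitrary choices of mine; note that which transpose of the Pascal matrix one takes depends on the row-/column-vector convention, so I pick the one matching V(x)*P = V(x+1)):

```python
import math

n = 12  # truncation size (arbitrary)

# Carleman matrix of INC: column j holds the coefficients of (x+1)^j,
# i.e. P[i][j] = C(j, i)  (a transpose of the Pascal triangle)
P = [[float(math.comb(j, i)) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# the h-fold iteration ADD(h): x -> x+h is just the h'th power of P
h = 3
Ph = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
for _ in range(h):
    Ph = matmul(Ph, P)

x = 0.5
V = [x**i for i in range(n)]
VPh = [sum(V[i] * Ph[i][j] for i in range(n)) for j in range(n)]

# V(x) * P^h = V(x+h); P is triangular, so there is no truncation error
assert all(math.isclose(VPh[j], (x + h)**j, rel_tol=1e-9) for j in range(n))
```

Because P is triangular, every entry of the truncated power P^h is exact, so the identity holds to machine precision.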

and as an exercise we see that, if we right-compose this with the INC operation, we get the ordinary EXP operator, for which I give the matrix name B:

Of course, iterations of EXP then require only powers of the matrix B.
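For EXP the columns are easy to write down directly: exp(x)^k = exp(k*x), so the truncated matrix has entries B[i][k] = k^i / i!. A sketch (Python; size and test point are my choices - unlike P, this B is not triangular, so the truncation size now matters):

```python
import math

n = 40   # truncation size; B is NOT triangular, so size matters here
x = 0.1  # small test point

# Carleman matrix of EXP: column k holds the coefficients of exp(k*x)
B = [[k**i / math.factorial(i) for k in range(n)] for i in range(n)]

V = [x**i for i in range(n)]
VB = [sum(V[i] * B[i][k] for i in range(n)) for k in range(n)]
# V(x)*B approximates V(exp(x))
assert math.isclose(VB[1], math.exp(x), rel_tol=1e-9)

# one more multiplication by B iterates EXP once more:
VBB = [sum(VB[i] * B[i][k] for i in range(n)) for k in range(n)]
assert math.isclose(VBB[1], math.exp(math.exp(x)), rel_tol=1e-6)
```

So V(x)*B^2 approximates V(exp(exp(x))), and so on for higher iterates, with the approximation improving as the truncation size grows.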

To see that this is really useful, we need a lemma on the uniqueness of power series; that is, in the new matrix notation:
If a function V(x)*A_1 is continuous for an (even small) continuous range of the argument x, then the coefficients in A_1 are uniquely determined.
That uniqueness of the coefficients in A_1 is the key that lets us look at compositions of Carleman matrices alone, without reference to the dot-product notation with V(x); for instance, we can make use of decompositions of Carleman matrices and analyze them directly, for instance to arrive at the operation LOGP : log(1+x)
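One way to see LOGP fall out of such a matrix manipulation (a sketch in Python with exact rational arithmetic; the truncation size is my arbitrary choice): the Carleman matrix of exp(x)-1 is triangular, and its matrix inverse is the Carleman matrix of the inverse function log(1+x), whose coefficients can be read off the second column.

```python
from fractions import Fraction
from math import factorial

n = 10  # truncation size (arbitrary); everything below is exact

def mul(p, q):
    """Truncated product of coefficient lists (degree < n)."""
    r = [Fraction(0)] * n
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < n:
                    r[i + j] += a * b
    return r

# Carleman matrix of f(x) = exp(x)-1 (triangular, since f(0) = 0)
f = [Fraction(0)] + [Fraction(1, factorial(i)) for i in range(1, n)]
A = [[Fraction(0)] * n for _ in range(n)]
power = [Fraction(1)] + [Fraction(0)] * (n - 1)
for k in range(n):
    for i in range(n):
        A[i][k] = power[i]
    power = mul(power, f)

# invert the lower-triangular A by forward substitution, column by column
Inv = [[Fraction(0)] * n for _ in range(n)]
for j in range(n):
    for i in range(j, n):
        if i == j:
            Inv[i][j] = 1 / A[i][i]
        else:
            Inv[i][j] = -sum(A[i][k] * Inv[k][j] for k in range(j, i)) / A[i][i]

# column 1 of the inverse holds the coefficients of the inverse function
# LOGP: log(1+x) = x - x^2/2 + x^3/3 - ...
logp = [Inv[i][1] for i in range(n)]
assert logp[1:] == [Fraction((-1)**(i + 1), i) for i in range(1, n)]
```

For triangular Carleman matrices this inversion is exact at every finite truncation, which is what makes such decompositions workable in the toolbox.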

<hr>
Now I relate this to that derivation which I've quoted from marraco's post.

(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e
(...)
So you get this system of equations (blue to the left, and red to the right):

First we see the Pascal matrix P on the lhs in action, then the coefficients of the Abel-function in the vector, say, A_1. So the left-hand side is the matrix product P * A_1.

To make things smoother, we first assume A as the complete Carleman matrix expanded from A_1. If we "complete" that left-hand side to discuss this in power series, we have P * A.

It is very likely that the author wanted to derive the solution for the equation exp(f(x)) = f(x+1); so we would have for the right-hand side A * B,
and indeed, expanding the terms using the matrices as created in Pari/GP with, let's say, size 32x32 or 64x64, we get very nice approximations to the description on the rhs of the quoted matrix formula.

What we can now do depends on the above uniqueness lemma: we can discard the V(x)-reference, just writing P * A = A * B, and, looking at the second column only, we get the system as shown in the quoted post.

So indeed, that system of equations of the initial post is expressible by the matrix equation P * A = A * B,
and the OP is searching for a solution for A.

<hr>
While I have - at the moment - not yet a solution for A this way, we can, for instance, note that if A is invertible, then the equation can be brought into a Jordan-form: B = A^-1 * P * A,
which means that B can be decomposed by similarity transformations into a triangular Jordan block, namely the Pascal matrix - and, having a Jordan-solver for finite matrix sizes, one could try whether, with increasing matrix sizes, the Jordan solutions converge to some limit matrix A.

For the alternative: looking at the "regular tetration" and the Schröder function (including recentering the power series around the fixpoint), one gets a simple solution just by the diagonalization formulae for triangular Carleman matrices, which follow the same formal analysis using the "matrix toolbox" and which can, for finite size and numerical approximations, nicely be constructed using the matrix features of Pari/GP.
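To sketch that alternative concretely (plain Python; this uses a toy function f(x) = λ·x + x² already centered at its fixpoint 0, with my own truncation size - it is not the tetration case itself, only the same diagonalization mechanics): the triangular Carleman matrix of f has λ^k on the diagonal, so it can be diagonalized by back-substitution, and taking the square root of the diagonal gives the regular half-iterate h with h(h(x)) = f(x).

```python
import math

lam = 0.5  # multiplier at the fixpoint 0; toy example f(x) = lam*x + x^2
n = 16     # truncation size (arbitrary)

def mul(p, q):
    r = [0.0] * n
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < n:
                r[i + j] += a * b
    return r

# triangular Carleman matrix of f: column k = coefficients of f^k
f = [0.0, lam, 1.0] + [0.0] * (n - 3)
A = [[0.0] * n for _ in range(n)]
power = [1.0] + [0.0] * (n - 1)
for k in range(n):
    for i in range(n):
        A[i][k] = power[i]
    power = mul(power, f)

# eigenvectors by back-substitution: A is triangular with diagonal lam^k
S = [[0.0] * n for _ in range(n)]
for j in range(n):
    S[j][j] = 1.0
    for i in range(j + 1, n):
        S[i][j] = sum(A[i][k] * S[k][j] for k in range(j, i)) / (lam**j - lam**i)

# invert the unit-triangular S
Si = [[0.0] * n for _ in range(n)]
for j in range(n):
    Si[j][j] = 1.0
    for i in range(j + 1, n):
        Si[i][j] = -sum(S[i][k] * Si[k][j] for k in range(j, i))

# half-iterate matrix: H = S * D^(1/2) * S^-1 with D^(1/2) = diag(lam^(j/2))
H = [[sum(S[i][k] * lam**(k / 2) * Si[k][j] for k in range(n))
      for j in range(n)] for i in range(n)]

h = [H[i][1] for i in range(n)]   # coefficients of the half-iterate h(x)

def ev(p, x):
    return sum(c * x**i for i, c in enumerate(p))

x = 0.01
# h(h(x)) reproduces f(x) = lam*x + x^2 up to truncation error
assert math.isclose(h[1], math.sqrt(lam), rel_tol=1e-9)
assert math.isclose(ev(h, ev(h, x)), lam * x + x * x, rel_tol=1e-8)
```

The same back-substitution idea is what the diagonalization formulae for triangular Carleman matrices implement; the tetration case additionally needs the recentering around the fixpoint mentioned above.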