I am interested in obtaining an analog of the Lagrange inversion formula, starting from a generalization of its implicit equation. Ordinary Lagrange inversion, as I am familiar with it, starts with the implicit equation
$$u(z) = z\,w(u(z))$$
where $w$ is the generating function of a given formal power series, and the goal is to solve for the generating function $u$ in the formal variable $z$. When $w$ is analytic and nonvanishing near the origin, the Lagrange inversion formula states that
$$[z^n]u = \frac{1}{n!}\left[\frac{d^{n-1}}{du^{n-1}}\, w(u)^n\right]_{u=0}$$
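As a sanity check on this formula, here is a minimal sketch in pure Python with exact rational arithmetic (the choice $w(u) = e^u$ is my own illustration, not part of the problem): it solves $u = z e^u$ by fixed-point iteration on truncated series and compares the coefficients against the prediction $[z^n]u = n^{n-1}/n!$ (the "tree function").

```python
from fractions import Fraction
from math import factorial

N = 8  # compare coefficients [z^1] .. [z^N]

def mul(p, q, n):
    """Product of two truncated power series (coefficient lists), mod z^(n+1)."""
    r = [Fraction(0)] * (n + 1)
    for i, x in enumerate(p):
        if x:
            for j, y in enumerate(q):
                if i + j > n:
                    break
                r[i + j] += x * y
    return r

def exp_series(p, n):
    """exp of a series p with p[0] == 0, mod z^(n+1), via sum_k p^k / k!."""
    assert p[0] == 0
    r = [Fraction(0)] * (n + 1)
    r[0] = Fraction(1)
    term = list(r)
    for k in range(1, n + 1):
        term = [x / k for x in mul(term, p, n)]  # term = p^k / k!
        r = [x + y for x, y in zip(r, term)]
    return r

# fixed-point iteration for u = z * e^u; the valuation of the error
# grows with each pass, so N+1 passes fix all kept coefficients
u = [Fraction(0)] * (N + 1)
for _ in range(N + 1):
    u = [Fraction(0)] + exp_series(u, N)[:N]  # multiply by z, truncate

# Lagrange inversion predicts [z^n] u = n^(n-1) / n!
for n in range(1, N + 1):
    assert u[n] == Fraction(n ** (n - 1), factorial(n))
```

The same iteration scheme works for any $w$ with $w(0) \neq 0$, which is what I use below to probe the skewed equation.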
I am interested in obtaining an analogous (i.e. direct) formula for the same coefficients in the case where we start with
$$u(z) = zw(u(z^\gamma))$$
with $\gamma \in \mathbb{R}$. By computing the first few terms, I find that, unlike in the original case, the coefficients $[z^n]u$ cannot be expressed as polynomials in $w, w', w'', \ldots, w^{(n)}$; rather, they are rational functions of $w, w', w'', \ldots, w^{(n)}$ (and of $\gamma$). This suggests to me that it may be possible to obtain an analog of the inversion formula as some appropriate rational function of $w, w', w'', \ldots, w^{(n)}$. However, the proof of the Lagrange inversion formula I am aware of relies explicitly on $w$ being a polynomial, which has impeded my efforts to carry the analogy through directly. Is it possible to generalize the formula to my case of interest?
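For what it is worth, the coefficients are easy to generate by the same fixed-point iteration when $\gamma$ is a positive integer, so that $u(z^\gamma)$ is again a formal power series. The sketch below is my own illustration with the assumed choices $w(u) = 1/(1-u)$ and $\gamma = 2$; it shows how the coefficients already differ from the Catalan numbers $1, 1, 2, 5, 14, \ldots$ that the same $w$ produces at $\gamma = 1$.

```python
from fractions import Fraction

N = 9       # truncation order: keep coefficients of z^0 .. z^N
GAMMA = 2   # an integer gamma, so u(z^gamma) is again a power series

def mul(p, q, n):
    """Product of two truncated series (coefficient lists), mod z^(n+1)."""
    r = [Fraction(0)] * (n + 1)
    for i, x in enumerate(p):
        if x:
            for j, y in enumerate(q):
                if i + j > n:
                    break
                r[i + j] += x * y
    return r

def w_of(p, n):
    """w(u) = 1/(1-u) composed with a series p of positive valuation: sum_k p^k."""
    assert p[0] == 0
    r = [Fraction(0)] * (n + 1)
    r[0] = Fraction(1)
    term = list(r)
    for _ in range(n):
        term = mul(term, p, n)
        r = [x + y for x, y in zip(r, term)]
    return r

def skew(p, gamma, n):
    """Substitute z -> z^gamma in a truncated series, for integer gamma."""
    r = [Fraction(0)] * (n + 1)
    for i, x in enumerate(p):
        if i * gamma <= n:
            r[i * gamma] = x
    return r

# fixed-point iteration for u(z) = z * w(u(z^GAMMA)); the valuation of the
# error grows with each pass, so N+1 passes pin down all kept coefficients
u = [Fraction(0)] * (N + 1)
for _ in range(N + 1):
    u = [Fraction(0)] + w_of(skew(u, GAMMA, N), N)[:N]

# gamma = 2 gives u = z + z^3 + z^5 + 2 z^7 + 3 z^9 + ...,
# whereas gamma = 1 with the same w gives the Catalan numbers 1, 1, 2, 5, 14, ...
```

Running this for a few values of integer $\gamma$ is how I generated the data behind the claim above; the case of non-integer $\gamma$, where the exponents of $u$ are no longer integers, is part of what makes a direct formula unclear to me.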

P.S.
The context in which I obtained this problem may be of interest. The "skewed" formula arises in the construction of approximate mean-field solutions $x \in \mathbb{R}^N$ of $(I+\gamma A)x = b$, where $A$ is the adjacency matrix of a Watts-Strogatz-Newman-type directed random network. I obtain an expression for the moment generating function $X(t)$ of the random variable $x_i$ as
$$X(t) = e^tA(X(-\gamma t))$$
where $A(y) = \sum_{a'} P(a')\, y^{a'}$ is the factorial moment generating function of the in-degree of the network. The moment generating function here is related to the ordinary generating functions in the statement of Lagrange's inversion formula through the substitution $z = e^t$ (it is both conventional and convenient to transform the variables in this way). As such, the mean value of $x_i$ is given by $X'(0)$, the variance by $X''(0) - (X'(0))^2$, and so on. For example, in the case where the in-degree at every node is some constant $a$ with probability one, we have $A(y) = y^a$. One finds that
$$
X'(0) = \frac{A(X(0))}{1+\gamma A'(X(0))} = \frac{1}{1+\gamma a}
$$
which could have been deduced from the original matrix equation by symmetry considerations. Continuing in the same fashion, one finds that $X''(0) = (X'(0))^2$, so that the variance is zero, which should also have been expected, since the solution in this case is completely determined by the value of $a$. However, in the most common case of interest, we will allow every edge in the network to be realized independently with probability $p$; in this case $A(y) = (1-p+py)^{N-1}$.
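The constant-in-degree check above is easy to automate. The following sketch is my own verification (the numeric choices $\gamma = 1/3$ and $a = 2$ are assumptions for illustration): it solves the functional equation $X(t) = e^t X(-\gamma t)^a$ order by order in $t$ with exact rationals, and confirms both $X'(0) = 1/(1+\gamma a)$ and the vanishing of the variance.

```python
from fractions import Fraction
from math import factorial

gamma = Fraction(1, 3)  # illustrative numeric value (assumption)
a = 2                   # constant in-degree, so A(y) = y^a
M = 5                   # compute Taylor coefficients c_0 .. c_M of X(t)

def mul(p, q, n):
    """Product of truncated series in t (coefficient lists), mod t^(n+1)."""
    r = [Fraction(0)] * (n + 1)
    for i, x in enumerate(p):
        if x:
            for j, y in enumerate(q):
                if i + j > n:
                    break
                r[i + j] += x * y
    return r

exp_t = [Fraction(1, factorial(k)) for k in range(M + 1)]

# Solve X(t) = e^t * X(-gamma t)^2 order by order; c_0 = X(0) = 1 for an MGF.
# In [t^n] of the right-hand side, c_n appears only through a*(-gamma)^n * c_n,
# so each order yields a linear equation for the next coefficient.
c = [Fraction(1)] + [Fraction(0)] * M
for n in range(1, M + 1):
    Xg = [c[k] * (-gamma) ** k for k in range(M + 1)]  # X(-gamma t); c_n..c_M still 0
    rhs = mul(exp_t, mul(Xg, Xg, M), M)[n]
    c[n] = rhs / (1 - a * (-gamma) ** n)

mean = c[1]                       # X'(0)
variance = 2 * c[2] - c[1] ** 2   # X''(0) - X'(0)^2
assert mean == 1 / (1 + gamma * a)
assert variance == 0
```

The same order-by-order scheme is exactly the "recursive computation" that becomes unwieldy for $A(y) = (1-p+py)^{N-1}$, which is what motivates looking for a closed inversion formula instead.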

Just to double-check: there is no $t$ in your implicit equation - is that correct?
–
Max Alekseyev, Dec 1 '12 at 6:57

The question seems interesting, but could you please read it carefully and correct the misprints in the formulas? As written it is incomprehensible. (What is $t$ in the first equation? Should "polynomials in the derivatives of $u$" be "polynomials in the derivatives of $w$"? And so on.)
–
Alexandre Eremenko, Dec 1 '12 at 15:01

Thanks for pointing this out. I have fixed the misprints and expanded on the connection between the problem statement and my application. I seem to have run into a length restriction at the end of the edit, so I will comment here. In the case where $A(y) = (1-p+py)^{N-1}$, the variance (and the higher-order cumulants) no longer vanish for general $p$. The equation I obtained allows for recursive computation of $X^{(m)}(0)$, but this is somewhat unwieldy, which motivates my interest in the Lagrange inversion analog.
–
Gabriel Mitchell, Dec 3 '12 at 0:37