I'd like to gather information and references on the following functional equation for power series: $$f(f(x))=x+f(x)^2,\qquad f(x)=\sum_{k=1}^\infty c_k x^k$$

(so $c_0=0$ is imposed).

First things that can be established quickly:

it has a unique solution in $\mathbb{R}[[x]]$, as the coefficients are recursively determined;

its formal inverse is $f^{-1}(x)=-f(-x)$ , as both solve uniquely the same functional equation;

since the equation may be rewritten $f(x)=f^{-1}(x)+x^2$, it also follows that $f(x)+f(-x)=x^2$, the even part of $f$ is just $x^2/2$, and $c_2$ is the only non-zero coefficient of even degree;

from the recursive formula for the coefficients, they appear to be integer multiples of negative powers of $2$ (see the recursive formula below). Remark: it seems (though I did not try to prove it) that $2^{k-1}c_k$ is an integer for all $k$, and that $(-1)^k c_{2k-1} > 0$ for all $k\geq 2$.

Question : how to see in a quick way
that this series has a positive radius
of convergence, and possibly to
compute or to evaluate it?

[updated]
A more reasonable question, after the numeric results and various comments, seems to be, rather:
how to prove that this series does
not converge.

Note that the radius of convergence has to be finite, otherwise $f$ would be an automorphism of $\mathbb{C}$. Yes, of course I did evaluate the first coefficients and put them in OEIS, getting the sequence of numerators A107700; unfortunately, it has no further information.

Motivation. I want to understand a simple discrete dynamical system on $\mathbb{R}^2$, namely the diffeomorphism $\phi: (x,y)\mapsto (y, x+y^2)$, which has a unique fixed point at the origin. It is not hard to show that the stable manifold and the unstable manifold of $\phi$ are
$$W^s(\phi)=\mathrm{graph}\big( g_{|(-\infty,\,0]}\big)$$
$$W^u(\phi)=\mathrm{graph}\big( g_{|[0,\,+\infty)}\big)$$

for a certain continuous, strictly increasing function $g:\mathbb{R}\to\mathbb{R}$ that solves the above functional equation. Therefore, knowing that the power series solution has a positive radius of convergence immediately implies that it coincides locally with $g$ (indeed, if $f$ converges we have $f(x)=x+x^2/2+o(x^2)$ at $x=0$, so its graph on $x\le0$ is included in $W^s$, and its graph on $x\ge0$ is included in $W^u$: therefore the whole graph of $f$ would be included in the graph of $g$, implying that $f$ coincides locally with $g$). If this is the case, $g$ is then analytic everywhere, since suitable iterates of $\phi$ give analytic diffeomorphisms of any large portion of the graph of $g$ onto a small portion close to the origin.

One may also argue the other way, showing directly that $g$ is analytic, which would imply the convergence of $f$. Although it seems feasible, that argument would be somewhat indirect, and in that case I'd like to make sure there is no easy direct way of working on the coefficients (of course, it may happen that $g$ is not analytic and $f$ is not convergent).

Details: equating the coefficients on both sides of the equation for $f$, one has for the 2-jet
$$c_1^2x+(c_1c_2+c_2c_1^2)x^2 =x + c_1^2x^2,$$
whence $c_1=1$ and $c_2=\frac 1 2;$ and for $n>2$
$$2c_n=\sum_{1\le j\le n-1}c_jc_{n-j}\;-\sum_{\substack{1 < r < n \\ k_1+\dots+k_r=n}}c_r\,c_{k_1}\cdots c_{k_r}.$$
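
To make the recursion concrete, here is a short sketch (the helper names are mine) that solves for the $c_n$ in exact rational arithmetic, by composing truncated series and using the fact that $c_n$ enters the coefficient of $x^n$ in $f(f(x))$ exactly as $2c_n$ (once from the inner $f$, once from the outer), while the right-hand side never involves $c_n$:

```python
from fractions import Fraction

def compose(f, g, N):
    """Coefficients of f(g(x)) mod x^(N+1); f, g are coefficient lists
    with f[0] = g[0] = 0, index k holding the coefficient of x^k."""
    out = [Fraction(0)] * (N + 1)
    gpow = [Fraction(1)] + [Fraction(0)] * N          # g^0 = 1
    for k in range(1, N + 1):
        new = [Fraction(0)] * (N + 1)                 # gpow <- gpow * g, truncated
        for i in range(N):
            if gpow[i]:
                for j in range(1, N + 1 - i):
                    new[i + j] += gpow[i] * g[j]
        gpow = new
        for m in range(N + 1):
            out[m] += f[k] * gpow[m]
    return out

def solve_coeffs(N):
    """c_1 .. c_N of the unique formal solution of f(f(x)) = x + f(x)^2."""
    c = [Fraction(0)] * (N + 1)
    c[1] = Fraction(1)
    for n in range(2, N + 1):
        lhs = compose(c, c, n)[n]                         # [x^n] f(f(x)) with c_n still 0
        rhs = sum(c[j] * c[n - j] for j in range(1, n))   # [x^n] f(x)^2
        c[n] = (rhs - lhs) / 2                            # c_n appears as 2*c_n on the left
    return c

c = solve_coeffs(11)
print(c[1:4])          # [Fraction(1, 1), Fraction(1, 2), Fraction(1, 4)]
```

Up to the truncation order this reproduces the observations above: the even-degree coefficients beyond $c_2$ vanish, the signs of the odd ones alternate, and $2^{k-1}c_k$ is an integer.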

More details: since it may be of interest, let me add the argument to see $W^s(\phi)$ and
$W^u(\phi)$ as graphs.

Since $\phi$ is conjugate to $\phi^{-1}=J\phi J$ by the
linear involution $J:(x,y)\mapsto (-y,-x)$, we have $W^u(\phi):=W^s(\phi^{-1})=J\,W^s(\phi)$,
and it suffices to study $\Gamma:=W^s(\phi)$. For any
$(a,b)\in\mathbb{R}^2$ we have $\phi^n(a,b)=(x_n,x_{n+1})$, with $x_0=a$, $x_1=b$,
and $x_{n+1}=x_n^2+x_{n-1}$ for all $n\in\mathbb{N}$. From this it is easy to see
that $x_{2n}$ and $x_{2n+1}$ are both increasing; moreover, $x_{2n}$ is bounded
above iff $x_{2n+1}$ is bounded above, iff $x_{2n}$ converges, iff $x_n\to 0$, iff
$x_n\le 0 $ for all $n\in\mathbb{N}$.

In particular, for any $a\leq 0$ there exists at least one $b\leq 0$ such that
$(a,b)\in \Gamma$: to prove that $b$ is unique, that is, that $\Gamma$ is a graph
over $(-\infty,0]$, the argument is as follows. Consider the function
$V:\Gamma\times\Gamma\to\mathbb{R}$ such that $V(p,p'):=(a-a')(b-b')$ for all
$p:=(a,b)$ and $p':=(a',b')$ in $\Gamma$.

Showing that $\Gamma$ is the graph of a
strictly increasing function is equivalent to showing that $V(p,p')>0$ for all pairs of
distinct points $p\neq p'$ in $\Gamma$.

By direct computation we have
$V\big(\phi(p),\phi(p')\big)\leq V(p,p')$ and $\|\phi(p)-\phi(p')\|^2\geq
\|p-p'\|^2+2V(p,p')(b+b')$. Now, if a pair $(p,p')\in\Gamma\times\Gamma$ has
$V(p,p')\le0$, then by induction $V\big(\phi^n(p),\phi^n(p')\big)\leq 0$ and
$\|\phi^n(p)-\phi^n(p')\|^2\geq \|p-p'\|^2$ for all $n$, so $p=p'$ since both
$\phi^n(p)$ and $\phi^n(p')$ tend to $0$. This proves that $\Gamma$ is the graph of a
strictly increasing function $g:\mathbb{R}\to\mathbb{R}$: since it is connected, $g$
is also continuous. Of course the fact that $\Gamma$ is $\phi$-invariant implies that $g$ solves the functional equation.

Why does it have a unique solution, and not two solutions (one for $c_1=1$, one for $c_1=-1$)?
–
Fedor Petrov, Dec 17 '10 at 14:46

This is strongly reminiscent of problems 163 and 165 in the current (December 15 version) of EC1, 2nd edition: math.mit.edu/~rstan/ec/ec1.pdf (pp. 147-8 and 204 onward). Maybe some of the ideas or references there will be helpful. (Warning: this is a massive pdf.)
–
JBL, Dec 17 '10 at 16:32

In case of usefulness, I had already put a number of references on functional equations at zakuski.math.utsa.edu/~jagy/other.html, emphasizing early work of Irvine Noel Baker. That is not to say that your problem can be written in his framework. But from what I have seen, it is easy for a functional equation solution to behave as $e^{-1/x^2}$, that is, $C^\infty$ on the real line, analytic except at $0$, but when extended to the complex plane (if that is even possible) the origin becomes an essential singularity.
–
Will Jagy, Dec 17 '10 at 23:35


I also put three early items by Kuczma at zakuski.math.utsa.edu/~jagy/other.html, one being a survey (see page 29, formula (82)), as well as two of his articles about $\phi(\phi(x)) = g(x, \phi(x))$, which includes your example. What I am not seeing is anything about smoothness, either at a fixpoint or away from it.
–
Will Jagy, Dec 20 '10 at 22:36

10 Answers

Having thought about this question some more, including making some plots of the trajectories of points under iterated applications of $f$ (see Gottfried's circular trajectories), it is possible to devise a numerical test which should show that the series expansion diverges (if indeed it does). Whether it is a practical test or not depends on how badly it fails. My initial rough attempt wasn't conclusive, so I haven't determined the answer to this yet.

This answer still needs working through the details, but I'll post what I have so far. Also, I think that much of what I have to say is already well known. As the thread is now community wiki and anyone can edit this post, feel free to add any references or further details.

The main ideas are as follows, and should apply much more generally to analytic functions defined near a fixed point via an iterative formula, such as $f(f(z))=\exp(z)-1$ in this MO question.

There are two overlapping open regions bordering the origin from the right and left respectively, whose union is a neighbourhood of the origin (with the origin removed). The functional equation $f(f(z))=z+f(z)^2$ with $f(z)=z+O(z^2)$ can be uniquely solved in each of these regions, on which it is an analytic function.

The solution $f$ on each of the regions has the given power series as an asymptotic expansion at zero. Furthermore, it is possible to explicitly calculate a bound for the error terms in the (truncated) expansion.

The functional equation has a solution in a neighbourhood of the origin (equivalently, the power series has a positive radius of convergence) if and only if the solutions on the two regions agree on their overlap.

One way to verify that $f$ cannot be extended to an analytic function in a neighbourhood of the origin would be to accurately evaluate the solutions on the two domains mentioned above at some point in their overlap, and see if they differ. Another way, which could be more practical, is to use the observation that after the second order term, the only nonzero coefficients in our series expansion are for the odd terms, and the signs are alternating [Edit: this has not been shown to be true though and, in any case, I give a proof that this implies zero radius of convergence below]. Consequently, if we evaluate it at a point on the imaginary axis, truncating after a finite number of terms, we still get a lower bound for $\vert f\vert$. If it does indeed diverge, then this will eventually exceed the upper bound we can calculate as mentioned above, proving divergence. Looking at the first 34 terms from OEIS A107700 was not conclusive though.

Choose a point $z_0$ close to (and just to the right of) the origin. Using the power series to low order, we approximate $z_1=f(z_0)$. Then, the functional equation can be used to calculate $z_n=f(z_{n-1})=z_{n-2}+z_{n-1}^2$. Similarly, choosing points just to the left of the origin, we can calculate good approximations for the iterates of $f^{-1}$. Doing this for a selection of initial points gives a plot as follows.
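
As a sketch of this procedure (the starting point, the series truncation, and the iteration count are my choices), the whole computation fits in a few lines:

```python
# Iterate z_n = z_{n-2} + z_{n-1}^2 from a point on the imaginary axis;
# the first step uses the low-order series f(z) ~ z + z^2/2 + z^3/4.
z0 = 0.2j
z1 = z0 + z0**2 / 2 + z0**3 / 4
traj = [z0, z1]
for _ in range(100):
    traj.append(traj[-2] + traj[-1] ** 2)   # functional-equation recursion

radius = max(abs(z) for z in traj)
print(radius)   # the points stay in a small disc, tracing a near-circular path
```

Starting from $0.2i$, the iterates drift left along the circle of diameter $0.2$ through the origin and slowly re-approach the origin, matching the circular trajectories described below.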

Concentrating on a small region about the origin, the iterates of $f$ give clearly defined trajectories - the plot includes a region of radius 0.26 about the origin (much larger, and the paths do start to go wild). As can be seen, those paths leaving the origin do one of two things. Either they move to the right, curving up or down, until they exit the region. Or, they bend round in a circle and re-enter the origin from the left. The iterates of $f^{-1}$ leaving the origin from the left behave similarly, but reflected about the imaginary axis.

This should not be too surprising, and is behaviour displayed by any analytic function of the form $f(z)=z+\lambda z^2+O(z^3)$ where $\lambda > 0$. Consider approximating $f$ to second order by the Möbius transformation $f(z)\approx g(z)\equiv z/(1-\lambda z)$. Then, $g$ preserves circles centered on the imaginary axis and passing through the origin, and will move points on these circles in a counterclockwise direction above the origin and clockwise below the origin. Any function agreeing with $g$ to second order should behave similarly. In our case, we have $\lambda=1/2$ and $g$ actually agrees with $f$ to third order, so it is not surprising that we get such accurate looking circles (performing a similar exercise with $f(z)=\exp(z)-1$ gave rather more lopsided looking 'circles').
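
The circle-preservation property of the Möbius model is easy to check numerically (the test radius below is my choice; the exactness follows from the fact that $z\mapsto -1/z$ sends these circles to horizontal lines, on which $g$ acts as a real translation):

```python
import cmath

lam = 0.5                        # our case: f(z) = z + z^2/2 + ...
def g(z):
    return z / (1 - lam * z)     # Mobius model g(z) = z/(1 - lam*z)

c = 0.15                         # circle through 0 centred at i*c on the imaginary axis
center = 1j * c
pts = [center + c * cmath.exp(2j * cmath.pi * k / 24) for k in range(24)]
# g should map each such circle to itself
dev = max(abs(abs(g(z) - center) - c) for z in pts)
print(dev)   # essentially machine precision
```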

One thing to note from the plot above: the circles of diameter 0.25 above and below the origin are still very well-defined. So, if $f$ does define an analytic function, then its radius of convergence appears to be at least 0.25, and $f(\pm0.25i)$ is not much larger than 0.25 in magnitude. I wonder if summing a few hundred terms of the power series (as computed by Gottfried) will give a larger number? If it does, then this numerical evidence would point at $f$ not being analytic, and a more precise calculation should make this rigorous.

To understand the trajectories, it is perhaps easiest to consider the coordinate transform $z\mapsto -1/z$. In fact, setting $u(z)=-1/z$, then the Mobius transform above satisfies $g(u(z))=u(z+\lambda)$. More generally, we can calculate the trajectories exiting and entering the origin for a function $f(z)=z+\lambda z^2+O(z^3)$ as follows
$$
\begin{align}
&u_1(z)=\lim_{n\to\infty}f^{n}(-1/(z-n\lambda)),\\\\
&u_2(z)=\lim_{n\to\infty}f^{-n}(-1/(z+n\lambda)).
\end{align}\qquad\qquad{\rm(1)}
$$
Then, $u_1$ and $u_2$ map lines parallel to the real axis onto the trajectories of $f$ and $f^{-1}$ respectively and, after reading this answer, I gather are inverses of Abel functions. We can do a similar thing for our function, using the iterative formula in place of $f^{n}$. Then, we can define $f_i$ according to $f_i(z)=u_i(\lambda+u_i^{-1}(z))$, which will be well-defined analytic functions on the trajectories of $f$ (resp. $f^{-1}$) before they go too far from the origin (after which $u_i$ might not be one-to-one). Then $f_i$ will automatically satisfy the functional equation, and will give an analytic function in a neighbourhood of the origin if they agree on the intersection of their domain (consisting of the circular trajectories exiting and re-entering the origin).

The trajectories leaving the origin from the right and entering from the left do not agree with each other, and intersect. This is inconsistent with the existence of a function $f$ in a neighborhood of the origin solving the functional equation, as the two solutions $f_1,f_2$ defined on trajectories respectively leaving and entering the origin do not agree. And, if the solutions $f_1,f_2$ do not agree on the larger trajectories then, by analytic continuation, they cannot agree close to the origin. So, if it can be confirmed that this behaviour is real (and not numerical inaccuracies), then the radius of convergence is zero.

Update 2: It was noted in the original question that, for $n\ge3$, all coefficients $c_n$ in the power series expansion of $f$ are zero for even $n$, and that the odd degree coefficients are alternating in sign, so that $(-1)^kc_{2k-1}\ge0$ for $k\ge2$. This latter observation has not been proven, although Gottfried has calculated at least 128 coefficients (and I believe that this observation still holds true for all these terms). I'll give a proof of the following: if the odd degree coefficients $c_n$ are alternating in sign for $n\ge3$, then the radius of convergence is zero.

To obtain a contradiction, let us suppose a positive radius of convergence $\rho$, and that the odd degree coefficients are alternating in sign after the 3rd term. This would imply that
$$
f(it)=-t^2/2 + it(1-t^2/4 - h(t^2))\qquad\qquad{\rm(2)}
$$
where $h$ has nonnegative coefficients, so $h\ge0$ for real $t$. Also, $h(t)\to\infty$ as $t\to\rho_-$. For small $t$, $f(it)=it + o(t)$ has positive imaginary part so, by continuity, there will be some $0 < t_0 < \rho$ such that $\Im[f(it_0)]=0$. Choose $t_0$ minimal. If $\rho > 2$ then (2) gives $\Im[f(2i)]\le2(1-2^2/4)=0$ so, in any case, $t_0\le2$. Then, for $0 \le t \le t_0$, the imaginary part of $f(it)$ is bounded by $t(1-t^2/4)$ and (2) gives
$$
\begin{align}
\vert f(it)\vert^2 &\le t^4/4 + t^2(1-t^2/4)^2\\\\
&=t^2(1-t^2(4-t^2)/16)\\\\
&\le t^2 < \rho^2.
\end{align}
$$
So, $f(it)$ is within the radius of convergence for $0 \le t \le t_0$. Also, by construction, the functional equation $f(f(it))=it+f(it)^2$ holds for $\vert t\vert$ small. Then, by analytic continuation the functional equation holds on $0\le t \le t_0$ so,
$$
\Im[f(f(it_0))]=\Im[it_0+f(it_0)^2]=t_0\not=0.
$$
However, $f(it_0)$ and the power series coefficients are all real, so $f(f(it_0))$ is real, giving the contradiction.
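
The algebraic identity used in the display above, $t^4/4 + t^2(1-t^2/4)^2 = t^2\big(1-t^2(4-t^2)/16\big)$, can be double-checked in exact rational arithmetic (a throwaway sanity check, not part of the proof):

```python
from fractions import Fraction

def lhs(t):   # t^4/4 + t^2 (1 - t^2/4)^2
    return t**4 / 4 + t**2 * (1 - t**2 / 4)**2

def rhs(t):   # t^2 (1 - t^2 (4 - t^2) / 16)
    return t**2 * (1 - t**2 * (4 - t**2) / 16)

for t in [Fraction(1, 3), Fraction(1, 2), Fraction(3, 2), Fraction(2)]:
    assert lhs(t) == rhs(t)      # the two expressions agree exactly
    assert lhs(t) <= t**2        # and are bounded by t^2 for 0 <= t <= 2
print("identity verified")
```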

The existence of these circular trajectories seems close to a contradiction for the following reason too. It looks like (though I didn't try to prove it) the odd degree coefficients of the formal power series have alternating signs, which in terms of the formal series reads $f(it)=-t^2/2+it(1-h(t^2))$, with $h$ a series with positive coefficients. Thus, assuming $R>0$, $h(t)\to+\infty$ for $t\to R^-$, and $\Im(f(it))\to-\infty$ as $t\to R$, in contrast with the existence of circle orbits (at least, if they exist up to the height $iR$).
–
Pietro Majer, Dec 22 '10 at 16:19

@Pietro: yes, if the odd degree coefficients have alternating sign, it is possible to show that the radius of convergence is zero using the fact that f(it) will pass through the real axis.
–
George Lowther, Dec 22 '10 at 19:33

Concerning the remark that the function could be convergent for diameters 0.25 and below: I do not yet have new definitive results. But here is another heuristic: if you compare the coefficients of the power series with the Bernoulli numbers (shifted by an index difference of 1) and form the ratios $c_k/b_{k-1}$, the sequence of ratios gives a very civilized picture. The ratios seem to increase slowly, but now without much up and down. So I take this as another strong suggestion that the rate of increase is hypergeometric, and thus that the convergence radius of the power series of $f$ is zero.
–
Gottfried Helms, Dec 22 '10 at 23:37

@George: do you mean you see how to prove the implication "$(-1)^k c_{2k-1} > 0$ for all $k\ge 3$ $\Rightarrow R=0$"? It seems to me that one needs to prove the existence of Gottfried's circles up to the height $iR$.
–
Pietro Majer, Dec 23 '10 at 9:48

@Pietro: Yes. I added this to the answer.
–
George Lowther, Dec 23 '10 at 16:40

I computed the coefficients of the formal power series for $f(x)$ up to $n=128$ terms. As was already mentioned in other answers/comments, the coefficients seem to form a formal power series with convergence radius zero; equivalently, the rate of growth of the absolute values of the coefficients is hypergeometric.

To get a visual impression of the characteristics of the function I evaluated $f(x)$ for some $ 0 < x <2 $

Method 1: I used (an experimental version of) Nørlund summation, with which I could estimate approximations on that interval $0 < x < 2$.

Method 2, just to cross-check the results: I repeated the evaluation for that range of $x$ using the functional equation.

I computed the "primary interval" $x_0 \in [0.1,\, 0.105\ldots] \;\to\; y_0 \in [0.105\ldots,\, 0.111\ldots]$, which defines one unit interval of iteration height, and on which the Nørlund sum seems to converge very well (I assume error $<10^{-28}$ using 128 terms of the power series). The functional equation then allows extending that interval to higher $x$, depending on whether we have enough accuracy in the computation of the primary interval $x_0 \to y_0$.
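
The extension step can be sketched as follows (here I substitute a crude truncated series for the Nørlund-summed primary value, just to show the mechanism):

```python
# If y = f(x) is already known, the functional equation f(f(x)) = x + f(x)^2
# gives f(y) = x + y^2, so (x, f(x)) can be pushed to (f(x), f(f(x))) repeatedly.
x = 0.1
y = x + x**2 / 2 + x**3 / 4      # truncated series on the primary interval
pairs = [(x, y)]
while y < 4.5:
    x, y = y, x + y * y          # extend f to the new point y
    pairs.append((x, y))

print(pairs[0], pairs[1])   # roughly (0.1, 0.105...) and (0.105..., 0.111...)
```

This reproduces the primary interval quoted above and reaches the upper bound $x \approx 4.5$ of the plotted range in a few dozen steps.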

Result: both computations gave similar and meaningful results, with an accuracy of at least 10 digits. But all this requires a more rigorous analysis, now that we have some likelihood of being on the right track here.

Here is a plot for the estimated function $ f(x) $ for the interval $0 < x < 4.5 $ (the increase of the upper-bound of the interval was done using the functional equation)

and here is a plot which includes the inverse function, to see the characteristic in terms of $ x^2 = f(x)-f^{-1}(x) $

[update] Based on the second plot I find this representation interesting, as it focuses on squares. It suggests trying to use integrals to find the required coordinates for the positioning of the squares. Does the sequence of green-framed squares have some direct representation which allows determining the coordinates independently of the recursion? The "partition" of the big green square into the red ones alludes to something like the golden ratio... [end update]

Thank you, very clear pictures. Another possibility would be iterating the image of the positive half-axis $\{(0,y): y\ge0\}$ via the map $\phi:(x,y)\mapsto(y,x+y^2)$. This way we get a sequence of graphs that converge to the graph of $f$, alternately from above and below.
–
Pietro Majer, Dec 18 '10 at 15:38

Why do you think that the convergence radius is zero? From the plot oeis.org/A107699/graph it seems that the numerators of the coefficients grow exponentially.
–
J.C. Ottem, Dec 18 '10 at 17:57

Very nice! I'm wondering if it's possible to do something similar, using the functional equation to extend $f$ to the right half-plane $\Re[z] > 0$? Just a wild stab in the dark, but maybe it does extend to an analytic function on that region, and explodes as you approach the imaginary axis?
–
George Lowther, Dec 18 '10 at 23:47

@George: I plotted a couple of trajectories for $\Re(z) = 0.05$ and small imaginary values. Curiously, they look like perfect circles. Is this something like what you were asking for?
–
Gottfried Helms, Dec 19 '10 at 17:02

@Gottfried: Yes! I was expecting that the trajectories might go astray. Or maybe that different trajectories cross each other. But what you observed seems perfectly consistent with an analytic function.
–
George Lowther, Dec 19 '10 at 17:42

Here is a plot of the trajectories starting at a couple of initial values in the complex plane. I computed the initial coordinates using the power series (including Nørlund summation) and then used the functional equation to produce the trajectories for the iterates.

[Update] I tried to find more information about the deviation of the trajectories from the circular shape. It seems that these deviations are systematic, though perhaps small. Here is a plot of the trajectories from $z_1 = 0.2i$; the center of the perfect circle was assumed at $0.1i$. I modified the value of $z_0$ for the best (visual) fit of all $z_{k+1} = z_k^2+z_{k-1}$, and in some examples it seemed that the power series solutions are indeed the best choice for $z_0$.
Here is the plot of the near-circular trajectories in positive and negative directions (grey-shaded, but it is difficult to recognize the "perfect circle").

The increasing wobbling at the left and right borders seems due to the accumulation of numerical errors over the iterations (I used Excel for the manual optimization).

Note: for purely imaginary $z_1$ the Nørlund summation for the power series does not work, because we get a divergent series (of imaginary values) with non-alternating signs beginning at some small index.
[end update]

I have to say, it does look like $f$ is a well defined analytic function in a neighbourhood of the origin. And, you might be able to put together a proof by showing that these circles are well-defined, so $f$ is well defined. Then apply the functional equation to $f^\prime$ to show that $f$ is differentiable.
–
George Lowther, Dec 19 '10 at 19:15

However, I get a very similar result for $f(f(z))=e^z-1$: i53.tinypic.com/1zd2gcl.png. Yet, that does not give an analytic function in a neighbourhood of the origin (according to this answer: mathoverflow.net/questions/4347/…). I'm not sure what goes wrong in that case. I'll have to try reading the linked paper of Baker, although I don't read German, so it could be tricky.
–
George Lowther, Dec 19 '10 at 19:48

I think the issue could be the following: consider calculating the path taken by the iterates of f starting in the upper-right quadrant, which will go around the circles in an anticlockwise direction, eventually approaching the origin from the upper-left quadrant. Try the same exercise, taking iterates of $f^{-1}$, which go around the circles in a clockwise direction. Do they agree? If they do, then it looks like you have an analytic function. Otherwise, you only get an analytic function if you exclude either the positive or negative real line.
–
George Lowther, Dec 19 '10 at 20:23

Hmm, two observations. First: iterating $f$ from an initially real value goes to real infinity. Second: I also tried $z=0.05+0.001i$ but got an overflow in the exponent at some iterate (didn't investigate this further), which is remarkable when using Pari/GP. I don't know how things behave, or how the radius of the circle/ellipse/whatever "explodes", for $z_0$ even nearer to the real axis.
–
Gottfried Helms, Dec 19 '10 at 21:07

Note: this should go as an answer to the comment of J.C. Ottem, but I want to show data and thus need the entry for answers to the original question.

Hi J.C.: for the following I took every second coefficient. To adapt the signs I multiply by powers of $i=\sqrt{-1}$ to get coefficients, say $d_k$. Then I show the quotients of subsequent $d_k$: $q_k = d_k/d_{k-1} = -c_{2k}/c_{2k-2}$.

If that behaviour continues, the generated power series must have convergence radius zero.

[update] Here are the first 24 terms of the power series for $f(x)$ as I got them: left column in float, middle column in most-cancelled rational format, right column in normalized rational format (the numerators can be found in OEIS):

This answer should go as a comment on George's analysis, but I had an error in the comments and also couldn't format them properly. So here it goes.

To force a symmetry around the imaginary axis means to introduce a definition independent of the ansatz via the formal power series. We'll have to check whether they finally match.

Assume some circle of radius $c$ over the origin with center at $ci$.
Then we choose $z_0 = 2ci$, just the top of the circle.
Using the functional equation $z_1 = z_0^2+z_{-1}$, together with the symmetry assumption and the assumption that $z_1$, $z_0$, $z_{-1}$ lie on the circumference of the circle, we can uniquely determine all needed coordinates and thus have a "germ" for the iteration. What we get is the following.

The resulting trajectory is very near the one computed using the truncated power series. It is symmetric by construction; however, it leaves the circumference of the circle already at $z_2$.
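
Here is a sketch of that construction (with the concrete choice $c=0.1$, which is mine): the symmetry assumption $z_{-1}=-\overline{z_1}$ together with $z_1=z_0^2+z_{-1}$ forces $\Re(z_1)=-2c^2$, and the circle condition $|z_1-ci|=c$ then fixes $\Im(z_1)$:

```python
import math

c = 0.1                          # circle of radius c with centre i*c (my choice)
center = 1j * c
z0 = 2j * c                      # top of the circle
re1 = -2 * c * c                 # from z1 = z0^2 + z_{-1} = -4c^2 + z_{-1} and symmetry
im1 = c + math.sqrt(c * c - 4 * c**4)   # circle condition |z1 - i*c| = c
z1 = complex(re1, im1)
zm1 = complex(-re1, im1)         # mirror image: z_{-1} = -conj(z1)

assert abs(z1 - (z0**2 + zm1)) < 1e-12       # the functional-equation step holds
assert abs(abs(z1 - center) - c) < 1e-12     # z1 lies on the circle by construction

z2 = z1**2 + z0
dev = abs(abs(z2 - center) - c)
print(dev)   # small but nonzero: the trajectory leaves the circle at z2
```

For $c=0.1$ the deviation at $z_2$ comes out of order $10^{-4}$: tiny, but systematic rather than a rounding artifact.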

[update] Here is a comparison of the "circularity" of the trajectories (as already computed) using the power series, and using the above ansatz with the assumption of a symmetric and circular initialization around the center $(0, ci)$.

It seems interesting to complexify the map you are studying. Thus we can now study $$F:\binom{x}{y} \mapsto \binom{y}{x+y^2}$$
as a map from $\mathbb{C}^2$ to itself (the so-called "complex Hénon map"). Near the origin, the second iterate $F \circ F$ is tangent to the identity. There is a whole body of results concerning germs of maps of $\mathbb{C}^N$ tangent to the identity that can be applied here.
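
Concretely, $F\circ F(x,y) = \big(x+y^2,\; y+(x+y^2)^2\big)$, which is the identity plus quadratic and higher terms, so its derivative at the origin is the identity. A quick check:

```python
def F(x, y):
    return y, x + y * y          # the Henon-type map (x, y) -> (y, x + y^2)

def F2(x, y):                    # second iterate F∘F
    return F(*F(x, y))

# closed form: F∘F(x, y) = (x + y^2, y + (x + y^2)^2) = id + higher-order terms
for (x, y) in [(0.1, -0.2), (0.0, 0.3), (-0.05, 0.05)]:
    u, v = F2(x, y)
    assert u == x + y * y and v == y + u * u
print("second iterate is tangent to the identity at the origin")
```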

In particular, such germs can have "parabolic curves" (analytic images of a disk $\Delta$ in $\mathbb{C}^2$ such that $(0,0) \in \partial \Delta$), included in the stable manifold of the origin. Such curves should be seen as generalizations of the "Leau-Fatou flowers" appearing in the study of parabolic fixed points in $\mathbb{C}$.

A very nice survey about local dynamics in $\mathbb{C}^N$ has been written by M. Abate (in LNM 1998, Springer "Holomorphic dynamical systems").

When studying such Hénon maps in $\mathbb{C}^2$ it is also usually helpful to plot slices of the set $K^+$ (points with bounded forward orbits): here is the horizontal slice $y=0$ of $K^+$. In blue, you see the set of points with unbounded forward orbit.

But there is a good hint. Thanks to a rescaling argument, the equation $f(f(x))=x+f(x)^2$ for analytic germs is equivalent to $g(g(x))=x+\lambda g(x)^2$ for $\lambda>0$, even for a small one, and the latter may be treated as a perturbation of $g(g(x))=x$ by means of the implicit function theorem.
–
Pietro Majer, Dec 18 '10 at 1:08

I again claim that there is no positive radius of convergence of the power series (which seems to go along with the numerical evidence provided).

Well I failed to redeem myself.

Some Observations:

First of all, formally one has $f^{-1}(x)=-f(-x)$; it then follows that for $g(x)=-f(x)$ one has $g(g(x))=-f(-f(x))=f^{-1}(f(x))=x$.

Notice that (formally) $g(x)=-x-\frac{1}{2} x^2+O(x^3)$, and that if $f$ had a positive radius of convergence then so would $g$.

Lemma: Let $g$ be real analytic on some interval $I=(-\epsilon,\epsilon)$ and have an expansion $g(x)=x+O(x^2)$ about $0$. If $g(g(x))=x$ for all $x\in (-\delta, \delta)$ then $g(x)=x$ for all $x\in I$.

Remark: I realized after posting this that, as stated (i.e. with $g=x+O(x^2)$), there aren't even non-trivial formal solutions. However, if $g(x)=-x+O(x^2)$ there are formal solutions, but I don't know if there are actual analytic solutions.

Proof: Let $r>0$ be the radius of convergence of $g$ at $0$, and notice that $g$ extends to a holomorphic function (which we continue to denote by $g$) on

$$D_r=\lbrace z: |z|< r \rbrace \subset \mathbb{C}.$$

Notice that $g(z)=z+O(z^2)$ and $g'(z)=1+O(z)$. In particular, by the inverse function theorem there is an $r>\rho>0$ so that

1) $g(D_\rho)\subset D_r$ is open.

2) $g$ is a conformal isomorphism between $D_\rho$ and $g(D_\rho)$.

Notice that on $D_\rho$ one has $g(g(z))=z$, by analytic continuation.

Now let $U=D_\rho\cap g(D_\rho)$; notice that $U$ is open and $0\in U$. By shrinking $\rho$ if needed we can ensure that $U$ is convex, and hence simply connected; we can also ensure $g(g(z))=z$ on $U$. We claim that $g(U)=U$. To see this, note that if $p\in U$ then $p\in D_\rho$ and $p=g(p')$ for some $p'\in D_\rho$. But $p'=g(g(p'))=g(p)\in g(D_\rho)$, so $p'\in U$, and hence $p\in g(U)$. Hence $U\subset g(U)$. On the other hand, if $q\in g(U)$ then $q=g(q')$ with $q'\in U$; but, as we just showed, $q'=g(q'')$ for some $q''\in U$, and so $q=g(g(q''))=q''\in U$.

By the Riemann mapping theorem there is a conformal isomorphism $\psi:
U\to D_1$ such that $\psi(0)=0$ and $\psi'(0)=\lambda$ for some $\lambda>0$. Now consider the map:

$$G=\psi \circ g \circ \psi^{-1}.$$

This is a conformal automorphism of $D_1$, hence a Möbius transformation, and it satisfies $G(G(z))=z$. Since $\psi(0)=0$ and $g(0)=0$, $G$ fixes the origin, so $G$ is a rotation $G(z)=e^{i\theta}z$; the only rotations satisfying $G(G(z))=z$ are $G(z)=z$ and $G(z)=-z$. The latter can't occur, as it would contradict $g(z)=z+O(z^2)$. In particular, $G(z)=z$. But then $g(z)=\psi^{-1}( G( \psi(z)))=\psi^{-1}( \psi(z))=z$, as claimed.