Obviously I cannot apply the implicit function theorem. Under which circumstances can I still express $y$ as a function of $x$ locally around $(0,0)$? It seems as though it should be $y'(0) = \pm \sqrt{-\frac{F_{xx}(0,0)}{F_{yy}(0,0)}}$...

Is $F_{xx}(0,0)/F_{yy}(0,0) > 0$ a typo? As you wrote it (at $(0,0)$: $F_x=F_y=F_{xy}=0$; $F_{xx}\neq0$, $F_{yy}\neq0$; $F_{xx}F_{yy}>0$), the origin is either a strict local minimum or a strict local maximum of $F$, so it is an isolated point of the zero set of $F$.
–
Pietro Majer, Jan 12 '12 at 14:50

Assuming you had $F_{xx}(0,0) F_{yy}(0,0) < 0$, it is true that the zero-set of $F$, locally at $(0,0)$, is the union of two graphs of monotone $C^1$ functions $y_+(x)$ and $y_-(x)$.
–
Pietro Majer, Jan 12 '12 at 15:04

Yes, that was a typo. Thanks, I'm just now reading up on Morse theory.
–
dettonville, Jan 12 '12 at 15:35

I would never want to stop anyone from learning Morse theory, but: (1) Morse theory is not necessary to answer your question, and (2) depending on what treatment of Morse theory you look at, it may not be sufficient either, because you don't need the Morse Lemma to do Morse Theory!
–
Tom Goodwillie, Jan 12 '12 at 15:54

Note: it's quite easy to show that there is a small rectangle $[-\delta,\delta]\times[-\epsilon,\epsilon]$ about the origin where you can use the intermediate value theorem and say that the zero set of $F$ there is exactly the union of the graphs of two functions $y_+, y_-: [-\delta,\delta]\to[-\epsilon,\epsilon]$, arranged so that $\operatorname{sgn} y_+(x)=\operatorname{sgn} x$ and $\operatorname{sgn} y_-(x)=-\operatorname{sgn} x$. Also, the standard implicit function theorem tells you that these functions are indeed at least as regular as $F$ for $x\neq0$. The delicate point is to show that the functions are smooth at $x=0$ too.
–
Pietro Majer, Jan 12 '12 at 16:59

2 Answers

This is a job for the Morse lemma. The second degree Taylor polynomial of $F(x,y)$ has the form $ax^2+by^2$ where $ab<0$. (You said $>0$ but that can't be what you meant.) The Morse Lemma says, for a sufficiently smooth function of several variables, that if it has zero constant and linear parts and a nondegenerate quadratic part then there is a change of variables making the function purely quadratic. In the case at hand (two variables and indefinite quadratic part) that means that $F=uv$ where $u(x,y)$ and $v(x,y)$ have no constant term and have linearly independent linear parts whose product is $ax^2+by^2$. Now the implicit function theorem applies to $u$ and to $v$ and you find that the solution set of $F(x,y)=0$ is locally the union of the graphs of two smooth functions, one with positive slope and one with negative slope.
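As a quick sanity check (my own hypothetical example, not part of the answer), the conclusion can be tested numerically: take $F(x,y)=y^2+y^3-x^2$, so at the origin $F$, $F_x$, $F_y$, $F_{xy}$ all vanish, $F_{xx}=-2$ and $F_{yy}=2$, and the two local branches should have slopes $\pm\sqrt{-F_{xx}/F_{yy}}=\pm1$:

```python
# Hypothetical example F(x, y) = y^2 + y^3 - x^2: F_xx(0,0) = -2,
# F_yy(0,0) = 2, so F_xx * F_yy < 0 and the predicted slopes are +-1.

def F(x, y):
    return y**2 + y**3 - x**2

def solve_branch(x, lo, hi):
    """Bisection for the root of y -> F(x, y) on [lo, hi] (sign change assumed)."""
    flo = F(x, lo)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        fmid = F(x, mid)
        if (fmid < 0) == (flo < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 1e-3
y_plus = solve_branch(x, 0.0, 0.5)    # branch with sgn y_+(x) = sgn x
y_minus = solve_branch(x, -0.4, 0.0)  # branch with sgn y_-(x) = -sgn x
print(y_plus / x, y_minus / x)  # near +1 and -1 respectively
```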

EDIT Here is a proof of the Morse Lemma:

First, in one variable it says that if $f(0)=f_x(0)=0$ and $f_{xx}(0)\neq 0$ then in a neighborhood of $x=0$ $f(x)$ is plus or minus the square of a smooth function. This is true because the vanishing of $f(0)$ means that $f(x)=xg(x)$ for some smooth $g$, and the vanishing of $f_x(0)=g(0)$ means that $g(x)=xh(x)$ for some smooth $h$, and the nonvanishing of $f_{xx}(0)=2g_x(0)=2h(0)$ means that $h$ is locally plus or minus a square.
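The one-variable statement can be illustrated numerically on a hypothetical example, $f(x)=1-\cos x$: here $f(0)=f'(0)=0$ and $f''(0)=1>0$, so $f$ should be the square of a smooth function $g(x)=x\sqrt{h(x)}$ with $g'(0)=\sqrt{f''(0)/2}$:

```python
import math

# Hypothetical example f(x) = 1 - cos(x): f(0) = f'(0) = 0, f''(0) = 1 > 0,
# so f should be the square of a smooth g with g'(0) = sqrt(f''(0)/2).

def f(x):
    return 1.0 - math.cos(x)

def h(x):
    """The factor in f(x) = x^2 h(x); h(0) = f''(0)/2 by Taylor's theorem."""
    return f(x) / x**2 if x != 0 else 0.5

def g(x):
    """The smooth square root: g(x) = x * sqrt(h(x)), so that f = g^2."""
    return x * math.sqrt(h(x))

slope = (g(1e-4) - g(-1e-4)) / 2e-4  # symmetric difference quotient at 0
print(slope, math.sqrt(0.5))  # the two should agree closely
```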

Now suppose that $F(x,y)$ is such that $F$, $F_x$, and $F_y$ vanish at $x=y=0$ but $F_{yy}$ does not, and suppose that the homogeneous quadratic part of the Taylor series is nondegenerate. The implicit function theorem applied to $F_y$ shows that $F_y=0$ along the graph of a smooth function $y=k(x)$ with $k(0)=0$. Define $f(x)=F(x,k(x))$. Then $F(x,y)-f(x)$ is such that both it and its $y$-derivative vanish along that graph. Therefore we may write $F(x,y)-f(x)=(y-k(x))^2H(x,y)$ for some smooth $H$. Furthermore $2H(0,0)=F_{yy}(0,0)\neq 0$, so $H$ is plus or minus the square of a smooth function. Now $F(x,y)$ is the sum of $F(x,y)-f(x)$, which is plus or minus the square of a smooth function, and $f(x)$, which must also be plus or minus the square of a smooth function by the one-variable case (nondegeneracy of the quadratic part of $F$ gives $f_{xx}(0)\neq0$). In the indefinite case the two squares occur with opposite signs, so $F=u^2-v^2=(u+v)(u-v)$, which is the desired factorization.
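The construction in the last paragraph can be checked numerically on a hypothetical example, $F(x,y)=(1+x)y^2+\sin(x)\,y-x^2$: all first derivatives vanish at the origin, $F_{yy}(0,0)=2\neq0$, and the quadratic part $y^2+xy-x^2$ is indefinite and nondegenerate. The quotient $H=(F-f)/(y-k)^2$ should satisfy $2H(0,0)=F_{yy}(0,0)$:

```python
import math

# Hypothetical example F(x, y) = (1+x) y^2 + sin(x) y - x^2.

def F(x, y):
    return (1.0 + x) * y**2 + math.sin(x) * y - x**2

def F_y(x, y):
    return 2.0 * (1.0 + x) * y + math.sin(x)

def k(x):
    """Solve F_y(x, k(x)) = 0 by Newton's method (F_yy = 2(1+x) > 0 near 0)."""
    y = 0.0
    for _ in range(50):
        y -= F_y(x, y) / (2.0 * (1.0 + x))
    return y

def H(x, y):
    """H(x, y) = (F(x, y) - f(x)) / (y - k(x))^2 with f(x) = F(x, k(x))."""
    kx = k(x)
    return (F(x, y) - F(x, kx)) / (y - kx)**2

print(2.0 * H(1e-4, 1e-3))  # should be close to F_yy(0,0) = 2
```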

Assume without loss of generality that $F_{yy}(0,0)>0$, so that $F_{xx}(0,0)<0$ (since $F_{xx}(0,0)F_{yy}(0,0)<0$). By continuity there exist $\eta > 0$ and $\epsilon > 0$ such that
$F_{yy}(x,y) > 0$ for $|x| \le \eta$ and $|y|\le\epsilon$. So for all $|x|\le \eta$
the function $y\mapsto F(x,y)$ is strictly convex on the interval
$[-\epsilon,\epsilon]$. In particular $F(0,\pm\epsilon) >0$ because
$F(0,0)=F_y(0,0)=0$. Since $F(0,\pm\epsilon) >0$ and $F_{xx}(0,0) < 0$, we also
have by continuity $F(x,\pm\epsilon) >0$ and $F_{xx}(x,0) < 0$, for all
$|x|\le\delta$ for some $0 < \delta\le\eta$; thus $F(x,0) < 0$ for $0 < |x|\le
\delta$. Now, since for all $|x|\le\delta$ the function $y\mapsto F(x,y)$ is
strictly convex on the interval $[-\epsilon,\epsilon]$, positive at $y=\pm\epsilon$
and negative at $y=0$, for any $0 < |x|\le\delta$ we have $F(x,y)=0$ exactly for one
$0 < y < \epsilon$ and one $-\epsilon < y < 0$, always with $F _ y (x,y)\neq0$,
while $F(0,y)=0$ exactly for $y=0$ if $|y|\le\epsilon$. This proves that the trace
of the zero-set of $F$ on $[-\delta,\delta]\times [-\epsilon,\epsilon]$ is the union
of the graphs of two functions, $y_+: [-\delta,\delta]\to [-\epsilon,\epsilon]$ and
$y_-: [-\delta,\delta]\to [-\epsilon,\epsilon]$ defined so that $\operatorname{sgn}
y _ + (x)=\operatorname{sgn} x$ and $\operatorname{sgn}y _ - (x)=-\operatorname{sgn}
x$. Note that the fact that $\epsilon$ is arbitrary immediately implies that $y_+$
and $y_-$ are continuous at $x=0$ and vanish there. Actually, if we locate the zero-set of $F$ with a bit more care we also have that $y _ \pm (x) $
is differentiable at $x=0$: this follows from the fact that $F$ satisfies an inequality locally at the origin:
$$\big(F_{xx}(0,0)+o(1) \big)x^2/2 +\big( F_{yy}(0,0)+o(1) \big)y^2/2 \le F(x,y). $$
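For what it's worth, here is a sketch (my own gloss, not part of the original answer) of how the expansion at the origin gives the derivative at $0$; since $F_{xy}(0,0)=0$, Taylor's theorem in fact gives the two-sided version of the inequality above:

```latex
% Taylor expansion at the origin (F, F_x, F_y, F_{xy} all vanish there):
F(x,y) = \tfrac12 F_{xx}(0,0)\,x^2 + \tfrac12 F_{yy}(0,0)\,y^2 + o(x^2+y^2).
% On the zero set F(x, y_\pm(x)) = 0, so
\bigl(F_{xx}(0,0)+o(1)\bigr)\,x^2 + \bigl(F_{yy}(0,0)+o(1)\bigr)\,y_\pm(x)^2 = 0,
% hence y_\pm(x)^2 / x^2 \to -F_{xx}(0,0)/F_{yy}(0,0) as x \to 0, and with
% the sign conventions above,
y_\pm'(0) = \pm\sqrt{-\,F_{xx}(0,0)/F_{yy}(0,0)}.
```

This matches the formula guessed in the question.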

Wow, thanks so much. I can't believe you even did all the epsilons and deltas! Working through it now, looks like I'll learn Morse Theory some other time.
–
dettonville, Jan 13 '12 at 8:34

Once started, I couldn't stop :) I edited it and tried to improve it. It seems the points are: (1) show that the zero-set in a small rectangle is the union of two graphs (this is just the intermediate value theorem); (2) locating the zero set more precisely tells you that each function admits a derivative at $0$; (3) use the standard implicit function theorem to prove that both functions are $C^1$ away from $0$; (4) using the expansion of $F$ at the origin, prove that the derivatives of the two functions are indeed continuous at $0$.
–
Pietro Majer, Jan 13 '12 at 10:47