
NOTES ON MINIMAL SURFACES

DANNY CALEGARI

Abstract. These are notes on minimal surfaces, with an emphasis on the classical theory and its connection to complex analysis, and the topological applications to 3-manifold geometry/topology. These notes follow a course given at the University of Chicago in Spring 2014.

Contents
1. First variation formula 1
2. Minimal surfaces in Euclidean space 7
3. Second variation formula 12
4. Acknowledgments 15
References

Date: May 5, 2014.

1. First variation formula

In this section we derive the first variation formula, which characterizes those submanifolds that are critical for the volume functional among compactly supported variations: the critical submanifolds are precisely those with vanishing mean curvature. We also develop some elements of the theory of Riemannian geometry in coordinate-free language (as far as possible).

1.1. Grad, div, curl. There are many natural differential operators on functions, vector fields, and differential forms on R^3 which make use of many implicit isomorphisms between various spaces. This can make it confusing to figure out the analogs of these operators on Riemannian 3-manifolds (or Riemannian manifolds of other dimensions). In this section we recall the coordinate-free definitions of some of these operators, which generalize the familiar case of Euclidean R^3.

1.1.1. Gradient. On a smooth manifold M, there is a natural differential operator d, which operates on functions and forms of all orders. By definition, if f is a smooth function, df is the 1-form such that for all vector fields X, there is an identity

  df(X) = Xf

Where df is nonzero, the kernel of df is a hyperplane field, which is simply the tangent space to the level set of f through the given point.
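The defining identity ⟨grad(f), X⟩ = df(X) can be checked symbolically in coordinates, where grad(f) has components g^{ij} ∂f/∂x^j. The following sketch (not part of the notes; the metric and the functions f, X are chosen purely for illustration, and sympy is assumed available) verifies the identity for a conformal metric on R^2.

```python
# Illustrative check of grad(f) = (df)^sharp, i.e. <grad(f), X> = df(X),
# on R^2 with the (arbitrarily chosen) conformal metric g = e^{2x}(dx^2 + dy^2).
import sympy as sp

x, y = sp.symbols('x y', real=True)
g = sp.Matrix([[sp.exp(2*x), 0], [0, sp.exp(2*x)]])   # metric tensor g_ij
g_inv = g.inv()                                        # g^{ij}

f = x**2 + sp.sin(y)                                   # an arbitrary smooth function
df = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])         # components of the 1-form df
grad_f = sp.simplify(g_inv * df)                       # grad(f) = (df)^sharp

# Check <grad(f), X> = df(X) for a sample vector field X.
X = sp.Matrix([y, sp.exp(x)])
lhs = sp.simplify((grad_f.T * g * X)[0])               # <grad(f), X>
rhs = sp.simplify((df.T * X)[0])                       # df(X)
assert sp.simplify(lhs - rhs) == 0
```

Note that grad(f) depends on the metric even though df does not: here the first component of grad(f) is 2x e^{-2x} rather than the Euclidean 2x.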

If M is a Riemannian manifold with inner product denoted ⟨·, ·⟩, there are natural isomorphisms between 1-forms and vector fields (called the sharp ♯ and the flat ♭ isomorphisms) defined by ⟨α♯, X⟩ = α(X) for a 1-form α and a vector field X, and X♭(Y) = ⟨X, Y⟩ for vector fields X and Y. Using these isomorphisms, for any function f on a Riemannian manifold the gradient, denoted grad(f) or sometimes ∇f, is the vector field defined by the formula

  grad(f) := (df)♯

In other words, grad(f) is the unique vector field such that, for any other vector field X, we have

  ⟨grad(f), X⟩ = df(X)

For any 1-form α, the vector field α♯ is perpendicular to the hyperplane field ker(α); thus grad(f) is perpendicular to the level sets of f, and points in the direction in which f is increasing, with size proportional to the rate at which f increases. The zeros of the gradient are the critical points of f; for instance, grad(f) vanishes at the minimum and the maximum of f.

1.1.2. Divergence. On an oriented Riemannian n-manifold there is a volume form dvol, and a Hodge star ⋆ taking k-forms to (n−k)-forms, satisfying

  α ∧ ⋆α = |α|² dvol

This does not define ⋆α uniquely; we must further add that ⋆α is orthogonal (with respect to the pointwise inner product on (n−k)-forms) to the subspace of forms β with α ∧ β = 0. In other words, ⋆α is the form of smallest (pointwise) norm subject to α ∧ ⋆α = |α|² dvol. With this notation, ⋆dvol is the constant function 1; conversely, for any smooth function f, we have ⋆f = f dvol.

If X is a smooth vector field, then (at least locally and for short time) flow along X determines a 1-parameter family of diffeomorphisms φ(X)_t. The Lie derivative of a (contravariant) tensor field α, denoted L_X α, is by definition

  L_X α = d/dt|_{t=0} φ(X)_t* α

For forms α it satisfies the Cartan formula

  L_X α = ι_X dα + d ι_X α

where ι_X β is the interior product of X with a form β (i.e. the form obtained by contracting β with X).
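The Cartan formula can be verified directly in coordinates on R^3, where for a 1-form α the Lie derivative has components (L_X α)_i = X^j ∂_j α_i + α_j ∂_i X^j. The following sketch (not part of the notes; the field X and the form α are arbitrary test data) checks the formula symbolically.

```python
# Symbolic check of the Cartan formula L_X alpha = i_X d(alpha) + d(i_X alpha)
# for a 1-form alpha on R^3, with X and alpha chosen ad hoc.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = [x, y, z]

X = [y*z, x**2, sp.sin(x)]          # sample vector field (components)
a = [x*y, z**2, x + y*z]            # sample 1-form (components)

# Lie derivative of a 1-form: (L_X a)_i = X^j d_j a_i + a_j d_i X^j
lie = [sum(X[j]*sp.diff(a[i], coords[j]) + a[j]*sp.diff(X[j], coords[i])
           for j in range(3)) for i in range(3)]

# (i_X da)_i = (da)(X, e_i) = X^j (d_j a_i - d_i a_j)
ix_da = [sum(X[j]*(sp.diff(a[i], coords[j]) - sp.diff(a[j], coords[i]))
             for j in range(3)) for i in range(3)]

# d(i_X a), where i_X a = X^j a_j is a function
d_ixa = [sp.diff(sum(X[j]*a[j] for j in range(3)), coords[i]) for i in range(3)]

cartan = [sp.simplify(ix_da[i] + d_ixa[i] - lie[i]) for i in range(3)]
assert cartan == [0, 0, 0]
```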
The divergence of a vector field X, denoted div(X) (or sometimes ∇·X), is the function defined by the formula

  div(X) = ⋆(L_X dvol)

By Cartan's formula, L_X dvol = d ι_X dvol, because dvol is closed. Furthermore, for any vector field X we have the identity

  ι_X dvol = ⋆(X♭)

which can be verified pointwise, since both sides depend only on the values at a point. Thus we obtain the equivalent formula

  div(X) = ⋆d⋆(X♭)

The operator −⋆d⋆ on 1-forms is often denoted d*; likewise we sometimes denote the operator −div by ∇*, on the grounds that ∇*(X) = d*(X♭). If X is a vector field and f is a compactly supported function (which holds automatically for instance if M is closed) then

  ∫_M ⟨X, ∇f⟩ dvol = ∫_M df(X) dvol = ∫_M df ∧ ι_X dvol

Now,

  d(f ι_X dvol) = df ∧ ι_X dvol + f d ι_X dvol = df ∧ ι_X dvol + f div(X) dvol

But if f is compactly supported, ∫_M d(f ι_X dvol) = 0 and we deduce that

  ∫_M ⟨X, ∇f⟩ dvol = −∫_M f div(X) dvol

so that −div (i.e. ∇*) is a formal adjoint to grad (i.e. ∇), justifying the notation. The divergence of a vector field vanishes where L_X dvol = 0; i.e. where the flow generated by X preserves the volume.

1.1.3. Laplacian. If f is a function, we can first apply the gradient and then the divergence to obtain another function; this composition (or rather its negative) is the Laplacian, and is denoted Δ. In other words,

  Δf = −div grad(f) = d*df

Thus formally, Δ is a non-negative self-adjoint operator, so that we expect to be able to decompose the functions on M into a direct sum of eigenspaces with non-negative eigenvalues. Indeed, if M is closed, then L²(M) decomposes into an (infinite) direct sum of the eigenspaces of Δ, which are finite dimensional, and whose eigenvalues are discrete and non-negative. A function f with Δf = 0 is harmonic; on a closed manifold, the only harmonic functions are constants. On Euclidean space, harmonic functions satisfy the mean value property: the value of f at each point p is equal to the average of f over any round ball (or sphere) centered at p.
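The mean value property is easy to test numerically. The following sketch (illustrative, not from the notes; the functions, center, and radius are arbitrary choices) averages a harmonic function over a circle in R^2 and compares with the value at the center, and contrasts with a non-harmonic function.

```python
import numpy as np

# Numerical sketch of the mean value property on R^2: for harmonic
# f(x, y) = x^2 - y^2, the average of f over any circle centered at p
# equals f(p); for non-harmonic g it does not.
def circle_average(h, center, radius, n=20000):
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    px = center[0] + radius*np.cos(t)
    py = center[1] + radius*np.sin(t)
    return h(px, py).mean()

f = lambda x, y: x**2 - y**2          # harmonic: f_xx + f_yy = 0
g = lambda x, y: x**2 + y**2          # NOT harmonic, for contrast

p, r = (1.5, -0.7), 2.0
assert abs(circle_average(f, p, r) - f(*p)) < 1e-9
assert abs(circle_average(g, p, r) - g(*p)) > 1.0   # off by r^2 = 4
```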
In general, the value of a harmonic function f at each point is a weighted average of the values on a ball centered at that point; in particular, a harmonic function on a compact subset of any Riemannian manifold attains its maximum (or minimum) only at points on the frontier.

1.1.4. Curl. Now we specialize to an oriented Riemannian 3-manifold. The operator ⋆d takes 1-forms to 1-forms. Using the sharp and flat operators, it induces a map from vector fields to vector fields. The curl of a vector field X, denoted curl(X) (or sometimes ∇×X), is the vector field defined by the formula

  curl(X) := (⋆d(X♭))♯

Notice that this satisfies the identities

  div curl(X) = ⋆d⋆⋆d(X♭) = 0

(because ⋆⋆ = ±1 and dd = 0) and

  curl grad(f) = (⋆ddf)♯ = 0

On a Riemannian manifold of arbitrary dimension, it still makes sense to talk about the 2-form d(X♭), which we can identify (using the metric) with a section of the bundle of skew-symmetric endomorphisms of the tangent space. Identifying skew-symmetric endomorphisms with elements of the Lie algebra of the orthogonal group, we can define curl(X) in general to be the field of infinitesimal rotations corresponding to d(X♭). On a 3-manifold, the vector field curl(X) points in the direction of the axis of this infinitesimal rotation, and its magnitude is the size of the rotation.

1.1.5. Flows and parallel transport. On a Riemannian manifold, there is a unique torsion-free connection for which the metric tensor is parallel, namely the Levi-Civita connection, usually denoted ∇ (when we mix the connection with the gradient in formulae, we will denote the gradient by grad). If X is a vector field on M, we can generate a 1-parameter family of automorphisms of the tangent space at each point by flowing by X, then parallel transporting back along the flowlines of X by the connection. The derivative of this family of automorphisms is a 1-parameter family of endomorphisms of the tangent space at each point, denoted A_X. In terms of differential operators, A_X := L_X − ∇_X, and one can verify that A_X Y is tensorial in Y. Thus, X determines a section A_X of the bundle End(TM). On an oriented Riemannian manifold, the vector space End(T_p M) = T_p M ⊗ T_p M* is an o(n)-module in an obvious way, and it makes sense to decompose an endomorphism into components, corresponding to the irreducible o(n)-factors. Each endomorphism decomposes into an antisymmetric and a symmetric part, and the symmetric part decomposes further into the trace, and the trace-free part. In this language,

(1) the divergence of X is the negative of the trace of A_X.
As a formula, this is given pointwise by

  div(X)(p) = trace of V ↦ ∇_V X on T_p M

(2) the curl of X is the skew-symmetric part of A_X; and

(3) the strain of X (a measure of the infinitesimal failure of the flow by X to be conformal) is the trace-free symmetric part of A_X.

1.2. First variation formula. Let M be a Riemannian n-manifold, and let Ω be a smooth bounded domain in R^k. Let f : Ω → M be a smooth immersion with image Σ, and let F : Ω × (−ε, ε) → M be a one-parameter variation supported in the interior of Ω. Let t denote the coordinate on (−ε, ε), and let T = dF(∂_t), which we think of (at least locally) as a vector field on M generating a family of diffeomorphisms φ(T)_t (really we should think of T as a vector field along F; i.e. a section of the pullback of TM to Ω by F). Under this flow, Σ evolves to Σ(t), which at each time is equal to F(Ω, t).

The flow T determines an endomorphism field A_T along f. This endomorphism decomposes into a skew-symmetric part (which rotates the tangent space to Σ but preserves volume) and a symmetric part. The derivative at t = 0 of the area of an infinitesimal plane tangent to TΣ(t) is the negative of the trace of A_T restricted to TΣ. As above, this can

be expressed as the trace of V ↦ ∇_V T restricted to TΣ. If e_i is an orthonormal frame field along TΣ, we obtain a formula

  d/dt|_{t=0} volume(Σ(t)) = ∫_Σ Σ_i ⟨∇_{e_i} T, e_i⟩ dvol

The integrand on the right hand side of this formula is sometimes abbreviated to div_Σ(T). If we decompose T into a normal and tangential part as T = T⊥ + T∥, this can be expressed as

  d/dt|_{t=0} volume(Σ(t)) = ∫_Σ ( div(T∥) + Σ_i ⟨∇_{e_i} T⊥, e_i⟩ ) dvol

where div(T∥) means the divergence in the usual sense on Σ of the vector field T∥, thought of as a vector field on Σ. But

  ⟨∇_{e_i} T⊥, e_i⟩ = e_i ⟨T⊥, e_i⟩ − ⟨T⊥, ∇_{e_i} e_i⟩ = −⟨T⊥, ∇_{e_i} e_i⟩

by the metric property of the Levi-Civita connection, and the fact that T⊥ is orthogonal to e_i. Similarly, ∫_Σ div(T∥) dvol = 0 by Stokes' formula, because T is compactly supported in the interior. The sum H := Σ_i (∇_{e_i} e_i)⊥ (where ⊥ denotes the normal part of the covariant derivative) is the mean curvature vector, which is the trace of the second fundamental form, and is normal to Σ by definition; thus Σ_i ⟨T, (∇_{e_i} e_i)⊥⟩ = ⟨T, H⟩. Putting this together, we obtain the first variation formula:

Proposition 1.1 (First Variation Formula). Let Σ be a compact immersed submanifold of a Riemannian manifold, and let T be a compactly supported vector field on M along Σ. If Σ(t) is a 1-parameter family of immersed manifolds tangent at t = 0 to the variation T, then

  d/dt|_{t=0} volume(Σ(t)) = −∫_Σ ⟨T, H⟩ dvol

Consequently, Σ is a critical point for volume among compactly supported variations if and only if the mean curvature vector H vanishes identically. This motivates the following definition:

Definition 1.2. A submanifold is said to be minimal if its mean curvature vector H vanishes identically.

The terminology "minimal" is widely established, but the reader should be warned that minimal submanifolds (in this sense) are not always even local minima for volume.

Example 1.3 (Totally geodesic submanifolds).
If Σ is 1-dimensional, the mean curvature is just the geodesic curvature, so a 1-manifold is minimal if and only if it is a geodesic. A totally geodesic manifold has vanishing second fundamental form, and therefore vanishing mean curvature, and is minimal. An equatorial sphere in S^n is an example which is minimal, but not a local minimum for volume.

Warning 1.4. It is more usual to define the mean curvature of a k-dimensional submanifold Σ to be equal to (1/k) Σ_i (∇_{e_i} e_i)⊥. With this definition, the mean curvature is the average of the principal curvatures of Σ (i.e. the eigenvalues of the second fundamental form) rather

than their sum. But the convention we adhere to seems to be common in the minimal surfaces literature; see e.g. [1] p. 5 or [3].

1.3. Calibrations. Now suppose that F is a codimension 1 foliation of a manifold M. Locally we can coorient F, and let X denote the unit normal vector field to F. For each leaf λ of the foliation we can consider a compactly supported normal variation fX, and suppose that λ(t) is a 1-parameter family tangent at t = 0 to fX. Then

  d/dt|_{t=0} volume(λ(t)) = ∫_λ div_λ(fX) dvol

and because X is normal, this simplifies (by the Leibniz rule for covariant differentiation) to

  d/dt|_{t=0} volume(λ(t)) = ∫_λ f div_λ(X) dvol

Because X is a normal vector field of unit length, it satisfies div(X) = div_λ(X). Thus we obtain the lemma:

Lemma 1.5 (Normal field volume preserving). A cooriented codimension 1 foliation F has minimal leaves if and only if the unit normal vector field X is volume-preserving.

Now, suppose F is a foliation with minimal leaves, and let X be the unit normal vector field. It follows that the (n−1)-form ω := ι_X dvol is closed. On the other hand, it evidently satisfies the following two properties:

(1) the restriction of ω to TF is equal to the volume form on leaves; and
(2) the restriction of ω to any (n−1)-plane not tangent to TF has norm strictly less than the volume form on that plane.

Such a form ω is said to calibrate the foliation.

Lemma 1.6. Let F be a foliation with minimal leaves. Then leaves of F are globally area minimizing, among all compactly supported variations in the same relative homology class.

Proof. Let λ be a leaf, and let µ be obtained from λ by cutting out some submanifold and replacing it by another homologous submanifold. Then

  volume(µ) ≥ ∫_µ ω = ∫_λ ω = volume(λ)

where the middle equality follows because ω is closed, and the inequality is strict unless µ is tangent to F. But µ agrees with λ outside a compact part; so in this case µ = λ.

Example 1.7. Let Σ be an immersed minimal (n−1)-manifold in R^n.
Let p ∈ Σ be the center of a round disk D in the tangent space T_p Σ. Let C be the cylindrical region obtained by translating D normal to itself. Then C ∩ Σ is a graph over D, and we can foliate C by parallel copies of C ∩ Σ, translated in the normal direction. Thus there exists a calibration ω defined on C, and we see that C ∩ Σ is least volume among all surfaces in C obtained by a compactly supported variation. But C is convex, so the nearest point projection to C is volume non-increasing. We deduce that any immersed minimal (n−1)-manifold in R^n is locally volume minimizing. This should be compared to the fact that geodesics in any Riemannian manifold are locally distance minimizing.
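The two defining properties of a calibration can be seen numerically in the simplest possible case, the foliation of R^3 by horizontal planes, where ω = ι_N dvol = dx ∧ dy for N = e_3. This sketch (illustrative only, not from the notes) checks that ω has norm at most 1 on every 2-plane, with equality exactly on horizontal planes.

```python
import numpy as np

# For omega = dx ^ dy on R^3 and an orthonormal pair (u, v) spanning a
# 2-plane, omega(u, v) = (u x v) . e_3, which has |omega(u, v)| <= 1,
# with equality iff the plane is horizontal (tangent to the foliation).
rng = np.random.default_rng(0)

def omega_on_plane(u, v):
    return np.cross(u, v) @ np.array([0.0, 0.0, 1.0])

vals = []
for _ in range(1000):
    a, b = rng.normal(size=3), rng.normal(size=3)
    u = a / np.linalg.norm(a)
    b = b - (b @ u) * u
    v = b / np.linalg.norm(b)            # (u, v) orthonormal
    vals.append(abs(omega_on_plane(u, v)))

assert max(vals) <= 1.0 + 1e-12          # calibration inequality
# equality on a horizontal plane:
assert abs(omega_on_plane(np.array([1., 0., 0.]), np.array([0., 1., 0.])) - 1.0) < 1e-12
```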

1.4. Gauss equations. Recall that the second fundamental form on a submanifold Σ of a Riemannian manifold M is the symmetric vector-valued bilinear form

  II(X, Y) := (∇_X Y)⊥

and H is the trace of II; i.e. H = Σ_i II(e_i, e_i) where e_i is an orthonormal basis for TΣ. The Gauss equation for X, Y vector fields on Σ is the equation

  K_Σ(X, Y) |X ∧ Y|² = K_M(X, Y) |X ∧ Y|² + ⟨II(X, X), II(Y, Y)⟩ − |II(X, Y)|²

where |X ∧ Y|² := |X|²|Y|² − ⟨X, Y⟩² is the square of the area of the parallelogram spanned by X and Y, and

  K(X, Y) := ⟨R(X, Y)Y, X⟩ / |X ∧ Y|²

is the sectional curvature in the plane spanned by X and Y, where

  R(X, Y)Z := ∇_X ∇_Y Z − ∇_Y ∇_X Z − ∇_{[X,Y]} Z

is the curvature tensor. The subscripts K_Σ and K_M denote sectional curvature as measured in Σ and in M respectively; the difference is that in the latter case curvature is measured using the Levi-Civita connection ∇ on M, whereas in the former it is measured using the Levi-Civita connection on Σ, which is the tangential part ∇^Σ := (∇)∥.

If Σ is a 2-dimensional surface in a 3-manifold M, then we can take X and Y to be the directions of principal curvature on Σ (i.e. the unit eigenvectors for the second fundamental form). If we coorient Σ, the unit normal field lets us express II as an R-valued quadratic form, and the principal curvatures are real eigenvalues k_1 and k_2. Since H = k_1 + k_2, for Σ a minimal surface we have k_1 = −k_2 and

  K_Σ = K_M − |II|²/2

where |II|² := Σ_{i,j} |II(e_i, e_j)|². In other words, a minimal surface in a 3-manifold has curvature pointwise bounded above by the curvature of the ambient manifold. So for example, if M has non-positive curvature, the same is true of Σ.

2. Minimal surfaces in Euclidean space

In this section we describe the classical theory of minimal surfaces in Euclidean space, especially in dimension 3. We emphasize throughout the connections to complex analysis.
The local theory of minimal surfaces in Riemannian manifolds is well approximated by the theory of minimal surfaces in Euclidean space, so the theory we develop in this section has applications more broadly.

2.1. Graphs. A smooth surface in R^3 can be expressed locally as the graph of a function defined on a domain in R^2. It is useful to derive the first variation formula expressed in such terms. For historical reasons, minimal graphs are sometimes referred to as nonparametric minimal surfaces.

Fix a compact domain Ω ⊂ R^2 with boundary, and a smooth function f : Ω → R. Denote by Γ(f) the graph of f in R^3; i.e.

  Γ(f) = {(x, y, f(x, y)) ∈ R^3 such that (x, y) ∈ Ω}

Ignoring the last coordinate defines a projection from Γ(f) to Ω, which is a diffeomorphism. An infinitesimal square in Ω with edges of length dx, dy is the image of an infinitesimal parallelogram in Γ(f) with edges (dx, 0, f_x dx) and (0, dy, f_y dy), so

  area(Γ(f)) = ∫_Ω |(1, 0, f_x) × (0, 1, f_y)| dxdy = ∫_Ω √(1 + |grad(f)|²) dxdy

Define a 1-parameter family of functions f(t) := f + tg for some g : Ω → R with g|∂Ω = 0. For each t we get a graph Γ(t) := Γ(f(t)), all with fixed boundary. We can compute

  d/dt|_{t=0} area(Γ(t)) = ∫_Ω d/dt|_{t=0} √(1 + |grad(f) + t grad(g)|²) dxdy
                         = ∫_Ω ⟨grad(f), grad(g)⟩ / √(1 + |grad(f)|²) dxdy
                         = −∫_Ω g div( grad(f) / √(1 + |grad(f)|²) ) dxdy

where the last line of the derivation follows by integration by parts, using the fact that g has support in the interior of Ω (this is just our previous observation that −div is a formal adjoint for grad). Thus: f is a critical point for area (among all smooth 1-parameter variations with support in the interior of Ω) if and only if f satisfies the minimal surface equation in divergence form:

  div( grad(f) / √(1 + |grad(f)|²) ) = 0

Expanding this, and multiplying through by a factor (1 + |grad(f)|²)^{3/2}, we obtain the equation

  (1 + f_y²) f_xx + (1 + f_x²) f_yy − 2 f_x f_y f_xy = 0

This is a second order quasi-linear elliptic PDE. We explain these terms:

(1) The order of a PDE is the degree of the highest derivatives appearing in the equation. In this case the order is 2.

(2) A PDE is quasi-linear if it is linear in the derivatives of highest order, with coefficients that depend on the independent variables and derivatives of strictly smaller order. In this case the coefficients of the highest derivatives are (1 + f_y²), (1 + f_x²) and −2 f_x f_y, which depend only on the independent variables (the domain variables x and y) and derivatives of order at most 1.

(3) A PDE is elliptic if the discriminant is negative; here the discriminant is the discriminant of the homogeneous polynomial obtained by replacing the highest order derivatives by monomials.
In this case since the PDE is second order, the discriminant is the polynomial B² − 4AC, which is equal to

  4 f_x² f_y² − 4(1 + f_y²)(1 + f_x²) = −4(1 + f_x² + f_y²) < 0

Solutions of elliptic PDE are as smooth as the coefficients allow, within the interior of the domain. Thus minimal surfaces in R^3 are real analytic in the interior.
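A classical explicit solution of the minimal surface equation is Scherk's surface f = log(cos x / cos y). The following sketch (illustrative; sympy assumed available) verifies symbolically that it satisfies the expanded equation above.

```python
import sympy as sp

# Check that Scherk's surface f = log(cos x) - log(cos y) satisfies the
# minimal surface equation (1 + f_y^2) f_xx + (1 + f_x^2) f_yy - 2 f_x f_y f_xy = 0
# (on the square where both cosines are positive).
x, y = sp.symbols('x y', real=True)
f = sp.log(sp.cos(x)) - sp.log(sp.cos(y))

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(f, x, y)

mse = (1 + fy**2)*fxx + (1 + fx**2)*fyy - 2*fx*fy*fxy
assert sp.simplify(mse) == 0
```

Here f_x = −tan x and f_y = tan y, with f_xy = 0, so the cross term drops out and the two remaining terms cancel.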

If f is constant to first order (i.e. if |f_x|, |f_y| ≤ ε) then this equation approximates f_xx + f_yy = 0; i.e. Δf = 0, the Dirichlet equation, whose solutions are harmonic functions. Thus, the minimal surface equation is a nonlinear generalization of the Dirichlet equation, and functions with minimal graphs are generalizations of harmonic functions. In particular, the qualitative structure of such functions (their singularities, how they behave in families, etc.) very closely resembles the theory of harmonic functions.

2.2. Mean curvature. The vector field

  N := (−f_x, −f_y, 1) / √(1 + |grad(f)|²)

is nothing but the unit normal field to the graph Γ(f). We can extend N to a vector field on Ω × R by translating it parallel to the z axis. We obtain the identity:

  div_{R³}(N) = −div_{R²}( grad(f) / √(1 + |grad(f)|²) )

where the subscript gives the domain of the vector field where each of the two divergences is computed. Now, if Σ is a hypersurface in an arbitrary Riemannian manifold M, we can always find some smooth function h so that Σ is a level set of h. The unit normal vector field to the level sets of h is grad(h)/|grad(h)|, and the mean curvature is

  H = div( grad(h) / |grad(h)| )

In other words, the divergence form of the minimal surface equation is just a restatement of H = 0, as observed by Meusnier.

2.3. Conformal parameterization. A parametric surface in R^n is just a smooth map from some domain Ω in R^2 to R^n. Let's denote the coordinates on R^2 by u and v, and the coordinates on R^n by x_1, ..., x_n, so by abuse of notation we denote our map by x : Ω → R^n. The Jacobian is the matrix with column vectors ∂x/∂u and ∂x/∂v, and where this matrix has rank 2 the image is smooth, and the parameterization is locally a diffeomorphism to its image. The metric on R^n makes the image into a Riemannian surface, and every Riemannian metric on a surface is locally conformally equivalent to a flat metric.
Thus after precomposing x with a diffeomorphism, we may assume the parameterization is conformal (one also says that we have chosen isothermal coordinates). This means exactly that there is a smooth nowhere vanishing function λ on Ω so that

  |x_u| = |x_v| = λ and ⟨x_u, x_v⟩ = 0

We can identify Ω with a domain in C with complex coordinate ζ := u + iv, and for each coordinate x_j define

  φ_j := ∂x_j/∂ζ = (1/2)( ∂x_j/∂u − i ∂x_j/∂v )

Then we have the following lemma:

Lemma 2.1 (Conformal parameterization). Let Ω ⊂ C be a domain with coordinate ζ, and x : Ω → R^n a smooth immersion. Then x is a conformal parameterization of its image (with conformal structure inherited from R^n) if and only if Σ_j φ_j² = 0, where φ_j = ∂x_j/∂ζ. Furthermore, functions φ_j as above with Σ_j φ_j² = 0 define an immersion if and only if Σ_j |φ_j|² > 0.

Proof. By definition,

  Σ_j φ_j² = (1/4)( Σ_j (∂x_j/∂u)² − Σ_j (∂x_j/∂v)² ) − (i/2) Σ_j (∂x_j/∂u)(∂x_j/∂v)

whose real and imaginary parts vanish identically if and only if the parameterization is conformal, where it is an immersion. Furthermore,

  Σ_j |φ_j|² = λ²/2

so the map is an immersion everywhere if and only if Σ_j |φ_j|² > 0.

Recall that the second fundamental form II is a symmetric bilinear pairing on TΣ with values in the normal bundle of Σ. If N is a normal vector field, we can contract with N to get a symmetric bilinear pairing on TΣ, which we denote ⟨II, N⟩. For a parametric surface x : Ω → R^n, if e_1, e_2 are vector fields on Ω with dx(e_i) orthonormal on x(Ω), the matrix entries of ⟨II, N⟩ are just the second partial derivatives e_i(e_j(x)) · N. Note that since [e_i, e_j](x) = dx([e_i, e_j]) is tangent to TΣ (and therefore orthogonal to N), this pairing is symmetric. If the parameterization x : Ω → R^n is conformal, we can take e_1 = λ^{−1} ∂_u and e_2 = λ^{−1} ∂_v, so that the mean curvature H (i.e. the trace of the second fundamental form) is the vector

  H = λ^{−2} ( x_uu + x_vv ) = λ^{−2} Δx

Thus we obtain the following elegant characterization of minimal surfaces in terms of conformal parameterizations:

Lemma 2.2 (Harmonic coordinates). Let Ω ⊂ C be a domain with coordinate ζ, and x : Ω → R^n a conformal parameterization of a smooth surface. Then the image is minimal if and only if the coordinate functions x_j are harmonic on Ω; equivalently, if and only if the functions φ_j := ∂x_j/∂ζ are holomorphic functions of ζ.

Proof.
All that must be checked is the fact that the equation Δx_j = 0 is equivalent to the Cauchy-Riemann equations for φ_j:

  ∂φ_j/∂ζ̄ = (1/2)( ∂/∂u + i ∂/∂v ) (1/2)( ∂x_j/∂u − i ∂x_j/∂v ) = (1/4)( ∂²x_j/∂u² + ∂²x_j/∂v² )

so φ_j is holomorphic if and only if x_j is harmonic.
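Lemmas 2.1 and 2.2 can be checked on the catenoid x(u, v) = (cosh u cos v, cosh u sin v, u). The sketch below (illustrative, not from the notes) uses φ_j = x_ju − i x_jv, i.e. twice the document's φ_j; the factor does not affect the vanishing conditions.

```python
import sympy as sp

# Verify that the catenoid is conformally parameterized with harmonic
# (hence minimal) coordinates: sum phi_j^2 = 0 (conformal),
# Laplace(x_j) = 0 (minimal), sum |phi_j|^2 = 2 cosh(u)^2 > 0 (regular).
u, v = sp.symbols('u v', real=True)
X = [sp.cosh(u)*sp.cos(v), sp.cosh(u)*sp.sin(v), u]

phi = [sp.diff(xj, u) - sp.I*sp.diff(xj, v) for xj in X]

conformal = sp.simplify(sum(p**2 for p in phi))   # |x_u|^2 - |x_v|^2 - 2i <x_u, x_v>
harmonic = [sp.simplify(sp.diff(xj, u, 2) + sp.diff(xj, v, 2)) for xj in X]
regular = sp.simplify(sum(p*sp.conjugate(p) for p in phi))

assert conformal == 0
assert harmonic == [0, 0, 0]
assert sp.simplify(regular - 2*sp.cosh(u)**2) == 0
```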

A holomorphic reparameterization of Ω transforms the coordinate ζ and the functions φ_j, but keeps fixed the 1-form φ_j dζ. Combining this observation with the two lemmas, we obtain the following proposition, characterizing minimal surfaces in R^n parameterized by arbitrary Riemann surfaces:

Proposition 2.3. Every minimal surface in R^n is obtained from some Riemann surface Ω together with a family of n complex valued 1-forms φ_j satisfying the following conditions:

(1) (conformal): Σ_j φ_j² = 0;
(2) (minimal): the φ_j are holomorphic;
(3) (regular): Σ_j |φ_j|² > 0; and
(4) (period): the integral of φ_j over any closed loop on Ω is purely imaginary.

The map x : Ω → R^n may then be obtained uniquely up to a translation by integration:

  x_j = Re ∫^ζ φ_j + c_j

Proof. All that remains is to observe that the period condition is both necessary and sufficient to let us recover the coordinates x_j by integrating the real part of the φ_j.

If the φ_j are holomorphic and not identically zero, then Σ_j |φ_j|² can equal zero only at isolated points in Ω. Near such points the map from Ω to its image is branched. We say that a surface is a generalized minimal surface if it is parameterized by some Ω as in Proposition 2.3, omitting the condition of regularity.

2.4. Conjugate families. Let Ω be a Riemann surface, and φ_j a collection of n holomorphic 1-forms satisfying Σ_j φ_j² = 0. Integrating the φ_j along loops in Ω determines a period map H_1(Ω; Z) → C^n, and an abelian cover Ω̃ corresponding to the kernel of the period map. Then we obtain a further integration map Φ : Ω̃ → C^n whose coordinates z_j are given by z_j = ∫_0^ζ φ_j. The standard (complex) orthogonal quadratic form has the value Σ_j z_j² on a vector z with coordinates z_j; by Proposition 2.3 the image Φ(Ω̃) is isotropic for this orthogonal form.
Consequently we obtain a family of minimal surfaces in R^n parameterized by the action of the complex affine group C^n ⋊ (C* × O(n, C)), where the first factor acts on C^n by translation, and the second factor acts linearly. The C^n action just projects to translation of the minimal surface in R^n, and R⁺ × O(n, R) just acts by scaling and rotation; so this subgroup acts by ambient similarities of R^n. The action of S¹ ⊂ C* is more interesting; the family of minimal surfaces related by this action are said to be a conjugate family. At the level of 1-forms this action is a phase shift φ_j → e^{iθ} φ_j.

Lemma 2.4. Let x(θ) : Ω → Σ_θ be a conjugate family of generalized minimal surfaces in R^n; i.e. their coordinates are given by integration

  x_j(θ)(p) = Re ∫_0^p e^{iθ} φ_j

for some fixed family of 1-forms φ_j on Ω with Σ_j φ_j² = 0. Then for any θ the composition x(0) ∘ x(θ)^{−1} : Σ_θ → Σ_0 is a local isometry, and each fixed point p ∈ Ω traces out an ellipse x(·)(p) : S¹ → R^n.

Proof. If we write φ_j = a_j + i b_j then Σ_j a_j² = Σ_j b_j² and Σ_j a_j b_j = 0; i.e. the vectors a and b are perpendicular with the same length, and this length is the length of dx(0)(∂_u) in T x(0)(Ω). But the length of dx(θ)(∂_u) in T x(θ)(Ω) is just

  |cos(θ) a + sin(θ) b| = |a| = |b|

for all θ, so x(0) ∘ x(θ)^{−1} : Σ_θ → Σ_0 is an isometry as claimed. The second claim is immediate:

  x_j(θ)(p) = cos(θ) Re ( ∫_0^p φ_j ) + sin(θ) Re ( ∫_0^p i φ_j )

3. Second variation formula

The first variation formula shows that minimal surfaces are critical for volume, among smooth variations, compactly supported in the interior. To determine the index of these critical surfaces requires the computation of the second variation. In this section, we derive the second variation formula and some of its consequences.

3.1. Second variation formula. We specialize to the case that Σ is a hypersurface in a Riemannian manifold. We further restrict attention to variations in the normal direction (this is reasonable, since a small variation of Σ supported in the interior will be transverse to the exponentiated normal bundle). Denote the unit normal vector field along Σ by N, and extend N into a neighborhood along normal geodesics, so that ∇_N N = 0. Let F : Ω × (−ε, ε) → M satisfy F(·, 0) : Ω → Σ, and if t parameterizes the interval (−ε, ε), there is a smooth function f on Ω × (−ε, ε) with compact support so that T := dF(∂_t) = fN. Then define Σ(t) := F(Ω, t).

Let e_i be vector fields defined locally on Ω so that dF(·, 0)(e_i) are an orthonormal frame on Σ locally, and extend them to vector fields on Ω × (−ε, ε) so that they are constant in the t direction; i.e. they are tangent to Ω × t for each fixed t, and satisfy [e_i, ∂_t] = 0 for all i. By abuse of notation we denote the pushforward of the e_i by dF also as e_i, and think of them as vector fields on Σ(t) for each t. For each point p ∈ Σ corresponding to a point q ∈ Ω the curve F(q × (−ε, ε)) is contained in the normal geodesic to Σ through p, and is parameterized by t.
Along this curve we define g(t) to be the matrix whose ij-entry is the function ⟨e_i, e_j⟩ (where we take the inner product in M). The infinitesimal parallelepiped spanned by the e_i at each point in Σ(t) has volume √det(g(t)). Projecting along the fibers of the variation, we can push this forward to a density on Σ(0) which may be integrated against the volume form to give the volume of Σ(t); thus

  volume(Σ(t)) = ∫_Σ √det(g(t)) dvol

We now compute the second variation of volume. Taking second derivatives, we obtain

  d²/dt²|_{t=0} volume(Σ(t)) = ∫_Σ d²/dt²|_{t=0} √det(g(t)) dvol

Putting this together gives

  d²/dt²|_{t=0} volume(Σ(t)) = ∫_Σ ( −f² |II|² + |grad_Σ(f)|² − f² Ric(N) ) dvol

Integrating by parts gives

  d²/dt²|_{t=0} volume(Σ(t)) = ∫_Σ ⟨(Δ_Σ − |II|² − Ric(N)) f, f⟩ dvol

(remember our convention that Δ = −div grad). If we define the stability operator L := Δ_Σ − |II|² − Ric(N) (also called the Jacobi operator), we obtain the second variation formula:

Proposition 3.1 (Second Variation Formula). Let Σ be a compact immersed codimension one submanifold of a Riemannian manifold, and let T = fN where N is the unit normal vector field along Σ, and f is smooth with compact support in the interior of Σ. Suppose that Σ is minimal (i.e. that H = 0 identically). If Σ(t) is a 1-parameter family of immersed manifolds tangent at t = 0 to the variation T, then

  d²/dt²|_{t=0} volume(Σ(t)) = ∫_Σ L(f) f dvol

where L := Δ_Σ − |II|² − Ric(N) and Δ_Σ := −div_Σ grad_Σ is the Laplacian on Σ.

A critical point for a smooth function on a finite dimensional manifold is usually called stable when the Hessian (i.e. the matrix of second partial derivatives) is positive definite. This ensures that the point is an isolated local minimum for the function. However, in minimal surface theory one says that minimal submanifolds are stable when the second variation is merely non-negative:

Definition 3.2. A minimal submanifold Σ is stable if no smooth compactly supported variation can decrease the volume to second order.

By the calculation above, this is equivalent to the so-called stability inequality:

Proposition 3.3 (Stability inequality). If Σ is a stable codimension 1 minimal submanifold of M, then for every Lipschitz function f compactly supported in the interior of Σ, there is an inequality

  ∫_Σ ( Ric(N) + |II|² ) f² dvol ≤ ∫_Σ |grad_Σ f|² dvol

Stability can also be expressed in spectral terms. The operator L is morally the Hessian of the volume functional at Σ on the space of smooth compactly supported normal variations. Thus, as an operator on functions on Σ, it is linear, second order and self-adjoint on the L² completion of C^∞_0(Σ), which we denote L²(Σ).
It is obtained from the second order operator Δ_Σ by adding a 0th order perturbation −|II|² − Ric(N). The spectrum of Δ_Σ is non-negative and discrete, with finite multiplicity, and L²(Σ) admits an orthogonal decomposition into eigenspaces. The eigenfunctions are as regular as Σ, and therefore as regular as M (since Σ is minimal), so for instance they are real analytic if M is. When we obtain L from Δ_Σ by perturbation, finitely many eigenvalues might become negative, but the spectrum is still discrete and bounded below, so that the index (i.e. the number of negative eigenvalues, counted with multiplicity) is finite.
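The finiteness and growth of the index can be seen in the simplest model: for a geodesic arc of length ℓ on the unit sphere (so II = 0 and Ric(N) = K = 1), the stability operator is L = −d²/ds² − 1, with Dirichlet eigenvalues (kπ/ℓ)² − 1. The arc is stable iff ℓ ≤ π, and the index increases by one each time ℓ crosses a multiple of π. This finite-difference sketch (illustrative, not from the notes) counts the negative eigenvalues numerically.

```python
import numpy as np

# Finite-difference model of the Jacobi operator L = -d^2/ds^2 - 1 on
# an interval of length ell with Dirichlet boundary conditions; the
# index is the number of negative eigenvalues, here (exactly) the number
# of integers k >= 1 with k*pi < ell.
def jacobi_index(ell, n=800):
    h = ell / (n + 1)
    main = 2.0 / h**2 - 1.0                  # discretization of -f'' - f
    off = -1.0 / h**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    return int((np.linalg.eigvalsh(A) < 0).sum())

assert jacobi_index(3.0) == 0    # shorter than pi: stable
assert jacobi_index(4.0) == 1    # between pi and 2 pi: index 1
assert jacobi_index(7.0) == 2    # between 2 pi and 3 pi: index 2
```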


3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field 77 3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field Overview: The antiderivative in one variable calculus is an important

Math 312, Fall 2012 Jerry L. Kazdan Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. In addition to the problems below, you should also know how to solve

7. Divisors Definition 7.1. We say that a scheme X is regular in codimension one if every local ring of dimension one is regular, that is, the quotient m/m 2 is one dimensional, where m is the unique maximal

Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

olutions for Review Problems 1. Let be the triangle with vertices A (,, ), B (4,, 1) and C (,, 1). (a) Find the cosine of the angle BAC at vertex A. (b) Find the area of the triangle ABC. (c) Find a vector

some algebra prelim solutions David Morawski August 19, 2012 Problem (Spring 2008, #5). Show that f(x) = x p x + a is irreducible over F p whenever a F p is not zero. Proof. First, note that f(x) has no

Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

THE EIGENVALUES OF THE LAPLACIAN ON DOMAINS WITH SMALL SLITS LUC HILLAIRET AND CHRIS JUDGE Abstract. We introduce a small slit into a planar domain and study the resulting effect upon the eigenvalues of

MAT 051 Pre-Algebra Mathematics (MAT) MAT 051 is designed as a review of the basic operations of arithmetic and an introduction to algebra. The student must earn a grade of C or in order to enroll in MAT

Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer

Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

DIVISORS AND LINE BUNDLES TONY PERKINS 1. Cartier divisors An analytic hypersurface of M is a subset V M such that for each point x V there exists an open set U x M containing x and a holomorphic function

Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals

Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

Chapter 7 Moving Least Squares Approimation An alternative to radial basis function interpolation and approimation is the so-called moving least squares method. As we will see below, in this method the

Manifold Learning Examples PCA, LLE and ISOMAP Dan Ventura October 14, 28 Abstract We try to give a helpful concrete example that demonstrates how to use PCA, LLE and Isomap, attempts to provide some intuition

Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

Math 5311 Gateaux differentials and Frechet derivatives Kevin Long January 26, 2009 1 Differentiation in vector spaces Thus far, we ve developed the theory of minimization without reference to derivatives.

Coefficient of Potential and Capacitance Lecture 12: Electromagnetic Theory Professor D. K. Ghosh, Physics Department, I.I.T., Bombay We know that inside a conductor there is no electric field and that

.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a

2.2. Creaseness operator 31 2.2 Creaseness operator Antonio López, a member of our group, has studied for his PhD dissertation the differential operators described in this section [72]. He has compared