I've been studying (teaching myself about) Hilbert spaces for a while now, as they have a habit of popping up in many of the papers I come across (I'm a computer scientist). I understand what completeness means (re Cauchy sequences); most texts make that clear. What I wish the texts made clearer is why completeness is necessary.

One result in Hilbert space theory that uses the fact that Hilbert spaces are complete is: If H is a Hilbert space and if E is a nonempty, closed, convex subset of H, then E contains a unique element of minimal norm.
–
Amitesh Datta, Aug 17 '10 at 8:53
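A minimal numerical sketch of the result Amitesh quotes, restricted to $\mathbb{R}^2$ (a finite-dimensional Hilbert space); the particular half-plane $E$ and the closed-form projection are my own illustrative choices, not from the thread:

```python
import numpy as np

# E = {x : a.x >= b} is a closed, convex half-plane not containing 0.
a = np.array([3.0, 4.0])
b = 10.0

# The unique minimal-norm element of E is the projection of the origin
# onto the boundary hyperplane: x* = (b / ||a||^2) a.
x_star = (b / np.dot(a, a)) * a        # = (1.2, 1.6), with norm 2.0

# Sanity check against a coarse random search over points of E: no
# sampled point of E has smaller norm than x*.
rng = np.random.default_rng(0)
pts = rng.uniform(-5, 5, size=(100000, 2))
pts = pts[pts @ a >= b]                # keep only points lying in E
assert np.linalg.norm(x_star) <= np.linalg.norm(pts, axis=1).min() + 1e-9
```

In infinite dimensions the same minimum is found as the limit of a minimizing Cauchy sequence, which is exactly where completeness enters.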

The above result is, of course, very elementary. There are other deeper results that "subtly" use completeness. I suppose results that talk about "locating an element" (such as the result above) are typically the most elementary ones that use completeness. I hasten to repeat that there are others as well.
–
Amitesh Datta, Aug 17 '10 at 8:59

For the record, if you remove the completeness condition, what you obtain is called a pre-Hilbert space. They're barely studied, as: (1) you lose the benefits of all the nice properties mentioned here for complete spaces (complement, min norm...), and (2) historically, the examples that most people really cared about were complete anyway.
–
Thierry Zell, Aug 18 '10 at 4:25

12 Answers

The answers already posted are quite satisfying; I'd just like to add one more point of view (at the risk of making things more confusing for the OP :). When Sobolev started solving PDEs, he did not have reasonable function spaces available: working in $C^2$ is a nightmare as soon as you want to do calculus of variations, and it is immediately clear that 'something is missing'. You naturally construct solutions by approximating them (with minimizing sequences, with smooth approximations, etc.). Sobolev's original approach was: well, all I have is this approximating sequence, so THIS SEQUENCE is my solution, whatever that means. This was his original definition of a 'weak solution'.

As you see, he was dispensing completely with completeness and working only with functions in a dense subspace. This is perfectly fine, and I'm tempted to answer the original question with the paradox: completeness is not really necessary, even from a theoretical standpoint, since of course you can embed every normed space in a complete one. But this is very awkward; it is vastly more economical to 'define' the limit of your approximating sequence. Indeed, this procedure is precisely what is called completion. Working in a complete space makes it possible to take the limit of your approximation and define a solution as a concrete object. 100 years later, we find this approach totally natural. I think this was one of the driving forces behind the universal adoption of complete spaces in analysis.

To temper the temptation to think dense spaces are good enough: the nice thing about $L^1$ spaces is that they are the completion of the continuous functions on compact sets with respect to the $L^1$ norm. $L^1$ spaces consist of equivalence classes of functions, not sequences, which is pleasant.
–
Polymer, Mar 25 at 18:03

There are many uses of completeness. One of the first (and one of the most striking) is the fact that a closed subspace has a complement. This in turn uses the fact that for a point not on the closed subspace there is a point in the subspace which is closest to it. This in turn is shown by constructing a Cauchy sequence whose limit (which exists by completeness) is the desired point. (Note that the construction also uses that we have not just a norm but a scalar product and that the scalar product is also used in proving the uniqueness.)
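A finite-dimensional sketch of the closest-point and complement phenomenon described above (in $\mathbb{R}^n$ completeness is automatic, so least squares stands in for the Cauchy-sequence construction; the matrix and vector below are arbitrary illustrative choices):

```python
import numpy as np

# M = span of the columns of A is a closed subspace of R^4.
A = np.array([[1.0,  0.0],
              [1.0,  1.0],
              [0.0,  1.0],
              [1.0, -1.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

# The point of M closest to y is the orthogonal projection of y onto M,
# obtained here via least squares.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
p = A @ coef                           # closest point of M to y

# The residual y - p is orthogonal to the subspace ...
assert np.allclose(A.T @ (y - p), 0.0)
# ... and y = p + (y - p) exhibits the decomposition into M plus its
# orthogonal complement.
assert np.allclose(p + (y - p), y)
```

In infinite dimensions the existence of `p` is precisely what the Cauchy-sequence argument (and hence completeness) delivers.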

I had a thought about this today. Completing a space is a bit like getting a bank loan to buy something really nice (bear with me on this). Because:

If you have enough money already, getting a bank loan isn't worth the hassle.

If you can do your analysis without needing completion, then it's simpler to just do it.

The loan still has to be paid back, but having a loan means that you put off paying until later.

As others have said, a common use of completion is to use Hilbert space techniques to study non-Hilbert spaces. But often one wants to know that the final result is in the original space. So using Hilbert space techniques is a way of putting off questions of existence until later.

If you're a financial wizard, you can take out the loan, use the money to make more money, repay the original loan, and end up ahead of the game.

Sometimes, just sometimes, once you've done the Hilbert space stuff then all the rest just falls into place.

The point, such as it is, is that when you have an incomplete Hilbert space then completing it adds in stuff that you didn't want - if you did want it then you would have put it there to begin with. Thus when we complete continuous functions to square-integrable ones, we do so in the knowledge that we'd really rather be using continuous functions as they're much better behaved than these nasty not-quite-functions.

John Baez is fond of a quotation attributed to Grothendieck: "It's better to work in a nice category with nasty objects than in a nasty category with nice objects." One could adapt that to Hilbert spaces: "It's better to work in a nice vector space with nasty elements than in a nasty vector space with nice elements." In this respect, Schwartz functions are some of the nicest functions you could meet, but they live in student accommodation. On the other hand, square-integrable functions have some undesirable personal habits but live in a fantastic mansion.

And to underline my last point, sometimes it's possible to go to a party hosted by the Square Integrables in their posh mansion, but spend the whole time hanging out with the Schwartz family.

Mariano: I'd rather get listed in the "colourful language" thread, but I guess that that doesn't exist. (As a matter of fact, I just signed for something a bit like a loan today so that was on my mind, plus I'm teaching Hilbert spaces later this semester, and I did use the "nice house, shame about the people" illustration when I taught Fourier analysis a year or so ago.). Victor: Do you mean the one: "People who live in glass houses shouldn't."?
–
Loop Space, Aug 18 '10 at 21:02

It's interesting the way that you phrase the question. I always thought of Hilbert spaces as Banach spaces with extra structure and just took for granted that completeness is always desirable. (Banach spaces are also complete, but their norm does not necessarily arise from an inner product.) Torsten names an important consequence. Another one: completeness ensures that all Hilbert spaces have an orthonormal basis. The basis allows one to handle Hilbert spaces in much the same way that one thinks of finite dimensional vector spaces.
Whereas in the finite dimensional case a linear combination like $y = c_1x_1 + \cdots + c_nx_n$ makes perfect sense, to say that $y$ equals some infinite sum $\sum_k c_k x_k$ is not so easy to interpret. When one defines "equals" topologically, i.e., $y$ equals the sum when the partial sums
$y_n = \sum_{k=0}^n c_kx_k$ converge to $y$, as one does for series of real and complex numbers, the analogy between the finite dimensional and infinite dimensional cases works.
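A numerical illustration of the partial-sum picture (the sampled sawtooth and the discrete Fourier basis are my own illustrative choices): the $L^2$ error of the partial sums decreases as more orthonormal modes are included.

```python
import numpy as np

# Sample f(t) = t on [0, 2*pi) and expand it in the DFT basis.
N = 4096
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = t
c = np.fft.fft(f)                       # DFT coefficients of the samples

def partial_sum(n):
    """Reconstruct keeping only the modes with |frequency| <= n."""
    ck = c.copy()
    freqs = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies
    ck[np.abs(freqs) > n] = 0
    return np.fft.ifft(ck).real

# RMS (discrete L2) error of the partial sums for increasing n.
err = [np.sqrt(np.mean((f - partial_sum(n)) ** 2)) for n in (1, 4, 16, 64)]
assert all(err[i] > err[i + 1] for i in range(len(err) - 1))
```

The monotone decrease is Bessel's inequality at work; completeness is what guarantees the errors go all the way to zero for every square-summable coefficient sequence.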

I like this explanation. BTW, I've just realized that what bothered/bothers me is the fact that, regardless of the means by which the sequence is generated, every Cauchy sequence converges in the space. My instinct was to ask why, and seek a proof. However, I'm starting to think differently. Could it be that this condition/restriction is the means by which a normed space is enlarged so that it contains the limit of any Cauchy sequence, regardless of the mechanism that generates the sequence?
–
Olumide, Aug 17 '10 at 8:50

I'm having trouble parsing the last sentence of your comment. Let me offer the following clarification, though. If a vector space has a norm, and a sequence $(x_n)$ converges to a point $x$ with respect to that norm, i.e., using the $\varepsilon$-$\delta$ definition, $\textit{then}$ the sequence is Cauchy. In complete normed spaces, therefore, the characterization "convergent if and only if Cauchy" holds for sequences. More generally this is true for metric spaces; that is, the vector space structure isn't necessary.
–
Jerry Gagelman, Aug 17 '10 at 13:07

I was suggesting that the Cauchy convergence criterion is a means by which a pre-Hilbert space is enlarged so that it contains no "holes". What used to bother me was the statement that all Cauchy sequences converge in that space. It used to make me wonder, "How can I be really sure, without examining all possible Cauchy sequences?". I stopped worrying about this when I considered that the statement might be a definition (intended to widen the space) and not a result to be proved. BTW, I'm not familiar with the $\varepsilon$-$\delta$ definition but I will look it up.
–
Olumide, Aug 18 '10 at 7:30

Is completeness really necessary to get the existence of an orthonormal basis? It seems to me that the usual Zorn's lemma argument goes through in any inner product space.
–
Nate Eldredge, Aug 18 '10 at 19:12

@Nate: if one defines orthonormal basis as "maximal orthonormal set," you're right. Through your question I realize that I'm too used to thinking of an orthonormal basis as a collection $\{x_\alpha\}_{\alpha\in A}$ of orthonormal elements such that $y = \sum_\alpha (x_\alpha, y)x_\alpha$ for any $y$. (Equality with the sum here means convergence of nets.) In Hilbert space, the two definitions are equivalent. Having just had a quick peek at Reed-Simon, completeness is used to prove that maximal orthonormal sets have the latter property.
–
Jerry Gagelman, Aug 18 '10 at 19:59

Not every Hilbert space possesses the reproducing property; only reproducing kernel Hilbert spaces do. Furthermore, your remark does not address the role of completeness in a Hilbert space, which is what I wanted to know.
–
Olumide, Aug 29 '10 at 1:36


While this is unrelated to your original question, you're missing the point of the answer. What Tsuyoshi says is indeed correct: the Riesz representation theorem (RRT) applies to any Hilbert space. The reproducing property requires that the "pointwise functional" that sends $f$ to $f(x)$ is continuous, which by the RRT then gives you the reproducing kernel. While I'm not sure, I suspect that dropping completeness might break the RRT.
–
Suresh Venkat, Aug 29 '10 at 3:40


By searching the web, I found a statement (with a proof) that if an inner product space satisfies the Riesz representation property, it must be complete (hence a Hilbert space). See e.g. the excerpt from Chapter 4 of “Functional Analysis” by Dzung Minh Ha (Matrix Editions, 2006) available online at matrixeditions.com/FA.Chap4.329-330.pdf
–
Tsuyoshi Ito, Aug 29 '10 at 12:11


@Olumide: I do not know what “reproducing kernel Hilbert spaces” are, but I do not think that it affects the correctness of the Riesz representation theorem. And I do not know why my answer does not address the role of completeness in a Hilbert space: I personally view the Riesz representation theorem as the very reason why I care about Hilbert spaces instead of arbitrary inner product spaces as a computer scientist working on quantum computation.
–
Tsuyoshi Ito, Aug 29 '10 at 12:28

This isn't so much about Hilbert spaces per se. But generally, the Hilbert spaces that are interesting (to me) and arise naturally (say in quantum mechanics) are infinite dimensional.

Now once one tries to do linear algebra on infinite dimensional normed spaces, analysis becomes crucial. It is just a general principle that when doing analysis on some space, you want it to be complete.

The point is that you want to use analysis to approximate things, meaning that in order to study things, you instead study approximations (i.e., sequences that converge to them). So to do this properly you want to know that the sequences which ought to converge (the Cauchy sequences) actually do converge (completeness).

Note: The requirement that it be complete isn't really a restriction, since any inner product space can be embedded densely into a complete one.
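To make the note concrete, here is a small numerical sketch (the particular ramp functions are an illustrative choice of mine): a sequence of continuous functions that is Cauchy in the $L^2$ norm, but whose $L^2$ limit, the sign function, lies outside the space of continuous functions. The completion is what supplies the missing limit.

```python
import numpy as np

# Work on [-1, 1] with a fine uniform grid.
t = np.linspace(-1.0, 1.0, 200001)

def f(n):
    """Continuous ramp: -1 for t <= -1/n, n*t in between, +1 for t >= 1/n."""
    return np.clip(n * t, -1.0, 1.0)

def l2(g):
    """Discrete approximation of the L2 norm on [-1, 1]."""
    return np.sqrt(np.mean(g ** 2) * 2.0)

# Distances between successive terms shrink (Cauchy behaviour) ...
gaps = [l2(f(2 ** k) - f(2 ** (k + 1))) for k in range(1, 6)]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))

# ... and the terms approach sign(t), which is not continuous.
assert l2(f(2 ** 6) - np.sign(t)) < l2(f(2 ** 1) - np.sign(t))
```

So $(C[-1,1], \|\cdot\|_2)$ is a dense incomplete subspace of $L^2[-1,1]$, exactly the situation the embedding in the note refers to.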

I don't quite agree with the note. In the theory of elliptic operators (on compact manifolds, say), for instance, one often wants to find smooth eigenfunctions. One way of doing it is to complete the space of smooth functions to get a Hilbert space, show, using Hilbert space techniques, that there are eigenvectors in that completion, and then use extra arguments to show that all eigenvectors lie in the uncompleted space. Hence Hilbert spaces are used, but the uncompleted spaces are the ones one is interested in.
–
Torsten Ekedahl, Aug 17 '10 at 7:12


@Torsten. Yes, this type of argument is used all the time in operator algebras. In fact, this is exactly what I meant in my note: for any inner-product space, you can still use Hilbert space techniques. Sure, the last step you mention is often the really difficult one, but I don't think that goes against the point I was trying to make.
–
Owen Sizemore, Aug 17 '10 at 7:59


I know this would sound tautological, but it's not the infinite-dimensionality of Hilbert spaces that's the issue, it's their completeness. Let me explain: rational numbers are "finite-dimensional" and not complete, hence in order to do analysis, we complete them to the reals. Now, a miracle occurs: a fin-dim real vector space endowed with $\textit{any}$ norm is automatically complete. So we are speaking prose already, whether realizing it or not. If for some reason we tried to do analysis in $\mathbb{Q}^n$ with a positive-definite inner product, we'd run into the same kind of issues.
–
Victor Protsak, Aug 18 '10 at 20:45

Many points have been mentioned, but scanning through old questions I found this here: nobody mentioned that one can classify Hilbert spaces so easily via the size of the Hilbert basis. If you only had pre-Hilbert spaces, then there would be an abundance of possibilities beyond any reasonable classification. Only after completion do these differences get wiped out.

So completion has always had two aspects for me: one gets something for free (and getting more is always better), namely the new limit points. On the other hand, one loses information about the dense subspace one started with. This might or might not be an advantage (classification becomes easier, but also coarser...)

My impression as a spectator of operator theory is that the classification (Gramsch-Luft?) is not all that helpful - we are usually interested in Hilbert spaces carrying extra structure depending on the problem at hand...
–
Yemon Choi, May 18 '11 at 18:32

Dear Yemon: On one hand you're certainly right. The Hilbert spaces in real life come with important extra structure being e.g. function spaces or something like that. So an abstract isometric isomorphism between them is often of limited use. However, for certain applications the knowledge that there is only one (up to iso) infinite-dim separable Hilbert space out there is very useful. E.g. in AQFT one tries to build certain operator algebras associated to diamond shaped regions in Minkowski spacetime. There are general arguments that the resulting von Neumann algebras are the unique...
–
Stefan Waldmann, May 19 '11 at 14:46

oops, too long: ...hyperfinite III_1 factors. So I'm not quite sure if one can get through with a classification of factors without the uniqueness of the ambient Hilbert space. One might have to do the classification for each Hilbert space again (?). OK, this is probably a rather remote application, but the AQFT people really rely on these kinds of statements, as in their examples the Hilbert space comes quite abstractly out of a GNS construction....
–
Stefan Waldmann, May 19 '11 at 14:58

I would recommend browsing through some books on the theory of signal processing for engineers, e.g. the ones by Martin Vetterli and his co-authors; see e.g.:
http://www.sp4comm.org/docs/sp4comm.pdf
It is remarkable how they "sell" completeness of Hilbert spaces through the use of orthonormal bases, in the sense that these allow one to decompose EVERY signal of finite energy! Once, in a seminar talk, I heard Vetterli exclaim: "If Hilbert spaces weren't complete, we would have no television!". As a pure mathematician, I could only appreciate...
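A small sketch of that slogan in Python/NumPy (the random test signal is an illustrative choice): for a sampled finite-energy signal, the orthonormal DFT basis captures all of the energy (Parseval), and the coefficients reconstruct the signal exactly.

```python
import numpy as np

# An arbitrary finite-energy "signal": 1024 random samples.
rng = np.random.default_rng(1)
x = rng.standard_normal(1024)

# Coefficients with respect to an orthonormal basis (unitary DFT).
X = np.fft.fft(x, norm="ortho")

# Parseval: the energy of the signal equals the energy of its coefficients.
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2))

# The full decomposition recovers the signal.
x_back = np.fft.ifft(X, norm="ortho").real
assert np.allclose(x, x_back)
```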

Let $H$ be a Hilbert space and let $\{u_{\alpha}:\alpha\in A\}$ be an
orthonormal set in $H$. Suppose $\phi$
is in the $l^2$-space of $(A,\mu)$ where $\mu$ is the counting measure on $A$. Then
$\phi=\widehat{x}$ for some $x\in H$ where
$\widehat{x}:A\to \mathbb{C}$ is
defined by
$\widehat{x}(\alpha)=(x,u_{\alpha})$,
the inner product of $x$ with
$u_{\alpha}$, for each $\alpha\in A$.

(I quote Theorem 4.17, page 89, in the second edition of Walter Rudin's Real and Complex Analysis.) In fact, I do not think it would be an exaggeration to say that the Riesz-Fischer Theorem is nothing but a reformulation of the completeness of $H$ - that is how crucial the assumption of completeness is to the proof.

Imagine if your computer program performed a Newton iteration on a non-complete space. The answer that pops out could be "random", as the Newton iteration produces a Cauchy sequence, and will simply stop at your error tolerance without having converged.
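A concrete sketch of this point (the choice of $x^2 = 2$ and exact rational arithmetic are illustrative assumptions of mine): Newton's iteration started at a rational point stays in $\mathbb{Q}$ forever and produces a Cauchy sequence, yet its limit $\sqrt{2}$ is not in $\mathbb{Q}$, so within $\mathbb{Q}$ the sequence never actually converges.

```python
from fractions import Fraction

# Newton's method for f(x) = x^2 - 2, carried out in exact rationals.
x = Fraction(2)
iterates = []
for _ in range(6):
    x = (x + 2 / x) / 2      # Newton step; stays rational at every step
    iterates.append(x)

# Successive gaps shrink rapidly (quadratic convergence, hence Cauchy) ...
gaps = [abs(b - a) for a, b in zip(iterates, iterates[1:])]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))

# ... yet every iterate is exactly rational, and none squares to 2:
# the would-be limit lies outside the space.
assert all(isinstance(y, Fraction) and y * y != 2 for y in iterates)
```

Passing to the completion $\mathbb{R}$ is what turns "the gaps shrink below my tolerance" into "the iteration converges to an element of the space".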

I am not quite sure why you posted this as an answer to this question. Unfortunately MO is poor as a discussion medium (by design), so new/further questions should be asked separately. That said, the limit in what sense? When you treat a sequence of functions and assert that they converge to something, you need to also provide a topology or a norm under which they converge.
–
Willie Wong, Aug 28 '10 at 17:24

It seemed too weak to stand as a question on its own. Besides the question sort of ties in with my original question of why it is necessary to complete a space.
–
Olumide, Aug 28 '10 at 18:00

Cauchy with respect to what norm or metric?
–
Yemon Choi, Aug 28 '10 at 19:34


Erm, L^p is well-known to be complete, so whatever your examples are supposedly converging to in L^p, it can't be the Dirac delta. They all look like they converge weakly to the Dirac delta in some suitable sense, but that is different.
–
Yemon Choi, Aug 29 '10 at 11:22