I assume I am correct when I say this, but these two things aren't exactly the same, are they? (The dot product is a type of inner product, right?) In what ways are they different? When I looked it up, all I could find were some mentions of the Gram-Schmidt process. How does the Gram-Schmidt orthonormalization process come into this?

Note: I was about to enter the tags for this question and I typed in "inner" and up came a suggestion "inner-product-spaces" and it said

An inner product space is a vector space equipped with an inner product. The inner product is a generalisation of the "dot" product.

How so? What kind of "generalization" is meant? (I know this isn't the most suitable question to put up here, but I hope for an answer. :)

Generalization means that we observe the nice properties of the dot product (bilinearity, symmetry, positive-definiteness) and define a new object, the inner product, to be anything having these nice properties. This means that a dot product is an inner product, but not every inner product is a dot product.
–
Asaf Karagila Mar 28 '12 at 12:10

Since you mention the Gram-Schmidt process, it works for any countable, linearly independent collection of vectors in an inner product space.
–
M Turgeon Mar 28 '12 at 12:36

"dot product is a type of inner product right?": exactly! There is the dot product on $\mathbb{R}^n$, which is a special kind of inner product.
–
Najib Idrissi Mar 28 '12 at 12:52


For finite dimensional spaces, any inner product can be regarded as a dot product with respect to some basis. The proof uses the Gram-Schmidt process to construct the basis.
–
Grumpy Parsnip Mar 28 '12 at 12:59

2 Answers

Assume that we are working with finite-dimensional spaces. Furthermore, for simplicity assume that the underlying field $\mathbb{F}$ is the set of real numbers $\mathbb{R}$. An inner product on a vector space $V$ is a function $(\cdot,\cdot):V\times V\to \mathbb{F}$ that obeys the following properties for all $u,v,w \in V$ and all $\lambda \in \mathbb{F}$:

1. $(u,v)=(v,u)$
2. $(\lambda v,u)=\lambda(v,u)$
3. $(v+w,u)=(v,u)+(w,u)$
4. $(u,u)\geq 0$, and $(u,u)=0$ iff $u=0$
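These axioms are easy to check concretely for the standard dot product. A small numerical sketch (not from the original answer; it assumes NumPy and uses a few arbitrary test vectors) verifying each property in turn:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))  # three arbitrary vectors in R^4
lam = 2.5

# Property 1: symmetry
assert np.isclose(np.dot(u, v), np.dot(v, u))
# Property 2: homogeneity in the first argument
assert np.isclose(np.dot(lam * v, u), lam * np.dot(v, u))
# Property 3: additivity in the first argument
assert np.isclose(np.dot(v + w, u), np.dot(v, u) + np.dot(w, u))
# Property 4: positive-definiteness
assert np.dot(u, u) >= 0
assert np.isclose(np.dot(np.zeros(4), np.zeros(4)), 0)
```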

Now, if you are familiar with the Gram-Schmidt process, you know that any finite-dimensional vector space equipped with an inner product (often called a Euclidean space) has an orthonormal basis.

The proof of this involves two nontrivial steps. The first step is to prove that any finite-dimensional space has a basis. The second step is to show that this basis can be turned into an orthonormal basis. For a detailed explanation of how to do this, I advise you to look into practically any book on linear algebra.
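The second step is exactly Gram-Schmidt. A minimal sketch (not from the original answer; it assumes NumPy and, purely for illustration, takes the inner product $(u,v)=u^{T}Av$ for a positive-definite matrix $A$, with $A=I$ recovering the ordinary dot product):

```python
import numpy as np

def gram_schmidt(vectors, inner):
    """Orthonormalize a linearly independent list w.r.t. a given inner product."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the vectors found so far.
        for e in basis:
            v = v - inner(v, e) * e
        basis.append(v / np.sqrt(inner(v, v)))  # Normalize.
    return basis

# Illustrative inner product (u, v) = u^T A v for a positive-definite A.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
inner = lambda u, v: u @ A @ v

e1, e2 = gram_schmidt([np.array([1.0, 0.0]), np.array([0.0, 1.0])], inner)
# The result is orthonormal with respect to *this* inner product:
assert np.isclose(inner(e1, e1), 1) and np.isclose(inner(e2, e2), 1)
assert np.isclose(inner(e1, e2), 0)
```

Note that the same code works for any valid inner product; only the `inner` callable changes.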

If a vector space has an orthonormal basis, one can show that any inner product on that space is the dot product of the coordinate vectors with respect to that basis.

Proof: Let $u,v\in V$. Since $V$ is a Euclidean space, it has an orthonormal basis $\{e_1,\dots,e_n\}$, hence we have $u=x_1e_1+\dots+x_ne_n$ and $v=y_1e_1+\dots+y_ne_n$ for some scalars $x_1,\dots,x_n$ and $y_1,\dots,y_n$ in the underlying field. But then $$(u,v)=(x_1e_1+\dots+x_ne_n,\;y_1e_1+\dots+y_ne_n)=\sum_{i=1}^n\sum_{j=1}^n x_iy_j\,(e_i,e_j)=x_1y_1+\dots+x_ny_n,$$

where we used several times that $\{e_1,\dots,e_n\}$ is an orthonormal basis, so $(e_i,e_j)=1$ if $i=j$ and $0$ otherwise, and that $(\cdot,\cdot)$ is an inner product. More precisely, we used properties 2 and 3 of the inner product.
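This identity can also be checked numerically. A sketch (not from the original answer; it assumes NumPy, and the matrix $A$, the basis vectors, and the coordinates are made-up test data): we fix a non-standard inner product on $\mathbb{R}^2$, take a basis that is orthonormal with respect to it, and confirm that the inner product of two vectors equals the plain dot product of their coordinate vectors.

```python
import numpy as np

# A non-standard inner product (u, v) = u^T A v on R^2 (A positive-definite).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
inner = lambda u, v: u @ A @ v

# A basis orthonormal w.r.t. this inner product (found, e.g., via Gram-Schmidt).
e1 = np.array([1.0, 0.0]) / np.sqrt(2)
e2 = np.array([-0.5, 1.0]) / np.sqrt(1.5)

# Two vectors, given by their coordinates in this basis.
x, y = np.array([3.0, -1.0]), np.array([0.5, 2.0])
u = x[0] * e1 + x[1] * e2
v = y[0] * e1 + y[1] * e2

# The inner product equals the dot product of the coordinate vectors.
assert np.isclose(inner(u, v), np.dot(x, y))
```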

The dot product on $\mathbb{R}^n$ is equivalent to matrix multiplication of a row and a column vector. The inner product generalizes this because it is defined more abstractly. For example, consider the vector space $V$ of continuous, square-integrable, complex-valued functions on $(-\pi,\pi)$. There is a nifty basis for this space, consisting of the functions $f_k(t)=e^{ikt}$. It turns out that a nice inner product on this space is the integral over the domain of the product of one function with the complex conjugate of the other. The inner product of a function $f(t)\in V$ with a basis vector $f_k$ then gives us the magnitude $c_k$ of the projection onto the subspace $\mathbb{C}f_k$, a.k.a. the Fourier coefficient $c_k=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)\,e^{-ikt}\,dt$, and the function can be represented as the sum of its projections onto each such basis vector: $f=\sum_{k=-\infty}^\infty c_k f_k$. This is called Fourier analysis.

You can take any finite real interval $T$ as your domain, or you can take the whole real line, or higher-dimensional Cartesian products of these (tori, etc.). There is a duality between the domain of the original function space and that of the Fourier transform: $T\cong\mathbb{S}^1\leftrightarrow\mathbb{Z}$ and $\mathbb{R}\leftrightarrow\mathbb{R}$; in other words, Fourier transforms of periodic functions are Fourier series, while Fourier transforms of functions on the real line are again functions on the real line.

The above is intentionally weak on details, just to give you a taste of where this generalization can go.
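To make the "projection" idea concrete, here is a small numerical sketch (not from the original answer; it assumes NumPy, and the test function is made up): for a function built from two basis exponentials, the inner product with each $f_k$ picks out exactly that component's coefficient.

```python
import numpy as np

# Test function f(t) = 3 e^{it} + 2 e^{-4it} on (-pi, pi).
t = np.linspace(-np.pi, np.pi, 20001)
f = 3 * np.exp(1j * t) + 2 * np.exp(-4j * t)

def coeff(k):
    """Fourier coefficient c_k = (1/2pi) * integral of f(t) e^{-ikt} dt."""
    return np.trapz(f * np.exp(-1j * k * t), t) / (2 * np.pi)

# The inner product with f_k recovers each component's magnitude,
# and vanishes for basis vectors orthogonal to f.
assert np.isclose(coeff(1), 3, atol=1e-3)
assert np.isclose(coeff(-4), 2, atol=1e-3)
assert np.isclose(coeff(0), 0, atol=1e-3)
```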

What does that have to do with the question at hand? I guess you were going to define the dot product on $L^2(\mathbb{S}^1)$ etc., but as it is, your answer is barely one. Also, the $f_k$ are a Hilbert basis or (more ambiguously) an orthonormal basis; not every element is in their span, but their span is dense in the space.
–
Najib Idrissi Mar 28 '12 at 12:49

Yes, thanks, you're right. If you like, I can open up my answer as a community wiki and you can do what you like with it to improve it as you see fit. I only intended it to whet the appetite.
–
bgins Mar 28 '12 at 16:55