I am trying to get a better feel for solving questions where creating a function with a unique fixed point is the crux of the proof.

In particular, the Inverse Function Theorem as well as the existence of solutions of certain ODEs can be proven using contraction mappings (which have exactly one fixed point by Banach's Fixed Point Theorem).
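As a concrete numerical illustration of the theorem behind those proofs (a sketch I'm adding, not part of the original question): iterating a contraction from any starting point converges to its unique fixed point. The map $x \mapsto \cos x$ is a contraction on $[0,1]$ since its derivative is bounded by $\sin(1) < 1$ there:

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos maps [0, 1] into itself and |cos'| <= sin(1) < 1 there, so the
# iteration converges to the unique fixed point x = cos(x).
x_star = banach_iterate(math.cos, 0.5)
print(x_star)  # ~0.7390851332 (the so-called Dottie number)
```

The same iteration scheme, applied to the Picard integral operator, is exactly how the ODE existence proof proceeds.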

My question is then what other problems can be solved by this technique?
I am interested in any that come to mind.

There is a very slick proof (discussed here on MO) that every prime $p=4k+1$ is a sum of two squares, which looks at the set $S = \{(x,y,z) \in \mathbb{N}^3 : x^2+4yz=p \}$ and shows that a particular involution of $S$ has exactly one fixed point.
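A quick computational check of the final step of that argument (my own sketch; the function name is made up, and the hard part of the proof, that the swap involution must have a fixed point, is taken for granted here): a fixed point of $(x,y,z) \mapsto (x,z,y)$ on $S$ has $y = z$, which gives $p = x^2 + (2y)^2$.

```python
def two_squares_from_involution(p):
    """Search S = {(x, y, z) : x^2 + 4yz = p, x, y, z >= 1} for a fixed
    point of the swap involution (x, y, z) -> (x, z, y); such a point has
    y == z, so p = x^2 + (2y)^2 is a sum of two squares."""
    S = [(x, y, z)
         for x in range(1, p)
         for y in range(1, p)
         for z in range(1, p)
         if x * x + 4 * y * z == p]
    for (x, y, z) in S:
        if y == z:  # fixed under the swap involution
            return x, 2 * y
    return None

print(two_squares_from_involution(13))  # (3, 2): 13 = 3^2 + 2^2
```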

If you are interested in applications of fixed point theorems, you'll find entire journals dedicated to them. For example, fixed point techniques pop up in approximation theory, where one is interested in finding a best approximation.

For a specific problem, consider the following:

You decide to take a break from the fast-paced world of academia to climb Mt. Fuji. You begin your ascent at sunrise along a narrow path. Along the way, you stop a few times to take in the scenery and eat, maybe even work out a math puzzle or two. You reach the top at sunset. The next day you begin your descent at sunrise, again making leisurely stops along the way. It's reasonable to assume that going downhill is easier than uphill, so let's assume your average downhill speed is greater than your average uphill speed. Show that there must be a place along the path that you occupy at the exact same time of day during your uphill and downhill trips.
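The trick is to superimpose both trips on the same day and apply the intermediate value theorem to the difference of the two position functions. A minimal numerical sketch of that argument (the two position profiles below are entirely made up for illustration):

```python
# Hypothetical positions along the path as fractions of the total height,
# both parameterized by time of day t in [0, 1] (sunrise = 0, sunset = 1).
def up(t):    # ascending: 0 at sunrise, 1 at sunset
    return t ** 1.5

def down(t):  # descending: 1 at sunrise, 0 well before sunset (faster trip)
    return max(0.0, 1.0 - 2.0 * t)

def meeting_time(lo=0.0, hi=1.0, tol=1e-12):
    """diff(0) = -1 < 0 and diff(1) = 1 > 0, so by the intermediate value
    theorem diff vanishes somewhere in between; locate it by bisection."""
    diff = lambda t: up(t) - down(t)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t = meeting_time()
print(t, up(t), down(t))  # same place at the same time of day
```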

I am familiar with a good example from the theory of 2nd order elliptic PDE. Technicalities omitted...

A special case of the Leray-Schauder Theorem says the following:

Let $T$ be a compact mapping of a Banach space $X$ into itself, and suppose there exists a constant $M$ such that $\|x\| \leq M$ for all $x$ in the set $\{ x \in X : x = \sigma Tx\ \text{for some}\ \sigma \in [0,1]\}$. Then $T$ has a fixed point.

One proves this by applying a sort of infinite-dimensional version of Brouwer's fixed-point theorem (Schauder's theorem). The clever bit comes next:

Say you want to solve the Dirichlet problem for the 2nd order quasilinear elliptic PDE

$Qu = a^{ij}(x,u,Du)D_{ij}u + b(x,u,Du) = 0$,

and you know from more basic (Schauder) theory how to solve linear problems. Then I define an operator $T$ by sending $v \in C^{1,\beta}(\overline{\Omega})$ to the unique solution $u$ of the linear problem

$Qu = a^{ij}(x,v,Dv)D_{ij}u + b(x,v,Dv) = 0$.

(I won't bother writing in the boundary conditions.) Then a fixed point of this map is exactly a solution of the quasilinear problem! The Leray-Schauder theorem thus advocates the a priori bound philosophy: to prove the existence of a solution, you can assume it exists and then just bound it in the relevant Banach space. The task is getting the bound $\|u\|_{C^{1,\beta}(\overline{\Omega})} < M$ for solutions of the related family of problems $u = \sigma Tu$, $\sigma \in [0,1]$.
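The "freeze the coefficients and iterate" structure of $T$ can be imitated in one dimension. The sketch below is my own toy example, not from the answer above: it solves the nonlinear two-point boundary value problem $u'' = \sin(u)$, $u(0)=0$, $u(1)=1$ by repeatedly solving the linear problem $u_{n+1}'' = \sin(u_n)$ with finite differences. A solution of the nonlinear problem is exactly a fixed point of the update map.

```python
import numpy as np

def solve_linear(rhs, h, ua=0.0, ub=1.0):
    """Solve u'' = rhs at interior grid points with u(0)=ua, u(1)=ub,
    using the standard second-order finite-difference Laplacian."""
    n = len(rhs)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    b = h * h * rhs
    b[0] -= ua        # move known boundary values to the right-hand side
    b[-1] -= ub
    return np.linalg.solve(A, b)

n = 99                               # interior grid points
h = 1.0 / (n + 1)
u = np.linspace(0, 1, n + 2)[1:-1]   # initial guess: straight line
for _ in range(50):
    u_new = solve_linear(np.sin(u), h)   # freeze u in the nonlinearity
    done = np.max(np.abs(u_new - u)) < 1e-12
    u = u_new
    if done:                             # fixed point reached
        break
```

The iteration converges here because the Green's operator of $u''$ on $[0,1]$ has small norm, making the update a contraction; in the PDE setting that luxury is absent, which is why Leray-Schauder and a priori bounds are needed instead.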

Thurston's classification of diffeomorphisms of surfaces involves constructing a big space (projective measured lamination space) on which the diffeo acts, and studying the fixed points -- see the answer by Ryan Budney to this question. In the generic (pseudo-Anosov) case, there are exactly two fixed points, not one, though -- does that still count as an answer to your question?

The standard proofs of the existence of Nash equilibria in game theory all use either Brouwer's or Kakutani's fixed point theorem. See for example Nash's 1951 paper "Non-Cooperative Games," where he defines his equilibrium notion and gives the Brouwer-based proof.

Recent complexity-theoretic results of Daskalakis, Papadimitriou, and others, showing that computing Nash equilibria is PPAD-complete, mean that in some sense a fixed point theorem (or something equivalent) is necessary to prove the existence of Nash equilibria.

A fixed point theorem like Banach's does not in general apply to this problem, because there can be multiple equilibria.
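To see that multiplicity concretely, here is a small sketch (my own example, not from the answer): enumerating the pure Nash equilibria of a 2x2 coordination game finds two of them, so no contraction argument with a unique fixed point can single out "the" equilibrium.

```python
# Payoff matrices for a 2x2 coordination game: both players prefer to
# match, and there are two pure equilibria, (0, 0) and (1, 1).
A = [[2, 0],   # row player's payoffs
     [0, 1]]
B = [[2, 0],   # column player's payoffs
     [0, 1]]

def pure_nash_equilibria(A, B):
    """Return all strategy pairs (i, j) where neither player can gain
    by unilaterally deviating."""
    eqs = []
    for i in range(2):
        for j in range(2):
            row_best = all(A[i][j] >= A[k][j] for k in range(2))
            col_best = all(B[i][j] >= B[i][k] for k in range(2))
            if row_best and col_best:
                eqs.append((i, j))
    return eqs

print(pure_nash_equilibria(A, B))  # [(0, 0), (1, 1)]
```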

I'm not sure if this is what you had in mind, but counting fixed points seems to come up often in elementary group theory, particularly in arguments involving group actions. In this setting, not only must you pick the right function (homomorphism of $G$ into an appropriate permutation group) but you also have to pick the correct "domain" (a suitable group $G$).

For example, one way to show that all $p$-Sylow subgroups of a group are conjugate involves counting the fixed points of the conjugation action of one $p$-Sylow subgroup on the set of all of them; a simpler (and cuter!) example is the proof that every group whose order is divisible by $p$ has an element of order $p$. To the best of my memory, this (standard) proof is found in Hungerford:
Suppose $G$ is a group with $p \mid |G|$, and let $U = \{ (g_1,\dots,g_{p-1},x): (g_1\cdot \dots \cdot g_{p-1})\cdot x =1_{G} \}$, i.e. the set of all $p$-tuples of elements in $G$ whose product is the identity. Since $x$ is uniquely determined by the $g_i$, $|U| = |G|^{p-1}$, so $p \mid |U|$ as well.
Now let $\mathbb{Z}/p\mathbb{Z}$ act on $U$ by cyclic permutation. The fixed points of this action are exactly the constant tuples $(x,\dots,x)$ with $x^p = 1_G$, and since every non-fixed orbit has size $p$, the number of fixed points is congruent to $|U| \equiv 0 \pmod p$. The identity tuple $(1_G,\dots,1_G)$ is one fixed point, so there must be at least $p-1$ others, i.e. some $x \neq 1_G$ with $x^p = 1_G$, as desired.
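One can watch this counting argument work on a small example. The sketch below (my own check, using $G = S_3$ and $p = 3$) builds $U$, verifies $|U| = |G|^{p-1}$, and confirms that the fixed points of the cyclic shift are exactly the constant tuples $(x, x, x)$ with $x^3 = 1$, of which there are more than one.

```python
from itertools import permutations, product

p = 3
G = list(permutations(range(3)))          # S_3 as permutation tuples
e = tuple(range(3))                       # the identity permutation

def compose(a, b):
    """(a o b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0] * 3
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

# U: p-tuples whose product is the identity; the last entry is the
# inverse of the product of the first p-1, hence uniquely determined.
U = []
for gs in product(G, repeat=p - 1):
    prod = e
    for g in gs:
        prod = compose(prod, g)
    U.append(gs + (inverse(prod),))

# Fixed points of the cyclic shift are the constant tuples (x, x, x).
fixed = [t for t in U if t[1:] + t[:1] == t]
print(len(U), len(fixed))  # 36 fixed tuples total, 3 fixed points
```

The three fixed points correspond to the identity and the two 3-cycles of $S_3$, each satisfying $x^3 = 1$.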

In addition to ODE existence theorems, there are also applications to PDE existence/uniqueness theorems. An example is constructing weak solutions of the linear Boltzmann equation. I think this example is interesting because what is used is more of a philosophy than a precise "fixed point theorem".

The linear Boltzmann equation is:

$\partial_t f + v\cdot \nabla_x f = Kf -af + Q$

where

$Kf = \int k(t,x,v,v') f(t,x,v')dv'$

By Duhamel's principle, we know that a strong solution would satisfy

$ f(t,x,v) = f_0(x-tv,v) + \int_0^t (Kf - af + Q)(s,x-(t-s)v,v)ds$.

We basically use this as our definition of a weak solution. Thus, we can rephrase the search for a weak solution as looking for a fixed point to the operator

$ g \mapsto F[f_0,Q] + \tau g $

where

$ F[f_0,Q] = f_0(x-vt,v) + \int_0^t Q(s,x-(t-s)v,v)ds$

and

$\tau g = \int_0^t (Kg - ag)(s,x-(t-s)v,v)\,ds$.

Notice that the series

$\sum_{n\geq 0} \tau^n[F[f_0,Q]]$

would be such a fixed point if we had appropriate convergence (just hit it with $\tau$ and see what happens), so basically we've reduced the problem to bounding the operator $\tau$ on the appropriate space in which we would like weak solutions to live. As I mentioned above, this doesn't really use any "fixed point theorems" but is clearly still a fixed point argument.
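The same Neumann-series argument can be run in finite dimensions (my own toy version, not part of the answer): if $\|\tau\| < 1$, the partial sums of $\sum_{n\geq 0} \tau^n b$ converge to the unique fixed point of $g \mapsto b + \tau g$, namely $(I - \tau)^{-1} b$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
tau = rng.standard_normal((n, n))
tau *= 0.5 / np.linalg.norm(tau, 2)   # scale so the operator norm is 1/2
b = rng.standard_normal(n)

# Accumulate partial sums of the Neumann series sum_{k>=0} tau^k b.
g = np.zeros(n)
term = b.copy()
for _ in range(100):
    g += term
    term = tau @ term                  # next term: tau^{k+1} b

# g is (up to a tau^100 b remainder) a fixed point of g -> b + tau g.
print(np.max(np.abs(g - (b + tau @ g))))   # ~ 0
```

Bounding $\tau$ in the Boltzmann setting plays exactly the role of the norm scaling above: it is what makes the series converge.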