Many mathematical areas have a notion of "dimension", either rigorously or naively, and different dimensions can exhibit wildly different behaviour. Often, the behaviour is similar for "nearby" dimensions, with occasional "dimension leaps" marking the boundary from one type of behaviour to another. Sometimes there is just one dimension that is markedly different from the others. Examples of this behaviour can be good provokers of the "That's so weird, why does that happen?" reaction that can get people hooked on mathematics. I want to know examples of this behaviour.

My instinct would be that as "dimension" increases, there's more room for strange behaviour, so I'm more surprised when the opposite happens. But I don't want to limit answers, so jumps where things get remarkably different at a certain point are also perfectly valid.

33 Answers

Here's a fun little example that I thought was neat... it's quite simple but tends to go against most people's geometric instinct.

We consider the cube $[-2,2]^d$ in $\mathbb{R}^d$. At the points with all coordinates equal to $1$ or $-1$ (e.g. in $\mathbb{R}^3$, points like $(1,1,1)$, $(1,-1,-1)$, etc.) we put unit balls. We define the "central ball" $B_d$ to be the largest ball centered at the origin that does not intersect the interior of any of the other balls we have placed. You can easily visualize this in the case $d=2$: just think of the square $[-2,2]^2$, draw 4 unit discs, one centered in each quadrant, and then $B_2$ is the little disc in the center that is just large enough to touch the boundary of these 4 discs. The question is, what is the asymptotic relationship (as $d$ goes to infinity) between the volume of $B_d$ and the volume of $[-2,2]^d$?

The answer is that $m(B_d)/m([-2,2]^d)$ goes to infinity! Most people will try to visualize this problem in $\mathbb{R}^2$ or $\mathbb{R}^3$ to get an intuition for the behavior, and just implicitly assume that $B_d$ is contained within $[-2,2]^d$. And it certainly is in those low-dimensional cases. But when you actually compute the radius of $B_d$ (the distance $\sqrt{d}$ from the origin to a corner ball's center, minus that ball's unit radius), you see that it's $\sqrt{d}-1$, and so $B_d$ is not even contained in $[-2,2]^d$ for $d > 9$.
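A quick sketch (my own, not part of the original answer) makes the computation concrete, using the volume $\pi^{d/2}r^d/\Gamma(d/2+1)$ of a radius-$r$ ball in $\mathbb{R}^d$:

```python
from math import lgamma, log, pi, sqrt

def central_radius(d):
    # Distance from the origin to a corner ball's center is sqrt(d); each
    # corner ball has radius 1, so the central ball has radius sqrt(d) - 1.
    return sqrt(d) - 1

def log_volume_ratio(d):
    # log( m(B_d) / m([-2,2]^d) ), for d >= 2, using the volume
    # pi^(d/2) * r^d / Gamma(d/2 + 1) of a radius-r ball and cube volume 4^d.
    r = central_radius(d)
    return (d / 2) * log(pi) - lgamma(d / 2 + 1) + d * log(r) - d * log(4)

print(central_radius(9))           # exactly 2: B_9 just touches the cube's boundary
print(central_radius(10) > 2)      # True: from d = 10 on, B_d pokes out of the cube
print(log_volume_ratio(10) < 0)    # True: in low dimensions the central ball is tiny
print(log_volume_ratio(2000) > 0)  # True: eventually B_d dwarfs the cube
```

Interestingly, the ratio first shrinks before it blows up; the ball only overtakes the cube in rather high dimensions.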

Any opinions on why this one happens, aside from more or less restating the proofs? I bet there's a variety of opinions on this one.
– Ryan Budney Nov 13 '09 at 17:52


@Ryan: Has that rainy day ever come? Anyway, closely related to this: there are topological 4-manifolds that admit no smooth structures (as opposed to smaller dimensions) and some that admit infinitely many (in sharp contrast to what happens in every other dimension).
– Marco Golla Apr 18 '11 at 20:09


@RyanBudney: There is a convincing brief explanation, or rather a hint of this phenomenon, on the back cover of Scorpan's book on 4-dimensional manifolds: dimension 4 is large enough to allow strange things to happen, but too small to enable one to undo them. In particular, the fact that many problems have been understood in dimension 5 and greater seems to be due to the fact that strange things can happen in principle, but that they are in fact not strange (i.e. they can be shown to be equivalent to non-strange things by using the room given by the high dimension).
– Benoît Kloeckner Aug 3 '14 at 20:16

My favorite example is regular polytopes. The number of regular polytopes is almost monotone decreasing, from countably many in $\mathbb{R}^2$, to five in $\mathbb{R}^3$, to three in $\mathbb{R}^n$ for $n>4$. But for $n=4$ we get six, which is kind of weird.

According to Coxeter, Schläfli really should get the credit for this result. Though most of his work went unnoticed in his lifetime, he did solve the problem completely, and before anyone else. There doesn't seem to be any question of his answer not being considered a "proof." (Though I don't read German, so I can't say for sure.) What is true, though, is that there's been a lot of debate over the meaning of "polytope" in the time since then (Schläfli apparently uses "finite region bounded by a finite number of hyperplanes").
– Emily Peters Nov 13 '09 at 22:09

The key fact here, I guess, is that these spheres sit inside spaces of the right dimension, so they get multiplicative structure from the multiplication in the reals, complex numbers, quaternions, and octonions. To an algebraic topologist, your answer has to be the best for relating to this amazing fact, the Hopf Invariant One Theorem, hence my +1
– David White May 5 '11 at 20:41

All manifolds in dimension $n\leq 3$ are triangulable. Conjecturally, all manifolds in dimension $n\geq 5$ can be triangulated by a simplicial complex which is not necessarily a combinatorial manifold. But "few" 4-manifolds are triangulable.
I don't think that this has anything to do with the fact that $\mathbb{R}^4$ admits infinitely many PL structures. So perhaps dimension 4 is weird in topology for (at least) two completely different reasons.

Finite volume hyperbolic manifolds (usually) have deformations through complete structures in dimension two, but not in higher dimensions [Weil, Prasad]. In dimension three, they have deformations through incomplete structures [Thurston], and in dimensions four and up you don't even have that [Wang].

I feel I have to comment that this is equivalent to Adams' Hopf Invariant One theorem and relates to the fact that spheres in these dimensions sit in the reals, complex numbers, quaternions, and octonions. Hope you get the upvotes you deserve, this is a great answer
– David White May 5 '11 at 20:43

I'm not sure if you want an example or commentary, so I'll give both: the example Michael Lugo gave above is that the Poincaré conjecture was hardest to prove in three dimensions. My commentary, as far as this being a general phenomenon, is that in low dimensions one expects "local" obstructions to strange behavior, whereas in high dimensions one expects "global" obstructions to strange behavior.

The smooth Poincaré conjecture in dimension 4 is still open, so there's an argument to be made that it's the most difficult case. But I suppose you could also argue that there's been less of a systematic effort on that problem.
– Ryan Budney Nov 13 '09 at 17:56


Put another way, we expect topology to be governed by algebra in dimension $n\geq 5$, but not in dimensions 3 and 4, because the Whitney trick only works in dimension 5 and above.
– Daniel Moskovich Dec 31 '09 at 11:36

Quantum physics is a good source of these kinds of phenomena. Classical physics often allows us to formulate a theory uniformly in any dimension. But when we quantise systems, suddenly special dimensions pop out. Quantising often involves some kind of infinite limiting process, and in the limit we can end up losing a symmetry that was there in the original classical system. These are called anomalies. But in special dimensions we can arrange for these anomalies to cancel. For example the simplest string theory, bosonic string theory, only works in 26 dimensions (= 2 + the 24 of the Leech lattice mentioned in another answer; no coincidence, by the way).

In each case there's an interesting mathematical story to be told. For example the dimensions in which superstring theory can be made to work are related to the dimensions picked out by the division algebras: 1, 2, 4, 8.

Phase changes in matter are sort of an example of this, if one replaces "dimension" by "energy." I don't know much about statistical mechanics, but as an enthusiastic amateur, the fact that (for instance) the 2-D Ising model undergoes a phase change still blows my mind.
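For contrast, the one-dimensional model that Ising himself solved has no phase transition: the exact transfer-matrix magnetization vanishes as the field $h \to 0$ at every positive temperature. A small sketch (my own code, using the standard textbook formula, not part of the original answer):

```python
from math import exp, sinh, sqrt

def magnetization_1d(beta, J, h):
    # Exact thermodynamic-limit magnetization of the 1D Ising model,
    # obtained from the largest eigenvalue of the 2x2 transfer matrix.
    return sinh(beta * h) / sqrt(sinh(beta * h) ** 2 + exp(-4 * beta * J))

# At any positive temperature, m -> 0 as h -> 0: no spontaneous
# magnetization, hence no phase transition in one dimension.
for h in (1e-2, 1e-4, 1e-6):
    print(magnetization_1d(beta=1.0, J=1.0, h=h))
```

In two dimensions, by contrast, the analogous limit is nonzero below the critical temperature, which is exactly the surprise Onsager's solution made rigorous.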

There certainly are dimensionality issues in phase transitions as well, such as the fact that in dimensions greater than the "upper critical dimension" 4, all critical exponents of the Ising model become equal to their mean field values.
– j.c. Dec 18 '09 at 18:14


I suspect it blew Ising's mind too. Based on his analysis of the one-dimensional case, he conjectured in his 1924 Ph.D. thesis that there would be no phase transition in any dimension. He then moved on to other things, and (according to an obituary cited by Wikipedia) didn't learn until 1949 that the Ising model had become widely studied. It must have been a shock to learn how differently it behaved from how he expected.
– Henry Cohn Apr 19 '11 at 1:03

The Smith-Minkowski-Siegel mass formula implies that the number of unimodular lattices of given dimension eventually starts to increase more than exponentially fast, so one might expect that they are easy to classify in small dimensions and gradually become harder to classify in higher dimensions as the mass of the SMS formula increases. In fact this is not what happens: there is a quite precise dimension where the behavior changes qualitatively and the lattices become much harder to classify. This is the jump from dimension 25 to 26. The reason is related to the existence of the Leech lattice in dimension 24, which controls unimodular lattices in dimensions up to 25. (The 25-dimensional ones were classified by hand about 30 years ago, but the 26-dimensional case is so much harder that no one has attempted it since then, even with the help of modern petaflop computers.)

The wave equation behaves differently in even and odd space dimensions. In odd-dimensional space, radial waves satisfy a modified version of the one-dimensional wave equation. In particular, Huygens' principle holds. This is not so in even-dimensional space. This difference is reflected in the usual existence proof for solutions of the wave equation, which is easier in odd-dimensional space. Then one handles the wave equation in even-dimensional space by adding a dimension.
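Concretely (these are the standard representation formulas, stated here for $u_{tt}=c^2\Delta u$ with initial data $u(\cdot,0)=g$ and $u_t(\cdot,0)=h$): in three space dimensions Kirchhoff's formula gives $$u(x,t)=\frac{\partial}{\partial t}\left(\frac{1}{4\pi c^2 t}\int_{|y-x|=ct} g\,dS\right)+\frac{1}{4\pi c^2 t}\int_{|y-x|=ct} h\,dS,$$ so the solution at $(x,t)$ depends only on the data on the sphere $|y-x|=ct$; this is Huygens' principle. In two dimensions Poisson's formula $$u(x,t)=\frac{\partial}{\partial t}\left(\frac{1}{2\pi c}\int_{|y-x|\leq ct}\frac{g(y)}{\sqrt{c^2t^2-|y-x|^2}}\,dy\right)+\frac{1}{2\pi c}\int_{|y-x|\leq ct}\frac{h(y)}{\sqrt{c^2t^2-|y-x|^2}}\,dy$$ integrates over the whole disc $|y-x|\leq ct$, so a disturbance keeps influencing the solution after the leading wavefront has passed: two-dimensional waves leave a tail.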

There are a lot of dimension-parity issues, like 1) when a sphere has a non-zero vector field, or 2) when the $n$-sphere can be turned inside-out in Euclidean $(n+1)$-space, etc.
– Ryan Budney Nov 13 '09 at 18:08

I don't know if it can count as an answer, as the "dimension" involved here doesn't range through a discrete set of values, but:

For any subset $A$ of a given metric space, there is a specific value $\alpha$ (the Hausdorff dimension of $A$) such that the $\beta$-dimensional Hausdorff measure of $A$ satisfies $\mathcal{H}^{\beta}(A)=+\infty$ for $\beta < \alpha$ and $\mathcal{H}^{\beta}(A)=0$ for $\beta > \alpha$.
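As a concrete instance (my own sketch, not from the original answer): the middle-thirds Cantor set has Hausdorff dimension $\log 2/\log 3 \approx 0.63$, and a box-counting estimate, which agrees with the Hausdorff dimension for this particular set, recovers that value:

```python
from math import log

def cantor_boxes(k):
    # Number of intervals of length 3^-k needed to cover the middle-thirds
    # Cantor set: each refinement step keeps 2 of the 3 sub-intervals.
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        intervals = [piece
                     for (a, b) in intervals
                     for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return len(intervals)

k = 10
dim_estimate = log(cantor_boxes(k)) / log(3 ** k)  # log N(eps) / log(1/eps)
print(dim_estimate)      # ~0.6309, i.e. log 2 / log 3
print(log(2) / log(3))   # the Hausdorff dimension of the Cantor set
```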

Actually, this seems to be exactly the sort of answer the OP was asking for. Somehow a year ago when most of the answers were given people missed Hausdorff dimension, but it's a great example.
– David White May 5 '11 at 20:31

Jones' index, for subfactors, is not quite an answer to this question. The range of possible values of indices of subfactors has both a discrete part (indices less than 4 must be of the form $4 \cos^2(\frac{\pi}{n})$ for $n \geq 3$) and a continuous part (any number $\geq 4$ is attainable).

The reason this is relevant is that the index measures the dimension of the subfactor inside the larger factor, so the phenomenon observed to "jump" at dimension 4 is exactly the range of possible dimensions!
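The discrete part of the spectrum is easy to list numerically (a sketch of my own, not from the answer):

```python
from math import cos, pi

# Discrete part of the Jones index spectrum: 4*cos(pi/n)^2 for n >= 3.
index_values = [4 * cos(pi / n) ** 2 for n in range(3, 11)]
print([round(v, 4) for v in index_values])
# Starts 1, 2, 2.618..., 3, ... and accumulates at 4, where the
# continuous part [4, infinity) of the spectrum takes over.
```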

Projective spaces provide an example (counter to the usual trend) where the jump from dimension 2 to 3 actually brings greater simplicity. In any projective space of dimension 3 the Desargues theorem holds, which implies that the space can be coordinatized by a skew field.

In dimension 2 (projective planes) the Desargues theorem need not hold. As a result, projective planes cannot be founded on any familiar algebraic structure, and they are very hard to classify.

This is a very nice question. Let me describe the very different behavior of convex polytopes in three dimensions compared to convex polytopes in higher dimensions. In three dimensions we have the following facts:

1) Every triangulation of $S^2$ and, more generally, every realization of $S^2$ by a polyhedral complex is combinatorially equivalent to the boundary complex of a convex polytope.

2) Every polytope is combinatorially equivalent to a rational polytope, namely to a polytope all of whose vertices have rational coordinates.

3) Every automorphism of the face lattice of a convex polytope can be realized by a rigid motion of a combinatorially equivalent polytope.

These statements follow or extend a well known theorem of Steinitz. They are related to the Koebe-Andreev-Thurston circle packing theorem. They all fail very strongly in dimension 4 and higher.

For the sake of this answer, "dimension" should be interpreted as "number of variables."

In quantum logic, four is the smallest $n$ such that a classically unsatisfiable propositional formula in $n$ variables can be satisfied by substituting quantum propositions in a meaningful way. One example of such a proposition is $$((a\oplus b)\oplus(c\oplus d))\oplus((a\oplus c)\oplus(b\oplus d)),$$ where $\oplus$ is exclusive-or. Note that the grouping of expressions here is crucial: in quantum logic, two propositions can only be meaningfully combined by a logical connective if the corresponding projection operators (or equivalently, the "spin" operators) commute. (For example, the proposition "I have position X and momentum Y" is not meaningful.) One "satisfying assignment" for the formula above is given by (using the spin operator convention) $a=\sigma_x\otimes 1, b=1\otimes\sigma_x, c=\sigma_z\otimes 1, d=1\otimes\sigma_z$.

(Basically, boolean algebras are to classical logic as partial boolean algebras are to quantum logic, and every 3-generator partial boolean algebra can be embedded in a boolean algebra, so there are no such formulas with three or fewer variables.)

In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\|\leq1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\|\leq\sup\{|p(z)|:|z|=1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n=PU^n|_H$ for each positive integer $n$.

These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2}=PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\|\leq\sup\{|p(z_1,z_2)|:|z_1|=|z_2|=1\}$.

Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n\geq3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\|\leq K\cdot\sup\{|p(z_1,\ldots,z_n)|:|z_1|=\cdots=|z_n|=1\}$. So von Neumann's inequality says that $K_1=1$, and Ando's Theorem yields $K_2=1$. It is known in general that $K_n\geq\frac{\sqrt{n}}{11}$. When $n>2$, it is not known whether $K_n\lt\infty$.
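The $n=1$ case is easy to illustrate numerically (my own sketch; the matrix size, random seed, and the polynomial $p(z)=z^3-2z+1$ are arbitrary choices, and the supremum over the circle is approximated by sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random contraction: scale a random matrix by its operator (spectral) norm.
A = rng.standard_normal((5, 5))
T = A / np.linalg.norm(A, 2)

# p(z) = z^3 - 2z + 1, applied to T via matrix powers.
pT = T @ T @ T - 2 * T + np.eye(5)

# Approximate sup_{|z|=1} |p(z)| by sampling the unit circle.
z = np.exp(1j * np.linspace(0, 2 * np.pi, 4001))
sup_circle = np.abs(z ** 3 - 2 * z + 1).max()

# von Neumann's inequality: the first value cannot exceed the sup on the circle.
print(np.linalg.norm(pT, 2), sup_circle)
```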

The fact that von Neumann’s inequality holds for two commuting contractions but not three or more is still the source of many surprising results and intriguing questions. Many deep results about analytic functions come from this dichotomy. For example, Agler [used] Ando’s theorem to deduce an analogue of the classical Nevanlinna–Pick interpolation formula for analytic functions on the bidisk. Because of the failure of a von Neumann inequality for three or more commuting contractions, the analogous formula for the tridisk is known to be false, and the problem of finding the correct analogue of the Nevanlinna–Pick formula for polydisks in three or more variables remains open.