The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.

Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are

(i) a bounded entire function is constant;
(ii) sin(z) is a bounded function;
(iii) sin(z) is defined and analytic everywhere on C;
(iv) sin(z) is not a constant function.

Obviously, it is (ii) that is false. I think probably many people visualize the extension of sin(z) to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.

A second example is the statement that an open dense subset U of R must be the whole of R. The "proof" of this statement is that every point x is arbitrarily close to a point u in U, so when you put a small neighbourhood about u it must contain x.

Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.

wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? that would make for a highly educative and entertaining read.
– Suvrit, Sep 20 '10 at 12:39

The simplest example is that of the real line with its standard metric. In degree zero the complex of coclosed harmonic forms is $\mathbb C\oplus\mathbb C x$, and in degree one it is $\mathbb C\,dx$, which gives the right cohomology.

Here is the (trivial) algebra background.

Let $A$ be a module over some unnamed ring, and let $d,\delta$ be two endomorphisms of $A$ satisfying $d^2=0=\delta^2$. Put $\Delta:=d\delta+\delta d$. Assume $A=\Delta A+A_{d,\delta}$ where $A_{d,\delta}$ stands for $\ker d\cap\ker\delta$. Write $A_{\delta,\Delta}$ for $\ker\Delta\cap\ker\delta$.

We claim that the natural map $$H(A_{\delta,\Delta},d)\to H(A,d)$$ between homology modules is bijective.

Injectivity. Assume $\delta da=0$ for some $a$ in $A$. We must find an $x$ in $A_{\delta,\Delta}$ such that $dx=da$. We have $a=\Delta b+c$ for some $b\in A$ and some $c\in A_{d,\delta}$. One easily checks that $x:=\delta db+c$ does the trick.

Surjectivity. Let $a$ be in $\ker d$. We must find $x\in A$, $y\in A_{d,\delta}$ such that $a=dx+y$. We have $a=\Delta b+c$ for some $b\in A$ and some $c\in A_{d,\delta}$. One easily checks that $x:=\delta b$, $y:=\delta db+c$ works.

Here are two beliefs. I think everybody will agree that one of them, at least, is false. I adhere to the second one.

Belief 1. The simplest way to compute the exponential $e^A$ of a complex square matrix $A$ is to use the Jordan decomposition.

Belief 2. It's simpler and more efficient to use the following fact.

Let $f(z)$ be the minimal polynomial of $A$, let $g(z)$ be $f(z)$ times the singular part of $e^z/f(z)$, and observe $e^A=g(A)$.

(By abuse of notation $z$ is at the same time an indeterminate and a complex variable.) (The problems of computing the exponential of $A$ and that of computing the Jordan decomposition of $A$ have the same difficulty level. But, to solve one of them, there is no need to refer to the other.) Here are two references

Jordan decomposition is often mentioned in relation with matrix exponentials. I'm convinced (rightly or wrongly) that the association of these notions in this context is purely irrational. I think somebody once made this association by accident, and then many people repeated it mechanically.

Here is another attempt to describe the situation.

Put $B:=\mathbb C[A]$. This is a Banach algebra, and also a $\mathbb C[X]$-algebra ($X$ being an indeterminate). Let $$\mu=\prod_{s\in S}\ (X-s)^{m(s)}$$ be the minimal polynomial of $A$, and identify $B$ with $\mathbb C[X]/(\mu)$. The Chinese Remainder Theorem says that the canonical $\mathbb C[X]$-algebra morphism $$\Phi:B\to C:=\prod_{s\in S}\ \mathbb C[X]/(X-s)^{m(s)}$$ is bijective. Computing exponentials in $C$ is trivial, so the only missing piece in our puzzle is the explicit inversion of $\Phi$. Fix $s$ in $S$ and let $e_s$ be the element of $C$ which has a one at the $s$ place and zeros elsewhere. It suffices to compute $\Phi^{-1}(e_s)$. This element will be of the form $$f=g\ \frac{\mu}{(X-s)^{m(s)}}\mbox{ mod }\mu$$ with $f,g\in\mathbb C[X]$, the only requirement being $$g\equiv\frac{(X-s)^{m(s)}}{\mu}\mbox{ mod }(X-s)^{m(s)}$$ (the congruence taking place in the ring of rational fractions defined at $s$). So $g$ is given by Taylor's Formula.

This can be summarized as follows:

There is a unique polynomial $E$ such that
$\deg E<\deg\mu$ and $e^A=E(A)$. Moreover $E$ can be uniquely written as
$$E=\sum_{s\in S}\ E_s\ \frac{\mu}{(X-s)^{m(s)}}$$
with (for all $s$) $\deg E_s < m(s)$ and
$$E_s\equiv e^s\ e^{X-s}\ \frac{(X-s)^{m(s)}}{\mu}\mbox{ mod }(X-s)^{m(s)},$$
the congruence taking place in $\mathbb C[[X-s]]$.
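As a numerical sanity check of this recipe, here is a toy instance of my own (not from the thread): for the Jordan block $A = \left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ the minimal polynomial is $\mu(z)=(z-1)^2$, and Taylor's formula at $z=1$ gives $E(z)=e\,(1+(z-1))=ez$, so $e^A=E(A)=eA$, with no Jordan decomposition step performed.

```python
import math

# Toy instance of the minimal-polynomial recipe (my example, not from the
# thread).  For A = [[1,1],[0,1]], mu(z) = (z-1)^2 and Taylor's formula at
# z = 1 gives E(z) = e*(1 + (z-1)) = e*z, hence e^A = E(A) = e*A.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 1.0], [0.0, 1.0]]
E_of_A = [[math.e * A[i][j] for j in range(2)] for i in range(2)]

# Compare against the truncated power series  sum_n A^n / n!
series = [[0.0, 0.0], [0.0, 0.0]]
term = [[1.0, 0.0], [0.0, 1.0]]          # A^0 = identity
for n in range(20):
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    TA = matmul(term, A)
    term = [[TA[i][j] / (n + 1) for j in range(2)] for i in range(2)]

print(E_of_A)
print(series)   # agrees with E_of_A to high precision
```

The point is only that $E(A)$ reproduces the power-series exponential without ever diagonalizing or decomposing $A$.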

Your opinions are normative statements: "one should" and "it is better". It is naive to suppose that there is one best method that one should use to compute the matrix exponential.
– Robin Chapman, May 15 '10 at 14:07

Here's another howler some people commit: if $m$, $n$ are integers such that $m$ divides $n^2$, then $m$ divides $n$.

It's true sometimes, for example if $m$ is prime (or more generally squarefree, i.e. a product of distinct primes). But in general all one can conclude is that there exist integers $p, q, r$ with $p$ squarefree such that $m = pq^2$ and $n = pqr$.
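A quick brute-force search makes the howler concrete (a sketch; the variable names are mine):

```python
# Brute-force search for counterexamples to "m | n^2 implies m | n".
counterexamples = [(m, n) for m in range(2, 20) for n in range(1, 20)
                   if n * n % m == 0 and n % m != 0]
print(counterexamples[:4])   # starts with (4, 2): 4 divides 2^2 = 4 but not 2
```

The smallest counterexample, $m=4$, $n=2$, fits the decomposition above with $p=1$, $q=2$, $r=1$.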

If $a$ is a real zero of a cubic polynomial with rational coefficients then $a$ can be written as a combination of cube roots of rational numbers.

More generally if $a$ is a real zero of an irreducible polynomial with rational coefficients that is solvable by radicals then students expect the following:

Any expression inside a radical evaluates to a real number

Any sub-expression of the expression for $a$ evaluates to an algebraic number of degree less than or equal to the degree of $a$

Of course the problem is that in Cardano's solution to the cubic we can have negative rational numbers inside a square root. Let $c = 4(-1 + \sqrt{-3})$.

$a = \frac{\sqrt[3]{c}}{4} + \frac{1}{\sqrt[3]{c}}$

Then $a$ is a root of $f(x) = 4x^3 - 3x + \frac{1}{2}$.

So while $a$ is an algebraic number of degree three, it cannot be written as a combination of cube roots of rational numbers. Indeed, it is counter-intuitive that $\sqrt[3]{c}$ has degree 6 over the rational numbers, yet we can use this number and simple arithmetic to produce an algebraic number of degree 3.
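One can check the example numerically (an illustration, not a proof), using Python's principal complex cube root:

```python
import cmath, math

# With c = 4(-1 + sqrt(-3)), the number a = c^(1/3)/4 + 1/c^(1/3) is REAL
# and satisfies 4a^3 - 3a + 1/2 = 0, although every intermediate quantity
# is non-real.
c = 4 * (-1 + cmath.sqrt(-3))    # = 8*exp(2*pi*i/3), a non-real number
r = c ** (1 / 3)                 # principal cube root: 2*exp(2*pi*i/9)
a = r / 4 + 1 / r                # the imaginary parts cancel

print(a)                         # approximately cos(2*pi/9) ~ 0.766
print(4 * a**3 - 3 * a + 0.5)    # approximately 0
```

The cancellation happens because $r/4$ and $1/r$ are complex conjugates, so $a = \cos(2\pi/9)$.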

I don't know how common this mistake is, but I think it's worth mentioning. I used to think that existence of non-measurable sets is guaranteed by the axiom of choice only.

The false belief: in the presence of AC, there cannot be a $\sigma$-additive measure on $\mathcal{P}(\mathbb{R})$ that extends the usual Lebesgue measure.

It is true that we cannot extend the Lebesgue measure in a translation-invariant way by various Vitali set constructions. On the other hand, if you do not insist that the extension is translation-invariant, it might be possible to do this relative to a real-valued measurable cardinal assumption.

Theorem (Ulam): If there exists a cardinal $\kappa$ such that there exists an atomless $\kappa$-additive probability measure on $\mathcal{P}(\kappa)$, then there exists a $\sigma$-additive measure on $\mathcal{P}(\mathbb{R})$ extending the Lebesgue measure.

The facts that one denotes the smash product of spectra and the smash product of a space with a spectrum (levelwise) with the same $\wedge$, and tends to omit the $\Sigma^\infty$ when one embeds a space into spectra, are also not helpful in getting used to the harsh reality that the above is wrong.

I don't see that this qualifies as a false belief. In order for the question of whether it is true or false to even be meaningful, you have to first commit yourself to one of the many different notions of spectrum, not to mention smash product of spectra.
– Tom Goodwillie, Oct 5 '10 at 0:35


True. I meant symmetric spectra with the smash product coming from their description as modules over the symmetric sequence of spheres.
– Peter Arndt, Oct 5 '10 at 10:52

You can exponentiate the skew-self-adjoint matrices to get examples of matrices preserving a nondegenerate symmetric bilinear form, with Jordan blocks of the form $\left( \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right)$.

You seem to have a different definition of "the standard inner product on $\mathbb{C}^n$" than I do. I think that phrase normally refers to the familiar positive definite sesquilinear form, with respect to which self-adjoint matrices are indeed diagonalizable.
– Mark Meckes, Jan 28 '11 at 16:46


Of course it's not bilinear -- an "inner product" on a complex vector space is defined to be sesquilinear, not bilinear -- I've spent a lot of time trying to get my linear algebra students to remember that. The failure of such a form to generalize to other fields is indeed sad, but I think the richness of Hilbert space theory helps to make up for that disappointment. :)
– Mark Meckes, Jan 28 '11 at 21:24

Coordinates on a manifold do not have an immediate metric meaning. Until becoming familiar with differential geometry one tends to think they do. (Einstein wrote that he took seven years to free himself from this idea.)

For example, linear control theory is for the most part metric, with variables in $R^n$. When moving away from linear control theory, variables are represented as coordinates on a manifold. Nevertheless, much of the literature tends either to abandon metric notions altogether, or to keep using a Euclidean metric even though it is no longer very useful.

Here are various examples (I hope some of them weren't already mentioned):
1. If a space $X$ has two different norms $\| \cdot \|_i, i=1,2$ such that $\| \cdot \|_1 \leq \| \cdot \|_2$, then the completion with respect to $\| \cdot \|_1$ is contained in the completion with respect to $\| \cdot \|_2$.
2. If $M_1,M_2$ are isomorphic modules and $N_1,N_2$ are isomorphic submodules, then $M_1/N_1$ and $M_2/N_2$ are isomorphic.
3. If $A,B$ are subsets of topological spaces $X,Y$ (resp.) and $A,B$ are homeomorphic, then the closures $\overline{A}$ and $\overline{B}$ are also homeomorphic.
4. The standard construction of adjoining a unit to a Banach algebra $A$ yields nothing new if $A$ was already unital.
5. The phrase "a function is almost everywhere continuous" means the same as "the function is almost everywhere equal to a continuous function".
6. Suppose you are trying to prove that some function space $F$ is complete (say the functions are defined on $X$ and real valued): you take a Cauchy sequence $\{f_n\}_n$ and prove that for each point $x \in X$ the sequence $\{f_n(x)\}_n$ is Cauchy. Then from the completeness of $\mathbb{R}$ you obtain a function $f$. The false belief is that it is now enough to show that $f$ belongs to $F$.
7. If you have an ascending family $\{A_i\}_i$, then to obtain its union $\bigcup_{i}A_i$ it is enough to take some countable subfamily.
8. A convergent net $\{x_i\}_i$ in a metric space is bounded, and the set $\{x_i\}_i \cup \{x\}$ is compact (where $x$ is the limit).
9. If $D$ is an open dense subset of a topological space $X$, then $\operatorname{card} D = \operatorname{card} X$.

The quaternions $\{x+yi+zj+wk\mid x,y,z,w\in \mathbb{R}\}$ form a complex Banach algebra (with the usual operations). Hence they are apparently a counterexample to the Gelfand-Mazur theorem.

So, what is the error?

The error is the following:

The quaternions do form a vector space over the field of complex numbers, and they also form a ring, but there is no compatibility between scalar multiplication and quaternion multiplication. So they are not a complex algebra. This shows that, in the definition of a complex algebra $A$, the compatibility condition $\lambda (ab)=a(\lambda b)$, $\lambda \in \mathbb{C}$, $a,b\in A$, is essential.
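The failed compatibility can be seen concretely with quaternions coded as 4-tuples (my own minimal helper, not a standard library): taking the "complex scalar" $\lambda = i$, $a = j$, $b = 1$ gives $\lambda(ab) = ij = k$ but $a(\lambda b) = ji = -k$.

```python
# Quaternion arithmetic on 4-tuples (x, y, z, w) representing x + yi + zj + wk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
one = (1, 0, 0, 0)

# lambda = i (a scalar from the complex subfield), a = j, b = 1:
left  = qmul(i, qmul(j, one))   # lambda*(a*b) = ij = k
right = qmul(j, qmul(i, one))   # a*(lambda*b) = ji = -k
print(left, right)              # (0, 0, 0, 1) versus (0, 0, 0, -1)
```

So scalar multiplication by $i$ from the left does not commute past quaternion multiplication, which is exactly the missing axiom.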

@AliTaghavi You're right that $R$-multiplication induces an $F$-module ($F$-vector space) structure via the evident composite $F \times R \to R \times R \to R$. To be fair to both you and Yemon: a very common slip even among professionals is in knowing that for commutative algebras an $F$-algebra is tantamount to a homomorphism $F \to R$, but temporarily forgetting this doesn't apply in the noncommutative setting (except of course when $F$ is central in $R$) -- not rising to the level of false belief so much as a temporary slip-up. I've made that slip myself!
– Todd Trimble♦, Nov 13 '14 at 11:58

Well, it is true that every vector space has a dual space, even $L^{1/2}$... and it is even true that every topological vector space has a continuous dual space... What you mean is that it is not true that every topological vector space has a non-trivial continuous dual space (or, that the continuous dual of a topological vector space does not necessarily separate points)
– Mariano Suárez-Alvarez♦, Jul 7 '10 at 18:54

Fans (related to the belief about polytopes written above): all convex cones are rational, i.e. one would expect that a ray would eventually hit a point of the lattice. This is obviously not true: just take the one-dimensional cone generated by $(1,\sqrt{2})$. A similar one was thinking that if I rotate the cone a bit, I can always make it rational.

"It cannot be shown without some form of AC that the union (or disjoint union) of countably many countable sets is countable. I have a countably infinite set X of countably infinite sets. Therefore, the union of X cannot be shown to be countable without Choice."

The fallacy is that in many cases of interest, it is possible to exhibit an explicit counting of every element of X. In such a case a counting of X by antidiagonals is easily constructed. The usual counting of the rationals is an example of this.
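For instance, here is such an explicit counting by antidiagonals for the positive rationals, sketched in Python (the representation of $p/q$ as a pair, and the skip of non-reduced fractions, are my own choices):

```python
from math import gcd
from itertools import islice

# An explicit counting of the positive rationals "by antidiagonals":
# given the obvious counting of each row {p/1, p/2, p/3, ...}, walking
# the diagonals p + q = constant enumerates the whole union -- no
# appeal to Choice is needed.
def rationals():
    total = 2
    while True:                        # antidiagonal p + q = total
        for p in range(1, total):
            q = total - p
            if gcd(p, q) == 1:         # skip duplicates such as 2/4
                yield (p, q)
        total += 1

print(list(islice(rationals(), 8)))
# [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3), (3, 2)]
```

The point is that the countings of the rows are given uniformly by a single formula, so no choices are ever made.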

I think this may even be an example of a more general phenomenon of "people think AC is necessary for a certain construction, but in fact it turns out not to be necessary for the example they have in mind". For example, AC is necessary to find a maximal ideal in an arbitrary ring ... but it isn't if you're prepared to assume the ring is Noetherian.

If "Noetherian" is defined by the ascending chain condition or by requiring all ideals to be finitely generated, then in order to deduce the existence of maximal ideals, you still need a weak form of the axiom of choice. The usual argument uses the axiom of dependent choice. (Of course, if you define "Noetherian" to mean that every set of ideals has a maximal element, then deducing the existence of maximal ideals is a choiceless triviality.) A good reference is "Six impossible rings" by Wilfrid Hodges (J. Algebra 31 (1974) 218-244).
– Andreas Blass, Oct 22 '10 at 15:29

I'd love to have a reference to a procedure for calculating the geometric genus and algebraic genus of surfaces like this, because they are rational if and only if both these quantities are zero, and for other cubic surfaces that interest me it would save a lot of fruitless hacking around trying to find a rational solution that probably doesn't exist! Are there any symbolic algebra packages that can do this?

I mean for example is $ x y (x y + 1) (x + y) = z^2 $ rational? I'm almost sure it isn't; but how can one be sure?

Suppose $\kappa$ is an $\aleph$ number; then $AC_\kappa$ is equivalent to $W_\kappa$. That is, the claim is that the product of $\kappa$ many non-empty sets is non-empty if and only if every set either has cardinality less than $\kappa$ or has a subset of cardinality $\kappa$.

In fact this is only true if you assume full $AC$, and $(\forall \kappa) AC_\kappa$ doesn't even imply $W_{\aleph_1}$, I was truly shocked.

Furthermore, $W_\kappa$ doesn't even imply $AC_\kappa$ in most cases.

The strongest psychological implication is that most people actually think of the well-ordering principle as the "correct form" of choice, when it is actually Dependent Choice (limited to $\kappa$, or unbounded) which is the "proper" form; that is, $DC_\kappa$ implies both $AC_\kappa$ and $W_\kappa$.

@Thierry: For the past couple of weeks I spent a lot of time considering models without choice; not only did I hold that misconception, but not once did anyone correct me about it - grad students and professors alike.
– Asaf Karagila, Apr 17 '11 at 6:09

Here is a false belief I had. Let $f:X \to Y$ be a map of topological spaces having the property that for every finite CW complex $K$, the induced map $f_{\ast}:[K,X] \to [K,Y]$, on unpointed homotopy classes of maps, is a bijection. Then $f$ is a weak homotopy equivalence (that is, it induces isomorphisms on all homotopy groups relative to all basepoints). A counterexample is given by the stabilization map $B \Sigma_{\infty}\xrightarrow{+1} B \Sigma_{\infty}$, which is not an isomorphism on $\pi_1$.

$\mathrm{polymod}$ is "polynomial mod". Two polynomials are congruent $\mathrm{polymod}\ p$ iff the coefficients of each power of the variable are congruent $\pmod{p}$. The equivalence classes are sets of polynomials where each coefficient ranges over an equivalence class $\pmod{p}$. For the cousin, there are many local/globals but they all seem to require additional conditions (q.v. Hensel lifting). I think the set from which $x$ was chosen was left unspecified because this "imprecise mental abbreviation" pops up at various levels of sophistication, each with a different such set.
– Anonymous, Oct 23 '10 at 15:22

I'm not sure how common it is but I've certainly been able to trick a few people into answering the following question wrong:

Given $n$ independent and identically distributed random variables $X_k$, what is the limiting distribution of their sum, $S_n = \sum_{k=0}^{n-1} X_k$, as $n \to \infty$?

Most (?) people's answer is the normal distribution, when in actuality the (suitably normalized) sum is drawn from a Lévy-stable distribution. I've cheated a little by making some extra assumptions on the random variables, but I think the question is still valid.
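One can see the failure of naive averaging empirically with standard Cauchy variables, one case where the classical CLT hypothesis of finite variance fails (this simulation sketch is mine, not the poster's): the mean of $n$ standard Cauchy variables is again standard Cauchy, so averaging never concentrates the distribution.

```python
import math, random

random.seed(0)

def cauchy():
    # inverse-CDF sampling of a standard Cauchy variable
    return math.tan(math.pi * (random.random() - 0.5))

def median_abs_mean(n, reps=200):
    # typical size of the sample mean of n Cauchy variables
    means = [sum(cauchy() for _ in range(n)) / n for _ in range(reps)]
    return sorted(abs(m) for m in means)[reps // 2]

# The typical size of the sample mean does not shrink as n grows:
for n in (10, 100, 10000):
    print(n, round(median_abs_mean(n), 2))   # all stay on the order of 1
```

For variables with finite variance the printed values would shrink like $1/\sqrt{n}$; here they hover near $1$ (the median of $|X|$ for a standard Cauchy $X$) at every $n$.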

From Keith Devlin

I'm not really sure how I feel about this one; I might be one of the unfortunate souls who are still prey to that delusion.

Caution

In case you missed it, the column ended up spilling a lot of electronic ink (as evidenced in this follow-up column), so I don't believe it would be wise to start yet another one on MO. Thanks in advance!

The more I think about this "error", the less I am convinced. It's like saying that you cannot say that $\binom n k$ is the number of $k$-element subsets of an $n$-element set, because then you will be unable to generalize to complex values of $n$. Or that you cannot define the chromatic polynomial as the function counting the colourings and then plug in $-1$ to get the acyclic orientations of the graph. Also, I think it is perfectly understandable what it means to add something halfway.
– user11235, Apr 10 '11 at 20:50

When I taught elementary teachers the course on arithmetic, they all had been taught that multiplication is repeated addition, but I myself thought it was the cardinality of the cartesian product. We enjoyed discussing this difference in point of view.
– roy smith, May 9 '11 at 3:06


The "repeated addition" characterization has an advantage over the "cardinality of the Cartesian product" characterization (which possibly in some contexts could be considered a disadvantage): it's not self-evident that it's commutative, and so one has a useful exercise for certain kinds of students: figure out why it's commutative.
– Michael Hardy, May 20 '11 at 2:28
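The exercise mentioned above can be set up directly from the repeated-addition definition; here is a small sketch (an empirical check only, of course, not a proof of commutativity):

```python
# "Repeated addition" definition of multiplication on natural numbers.
# Commutativity is not built into the definition; checking it is the
# exercise (the code below only tests it on small cases).
def mul(m, n):
    total = 0
    for _ in range(m):   # add n to itself m times
        total += n
    return total

print(mul(3, 4))                                      # 12
print(all(mul(a, b) == mul(b, a)
          for a in range(10) for b in range(10)))     # True
```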