The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.

Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are

(i) a bounded entire function is constant;
(ii) sin(z) is a bounded function;
(iii) sin(z) is defined and analytic everywhere on C;
(iv) sin(z) is not a constant function.

Obviously, it is (ii) that is false. I think probably many people visualize the extension of sin(z) to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.
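As a quick numerical illustration of how badly (ii) fails (an editorial sketch in Python, using only the standard `cmath` module): on the imaginary axis $\sin(iy) = i\sinh(y)$, which grows exponentially in $y$.

```python
import cmath

# (ii) fails: on the imaginary axis sin(i*y) = i*sinh(y), which grows
# like e**y / 2, so sin is wildly unbounded on C.
for y in (1, 10, 50):
    print(y, abs(cmath.sin(complex(0, y))))
```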

A second example is the statement that an open dense subset U of R must be the whole of R. The "proof" of this statement is that every point x is arbitrarily close to a point u in U, so when you put a small neighbourhood about u it must contain x.
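One can make the failure of this "proof" concrete with a standard counterexample: cover the rationals of $[0,1]$ by open intervals of rapidly shrinking length. The sketch below (plain Python, standard library only; the helper `rationals_01` is mine, introduced for illustration) builds such a cover and bounds its total measure by $1/2$, so the resulting open dense subset of $[0,1]$ cannot be all of $[0,1]$.

```python
from fractions import Fraction

def rationals_01(n_terms):
    """First n_terms distinct rationals in [0,1], listed by increasing denominator."""
    seen, out, q = set(), [], 1
    while len(out) < n_terms:
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                out.append(r)
                if len(out) == n_terms:
                    break
        q += 1
    return out

# Cover the n-th rational by an open interval of length 2**-(n+2).
# The union U is open and dense in [0,1] (it contains every rational),
# but its measure is at most sum_{n>=0} 2**-(n+2) = 1/2, so U != [0,1].
rs = rationals_01(1000)
total_length = sum(Fraction(1, 2 ** (n + 2)) for n in range(len(rs)))
print(total_length < Fraction(1, 2))  # True: the cover has measure strictly below 1/2
```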

Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.

wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? that would make for a highly educative and entertaining read.
–
Suvrit, Sep 20 '10 at 12:39

Many people believe that Cantor proved the uncountability of the real line using a diagonal argument. That paper does not provide that proof; Cantor's stated purpose was to prove the existence of "uncountable infinities" without using the theory of irrational numbers.

More to the point, I think, is that the paper proves that the power set of any set has greater cardinality than the set itself. This is the first proof that there is no greatest cardinality. (The uncountability of the real line easily follows, even if Cantor does not mention it because he has bigger fish to fry.)
–
John Stillwell, May 31 '10 at 5:12

Just to fill in some history here: if I remember right, Cantor first proved the uncountability of the reals by other arguments, then later (as you reference) found the diagonal argument, as a proof of the more general statement about power sets.
–
Peter LeFanu Lumsdaine, Sep 27 '10 at 3:01

The link in the answer goes to the wrong page - it should go to page 75, not page 72.
–
David Roberts, Jun 13 '12 at 6:41

And it looks like a diagonal argument to me.
–
David Roberts, Jun 13 '12 at 6:43

True: Given a graded algebra $A$, there is a notion of a "homogeneous" ideal of $A$. This is a property connecting an ideal $I$ of $A$ with the grading, and it is often necessary to require it. For example, if $I$ is a homogeneous ideal of $A$, then the algebra $A / I$ is graded again. If $I$ is not homogeneous, then $A / I$ is not graded in general (since the projections of different graded components of $A$ onto $A / I$ might have nonzero intersection).

False: Given a filtered algebra $A$, there is a notion of a "filtered" ideal of $A$.

There is no such notion. We can require $I$ to be generated by $I\cap A_n$ for some $n$, or actually to lie inside $A_n$ for some $n$, but in most cases none of these is actually needed. (Correct me if I am wrong.) Formulations like "Let $I$ be an ideal compatible with (or respecting) the filtration" are cargo cult.

But: Given a filtered algebra $A$ and a generating set $G$ of an ideal $I$ of $A$, it is an important question whether $I\cap A_n$ is generated by $G\cap A_n$ for every $n\in \mathbb N$. This is not always satisfied, often nontrivial (in many cases it can be proved by using the diamond lemma to show that every element of $A_n$ has a unique "remainder" modulo $I$ in a certain sense, and that this remainder can be obtained by repeatedly subtracting multiples of elements of $G\cap A_n$), and used tacitly in various texts.

What I mean is: People use these formulations as a protective charm against a danger they don't see but intuitively feel is there, although closer inspection shows that it is pure superstition.
–
darij grinberg, Mar 15 '11 at 17:26

I'm not sure there is a false belief here, as much as awkward writing. Depending on context, I might very well write "The set $\{a,b \}$ (where $a$ and $b$ might be equal)..." if this issue mattered.
–
David Speyer, May 6 '10 at 11:16

There are many situations where one needs to speak of a set of two numbers that may or may not be equal. E.g.: "Let $x_1, x_2 \in \mathbb{R}$. Then among all the open intervals containing the set $\{x_1, x_2\}$, none of them is contained in all the others." If one is addressing mathematicians, there is no need to specify that $x_1$ might be equal to $x_2$.
–
Daniel Asimov, Jun 17 '10 at 23:34

Single-letter symbols are usually assumed to be variables, if the context doesn't determine otherwise, even in the absence of quantifiers. (You can put in an implicit universal quantifier to close up all sentences.)
–
Toby Bartels, Apr 4 '11 at 9:41

In a context where one is discussing real analysis, $e$ and $\pi$ are generally taken to be the famous constants. But this is hardly universal; in other contexts, they may have very different meanings.
–
Toby Bartels, Dec 15 '13 at 5:16

The simplest example is that of the real line with its standard metric. In degree zero the complex of coclosed harmonic forms is $\mathbb C\oplus\mathbb Cx$, and in degree one it is $\mathbb Cdx$, which gives the right cohomology.

Here is the (trivial) algebra background.

Let $A$ be a module over some unnamed ring, and let $d,\delta$ be two endomorphisms of $A$ satisfying $d^2=0=\delta^2$. Put $\Delta:=d\delta+\delta d$. Assume $A=\Delta A+A_{d,\delta}$ where $A_{d,\delta}$ stands for $\ker d\cap\ker\delta$. Write $A_{\delta,\Delta}$ for $\ker\Delta\cap\ker\delta$.

We claim that the natural map $$H(A_{\delta,\Delta},d)\to H(A,d)$$ between homology modules is bijective.

Injectivity. Assume $\delta da=0$ for some $a$ in $A$. We must find an $x$ in $A_{\delta,\Delta}$ such that $dx=da$. We have $a=\Delta b+c$ for some $b\in A$ and some $c\in A_{d,\delta}$. One easily checks that $x:=\delta db+c$ does the trick.

Surjectivity. Let $a$ be in $\ker d$. We must find $x\in A$, $y\in A_{d,\delta}$ such that $a=dx+y$. We have $a=\Delta b+c$ for some $b\in A$ and some $c\in A_{d,\delta}$. One easily checks that $x:=\delta b$, $y:=\delta db+c$ works.

Here are two beliefs. I think everybody will agree that one of them, at least, is false. I adhere to the second one.

Belief 1. The simplest way to compute the exponential $e^A$ of a complex square matrix $A$ is to use the Jordan decomposition.

Belief 2. It's simpler and more efficient to use the following fact.

Let $f(z)$ be the minimal polynomial of $A$, let $g(z)$ be $f(z)$ times the singular part of $e^z/f(z)$, and observe $e^A=g(A)$.

(By abuse of notation $z$ is at the same time an indeterminate and a complex variable.) (The problems of computing the exponential of $A$ and that of computing the Jordan decomposition of $A$ have the same difficulty level. But, to solve one of them, there is no need to refer to the other.) Here are two references

Jordan decomposition is often mentioned in relation with matrix exponentials. I'm convinced (rightly or wrongly) that the association of these notions in this context is purely irrational. I think somebody once made this association by accident, and then many people repeated it mechanically.

Here is another attempt to describe the situation.

Put $B:=\mathbb C[A]$. This is a Banach algebra, and also a $\mathbb C[X]$-algebra ($X$ being an indeterminate). Let $$\mu=\prod_{s\in S}\ (X-s)^{m(s)}$$ be the minimal polynomial of $A$, and identify $B$ with $\mathbb C[X]/(\mu)$. The Chinese Remainder Theorem says that the canonical $\mathbb C[X]$-algebra morphism $$\Phi:B\to C:=\prod_{s\in S}\ \mathbb C[X]/(X-s)^{m(s)}$$ is bijective. Computing exponentials in $C$ is trivial, so the only missing piece in our puzzle is the explicit inversion of $\Phi$. Fix $s$ in $S$ and let $e_s$ be the element of $C$ which has a one at the $s$ place and zeros elsewhere. It suffices to compute $\Phi^{-1}(e_s)$. This element will be of the form $$f=g\ \frac{\mu}{(X-s)^{m(s)}}\mbox{ mod }\mu$$ with $f,g\in\mathbb C[X]$, the only requirement being $$g\equiv\frac{(X-s)^{m(s)}}{\mu}\mbox{ mod }(X-s)^{m(s)}$$ (the congruence taking place in the ring of rational fractions defined at $s$). So $g$ is given by Taylor's Formula.

This can be summarized as follows:

There is a unique polynomial $E$ such that
$\deg E<\deg\mu$ and $e^A=E(A)$. Moreover $E$ can be uniquely written as
$$E=\sum_{s\in S}\ E_s\ \frac{\mu}{(X-s)^{m(s)}}$$
with (for all $s$) $\deg E_s < m(s)$ and
$$E_s\equiv e^s\ e^{X-s}\ \frac{(X-s)^{m(s)}}{\mu}\mbox{ mod }(X-s)^{m(s)},$$
the congruence taking place in $\mathbb C[[X-s]]$.
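Here is a small sanity check of this recipe (an editorial Python sketch; the matrix and helper functions are introduced for illustration, not taken from the references). For the Jordan block $A=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ one has $\mu=(X-1)^2$; the singular part of $e^z/(z-1)^2$ at $z=1$ is $e/(z-1)^2+e/(z-1)$, hence $E=e\,X$ and $e^A=eA$, which we compare against a truncated power series.

```python
import math

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

def expm_series(A, terms=30):
    """Truncated power series sum_k A^k / k! (adequate for small matrices)."""
    result = [[0.0, 0.0], [0.0, 0.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]  # A^0 / 0!
    for k in range(terms):
        result = mat_add(result, term)
        term = [[x / (k + 1) for x in row] for row in mat_mul(term, A)]
    return result

A = [[1.0, 1.0], [0.0, 1.0]]                       # minimal polynomial (X-1)^2
E_of_A = [[math.e * x for x in row] for row in A]  # recipe: E = e*X, so e^A = e*A
S = expm_series(A)
print(S)
print(E_of_A)
```

No Jordan decomposition is invoked anywhere: only the minimal polynomial and a principal part.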

Your opinions are normative statements: "one should" and "it is better". It is naive to suppose that there is one best method that one should use to compute the matrix exponential.
–
Robin Chapman, May 15 '10 at 14:07

Here's another howler some people commit: if $m$, $n$ are integers such that $m$ divides $n^2$, then $m$ divides $n$.

It's true sometimes, for example if $m$ is prime (or more generally squarefree, i.e. a product of distinct primes). But in general all one can conclude is that there exist integers $p, q, r$ with $p$ squarefree such that $m = pq^2$ and $n = pqr$.
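A brute-force check of both claims (an editorial Python sketch; `squarefree_part` is a helper introduced for illustration):

```python
# The belief "m | n^2 implies m | n" fails already for m = 4, n = 2.
assert (2 * 2) % 4 == 0 and 2 % 4 != 0

def squarefree_part(m):
    """Return (p, q) with p squarefree and m = p * q**2 (for m > 0)."""
    p, q, d = 1, 1, 2
    while d * d <= m:
        e = 0
        while m % d == 0:
            m //= d
            e += 1
        p *= d ** (e % 2)
        q *= d ** (e // 2)
        d += 1
    p *= m  # any leftover factor is a single prime
    return p, q

# Check the claimed normal form m = p*q^2, n = p*q*r on all small cases:
# whenever m | n^2, n must be divisible by p*q.
for m in range(1, 200):
    p, q = squarefree_part(m)
    assert m == p * q * q
    for n in range(1, 200):
        if (n * n) % m == 0:
            assert n % (p * q) == 0  # so n = p*q*r for some integer r
```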

If $a$ is a real zero of a cubic polynomial with rational coefficients then $a$ can be written as a combination of cube roots of rational numbers.

More generally if $a$ is a real zero of an irreducible polynomial with rational coefficients that is solvable by radicals then students expect the following:

Any expression inside a radical evaluates to a real number

Any sub-expression of the expression for $a$ evaluates to an algebraic number of degree less than or equal to the degree of $a$

Of course the problem is that from Cardan's solution to the cubic we can have negative rational numbers inside a square root. Let $c = 4(-1 + \sqrt{-3})$ and

$$a = \frac{\sqrt[3]{c}}{4} + \frac{1}{\sqrt[3]{c}}.$$

Then $a$ is a root of $f(x) = 4x^3 - 3x + \frac{1}{2}$.

So while $a$ is an algebraic number of degree three, it cannot be written as a combination of cube roots of rational numbers. Indeed, it is counter-intuitive that $\sqrt[3]{c}$ has degree 6 over the rational numbers, yet we can use this number and simple arithmetic to produce an algebraic number of degree 3.
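One can verify this numerically (an editorial Python sketch using the principal complex cube root; here $a$ comes out as $\cos(2\pi/9)$):

```python
import cmath

# Verify that a = c^(1/3)/4 + 1/c^(1/3), with c = 4*(-1 + sqrt(-3)),
# is a real root of f(x) = 4x^3 - 3x + 1/2, even though c^(1/3)
# itself is a non-real number of degree 6 over Q.
c = 4 * (-1 + cmath.sqrt(-3))
r = c ** (1 / 3)          # principal cube root: 2*exp(2*pi*i/9)
a = r / 4 + 1 / r         # the imaginary parts cancel
print(a)                  # approximately cos(2*pi/9) + 0j
f_of_a = 4 * a ** 3 - 3 * a + 0.5
print(abs(f_of_a))        # numerically zero
```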

The quaternions $\{x+yi+zj+wk\mid x,y,z,w\in \mathbb{R}\}$ form a complex Banach algebra (with the usual operations). Hence they apparently give a counterexample to the Gelfand–Mazur theorem.

So, what is the error?

The error is the following:

The quaternions form a vector space over the field of complex numbers, and they form a ring, but there is no compatibility between scalar multiplication and quaternion multiplication. So they are not a complex algebra. This shows that in the definition of a complex algebra $A$, the compatibility condition $\lambda (ab)=a(\lambda b)$, $\lambda \in \mathbb{C}$, $a,b\in A$, is essential.
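The incompatibility is easy to exhibit concretely (an editorial Python sketch; the Hamilton product is coded by hand on 4-tuples):

```python
# Quaternion multiplication on 4-tuples (x, y, z, w) = x + y*i + z*j + w*k.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# Embed C as the quaternions x + y*i and let the scalar lambda = i act by
# left multiplication.  Compatibility would demand lambda*(a*b) = a*(lambda*b):
lhs = qmul(i, qmul(j, k))   # i*(j*k) = i*i = -1
rhs = qmul(j, qmul(i, k))   # j*(i*k) = j*(-j) = +1
print(lhs, rhs)             # (-1, 0, 0, 0) vs (1, 0, 0, 0)
assert lhs != rhs
```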

@AliTaghavi You're right that $R$-multiplication induces an $F$-module ($F$-vector space) structure via the evident composite $F \times R \to R \times R \to R$. To be fair to both you and Yemon: a very common slip even among professionals is in knowing that for commutative algebras an $F$-algebra is tantamount to a homomorphism $F \to R$, but temporarily forgetting this doesn't apply in the noncommutative setting (except of course when $F$ is central in $R$) -- not rising to the level of false belief so much as a temporary slip-up. I've made that slip myself!
–
Todd Trimble♦, Nov 13 '14 at 11:58

The facts that one denotes the smash product of spectra and the (levelwise) smash product of a space with a spectrum by the same $\wedge$, and that one tends to leave off the $\Sigma^\infty$ when embedding a space into spectra, are also not helpful in getting used to the harsh reality that the above is wrong.

I don't see that this qualifies as a false belief. In order for the question of whether it is true or false to even be meaningful, you have to first commit yourself to one of the many different notions of spectrum, not to mention smash product of spectra.
–
Tom Goodwillie, Oct 5 '10 at 0:35

True. I meant symmetric spectra with the smash product coming from their description as modules over the symmetric sequence of spheres.
–
Peter Arndt, Oct 5 '10 at 10:52

You can exponentiate the skew-self-adjoint matrices to get examples of matrices preserving a nondegenerate symmetric bilinear form, with Jordan blocks of the form $\left( \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right)$.

You seem to have a different definition of "the standard inner product on $\mathbb{C}^n$" than I do. I think that phrase normally refers to the familiar positive definite sesquilinear form, with respect to which self-adjoint matrices are indeed diagonalizable.
–
Mark Meckes, Jan 28 '11 at 16:46

Of course it's not bilinear -- an "inner product" on a complex vector space is defined to be sesquilinear, not bilinear -- I've spent a lot of time trying to get my linear algebra students to remember that. The failure of such a form to generalize to other fields is indeed sad, but I think the richness of Hilbert space theory helps to make up for that disappointment. :)
–
Mark Meckes, Jan 28 '11 at 21:24

Here is a false belief I had. Let $f:X \to Y$ be a map of topological spaces having the property that for every finite CW complex $K$, the induced map $f_{\ast}:[K,X] \to [K,Y]$, on unpointed homotopy classes of maps, is a bijection. Then $f$ is a weak homotopy equivalence (that is, it induces isomorphisms on all homotopy groups relative to all basepoints). A counterexample is given by the stabilization map $B \Sigma_{\infty}\xrightarrow{+1} B \Sigma_{\infty}$, which is not an isomorphism on $\pi_1$.

Coordinates on a manifold do not have an immediate metric meaning. Until becoming familiar with differential geometry one tends to think they do. (Einstein wrote that he took seven years to free himself from this idea.)

For example, linear control theory is for the most part metric, with variables in $\mathbb{R}^n$. When moving away from linear control theory, variables are represented as coordinates on a manifold. Nevertheless, much of the literature tends either to abandon metric notions altogether, or to keep using a Euclidean metric even though it is no longer very useful.

Here are various examples (I hope that some of them weren't already mentioned):
1. If a space $X$ has two different norms $\| \cdot \|_i, i=1,2$ such that $\| \cdot \|_1 \leq \| \cdot \|_2$, then the completion with respect to $\| \cdot \|_1$ is contained in the completion with respect to $\| \cdot \|_2$.
2. If $M_1,M_2$ are isomorphic modules and $N_1,N_2$ are isomorphic submodules, then $M_1/N_1$ and $M_2/N_2$ are isomorphic.
3. If $A,B$ are subsets of topological spaces $X,Y$ (resp.) and $A,B$ are homeomorphic, then the closures $\overline{A}$ and $\overline{B}$ are also homeomorphic.
4. The standard construction of adjoining a unit to a Banach algebra $A$ yields nothing new if $A$ was already unital.
5. The phrase "a function is almost everywhere continuous" means the same as "the function is almost everywhere equal to a continuous function".
6. Suppose you are trying to prove that some function space $F$ is complete (say the functions are defined on $X$ and real valued): you take a Cauchy sequence $\{f_n\}_n$ and prove that for each point $x \in X$ the sequence $\{f_n(x)\}_n$ is Cauchy. Then from the completeness of $\mathbb{R}$ you obtain a function $f$. The false belief is that it is now enough to show that $f$ belongs to $F$.
7. If you have an ascending family $\{A_i\}_i$, then to obtain its union $\bigcup_{i}A_i$ it is enough to take some countable subfamily.
8. A convergent net $\{x_i\}_i$ in a metric space is bounded, and the set $\{x_i\}_i \cup \{x\}$ is compact (where $x$ is the limit).
9. If $D$ is an open dense subset of a topological space $X$, then $\operatorname{card} D = \operatorname{card} X$.

Well, it is true that every vector space has a dual space, even $L^{1/2}$... and it is even true that every topological vector space has a continuous dual space... What you mean is that it is not true that every topological vector space has a non-trivial continuous dual space (or, that the continuous dual of a topological vector space does not necessarily separate points)
–
Mariano Suárez-Alvarez♦, Jul 7 '10 at 18:54

Fans (related to the one about polytopes written above): that all convex cones are rational, i.e. one would expect that a ray would eventually hit a point of the lattice. It is obviously not true: just take the one-dimensional cone generated by $(1,\sqrt{2})$. A similar one was thinking that if I rotate the cone a bit, I can always make it rational.
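A numerical illustration (the real argument is simply the irrationality of $\sqrt{2}$; the scan below, an editorial Python sketch, is only suggestive):

```python
import math

# The ray through (1, sqrt(2)) misses every nonzero lattice point:
# (q, q*sqrt(2)) integral would force sqrt(2) = p/q to be rational.
# Scan how close q*sqrt(2) ever gets to an integer for small q.
closest = min(abs(q * math.sqrt(2) - round(q * math.sqrt(2)))
              for q in range(1, 100000))
print(closest)  # small (continued-fraction convergents of sqrt(2)) but never 0
assert closest > 0
```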

"It cannot be shown without some form of AC that the union (or disjoint union) of countably many countable sets is countable. I have a countably infinite set X of countably infinite sets. Therefore, the union of X cannot be shown to be countable without Choice."

The fallacy is that in many cases of interest, it is possible to exhibit, uniformly, an explicit counting of every element of X. In such a case a counting of the union of X by antidiagonals is easily constructed. The usual counting of the rationals is an example of this.
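For the rationals this antidiagonal counting is completely explicit (an editorial Python sketch), which is exactly why no choice is needed: each "row" is enumerated by a fixed rule, uniformly in its index.

```python
from fractions import Fraction

# Count the positive rationals by antidiagonals: list the pairs (p, q)
# with p + q = 2, 3, 4, ... as the fraction p/(s-p), skipping duplicates.
def positive_rationals():
    seen = set()
    s = 2
    while True:
        for p in range(1, s):
            r = Fraction(p, s - p)
            if r not in seen:
                seen.add(r)
                yield r
        s += 1

gen = positive_rationals()
first = [next(gen) for _ in range(10)]
print(first)  # the values 1, 1/2, 2, 1/3, 3, 1/4, 2/3, 3/2, 4, 1/5
```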

I think this may even be an example of a more general phenomenon of "people think AC is necessary for a certain construction, but in fact it turns out not to be necessary for the example they have in mind". For example, AC is necessary to find a maximal ideal in an arbitrary ring ... but it isn't if you're prepared to assume the ring is Noetherian.

If "Noetherian" is defined by the ascending chain condition or by requiring all ideals to be finitely generated, then in order to deduce the existence of maximal ideals, you still need a weak form of the axiom of choice. The usual argument uses the axiom of dependent choice. (Of course, if you define "Noetherian" to mean that every set of ideals has a maximal element, then deducing the existence of maximal ideals is a choiceless triviality.) A good reference is "Six impossible rings" by Wilfrid Hodges (J. Algebra 31 (1974) 218-244).
–
Andreas Blass, Oct 22 '10 at 15:29

I'd love to have a reference to a procedure for calculating the geometric genus and arithmetic genus of surfaces like this, because they are rational if and only if both these quantities are zero, and for other cubic surfaces that interest me it would save a lot of fruitless hacking around trying to find a rational solution that probably doesn't exist! Are there any symbolic algebra packages that can do this?

I mean for example is $ x y (x y + 1) (x + y) = z^2 $ rational? I'm almost sure it isn't; but how can one be sure?

Suppose $\kappa$ is an $\aleph$ number; then $AC_\kappa$ is equivalent to $W_\kappa$, namely: the product of $\kappa$ many nonempty sets is non-empty ($AC_\kappa$) if and only if every set either has cardinality less than $\kappa$ or has a subset of cardinality $\kappa$ ($W_\kappa$).

In fact this is only true if you assume full $AC$, and $(\forall \kappa)\, AC_\kappa$ doesn't even imply $W_{\aleph_1}$; I was truly shocked.

Furthermore, $W_\kappa$ doesn't even imply $AC_\kappa$ in most cases.

The strongest psychological implication is that most people actually think of the well-ordering principle as the "correct form" of choice, when it is actually Dependent Choice (limited to $\kappa$, or unbounded) which is the "proper" form; that is, $DC_\kappa$ implies both $AC_\kappa$ and $W_\kappa$.

@Thierry: For the past couple of weeks I have spent a lot of time considering models without choice; not only did I hold that misconception, but not once did anyone correct me about it - grad students and professors alike.
–
Asaf Karagila, Apr 17 '11 at 6:09

As a sequel to this famous answer on $\dim(U+V+W)$, the following inequality is not true for all $n \ge 4$:
$$\dim\Big(\sum_{i = 1}^{n} U_i\Big) \le \sum_{r=1}^{n} (-1)^{r+1} \sum_{i_1 < i_2 < \dots < i_r} \dim\Big(\bigcap_{s=1}^{r}U_{i_s}\Big)$$
Darij Grinberg has found a counterexample (see this post).
$$\dim(\sum_{i = 1}^{n} U_i) \le \sum_{r=1}^{n} (-1)^{r+1} \sum_{i_1 < i_2 < \dots < i_r} \dim(\bigcap_{s=1}^{r}U_{i_s})$$Darij Grinberg has found a counter-example (see this post).