It's quite a fascinating result that arises from considering commutators in groups. If $G$ is a group, its commutator subgroup $[G,G]$ is the subgroup of $G$ generated by all the commutators $[g,h] = ghg^{-1}h^{-1}$ of elements of $G$. It's easy to see that the commutator subgroup is normal. A group $G$ is said to be perfect if $G = [G,G]$.

So let's assume $G$ is perfect. This implies that every element of $G$ can be written as a product of commutators. But can every element of $G$ be written as a single commutator? That's really far from obvious. For example, take your favourite perfect group and an element in it: can you prove that this single element is a commutator? Not so easy, right?

In fact, we can define the commutator length of any $g\in G$ to be the minimal number of commutators needed to write $g$ as a product of commutators. If $g$ can't be written as a product of commutators at all, its commutator length is infinite.

The commutator width of a group is defined to be the supremum over commutator lengths of all the elements of $G$. (Note: I think this should just be called the commutator length of $G$ as well, but that's how the terminology ended up!)

It turns out that finding a perfect group $G$ with commutator width greater than one is quite tricky. In fact, the theorem proved in loc. cit. is:

Former Ore Conjecture/Now Theorem. If $G$ is a finite nonabelian simple group, then every element of $G$ is a commutator.
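For the smallest nonabelian simple group $A_5$, this can be checked directly by brute force. Here's a minimal sketch in Python (the representation and helper names are my own choices): permutations are tuples, and we verify that the set of single commutators is all of $A_5$.

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_even(p):
    # even permutations have an even number of inversions
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return inversions % 2 == 0

# A5 = even permutations of 5 points; it has 60 elements
A5 = [p for p in permutations(range(5)) if is_even(p)]

# all single commutators [g, h] = g h g^{-1} h^{-1}
comms = {compose(compose(g, h), compose(inverse(g), inverse(h)))
         for g in A5 for h in A5}

assert comms == set(A5)  # every element of A5 is a single commutator
```

Of course, this is only the tiniest case of the theorem; the point of the proof's difficulty is that it covers every finite nonabelian simple group at once.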

That's pretty cool, though the proof is very long. That's not surprising, since it is a theorem about all finite nonabelian simple groups. What's perhaps even more surprising is that there are examples of finitely generated infinite simple groups containing elements that are not commutators. In fact, there even exist such $G$ with infinite commutator width, as shown in Alexey Muranov's paper:

We say that a group $G$ is residually finite if for each $g\in G$ that is not equal to the identity of $G$, there exists a finite group $F$ and a group homomorphism
$$\varphi:G\to F$$ such that $\varphi(g)$ is not the identity of $F$.

The definition does not change if we require that $\varphi$ be surjective. Therefore, a group $G$ is residually finite if and only if for each $g\in G$ that is not the identity, there exists a finite index normal subgroup $N$ of $G$ such that $g\not\in N$.
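As a sanity check on the definition, take $G = \mathbb{Z}$ (written additively): for a nonzero integer $g$, reduction modulo $n = |g| + 1$ is a surjection onto the finite group $\mathbb{Z}/n$ that doesn't kill $g$. A toy sketch in Python (the function name is my own):

```python
def finite_quotient_witness(g):
    """For G = Z and g != 0, return a modulus n such that the
    quotient map Z -> Z/n sends g to a nonzero element."""
    assert g != 0
    n = abs(g) + 1        # g is never a multiple of |g| + 1
    assert g % n != 0     # so the image of g in Z/n is nonzero
    return n

print(finite_quotient_witness(6))   # → 7
print(finite_quotient_witness(-4))  # → 5
```

So $\mathbb{Z}$ is residually finite, and the witnessing quotients here are even cyclic.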

Hence, if $G$ is residually finite, then the intersection of all finite-index normal subgroups is trivial. The converse holds, too (why?).

The meat of the claimed proof of the Riemann hypothesis is in Atiyah's construction of the Todd map $T:\C\to \C$. It supposedly comes from the composition of two different isomorphisms
$$\C\xrightarrow{t_+} C(A)\xrightarrow{t^{-1}_{-}} \C$$ of the complex field $\C$ with $C(A)$, the center of a hyperfinite von Neumann factor $A$ of type $\mathrm{II}_1$. Understanding Atiyah's work boils down to understanding this Todd map, and therefore to understanding what is in the paper "The Fine Structure Constant".

Assuming there is a zero $b$ of the Riemann zeta function $\zeta(s)$ off the critical line, Atiyah defines a function
$$F(s) = T(1 +\zeta(s + b)) - 1.$$ The function $F$ satisfies $F(0) = 0$, because $\zeta(b) = 0$ and one of the properties of the Todd map $T$ is that $T(1) = 1$; moreover, $F$ is supposedly analytic. According to some of the basic properties of the Todd function, which is a polynomial on a closed rectangle containing this zero, this would imply that $F$ vanishes identically, and therefore that $\zeta$ is identically zero, which is the contradiction.

Now, I have very little understanding of von Neumann algebras, so I won't comment at all on the Todd map. I have no doubt that the experts will dissect this, because there's so much attention on it. Even assuming all the properties of the Todd function, I find the proof difficult to follow. For example, take the assumed zero $b$ off the critical line: I can't find where "$b$ is off the critical line" is even being used. In fact, it's hard to see where any of the basic properties of the zeta function are being used.

Well, this is strange indeed: according to this New Scientist article published today, the famous Sir Michael Atiyah is supposed to talk this Monday at the Heidelberg Laureate Forum. The topic: a proof of the Riemann hypothesis. The Riemann hypothesis states that the Riemann zeta function, defined by the analytic continuation of $\zeta(s) = \sum_{n=1}^\infty n^{-s}$, has nontrivial zeros only on the critical line, the line of complex numbers with real part $1/2$. Check out this MathWorld article for more details.

The Riemann hypothesis is considered by many to be the outstanding problem in mathematics. Many people have tried to prove it and failed.

When I was a student at McGill, I loved looking at the latest Springer texts in the now-nonexistent Rosenthall library. So, I thought that I'd list some of the cool-looking titles that have come out in 2018:

Walter Dittrich, Reassessing Riemann's Paper: This book is an analysis of Riemann's paper "On the Number of Primes Less Than a Given Magnitude", and could be a great historical starting point into the subject.

Valérie Berthé, Michel Rigo (editors), Sequences, Groups, and Number Theory: Now this looks interesting! It is a curious volume of lectures on the interactions between words (as in formal languages and presented groups), number theory, and dynamical systems.

Let $R$ be a commutative ring and $M_n(R)$ denote the ring of $n\times n$ matrices with coefficients in $R$. For $X,Y\in M_n(R)$, their commutator $[X,Y]$ is defined by
$$[X,Y] := XY - YX.$$ The trace of any matrix is defined as the sum of its diagonal entries.

If $X$ and $Y$ are any matrices, what is the trace of $[X,Y]$? It's zero! That's because the trace of $XY$ is the same as the trace of $YX$. Therefore:

Any commutator has trace zero.
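To see this concretely, here's a quick check in pure Python with two arbitrary integer matrices (the matrices and helper names are my own choices):

```python
# two arbitrary 3x3 integer matrices
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Y = [[0, 1, 0], [2, 0, 3], [1, 1, 1]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

XY, YX = matmul(X, Y), matmul(Y, X)
assert trace(XY) == trace(YX)  # cyclicity of the trace

commutator = [[XY[i][j] - YX[i][j] for j in range(3)] for i in range(3)]
assert trace(commutator) == 0  # so [X, Y] has trace zero
```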

What about the converse? Is any trace zero matrix also a commutator? In other words, given a trace zero matrix $Z\in M_n(R)$, can we find matrices $X$ and $Y$ such that $Z = [X,Y]$? Albert and Muckenhoupt proved that you can, assuming that $R$ is a field.

What happens if you also want $X$ and $Y$ to have trace zero?

Good question. In general, this is not possible. For example, let's consider the simplest field of all, the field with two elements, denoted by $\F_2$. Okay, it's actually debatable whether $\F_2$ really is the simplest field, because so many problems happen in characteristic two. Take the problem we've been considering: over $\F_2$, if $X$ and $Y$ are $2\times 2$ matrices of trace zero, then $[X,Y]$ will have zero off-diagonal entries. So for example, the matrix
$$\begin{pmatrix}1 & 1\\1 & 1\end{pmatrix}\in M_2(\F_2)$$ cannot be written as a commutator of two matrices with trace zero.
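A brute-force check of this characteristic-two phenomenon is easy, since there are only $2^3 = 8$ trace-zero $2\times 2$ matrices over $\F_2$. A minimal sketch in Python (the representation as nested tuples and the helper names are mine):

```python
from itertools import product

# all 2x2 trace-zero matrices over F_2 have the form [[a, b], [c, a]]
trace_zero = [((a, b), (c, a)) for a, b, c in product(range(2), repeat=3)]

def matmul(A, B):
    # 2x2 matrix product with entries reduced mod 2
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def sub(A, B):
    # entrywise A - B mod 2 (the same as A + B in characteristic two)
    return tuple(tuple((A[i][j] - B[i][j]) % 2 for j in range(2))
                 for i in range(2))

comms = {sub(matmul(X, Y), matmul(Y, X))
         for X in trace_zero for Y in trace_zero}

# every such commutator is diagonal...
assert all(M[0][1] == 0 and M[1][0] == 0 for M in comms)
# ...so the all-ones matrix is not a commutator of trace-zero matrices
assert ((1, 1), (1, 1)) not in comms
```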

It seems that characteristic two is the only obstruction, in the case of $2\times 2$ matrices. In fact, Alexander Stasinski proved in his paper [1] the following:

Theorem. Let $R$ be a principal ideal domain. If $n\geq 3$ then any matrix in $M_n(R)$ of trace zero can be written as the commutator of two matrices in $M_n(R)$, each having trace zero. The same holds for $n=2$ if two is invertible in $R$.

Notice how the characteristic two problem only happens in the $2\times 2$ case.

Some believe that if your main profession is pure math research, you don't need a scientific calculator. That's simply not true. Although I don't use one nearly as much as when I was an undergrad, I still need a calculator, and the only one I'm willing to use is the Casio FX-991MS.

This classic book explains Galois theory but for commutative rings. Even though there are many more technicalities in the general commutative ring case compared to fields, I actually found the approach in this book more natural than the Galois theory for fields that I learned in undergrad algebra. There are some exercises and this book is easy to read.

I happened to come across a 1993 opinion piece, Theorems for a price: Tomorrow's semi-rigorous mathematical culture by Doron Zeilberger. I think it's a rather fascinating document as it questions the future of mathematical proof. Its basic thesis is that some time in the future of mathematics, the expectation of proof will move to a "semi-rigorous" state where mathematical statements will be given probabilities of being true.

It helps to clarify this with an example even simpler than the one in Zeilberger's paper. Take the arithmetic-geometric mean inequality for two variables $a,b\geq 0$. It says that
$$\frac{a + b}{2} \geq \sqrt{ab}.$$ This simple inequality is just a rearrangement of $(a - b)^2 \geq 0$. For simplicity, let's say that $a,b\in [0,1]$. Instead of actually proving this inequality, we could generate uniform random numbers in $[0,1]$ and see if the inequality actually holds for them. So if I test this inequality 1000 times, of course I will get that it works 1000 times.
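The experiment itself is just a few lines of Python (the seed, trial count, and tolerance are arbitrary choices of mine):

```python
import math
import random

random.seed(42)  # arbitrary seed, for reproducibility
failures = 0
for _ in range(1000):
    a, b = random.random(), random.random()  # uniform samples in [0, 1)
    # tiny tolerance guards against floating-point rounding when a ≈ b
    if (a + b) / 2 < math.sqrt(a * b) - 1e-9:
        failures += 1

print(failures)  # → 0: the inequality held in every trial
```

Zeilberger's provocation is that for some statements, this kind of overwhelming numerical evidence might one day be accepted in place of a full proof.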

I've been talking a little about abelian categories these days. That's because I've been going over Weibel's An Introduction to Homological Algebra. It's a book I read before, and I still feel pretty confident about the material. This time, though, I think I'm going to explore a few different paths that I haven't really given much thought to before, such as diagram proofs in abelian categories, group cohomology (more in-depth), and Hochschild homology.