The other immediate impression I got from the book was the sheer weight (physical and otherwise – the book comprises 1034 pages) of mathematics that is out there, much of which I still only have a very partial grasp of at best (see also Einstein’s famous quote on the subject). But the book also demonstrates that mathematics, while large, is at least connected (and reasonably bounded in diameter, modulo a small exceptional set). I myself certainly plan to use this book as a first reference the next time I need to look up some mathematical theory or concept that I haven’t had occasion to really use much before.

Given that I have been heavily involved in certain parts of this project, I will not review the book fully here – I am sure that will be done more objectively elsewhere – but comments on the book by other readers are more than welcome here.

I’ve saved this article for last, in part because it ties in well with my upcoming course on Perelman’s proof which will start in a few weeks (details to follow soon).

The last external article for the PCM that I would like to point out here is Brian Osserman‘s article on the Weil conjectures, which include the “Riemann hypothesis over finite fields” that was famously solved by Deligne. These (now solved) conjectures, which among other things give some quite precise control on the number of points of an algebraic variety over a finite field, were (and continue to be) a major motivating force behind much of modern arithmetic and algebraic geometry.
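To give a flavour of the type of control these conjectures provide: a special case (the Riemann hypothesis for curves, established by Weil himself) asserts that a smooth projective curve $C$ of genus $g$ over a finite field $\mathbb{F}_q$ obeys the bound

$$ \left| \#C(\mathbb{F}_q) - (q+1) \right| \leq 2g\sqrt{q}, $$

so the number of points always stays within $2g\sqrt{q}$ of $q+1$, the point count of the projective line.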

My penultimate article for my PCM series is a very short one, on “Hamiltonians“. The PCM has a number of short articles to define terms which occur frequently in the longer articles, but are not substantive enough topics by themselves to warrant a full-length treatment. One of these is the term “Hamiltonian”, which is used in all the standard types of physical mechanics (classical or quantum, microscopic or statistical) to describe the total energy of a system. It is a remarkable feature of the laws of physics that this single object (which is a scalar-valued function in classical physics, and a self-adjoint operator in quantum mechanics) suffices to describe the entire dynamics of a system, although from a mathematical perspective it is not always easy to read off all the analytic aspects of this dynamics just from the form of the Hamiltonian.
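To illustrate the classical half of this claim, here is a quick numerical sketch of my own (not taken from the article): once the Hamiltonian $H(q,p)$ is specified, Hamilton's equations $\dot q = \partial H/\partial p$, $\dot p = -\partial H/\partial q$ determine the entire evolution, and conserve $H$ itself along the flow. The harmonic oscillator and the symplectic Euler stepper below are purely illustrative choices.

```python
# A sketch (my own illustration): the Hamiltonian alone determines the
# classical dynamics, via Hamilton's equations
#     dq/dt =  dH/dp,    dp/dt = -dH/dq.
# Example: harmonic oscillator H(q, p) = p^2/2 + q^2/2,
# integrated with the symplectic Euler method.

def hamiltonian(q, p):
    return 0.5 * p * p + 0.5 * q * q

def evolve(q, p, dt, steps):
    for _ in range(steps):
        p -= dt * q   # dp/dt = -dH/dq = -q
        q += dt * p   # dq/dt =  dH/dp =  p   (uses updated p: symplectic Euler)
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = evolve(q0, p0, dt=1e-3, steps=10_000)

# The total energy H is (very nearly) conserved along the flow.
print(hamiltonian(q0, p0), hamiltonian(q1, p1))
```

The symplectic integrator is chosen here precisely because it respects the Hamiltonian structure; a naive Euler step would let the energy drift.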

For this post, I would also like to highlight an article of my good friend Andrew Granville on one of my own favorite topics, “Analytic number theory“, focusing in particular on the classical problem of understanding the distribution of the primes, via such analytic tools as zeta functions and L-functions, sieve theory, and the circle method.

I’ll start today with my article on “Function spaces“. Just as the analysis of numerical quantities relies heavily on the concept of magnitude or absolute value to measure the size of such quantities, or the extent to which two such quantities are close to each other, the analysis of functions relies on the concept of a norm to measure various “sizes” of such functions, as well as the extent to which two functions resemble each other. But while numbers mainly have just one notion of magnitude (not counting the p-adic valuations, which are of importance in number theory), functions have a wide variety of such magnitudes, such as “height” ($L^\infty$ norm), “mass” ($L^1$ norm), “mean square” or “energy” ($L^2$ or $H^1$ norms), “slope” (Lipschitz or $C^{0,1}$ norms), and so forth. In modern mathematics, we use the framework of function spaces to understand the properties of functions and their magnitudes; they provide a precise and rigorous way to formalise such “fuzzy” notions as a function being tall, thin, flat, smooth, oscillating, etc. In this article I focus primarily on the analytic aspects of these function spaces (inequalities, interpolation, etc.), leaving aside the algebraic aspects and the connections with mathematical physics.
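These various magnitudes are easy to see numerically. Here is a toy computation of my own (not from the article) for the sampled function $f(x) = \sin(\pi x)$ on $[0,1]$, approximating each norm by a Riemann sum:

```python
# Illustrative sketch: the different "magnitudes" of f(x) = sin(pi x) on [0,1],
# approximated from N+1 equally spaced samples.
import math

N = 10_000
xs = [i / N for i in range(N + 1)]
fs = [math.sin(math.pi * x) for x in xs]
dx = 1.0 / N

height = max(abs(v) for v in fs)                          # sup (L^inf) norm: 1
mass = sum(abs(v) for v in fs) * dx                       # L^1 norm: 2/pi
energy = math.sqrt(sum(v * v for v in fs) * dx)           # L^2 norm: sqrt(1/2)
slope = max(abs(fs[i + 1] - fs[i]) / dx for i in range(N))  # Lipschitz const: pi

print(height, mass, energy, slope)
```

Each quantity captures a different feature: a tall spike can have small mass but large height, while a rapidly oscillating function can have moderate energy but enormous slope.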

The Companion has several short articles describing specific landmark achievements in mathematics. For instance, here is Peter Cameron‘s short article on “Gödel’s theorem“, on what is arguably one of the most popularised (and most misunderstood) theorems in all of mathematics.

I’m continuing my series of articles for the Princeton Companion to Mathematics ahead of the winter quarter here at UCLA (during which I expect this blog to become dominated by ergodic theory posts) with my article on generalised solutions to PDE. (I have three more PCM articles to release here, but they will have to wait until spring break.) This article ties in to some extent with my previous PCM article on distributions, because distributional solutions are one good example of a “generalised solution” or “weak solution” to a PDE. They are not the only such notion though; one also has variational and stationary solutions, viscosity solutions, penalised solutions, solutions outside of a singular set, and so forth. These notions of generalised solution are necessary when dealing with PDE that can exhibit singularities, shocks, oscillations, or other non-smooth behaviour. Also, in the foundational existence theory for many PDE, it has often been profitable to first construct a fairly weak solution and then use additional arguments to upgrade that solution to a stronger solution (e.g. a “classical” or “smooth” solution), rather than attempt to construct the stronger solution directly. On the other hand, there is a tradeoff between how easy it is to construct a weak solution, and how easy it is to upgrade that solution; solution concepts which are so weak that they cannot be upgraded at all seem to be significantly less useful in the subject, even if (or especially if) existence of such solutions is a near-triviality. [This is one manifestation of the somewhat whimsical “law of conservation of difficulty”: in order to prove any genuinely non-trivial result, some hard work has to be done somewhere. In particular, it is often the case that the behaviour of PDE depends quite sensitively on the exact structure of that PDE (e.g. on the sign of various key terms), and so any result that captures such behaviour must, at some point, exploit that structure in a non-trivial manner; one usually cannot get very far in PDE by relying just on general-purpose theorems that apply to all PDE regardless of structure.]
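To make the notion of a weak solution concrete in the simplest setting (a standard formulation, not a quote from the article): a locally integrable $u$ is a weak solution of the scalar conservation law $\partial_t u + \partial_x f(u) = 0$ with initial data $u_0$ if

$$ \int_0^\infty \int_{\mathbb{R}} \Big( u\, \partial_t \varphi + f(u)\, \partial_x \varphi \Big)\, dx\, dt + \int_{\mathbb{R}} u_0(x)\, \varphi(0,x)\, dx = 0 $$

for every smooth compactly supported test function $\varphi$. The derivatives have been moved onto $\varphi$ by integration by parts, so $u$ itself need not be differentiable; this is exactly what allows shock solutions to qualify.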

The Companion also has a section on history of mathematics; for instance, here is Leo Corry‘s PCM article “The development of the idea of proof“, covering the period from Euclid to Frege. We take for granted nowadays that we have precise, rigorous, and standard frameworks for proving things in set theory, number theory, geometry, analysis, probability, etc., but it is worth remembering that for the majority of the history of mathematics, this was not completely the case; even Euclid’s axiomatic approach to geometry contained some implicit assumptions about topology, order, and sets which were not fully formalised until the work of Hilbert in the modern era. (Even nowadays, there are a few parts of mathematics, such as mathematical quantum field theory, which still do not have a completely satisfactory formalisation, though hopefully the situation will improve in the future.)

I’m continuing my series of articles for the Princeton Companion to Mathematics through the winter break with my article on distributions. These “generalised functions” can be viewed either as limits of actual functions, or as linear functionals on a suitable space of “test” functions. Having such a space of virtual functions to work in is very convenient for several reasons; in particular, it allows one to perform various algebraic manipulations while avoiding (or at least deferring) technical analytical issues, such as how to differentiate a non-differentiable function. You can also find a more recent draft of my article at the PCM web site (username Guest, password PCM).
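As a concrete example of the kind of algebraic manipulation this permits (a standard computation, included here for illustration): the derivative of a distribution $T$ is defined by duality, $\langle T', \varphi \rangle := -\langle T, \varphi' \rangle$ for test functions $\varphi$, which agrees with integration by parts when $T$ happens to be a differentiable function. Applying this to the (non-differentiable) Heaviside step function $H$ gives

$$ \langle H', \varphi \rangle = -\int_{-\infty}^\infty H(x)\, \varphi'(x)\, dx = -\int_0^\infty \varphi'(x)\, dx = \varphi(0) = \langle \delta, \varphi \rangle, $$

so $H' = \delta$, the Dirac delta: the jump discontinuity becomes a point mass in the derivative.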

Today I will highlight Carl Pomerance‘s informative PCM article on “Computational number theory“, which in particular focuses on topics such as primality testing and factoring, which are of major importance in modern cryptography. Interestingly, sieve methods play a critical role in making modern factoring arguments (such as the quadratic sieve and number field sieve) practical even for rather large numbers, although the use of sieves here is rather different from the use of sieves in additive prime number theory.
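To give a small taste of primality testing, here is a sketch of my own (not code from Carl's article) of the Miller-Rabin test, which detects compositeness via Fermat-type congruences; with the fixed witness set below, the test is known to be correct for all $n < 2^{64}$ (for larger $n$ one would use random witnesses and accept a tiny error probability).

```python
# A sketch of the Miller-Rabin primality test (my illustration).
# With this fixed witness set the test is known to be deterministic
# for all n < 2**64; it runs in polynomial time, in stark contrast
# to factoring, which remains hard.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n in witnesses:
        return True
    if any(n % p == 0 for p in witnesses):
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses the compositeness of n
    return True

print([p for p in range(50) if is_prime(p)])
```

Note the asymmetry the article dwells on: this test certifies that a number is composite without exhibiting any factor, which is why factoring requires the heavier sieve machinery.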

I would also like to highlight Doron Zeilberger‘s PCM article “Enumerative and Algebraic combinatorics“. This article describes the art of usefully counting the number of objects of a given type exactly; this subject has a rather algebraic flavour to it, in contrast with asymptotic combinatorics, which is more concerned with computing the order of magnitude of the number of objects in a class. The two subjects complement each other; for instance, in my own work, I have found that enumerative and other algebraic methods tend to be useful for controlling “main terms” in a given expression, while asymptotic and other analytic methods tend to be good at controlling “error terms”.
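To illustrate the "exact counting" flavour with a toy example of my own (not drawn from Zeilberger's article): the Catalan numbers $C_n$, which count balanced bracketings, triangulations of polygons, and many other families, satisfy the convolution recurrence $C_{n+1} = \sum_{i=0}^n C_i C_{n-i}$ and admit the exact closed form $C_n = \binom{2n}{n}/(n+1)$; the point of the subject is precisely such exact answers.

```python
# Exact enumeration, illustrated with the Catalan numbers.
from math import comb

def catalan_by_recurrence(N):
    """Compute C_0, ..., C_N via the convolution recurrence."""
    C = [1]  # C_0 = 1: the empty bracketing
    for n in range(N):
        C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
    return C

C = catalan_by_recurrence(10)
print(C)

# The recurrence agrees with the exact closed form binom(2n, n)/(n+1).
assert all(C[n] == comb(2 * n, n) // (n + 1) for n in range(11))
```

An asymptotic combinatorialist would instead be content with the growth rate $C_n \sim 4^n / (n^{3/2}\sqrt{\pi})$; the two answers serve different purposes.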

Phase space is also used in pure mathematics, where it is used to simultaneously describe position (or time) and frequency; thus the term “time-frequency analysis” is sometimes used to describe phase space-based methods in analysis. The counterpart of classical mechanics is then symplectic geometry and Hamiltonian ODE, while the counterpart of quantum mechanics is the theory of linear differential and pseudodifferential operators. The former is essentially the “high-frequency limit” of the latter; this can be made more precise using the techniques of microlocal analysis, semi-classical analysis, and geometric quantisation.
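One standard way to make the "high-frequency limit" assertion precise (with the usual sign conventions for the Poisson bracket; this formula is a standard semiclassical fact rather than a quote from the text) is the correspondence between commutators and Poisson brackets: for suitable symbols $a, b$ on phase space,

$$ [\,\mathrm{Op}_\hbar(a),\ \mathrm{Op}_\hbar(b)\,] = i\hbar\, \mathrm{Op}_\hbar(\{a,b\}) + O(\hbar^2), $$

so that as $\hbar \to 0$ the quantum commutator degenerates (after dividing by $i\hbar$) to the classical Poisson bracket $\{a,b\}$ that drives Hamiltonian ODE.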

As usual, I will highlight another author’s PCM article in this post, this one being Frank Kelly‘s article “The mathematics of traffic in networks“, a subject which, as a resident of Los Angeles, I can relate to on a personal level :-) . Frank’s article also discusses in detail Braess’s paradox, which is the rather unintuitive fact that adding extra capacity to a network can sometimes increase the overall delay in the network, by inadvertently redirecting more traffic through bottlenecks! If nothing else, this paradox demonstrates that the mathematics of traffic is non-trivial.
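The classic textbook instance of Braess's paradox can be worked out in a few lines (the specific numbers below are the standard illustrative example, not figures from Frank's article): 4000 drivers travel from S to T, with delays S→A of $x/100$ minutes ($x$ = drivers on that road), A→T of 45 minutes, S→B of 45 minutes, and B→T of $x/100$ minutes.

```python
# The classic Braess network, worked numerically (standard textbook
# example).  Delays:  S->A: x/100 min,  A->T: 45 min,
#                     S->B: 45 min,    B->T: x/100 min.
N = 4000  # drivers

# Without an extra link, the unique equilibrium splits traffic evenly:
# each route costs N/2/100 + 45 minutes.
delay_without = (N / 2) / 100 + 45       # 20 + 45 = 65 minutes

# Now add a free (zero-delay) shortcut A->B.  The route S->A->B->T
# costs at most 40 + 0 + 40 < 65, and indeed always beats the
# alternatives (since x/100 <= 40 < 45), so *every* driver takes it.
delay_with = N / 100 + 0 + N / 100       # 40 + 0 + 40 = 80 minutes

print(delay_without, delay_with)
```

So adding capacity raises everyone's commute from 65 to 80 minutes: the new equilibrium is self-enforcing (no single driver gains by deviating), yet strictly worse for all.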

I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on compactness and compactification. This is a fairly recent article for the PCM, which is now at the stage at which most of the specialised articles have been written, and the general articles on topics such as compactness are being finished up. The topic of this article is self-explanatory; it is a brief and non-technical introduction to the incredibly useful concept of compactness in topology, analysis, geometry, and other areas of mathematics, and to the closely related concept of a compactification, which allows one to rigorously take limits of what would otherwise be divergent sequences.
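As a toy example of the latter phenomenon: the sequence $x_n = n$ has no limit in $\mathbb{R}$, but in the one-point compactification $\mathbb{R} \cup \{\infty\}$ it converges to the added point $\infty$; more sophisticated compactifications (projective space, the Stone–Čech compactification, and so forth) play the same limit-supplying role in fancier settings.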

The PCM has an extremely broad scope, covering not just mathematics itself, but also the context that mathematics is placed in. To illustrate this, I will mention Michael Harris‘s essay for the Companion, “‘Why mathematics?’, you may ask”.

I’m continuing my series of articles for the Princeton Companion to Mathematics by uploading my article on the Fourier transform. Here, I chose to describe this transform as a means of decomposing general functions into more symmetric functions (such as sinusoids or plane waves), and to discuss a little bit how this transform is connected to differential operators such as the Laplacian. (This is of course only one of the many different uses of the Fourier transform, but again, with only five pages to work with, it’s hard to do justice to every single application. For instance, the connections with additive combinatorics are not covered at all.)
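Here is a little numerical sketch of my own (not in the article) of both points at once: the discrete Fourier transform decomposes a sampled signal into sinusoids, and in that basis the one-dimensional Laplacian $d^2/dx^2$ acts simply by multiplying the $k$-th mode by $-k^2$.

```python
# The DFT decomposes a signal into sinusoids, and diagonalises the
# Laplacian: d^2/dx^2 becomes multiplication by -k^2 on mode k.
import cmath
import math

N = 64
xs = [2 * math.pi * j / N for j in range(N)]
f = [math.sin(3 * x) for x in xs]

def dft(v):
    n = len(v)
    return [sum(v[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(c):
    n = len(c)
    return [sum(c[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

F = dft(f)
# Indices k >= N/2 represent the negative frequencies k - N.
freqs = [k if k < N // 2 else k - N for k in range(N)]

# Apply the Laplacian spectrally: multiply mode k by -k^2, invert.
lap_f = [z.real for z in idft([-(freqs[k] ** 2) * F[k] for k in range(N)])]

# Since (d^2/dx^2) sin(3x) = -9 sin(3x), the error should be tiny.
err = max(abs(lap_f[j] + 9 * math.sin(3 * xs[j])) for j in range(N))
print(err)
```

The naive $O(N^2)$ transform above is only for transparency; in practice one would use a fast Fourier transform, but the diagonalisation property is the same.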

On the official web site of the Companion (which you can access with the user name “Guest” and password “PCM”), there is a more polished version of the same article, after it had gone through a few rounds of the editing process.

I’ll also point out David Ben-Zvi‘s Companion article on “moduli spaces“. This concept is deceptively simple – a space whose points are themselves spaces, or “representatives” or “equivalence classes” of such spaces – but it leads to the “correct” way of thinking about many geometric and algebraic objects, and more importantly about families of such objects, without drowning in a mess of coordinate charts and formulae which serve to obscure the underlying geometry.

For commenters

To enter LaTeX in comments, use $latex <Your LaTeX code>$ (without the < and > signs, of course; in fact, these signs should be avoided as they can cause formatting errors). See the about page for details and for the rest of the commenting policy.