Numbers are a lot of fun. They can start conversations—the interesting number paradox is a party favourite: every number must be interesting because the first number that wasn't would be very interesting! Of course, in the wrong company they can just as easily end conversations.

The art here is my attempt to transform famous numbers in mathematics into pretty visual forms, to start some of these conversations, and to awaken emotions for mathematics other than dislike and confusion.

the numbers π, φ and e

The consequence of the interesting number paradox is that all numbers are interesting. But some are more interesting than others—how Orwellian!

All animals are equal, but some animals are more equal than others.—George Orwell (Animal Farm)

Numbers such as `\pi` (or `\tau` if you're a revolutionary), `\phi`, `e`, `i = \sqrt{-1}`, and `0` have captivated the imagination. Chances are at least one of them appears in the next physics equation you come across.

Of the three numbers, `\pi` (3.14159265...) is the most well known. It is the ratio of a circle's circumference to its diameter (`c = \pi d`) and appears in the formula for the area of a circle (`A = \pi r^2`).

▲ The numbers `\pi`, `\phi` and `e` nearly form a right-angled triangle.

The second number, the golden ratio `\phi` (1.61803398...), also has a geometric definition: it is the ratio `a/b` for which `(a+b)/a = a/b`, equivalently the positive root of `x^2 = x + 1`. Unlike `\pi` and `e`, `\phi` is not transcendental—it is irrational but algebraic.

The last of the three numbers, `e` (2.71828182...), is Euler's number, also known as the base of the natural logarithm. It, too, can be defined geometrically—it is the unique real number for which the function `f(x) = e^x` has a tangent of slope 1 at `x = 0`. Like `\pi`, `e` appears throughout mathematics. For example, `e` is central to the expression for the normal distribution as well as the definition of entropy. And if you've ever heard someone talking about log plots ... well, there's `e` again!

Two of these numbers can be seen together in mathematics' most beautiful equation, Euler's identity: `e^{i\pi} = -1`. The tauists would argue that `e^{i\tau} = 1` is even prettier.
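Both identities can be checked numerically—a quick floating-point sanity check, not a proof:

```python
import cmath
import math

# Euler's identity: e^{i*pi} = -1, and the tau form: e^{i*tau} = 1
z_pi = cmath.exp(1j * math.pi)
z_tau = cmath.exp(1j * math.tau)

# Both differ from the exact values only by floating-point error (~1e-16)
print(abs(z_pi - (-1)))
print(abs(z_tau - 1))
```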

did you see something special?

These three numbers have the curious property that they are almost Pythagorean: if they are made the sides of a triangle, the triangle is nearly right-angled—the largest angle, opposite `\pi`, is 89.1°.
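You can verify this with the law of cosines, which gives the angle opposite the longest side:

```python
import math

# Treat phi, e and pi as the sides of a triangle and use the law of
# cosines to find the angle opposite the longest side (pi):
#   cos(C) = (a^2 + b^2 - c^2) / (2ab)
a = (1 + math.sqrt(5)) / 2  # phi
b = math.e
c = math.pi

C = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(round(C, 1))  # 89.1 — nearly a right angle
```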

is π normal?

It is not yet known whether the digits of π are normal—determining this is an important open problem in mathematics. In other words, is the distribution of digit frequencies in π uniform? Does each of the digits 0–9 appear 1/10th of the time, does every two-digit string appear 1/100th of the time, and so on for every finite-length string¹?

¹ One interesting finite-length string is the 6-digit Feynman Point (...999999...), which appears at digit 762 in π. The Feynman Point was the subject of the 2014 `\pi` Day art.

This question can be posed for different representations of π—in different bases. The distribution frequencies of 1/10, 1/100, and so on above refer to the representation of π in base 10, the way we're used to seeing numbers. However, if π is written in binary (base 2), would the digits of 11.00100100001111... be uniformly distributed? The table below shows the first several digits of π in each base from 2 to 16, as well as the natural logarithm base, `e`.
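The base-10 question can be explored empirically. Here is a minimal sketch, using only the standard library, that computes decimal digits of π with Machin's formula and tallies their frequencies (the helper names are my own):

```python
from collections import Counter

def pi_digits(n):
    """Return the first n decimal digits of pi after the decimal point,
    computed with Machin's formula using scaled integer arithmetic."""
    scale = 10 ** (n + 10)  # 10 guard digits absorb truncation error

    def arctan_inv(x):
        # arctan(1/x) * scale from the alternating Taylor series
        power = total = scale // x
        k = 1
        while power:
            power //= x * x
            term = power // (2 * k + 1)
            total += -term if k % 2 else term
            k += 1
        return total

    # Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi_scaled)[1:n + 1]  # drop the leading "3"

digits = pi_digits(1000)
freq = Counter(digits)  # each digit should appear roughly 100 times
```

In the first 1,000 digits the counts hover around 100 per digit, consistent with (but of course not proof of) normality.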

Because the digits of these numbers are essentially random (normality remains a conjecture), the essence of the art is based on randomness.

A vexing consequence of π being normal would be that, since its digits never terminate, π would contain all finite patterns. Any word you might think of, encoded into numbers in any way, would appear infinitely many times. The entire works of Shakespeare, too—as well as all his plays in which each sentence is reversed, or has one spelling mistake, or two! You would even eventually find any finite prefix of π within π, though only with infinite patience.

This is why any attempt to use the digits of `\pi` to infer meaning about anything is ridiculous. The exact opposite of whatever you find is also in `\pi`.

Stoneham's constant

A number can be normal in one base but not in another. For example, Stoneham's constant, `\alpha_{2,3} = \sum_{n \ge 1} 1/(3^n 2^{3^n})`, is provably normal in base 2 but provably not normal in base 6.

patterns in the art

Some of the numerical art reveals interesting and unexpected observations. For example, the sequence 999999, which appears at digit 762 of π and is called the Feynman Point. Or that if you calculate π to 13,099,586 digits, you will find love.

Data in small multiples can vary in range, noise level, and trend. Gregor McInerny and I show how you can deal with this by cropping and scaling the multiples to a different range to emphasize relative changes, while preserving the context of the full data range to show absolute changes.

The Jurassic World Creation Lab webpage shows how one might create a dinosaur from a sample of DNA: first extract, sequence, assemble, and fill in the gaps in the DNA, then incubate in an egg and wait.

▲ We can't get dinosaur genomics right, but we can get it less wrong. (a) Corn genome used in the Jurassic World Creation Lab website. Image is from the Science publication B73 Maize Genome: Complexity, Diversity, and Dynamics. Photo and composite by Universal Studios and Amblin Entertainment. (b) Random data on 8 chromosomes from the chicken genome resized to the triceratops genome size (3.2 Gb). Image by Martin Krzywinski. (c) Actual genome data for the lizard genome, UCSC anoCar2.0, May 2010. Image by Martin Krzywinski. Triceratops outline in (b,c) from Wikipedia.

With enough time, you'll grow your own brand new dinosaur. Or a stalk of corn ... with more teeth.

The original figure details the relationships between more than 100 sequenced epigenomes and genetic traits, including diseases like Crohn's and Alzheimer's. These relationships were shown as a heatmap in which each epigenome–trait cell depicted the P value associated with tissue-specific H3K4me1 epigenetic modification in regions of the genome associated with the trait.

As much as I distrust network diagrams, in this case it was the right way to show the data. The network was meticulously laid out by hand to draw attention to the layered groups of diseases and traits.

▲ Network diagram redesign of the heatmap for a select set of traits. Only relationships with –log P > 3.9 are displayed. Appears on Graphic Science page in June 2015 issue of Scientific American.

The bootstrap is a computational method that simulates new samples from observed data. These simulated samples can be used to determine how estimates from replicate experiments might be distributed and to answer questions about precision and bias.

We discuss both the parametric and non-parametric bootstrap. In the former, observed data are fit to a model and then new samples are drawn from the model. In the latter, no model assumption is made and simulated samples are drawn with replacement from the observed data.
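The non-parametric flavour can be sketched in a few lines; the sample data and the function name here are purely illustrative:

```python
import random
import statistics

def bootstrap_medians(data, n_boot=10000, seed=0):
    """Non-parametric bootstrap: resample the observed data with
    replacement and record the median of each simulated sample."""
    rng = random.Random(seed)
    n = len(data)
    return [statistics.median(rng.choices(data, k=n)) for _ in range(n_boot)]

# Illustrative observations
data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4, 5.8, 4.7]
medians = bootstrap_medians(data)

# The spread of the bootstrap distribution estimates the precision
# (standard error) of the sample median
se = statistics.stdev(medians)
```

The same resampling loop works for any statistic—swap `statistics.median` for the mean, a correlation, or a fitted parameter.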

Background reading

Building on last month's column about Bayes' Theorem, we introduce Bayesian inference and contrast it with frequentist inference.

Given a hypothesis and a model, the frequentist calculates the probability of different data being generated by the model, P(data|model). When the probability of obtaining data at least as extreme as those observed falls below a cutoff (e.g. `\alpha = 0.05`), the frequentist rejects the hypothesis.

In contrast, the Bayesian makes direct probability statements about the model by calculating P(model|data)—in other words, the probability that the model is correct given the observed data. With this approach it is possible to compare the probabilities of different models and identify the one most compatible with the data.

The Bayesian approach is arguably more intuitive. From the frequentist point of view, the probability used to assess the veracity of a hypothesis, P(data|model), commonly referred to as the P value, does not tell us the probability that the model is correct. In fact, the P value is commonly misinterpreted as the probability that the hypothesis is right. This is the so-called "prosecutor's fallacy", which mistakes the conditional probability P(data|model) for P(model|data). It is the latter quantity that is more directly useful, and it is what the Bayesian calculates.
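To make the contrast concrete, here is a small sketch; the coin-flip data and the two candidate models are my own illustration, not from the column:

```python
from math import comb

# Hypothetical data: 9 heads in 12 flips. Compare a fair coin (p = 0.5)
# with a biased coin (p = 0.75), assuming equal prior probabilities.
k, n = 9, 12

def likelihood(p):
    # P(data | model): binomial probability of exactly k heads in n flips
    return comb(n, k) * p**k * (1 - p)**(n - k)

models = {"fair (p=0.5)": 0.5, "biased (p=0.75)": 0.75}
prior = {m: 0.5 for m in models}

# P(model | data) via Bayes' theorem: posterior ∝ likelihood × prior
evidence = sum(likelihood(p) * prior[m] for m, p in models.items())
posterior = {m: likelihood(p) * prior[m] / evidence
             for m, p in models.items()}
```

The likelihoods answer the frequentist's question about the data; normalizing them by the evidence turns them into the Bayesian's direct statement about the models.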