While using a computer to calculate Brun's constant in 1994, Thomas Nicely discovered the infamous 'Pentium bug'. The chip gave occasional wrong answers for floating-point division because five entries were missing from a lookup table inside the chip. The fix cost Intel millions.

International Standard Paper sizes and square roots: The most common size of printer paper (in Europe anyway) is A4 (about 8.3 x 11.7 inches or 210 x 297 mm). A piece of A4 is twice the area of an A5 sheet and half the area of an A3 sheet.

If you want to be able to cut a sheet of paper in half like this and get two smaller pieces with the same proportions as the original, then the height to length ratio has to be the square root of two (about 1.414).
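That square-root-of-two ratio falls straight out of the halving requirement. For a sheet with long side a and short side b, the two halves measure b by a/2, and demanding the same proportions gives a one-line derivation:

```latex
\frac{a}{b} = \frac{b}{a/2}
\quad\Longrightarrow\quad
a^2 = 2b^2
\quad\Longrightarrow\quad
\frac{a}{b} = \sqrt{2} \approx 1.414
```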

ISO paper sizes begin with A0 which is defined as having an area of one square metre. When A0 is cut in half you get A1 and so on until after 4 divisions you end up with A4 which is therefore 1/16 of a square metre in area.

It's possible to buy sheets of paper larger than A0. One of twice the area is called 2A0 and one twice as big as that is called 4A0. This system of naming the larger formats doesn't fit well with the naming scheme of the smaller sizes - the person who came up with those large format designations was no mathematician.

Urmmm, why? Correct me if I'm wrong, but if 2A0 is twice the size of A0, then 4A0, being twice the size of 2A0, is logically four times the size of A0. I agree that it doesn't fit the smaller names, but from a mathematical point of view the larger size names work perfectly.

__________________
Don't pray in my school and I won't think in your church.

The flaw with the naming arrangement is that it changes from a 'power' series (for the smaller sizes) to a multiplicative one for the larger. I agree that the naming scheme for the larger sizes makes perfect sense - it just doesn't fit well with the naming scheme for the smaller sizes.

In the smaller sizes, the number is a (negative) power of 2: a sheet of A(n) (where n is 0, 1, 2, 3, 4, 5, 6, etc.) has an area of 2^(-n) square metres.

If the same naming system were used for the larger sizes, then a sheet of 2A0 would be called A-1 and a sheet of 4A0 would be A-2, etc.

On the other hand, if we used the 'large' naming scheme throughout, then the sheets should be called (in decreasing size order): 4A0, 2A0, A0, 1/2A0, 1/4A0, 1/8A0, 1/16A0, and so on.

While I agree it's a confusing system for naming paper sizes, your post actually makes it clear that it's not unmathematical. The multiplicative term precedes the 'A', and the negative exponent follows it. As a result, there's more than one way to refer to each paper size:

1/2A-1 = A0 = 2A1 = 4A2
1/4A2 = 1/2A3 = A4 = 2A5

It's not unlike the notation used in scientific calculators, e.g. 3.782e+36

Benford's law and the detection of fraud: If you take some apparently random numbers, such as the amount of money in a bank account, or the number of times a forum thread has been viewed, what would you expect the first (leftmost) digits of those numbers to be?

Your first thought might be that all digits are equally likely, and as we don't normally bother to write leading zeros, that the remaining digits 1 to 9 would each occur one ninth of the time. The surprising thing is that this is not the case: 1 occurs much more often than that, with 2 occurring less often than 1, and so on down to 9 which occurs with the lowest frequency.

One of the first people to report this fact was the nineteenth-century astronomer Simon Newcomb, who noticed that books of logarithm tables always showed more wear on the pages for numbers starting with '1'. Frank Benford investigated this further, and in 1938 concluded that the first digit is d with probability log10(1 + 1/d).

The table below shows roughly how often each digit appears as the first digit:

Digit:      1      2      3      4     5     6     7     8     9
Frequency:  30.1%  17.6%  12.5%  9.7%  7.9%  6.7%  5.8%  5.1%  4.6%
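As a sanity check (an illustration, not from the thread), the log10(1 + 1/d) formula can be computed directly and compared against an empirical Benford-distributed sequence; the first digits of the powers of 2 are a classic example:

```python
from math import log10

# Benford's law: P(first digit = d) = log10(1 + 1/d)
benford = {d: log10(1 + 1 / d) for d in range(1, 10)}

# Empirical check: first digits of the powers of 2 follow Benford's law.
counts = {d: 0 for d in range(1, 10)}
for n in range(1, 1001):
    counts[int(str(2 ** n)[0])] += 1

for d in range(1, 10):
    print(d, round(benford[d], 3), counts[d] / 1000)
```

The nine probabilities telescope to log10(10) = 1, which is a nice way to see that the law is at least self-consistent.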

I first encountered Benford's Law in Warren Weaver's most excellent probability book, Lady Luck.

__________________
Hear me / and if I close my mind in fear / please pry it open
See me / and if my face becomes sincere / beware
Hold me / and when I start to come undone / stitch me together
Save me / and when you see me strut / remind me of what left this outlaw torn

Different sorts of infinity: This might be slightly heavy for a 'trivia' thread, but I'll try and keep it light.

Infinity does tend to boggle the mind at first, but mathematicians encounter it pretty often and tend to become immune. Cantor showed that there are different classes of infinity - in a loose sense, some infinities are 'bigger' than others.

Cantor called the 'smallest' infinity aleph-null - this is the infinity of the integers: 1, 2, 3, 4, ... but (and this may seem paradoxical) the same infinity holds for the primes: 2, 3, 5, 7, 11, 13, 17, ... or all the even integers, or all the numbers divisible by one million, or all the rational fractions.

It seems strange at first - obviously there are twice as many integers as there are even integers, and there are a million integers for every one that is a multiple of a million - and there are a 'lot more' fractions than there are integers. How can we say that all these infinities are the same?

Georg Ferdinand Ludwig Philipp Cantor said that any set of things that can be put in a one-to-one correspondence with the counting numbers has the same 'cardinal' aleph-null. He then went on to use his famous 'diagonal' proof to show that not only are there higher infinities than aleph-null, but there are infinitely many of them.
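Such a one-to-one correspondence is easy to exhibit concretely. This sketch (an illustration, with a deliberately naive trial-division nth_prime) pairs each counting number n with the nth even number and with the nth prime:

```python
# A set has cardinality aleph-null if its members can be paired
# one-to-one with the counting numbers 1, 2, 3, ...

def nth_even(n):
    """The pairing n <-> 2n hits every even number exactly once."""
    return 2 * n

def nth_prime(n):
    """Pair n with the nth prime, by naive trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

# Every counting number lands on exactly one even number and one prime,
# and none are missed -- so both sets have cardinality aleph-null.
print([nth_even(n) for n in range(1, 6)])    # [2, 4, 6, 8, 10]
print([nth_prime(n) for n in range(1, 6)])   # [2, 3, 5, 7, 11]
```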

Simon Newcomb's brother was the great-grandfather of physicist William Newcomb, who is best known in freethinking circles as the deviser of Newcomb's Paradox.

I love the infinites and transfinites. The wiki on the alephs is rather well done.

__________________
Through with oligarchy? Ready to get the money out of politics? Want real progressives in office who will work for the people and not the donors?

Nontransitive (unfair) bingo cards: In this much simplified bingo game, only the numbers 1 to 6 are used. The winner is the first person to complete a horizontal row.

The amazing thing is that, with two players, this game is unfair! Let your opponent choose a card first, then you choose the one 'before'. In the long run A beats B, B beats C, C beats D, and D beats A!

Clever. Let me see if my intuition is right about this - the way it works is that the two numbers on a losing card are spread across two rows on a winning card. Considering A & B, for example, there's a better chance of either a 1 or 3 showing up before both the 2 & 4, than there is of both the 5 & 6 showing up (also prior to the 2 & 4).

I tried to explain it, but found my mind was boggling on both cylinders after a while...

I googled "nontransitive bingo" hoping to find a clear explanation, but the first result was this thread! (way to go, Google!)

So I figured there are only 6! ways of drawing 6 bingo balls (that's only 720 ways), so I wrote a little program to check which card wins for each of the possible games. I might have a bug in my program, but I found that the cards aren't nontransitive: B is the best card and D is the worst.
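A little program along those lines might look like the sketch below. The thread doesn't reproduce the actual cards A-D, so the two small two-row cards here are hypothetical placeholders; substitute the real cards to check the win counts.

```python
from itertools import permutations

# Hypothetical placeholder cards: each card is a list of rows,
# and a card wins when every number in one of its rows has been drawn.
CARDS = {
    "A": [(1, 2), (3, 4)],
    "B": [(2, 6), (4, 5)],
}

def winner(order, cards):
    """Draw balls in the given order; first card to complete a row wins."""
    drawn = set()
    for ball in order:
        drawn.add(ball)
        done = [name for name, rows in cards.items()
                if any(drawn >= set(row) for row in rows)]
        if done:
            return done[0] if len(done) == 1 else "tie"

tally = {}
for order in permutations(range(1, 7)):   # all 6! = 720 equally likely orders
    w = winner(order, CARDS)
    tally[w] = tally.get(w, 0) + 1
print(tally)
```

Counting all 720 full permutations is fine even though most games end early: every full ordering that begins with the same winning prefix is equally likely, so the prefixes are automatically weighted correctly.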

Maybe you need to take into account that there aren't really 6! possibilities? What I mean is that a game can be over in as little as two draws, and by considering each of the 6! possibilities, many of which would never occur in a real bingo game, you're effectively weighting the probabilities.

Take a game of A vs B, for example. Should we assign equal weight to:
(1,2,3,4,5,6)
(1,2,6,5,4,3) and
(1,6,5,4,3,2)?

Or should we allow for shorter sequences and assign equal weight to:
(1,2) and
(1,6,5)?

I'll hack my program when I get time, so that it works out the results your way.

Edited to add: Now I've thought about it a bit more, I'm still not sure. Each of those 720 possible ways for the balls to come out is equally likely? My program just runs through all the possibilities and finds which card wins - once a card has won a game, it doesn't care about any remaining numbers. In the card A versus card B example, B wins 384 of the possible games and A only wins 336.

Actually, I think I'm wrong now. Even if some numbers are never drawn, you need to consider each of the 6! orderings to have equal weight so that you get the right probabilities, i.e. 1/6 for the first draw, 1/5 for the second, etc.

One question - what happens when two cards get a row at the same time?

Here's a curiosity I recently discovered: all the possible isohedral dice. What is an isohedron? It is a polyhedron where all the faces are alike, though that need not be the case for the edges or vertices.

For all edges and all vertices alike, one gets the five Platonic solids:
tetrahedron
cube / hexahedron
octahedron
dodecahedron
icosahedron

I've indicated which ones are regular polyhedra. Of the ones with quadrilateral faces, the 2*n-, 12-, 24-, and 60-faced ones have kite-shaped faces and the 2*3=6-, 12-, and 30-faced ones have rhombus-shaped faces. The ones with 5-sided faces (12, 24, and 60 faces) come in two mirror-image variants.

The triangular-faced ones can be constructed by using the faces of regular polyhedra as the bases of pyramids. An n-sided face can be turned into a 2*n-sided one by adding vertices at the centers of the edges. And the 2*n-sided triangular family can be constructed in the same way, but using an isolated n-gon with a pyramid on each side.

If you like mathematics and programming, check out Project Euler, which contains a big set of mathematics problems presented as a challenge to programmers: who can write programs that solve these problems both elegantly and fast?