Archive for November 2011

Something about writing this thesis makes me think of setting things on fire. This prompted me to wonder exactly what a flame is.

Although a flame appears to be a stable, defined structure, we know that this is an illusion. Particles enter at the bottom and leave at the top, becoming visible for only a part of that journey; the region of space in which they’re visible we call a ‘flame’. It’s rather like a queue: it has a shape, a duration and a certain characteristic behaviour, but nothing about it is permanent. (Remember how no part of our bodies is the same after 20 years…?)

So a flame is really a time-averaged aggregate of microscopic events. But what light-emitting events lead to the thing we call a flame?

Candle wax is made of long-chain hydrocarbons that have a low melting point. Heat turns wax from a white solid to a clear liquid, and then to a gas. Heat rises, and the gas molecules are carried upwards. When they are hot enough, they react with oxygen to form carbon dioxide and water vapour, like this:
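Taking C₂₅H₅₂ as a representative wax molecule (an assumption for illustration – real candle wax is a mixture of chain lengths), the overall reaction balances as:

```latex
\mathrm{C_{25}H_{52}} + 38\,\mathrm{O_2} \longrightarrow 25\,\mathrm{CO_2} + 26\,\mathrm{H_2O} + \text{heat}
```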

But this is too simple. The process of combustion is incredibly complex, proceeding through countless elementary steps and short-lived intermediates – free radicals that form, collide and recombine many times before settling into the final products.

Many of these reactions give off heat. The heat excites electrons in nearby molecules, and these electrons relax back to their original energy levels by emitting light. The colour of the light given off depends on how hot the molecule was – which is how astronomers measure how hot the stars are, and what they’re made of.
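The astronomers’ trick can be sketched with Wien’s displacement law, which links a glowing body’s temperature to the wavelength at which it shines brightest (a simplification – stars and flames are only approximately the idealised ‘blackbodies’ the law describes, and the function name here is my own):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def temperature_from_peak(wavelength_m):
    """Blackbody temperature implied by the wavelength of peak emission."""
    return WIEN_B / wavelength_m

# The Sun's spectrum peaks around 500 nm (blue-green); Wien's law
# recovers its surface temperature of roughly 5,800 K.
print(round(temperature_from_peak(500e-9)))  # 5796
```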

The hottest part of a candle flame is the very bottom, where oxygen is plentiful and there is a high density of heat-generating combustion reactions occurring per second, driving up the temperature. The molecules here burn at about 1400 °C and give off a hot blue colour.

These very efficient reactions near the bottom effectively starve the rest of the flame of oxygen. Oxygen still gets in through the sides, but not as efficiently. The result is incomplete combustion: the wax is converted into particles of soot carried upwards by the current of air generated by the heat. These are still hot enough to glow, but the temperature is much lower, and so this region of the flame is a cool yellow.

The flame is only the part of the process we can see. It is misleading to say that soot particles emit light within the flame; better to say that the flame-space is defined as that region within which the particles are hot enough to glow. The tapering shape of the flame comes directly from the low availability of oxygen. When you trap a flame under a glass, the flame extends before going out, because the lifetime of a glowing particle is longer in the absence of oxygen.

This is demonstrated in a lovely picture from NASA of a flame in microgravity. Because there is no ‘up’ for the hot air to rise towards, the flame burns in all directions. This is a much more efficient arrangement: oxygen can get in from all directions, so the fire burns strongly, with no soot to give it a yellow colour.

Restricting ourselves to changing just one thing makes us into scientists. A scientist might express it this way: in an experiment in which all other things are held constant, which variable would you alter in order to maximise the happiness of the world?

Even the most naïve scientist acknowledges that there are not many problems that can be solved by changing just one thing. But even that is an interesting observation. Let’s consider it in more detail – with graphs.

Here are three graphs whose y-axis is some imaginary scale of ‘aggregated societal happiness’ – a grotesque utilitarian caricature, but bear with me, I’m trying to make a point. What we vary lies along the horizontal axis: we change its value and watch to see how happiness goes up or down.

Graph A shows a simple relationship where the more you have of X, the better off everyone is. X might be something like availability of food, ranging from 0% to 100% – if one more person can eat, the world is a little bit better off for it.

Graph B shows the opposite, where the more you have of X, the worse off everyone is. X here might be prevalence of smallpox; under no circumstance does more X mean more happiness.

In graph C, there is a certain value of X that ensures a maximum of happiness, and too little X or too much is actually a bad thing. X here might be freedom of expression. If you object to this, then I’m sure you won’t object to me hanging a Nazi poster in your bedroom. There’s only so much freedom of expression you can have before it starts to clash with other freedoms you enjoy, like your freedom of privacy.

But what’s really interesting to me is a graph like this one.

Here we have two happiness maxima – two clearly different ways of organising a society, one, perhaps, happier than the other – but separated by a chasm of misery at intermediate levels of X.*

What are examples of X that would generate this curve? They are instances where everyone benefits from acting the same way, and society suffers a little more for every person who deviates… until the deviants become the majority, at which point everyone is punished by the people who refuse to deviate. One good example of such an X is the convention of driving on the left side of the road: it’s great if everyone does it, great if nobody does it, but chaos if exactly half the people do it.
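The driving-on-the-left curve can be sketched as a toy function (the quadratic shape is an illustrative assumption, not a real model of traffic): happiness is highest when the fraction of left-driving drivers is 0 or 1, and bottoms out at the fifty-fifty chaos in between.

```python
def happiness(x):
    """Toy utility for a coordination convention, where x is the fraction
    of drivers who drive on the left. Both extremes are coordinated and
    safe; the middle is chaos. Purely illustrative."""
    return (2 * x - 1) ** 2  # 1.0 at x = 0 or x = 1; 0.0 at x = 0.5

# Both all-left and all-right are maxima; the even split is the chasm.
print(happiness(0.0), happiness(0.5), happiness(1.0))  # 1.0 0.0 1.0
```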

But what if you want to change from driving on the left to driving on the right? To move between one maximum of utility and another? It has to be done in one step – overnight – to avoid the dangers of the middle ground. It can be done, and indeed has been: Sweden switched sides overnight in 1967. But in some cases a clean jump from one stable state to another is impossible, or simply too costly; other things will have to change to accommodate it.

The argument about freedom of expression as it pertains to Nazi posters was stolen from Chomsky, Understanding Power. Can’t find the page number.

* It’s important to be clear that this is separate from the idea that you have to make things worse now in order to make them better later – which is itself an important concept, but not under discussion here. Happiness levels at a given X are taken to be instantaneous and without memory; they are functions of state, not functions of path.

Another quick link to a video. I like Steven Pinker a lot. Usually when I tell this to my philosopher friends, they nod, smiling, and say “Ah, Pinker.” Usually when I tell my linguist friends, they roll their eyes and say “Ugh. Pinker.” Which of these grotesque caricatures best describes you? Watch his talk and decide for yourself. It’s over an hour long, but even if you do nothing else today, your life will be immeasurably enriched by an anecdote beginning at 20 minutes and 33 seconds…

We like board games here at the SI, and the latest craze sweeping the office is xiangqi – Chinese chess.

Xiangqi is played on the intersections of a grid nine points wide and ten deep, and is similar to Western chess in many ways: it is turn-based, zero-sum, and is won by checkmating the opponent’s king (or ‘general’). It is easy to see how the two games might share a common ancestor. Many of the pieces move in similar ways – the pieces called ‘chariots’ move exactly like rooks, the ‘horses’ almost exactly like knights, the ‘soldiers’ and ‘elephants’ a little bit like pawns and bishops. Some pieces have no equivalent in Western chess – ‘cannons’ capture by jumping over exactly one intervening piece, and ‘advisors’ protect the general – but, rather pleasingly, once the pieces’ moves have been learnt, the tactics and strategy of chess carry over very well. If you are good at chess, you’re not too bad at xiangqi either, even if you don’t know it yet.
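That ‘almost’ for the horses hides the one real difference: a horse is blocked if a piece stands on the point it must pass through, whereas a knight simply jumps. A minimal sketch (the coordinates and function names are my own, and board edges are ignored for simplicity):

```python
# A knight and a xiangqi horse share the same eight L-shaped destinations,
# but the horse cannot move if the adjacent point in its direction of
# travel (its 'leg') is occupied. Illustrative sketch, not a rules engine.

KNIGHT_STEPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def horse_moves(pos, occupied):
    """Destinations of a horse at pos, given a set of occupied points."""
    x, y = pos
    moves = []
    for dx, dy in KNIGHT_STEPS:
        # The 'leg': one orthogonal step in the long direction of the move.
        leg = (x + (dx // abs(dx) if abs(dx) == 2 else 0),
               y + (dy // abs(dy) if abs(dy) == 2 else 0))
        if leg not in occupied:
            moves.append((x + dx, y + dy))
    return moves

# On an empty board: all eight destinations, just like a chess knight.
print(len(horse_moves((4, 4), set())))     # 8
# A piece directly in front blocks both forward moves.
print(len(horse_moves((4, 4), {(4, 5)})))  # 6
```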

One important difference is that xiangqi’s general is confined to a small space – a three-by-three area called the palace. This has the effect that checkmate is much easier to achieve in xiangqi than in chess.

We may define this exactly. Consider the set of all possible arrangements of up to 32 pieces on an 8×8 chessboard – a huge number, call it C. Now imagine the much smaller (though still gigantic) set of arrangements that are checkmates for black. Call this smaller number C+. Now consider the equivalent numbers in xiangqi – the set of all possible positions, X, and the subset of all possible checkmates, X+. When we say that checkmate is easier in xiangqi than in chess, we mean that (X+ / X) is much bigger than (C+ / C) – checkmates make up a much bigger proportion of the available positions.

We’re talking about huge numbers when we discuss X and C, but still finite – Dennett fans would call them Vast. The number of possible positions is an upper bound, a kind of mathematical worst-case scenario. In fact the number can be made smaller by realising that some positions are unreachable. In chess, the set of positions in which a pawn sits on the first row may be removed from C; likewise in xiangqi the positions in which the two generals illegally face each other. These legally accessible positions are called the state spaces of xiangqi and chess.

State space sizes are difficult to calculate. People begin with an upper bound, then whittle this Vast figure down by working out the mathematical consequences of the games’ rules. Exact answers have been obtained for simple games like noughts and crosses, whose state space contains exactly 765 positions (counting rotations and reflections of the same board only once). Chess’s has been estimated as 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 and xiangqi’s as ten times bigger, but take these as approximations. The number of possible games, of course, is much, much larger.
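The noughts-and-crosses figure is small enough to reproduce by brute force: enumerate every position reachable in play (play stops at a win), folding together the board’s eight rotations and reflections. A sketch, with function names of my own:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def symmetries(flat):
    """All eight rotations/reflections of a 3x3 board, as flat tuples."""
    g = [list(flat[r * 3:r * 3 + 3]) for r in range(3)]
    out = []
    for _ in range(2):
        for _ in range(4):
            out.append(tuple(v for row in g for v in row))
            g = [list(row) for row in zip(*g[::-1])]  # rotate 90 degrees
        g = [row[::-1] for row in g]                  # mirror
    return out

def won(b):
    return any(b[i] != '.' and b[i] == b[j] == b[k] for i, j, k in LINES)

def count_positions():
    """Positions reachable in play, up to symmetry; play stops at a win."""
    start = ('.',) * 9
    seen = {min(symmetries(start))}
    stack = [start]
    while stack:
        b = stack.pop()
        if won(b) or '.' not in b:
            continue  # game over: no further moves from here
        mover = 'X' if b.count('X') == b.count('O') else 'O'
        for i, v in enumerate(b):
            if v == '.':
                child = b[:i] + (mover,) + b[i + 1:]
                canon = min(symmetries(child))
                if canon not in seen:
                    seen.add(canon)
                    stack.append(child)
    return len(seen)

print(count_positions())  # 765
```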

Don’t bother playing draughts, by the way. It’s been solved: the entire decision tree of a game of draughts was worked out in 2007. From the starting position, draughts will always end in a draw if played perfectly. The game is only interesting because people make mistakes.