[Preface: This is the second part of my discussion of this paper by Craig Holt. It has a few more equations than usual, so strap a seat-belt onto your brain and get ready!]

“Alright brain. You don’t like me, and I don’t like you, but let’s get through this thing and then I can continue killing you with beer.” — Homer Simpson

Imagine a whale. We like to say that the whale is big. What does that mean? Well, if we measure the length of the whale, say by comparing it to a meter-stick, we will count up a very large number of meters. However, this only tells us that the whale is big in comparison to a meter-stick. It doesn’t seem to tell us anything about the intrinsic, absolute length of the whale. But what is the meaning of `intrinsic, absolute’ length?

Imagine the whale is floating in space in an empty universe. There are no planets, people, fish or meter-sticks to compare the whale to. Maybe we could say that the whale has the property of length, even though we have no way of actually measuring its length. That’s what `absolute’ length means. We can imagine that it has some actual number, independently of any standard for comparison like a meter-stick.

“Oh no, not again!”

In Craig Holt’s paper, this distinction — between measured and absolute properties — is very important. All absolute quantities have primes (also called apostrophes), so the absolute length of a whale would be written as whale-length’ and the absolute length of a meter-stick is written meter’. The length of the whale that we measure, in meters, can be written as the ratio whale-length’ / meter’. This ratio is something we can directly measure, so it doesn’t need a prime; we can just call it whale-length: it is the number of meter-sticks that equal a whale-length. It is clear that if we were to change all of the absolute lengths in the universe by the same factor, then the absolute properties whale-length’ and meter’ would both change, but the measurable property whale-length would not change.

Ok, so, you’re probably thinking that it is weird to talk about absolute quantities if we can’t directly measure them — but who says that you can’t directly measure absolute quantities? I only gave you one example where, as it turned out, we couldn’t measure the absolute length. But one example is not a general proof. When you go around saying things like “absolute quantities are meaningless and therefore changes in absolute quantities can’t be detected”, you are making a pretty big assumption. This assumption has a name: it is called Bridgman’s Principle (see the last blog post).

Bridgman’s Principle is the reason why at school they teach you to balance the units on both sides of an equation. For example, `speed’ is measured in units of length per time (no, not milligrams — this isn’t Breaking Bad). If we imagine that light has some intrinsic absolute speed c’, then to measure it we would need to have (for example) some reference length L’ and some reference time duration T’ and then see how many lengths of L’ the light travels in time T’. We would write this equation as:

c’ = C L’ / T’          (1)

where C is the speed that we actually measure. Bridgman’s Principle says that a measured quantity like C cannot tell us the absolute speed of light c’, it only tells us what the value of c’ is compared to the values of our measuring apparatus, L’ and T’ (for example, in meters per second). If there were some way that we could directly measure the absolute value of c’ without comparing it to a measuring rod and a clock, then we could just write c’ = C without needing to specify the units of C. So, without Bridgman’s Principle, all of Dimensional Analysis basically becomes pointless.

So why should Bridgman’s Principle be true in general? Scientists are usually lazy and just assume it is true because it works in so many cases (this is called `inductive reasoning’ — not to be confused with a mathematical proof by induction). After all, it is hard to find a way of measuring the absolute length of something without referring to some other reference object like a meter-stick. But being a good scientist is all about being really tight-assed, so we want to know if Bridgman’s Principle can be proven to be watertight.

A neat example of a watertight principle is the Second Law of Thermodynamics. This Law was also originally an inductive principle (it seemed to be true in pretty much all thermodynamic experiments) but then Boltzmann came along with his famous H-Theorem and proved that it has to be true if matter is made up of atomic particles. This is called a constructive justification of the principle [1].

The H Theorem makes it nice and easy to judge whether some crackpot’s idea for a perpetual motion machine will actually run forever. You can just ask them: “Is your machine made out of atoms?” And if the answer is `yes’ (which it probably is), then you can point out that the H-Theorem proves that machines made up of atoms must obey the Second Law, end of story.

Coming up with a constructive proof, like the H-Theorem, is pretty hard. In the case of Bridgman’s Principle, there are just too many different things to account for. Objects can have numerous properties, like mass, charge, density, and so on; also there are many ways to measure each property. It is hard to imagine how we could cover all of these different cases with just a single theorem about atoms. Without the H-Theorem, we would have to look over the design of every perpetual motion machine, to find out where the design is flawed. We could call this method “proof by elimination of counterexamples”. This is exactly the procedure that Craig uses to lend support to Bridgman’s Principle in his paper.

To get a flavor for how he does it, recall our measurement of the speed of light from equation (1). Notice that the measured speed C does not have to be the same as the absolute speed c’. In fact we can rewrite the equation as:

c’ T’ / L’ = C          (2)

and this makes it clear that the number C that we measure is not itself an absolute quantity, but rather is a comparison between the absolute speed of light c’ and the absolute distance L’ per time T’. What would happen if we changed all of the absolute lengths in the universe? Would this change the value of the measured speed of light C? At first glance, you might think that it would, as long as the other absolute quantities on the left hand side of equation (2) are independent of length. But if that were true, then we would be able to measure changes in absolute length by observing changes in the measurable speed of light C, and this would contradict Bridgman’s Principle!

To get around this, Craig points out that the length L’ and time T’ are not fundamental properties of things, but are actually reducible to the atomic properties of physical rods and clocks that we use to make measurements. Therefore, we should express L’ and T’ in terms of the more fundamental properties of matter, such as the masses of elementary particles and the coupling constants of forces inside the rods and clocks. In particular, he argues that the absolute length of any physical rod is equal to some number times the “Bohr radius” of a typical atom inside the rod. This radius is in turn proportional to:

h’ / (m’e c’)          (3)

where h’, c’ are the absolute values of Planck’s constant and the speed of light, respectively, and m’e is the absolute electron mass. Similarly, the time duration measured by an atomic clock is proportional to:

h’ / (m’e c’^2)          (4)

As a result, both the absolute length L’ and time T’ actually depend on the absolute constants c’, h’ and the electron mass m’e. Substituting these into the expression for the measured speed of light, we get:

c’ T’ / L’ = c’ × [X h’ / (m’e c’^2)] / [Y h’ / (m’e c’)] = X / Y = C          (5)

where X,Y are some proportionality constants. So, the factors of c’ cancel and we are left with C = X/Y. The numbers X and Y depend on how we construct our rods and clocks — for instance, they depend on how many atoms are inside the rod, and what kind of atom we use inside our atomic clock. In fact, the definitions of the `meter’ and the `second’ are specially chosen so as to make this ratio exactly C = 299,792,458 [2].

Now that we have included the fact that our measuring rods and clocks are made out of matter, we see that in fact the left hand side of equation (5) is independent of any absolute quantities. Therefore changing the absolute length, time, mass, speed etc. cannot have any effect on the measured speed of light C, and Bridgman’s principle is safe — at least in this example.
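For readers who like to check this sort of cancellation by machine, here is a small sketch using Python’s sympy library. The symbol names (and the constants X for the clock and Y for the rod) are mine, chosen just to illustrate the algebra behind equation (5) — this is not code from Craig’s paper.

```python
# A sketch of the cancellation behind equation (5): build the measured
# speed of light C out of absolute quantities and watch them all cancel.
import sympy as sp

c, h, m_e, X, Y = sp.symbols("c h m_e X Y", positive=True)  # absolute c', h', m'e

L = Y * h / (m_e * c)       # absolute rod length, proportional as in equation (3)
T = X * h / (m_e * c**2)    # absolute clock period, proportional as in equation (4)

C = sp.simplify(c * T / L)  # the measured speed of light
print(C)                    # every absolute quantity cancels, leaving X/Y
```

Scaling c’, h’ or m_e by any factor leaves C untouched, which is exactly the point of the paragraph above.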

(Some readers might wonder why making a clock heavier should also make it run faster, as seems to be suggested by equation (4). It is important to remember that the usual kinds of clocks we use, like wristwatches, are quite complicated things containing trillions of atoms. To calculate how the behaviour of all these atoms would change the ticking of the overall clock mechanism would be, to put it lightly, a giant pain in the ass. That’s why Craig only considers very simple devices like atomic clocks, whose behaviour is well understood at the atomic level [3].)

image credit: xetobyte – A Break in Reality

Another simple model of a clock is the light clock: a beam of light bouncing between two mirrors separated by a fixed distance L’. Since light has no mass, you might think that the frequency of such a clock should not change if we were to increase all absolute masses in the universe. But we saw in equation (4) that the frequency of an atomic clock is proportional to the electron mass, and so it would increase. It then seems like we could measure this increase in atomic clock frequency by comparing it to a light clock, whose frequency does not change — and then we would know that the absolute masses had changed. Is this another threat to Bridgman’s Principle?

The catch is that, as Craig points out, the length L’ between the mirrors of the light clock is determined by a measuring rod, and the rod’s length is inversely proportional to the electron mass as we saw in equation (3). So if we magically increase all the absolute masses, we would also cause the absolute length L’ to get smaller, which means the light-clock frequency would increase. In fact, it would increase by exactly the same amount as the atomic clock frequency, so comparing them would not show us any difference! Bridgman’s Principle is saved again.
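This is easy to check numerically. Below is a toy Python sketch — made-up units, all proportionality constants set to one, my own function names — comparing the two clock frequencies as the absolute electron mass is scaled:

```python
# A toy check that an atomic clock and a light clock respond identically
# to a change in the absolute electron mass, so comparing them reveals
# nothing.  Units and constants of proportionality are arbitrary.
def rod_length(m_e, h=1.0, c=1.0):
    return h / (m_e * c)            # rod length ~ h'/(m'e c'), as in equation (3)

def atomic_clock_freq(m_e, h=1.0, c=1.0):
    return m_e * c**2 / h           # inverse of the period in equation (4)

def light_clock_freq(m_e, h=1.0, c=1.0):
    L = rod_length(m_e, h, c)       # mirror separation is set by a physical rod
    return c / (2 * L)              # one tick = one round trip of the light

for m_e in (1.0, 1000.0):           # scale all absolute masses by 1000
    ratio = atomic_clock_freq(m_e) / light_clock_freq(m_e)
    print(m_e, ratio)               # the ratio of the two clocks never changes
```

Both frequencies scale with the electron mass, so their ratio stays fixed no matter what the monkey does to the absolute masses.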

Let’s do one more example, this time a little bit more extreme. According to Einstein’s theory of general relativity, every lump of mass has a Schwarzschild radius, which is the radius of a sphere such that if you crammed all of the mass into this sphere, it would turn into a black hole. Given some absolute amount of mass M’, its Schwarzschild radius is given by the equation:

R’ = 2 G’ M’ / c’^2          (6)

where c’ is the absolute speed of light from before, and G’ is the absolute gravitational constant, which determines how strong the gravitational force is. Now, glancing at the equation, you might think that if we keep increasing all of the absolute masses in the universe, planets will start turning into black holes. For instance, the radius of Earth is about 6370 km. This is the Schwarzschild radius for a mass of about a billion times Earth’s mass. So if we magically increased all absolute masses by a factor of a billion, shouldn’t Earth collapse into a black hole? Then, moments before we all die horribly, we would at least know that the absolute mass has changed, and Bridgman’s Principle was wrong.

Of course, that is only true if changing the absolute mass doesn’t affect the other absolute quantities in equation (6). But as we now know, increasing the absolute mass will cause our measuring rods to shrink, and our clocks to run faster. So the question is, if we scale the masses by some factor X, do all the X’s cancel out in equation (6)?

Well, since our absolute lengths have to shrink, the Schwarzschild radius should shrink, so if we multiply M’ by X, then we should divide the radius R’ by X. This doesn’t balance! Hold on though — we haven’t dealt with the constants c’ and G’ yet. What happens to them? In the case of c’, we have c’ = C L’ / T’. Since L’ and T’ both decrease by a factor of X (lengths and time intervals get shorter) there is no overall effect on the absolute speed of light c’.

How do we measure the quantity G’? Well, G’ tells us how much two masses (measured relative to a reference mass m’) will accelerate towards each other due to their gravitational attraction. Newton’s law of gravitation says:

a’ = N G’ m’ / L’^2          (7)

where N is some number that we can measure, and it depends on how big the two masses are compared to the reference mass m’, how large the distance between them is compared to the reference length L’, and so forth. If we measure the acceleration a’ using the same reference length and time L’,T’, then we can write:

a’ = A L’ / T’^2          (8)

where A is just the measured acceleration in these units. Putting this all together, we can re-arrange equation (7) to get:

G’ = (A / N) × L’^3 / (m’ T’^2)          (9)

and we can define G = (A/N) as the actually measured gravitational constant in the chosen units. From equation (9), we see that increasing M’ by a factor of X, and hence dividing each instance of L’ and T’ by X, implies that the absolute constant G’ will actually change: it will be divided by a factor of X^2.
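The bookkeeping can be done symbolically. Here is a sketch, again in Python with sympy and with my own illustrative variable names, that solves equations (7) and (8) for G’ and then applies the scaling:

```python
# A sympy sketch of equations (7)-(9): solve Newton's law for the absolute
# constant G', then scale masses up by X and lengths and times down by X.
import sympy as sp

N, A, m, L, T, G, X = sp.symbols("N A m L T G X", positive=True)

# equation (7) set equal to equation (8): N G' m'/L'^2 = A L'/T'^2
law = sp.Eq(N * G * m / L**2, A * L / T**2)
G_solved = sp.solve(law, G)[0]          # equation (9): (A/N) L'^3/(m' T'^2)

scaled = G_solved.subs({m: X * m, L: L / X, T: T / X})
print(sp.simplify(scaled / G_solved))   # G' picks up a factor of 1/X^2
```

The three scalings combine as X^2 / (X^3 · X^{-2} · … ) so that G’ ends up divided by X^2, just as claimed above.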

What is the physics behind all this math? It goes something like this: suppose we are measuring the attraction between two masses separated by some distance. If we increase the masses, then our measuring rods shrink and our clocks get faster. This means that when we measure the accelerations, the objects seem to accelerate faster than before. This is what we expect, because two masses should become more attractive (at the same distance) when they become more massive. However, the absolute distance between the masses also has to shrink. The net effect is that, after increasing all the absolute masses, we find that the masses are producing the exact same attractive force as before, only at a closer distance. This means the absolute attraction at the original distance is weaker — so G’ has become weaker after the absolute masses in the universe have been increased (notice, however, that the actually measured value G does not change).

Diagram of a Cavendish experiment for measuring gravity.

Returning now to equation (6), and multiplying M’ by X, dividing R’ by X and dividing G’ by X^2, we find that all the extra factors cancel out. We conclude that increasing all the absolute masses in the universe by a factor of a billion will not, in fact, cause Earth to turn into a black hole, because the effect is balanced out by the contingent changes in the absolute lengths and times of our measuring instruments. Whew!
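As a sanity check, here is the same cancellation with numbers — a toy Python calculation in which the SI-like values are just sample inputs, not a claim about real measurements:

```python
# Toy numbers for equation (6): scale M' up by X while G' falls by X^2 and
# c' stays fixed.  The Schwarzschild radius then shrinks by exactly X,
# in step with every other absolute length (so nothing observable changes).
X = 1.0e6
G, M, c = 6.674e-11, 5.97e24, 2.998e8   # sample values only

R_before = 2 * G * M / c**2
R_after = 2 * (G / X**2) * (X * M) / c**2

print(R_after * X / R_before)   # the X's cancel: this ratio is 1 (up to rounding)
```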

Craig’s paper is long and very thorough. He compares a whole zoo of physical clocks, including electric clocks, light-clocks, freely falling inertial clocks, different kinds of atomic clocks and even gravitational clocks made from two orbiting planets. Not only does he establish his claim for the whole of Newtonian mechanics, he covers general relativity as well, and the Dirac equation of quantum theory, including a discussion of Compton scattering (a photon reflecting off an electron). Besides all of this, he takes pains to discuss the meaning of coupling constants, the Planck scale, and the related but distinct concept of scale invariance. All in all, Craig’s paper just might be the most comprehensive justification for Bridgman’s principle so far in existence!

Most scientists might shrug and say “who needs it?”. In the same way, not many scientists care to examine perpetual motion machines to find out where the flaw lies. In this respect, Craig is a craftsman of the first order — he cares deeply about the details. Unlike the Second Law of Thermodynamics, Bridgman’s Principle seems rarely to have been challenged. This only makes Craig’s defense of it all the more important. After all, it is especially those beliefs which we are disinclined to question that are most deserving of a critical examination.

Footnotes:

[1] Some physical principles, like the Relativity Principle, have never been given a constructive justification. For this reason, Einstein himself seems to have regarded the Relativity Principle with some suspicion. See this great discussion by Brown and Pooley.

[2] Why not just set it to C=1? Well, no reason why not! Then we would replace the meter by the `light second’, and the second by the `light-meter’. And we would say things like “Today I walked 0.3 millionths of a light second to buy an ice-cream, and it took me just 130 billion light-meters to eat it!” So, you know, that would be a bit weird. But theorists do it all the time.

[3] To be perfectly strict, we cannot assume that a wristwatch will behave in the same way as an atomic clock in response to changes in absolute properties; we would have to derive their behavior constructively from their atomic description. This is exactly why a general constructive proof of Bridgman’s Principle would be so hard, and why Craig is forced to stick with simple models of clocks and rulers.

[Preface: A while back, Michael Raymer, a professor at the University of Oregon, drew my attention to a curious paper by Craig Holt, who tragically passed away in 2014 [1]. Michael wrote: “Dear Jacques … I would be very interested in knowing your opinion of this paper, since Craig was not a professional academic, and had little community in which to promote the ideas. He was one of the most brilliant PhD students in my graduate classes back in the 1970s, turned down an opportunity to interview for a position with John Wheeler, worked in industry until age 50 when he retired in order to spend the rest of his time in self study. In his paper he takes a Machian view, emphasizing the relational nature of all physical quantities even in classical physics. I can’t vouch for the technical correctness of all of his results, but I am sure they are inspiring.”

The paper makes for an interesting read because Holt, unencumbered by contemporary fashions, freely questions some standard assumptions about the meaning of `mass’ in physics. Probably because it was a work in progress, Craig’s paper is missing some of the niceties of a more polished academic work, like good referencing and a thoroughly researched introduction that places the work in context (the most notable omission is the lack of background material on dimensional analysis, which I will talk about in this post). Despite its rough edges, Craig’s paper led me down quite an interesting rabbit-hole, of which I hope to give you a glimpse. This post covers some background concepts; I’ll mention Craig’s contribution in a follow-up post. ]

______________
Imagine you have just woken up after a very bad hangover. You retain your basic faculties, such as the ability to reason and speak, but you have forgotten everything about the world in which you live. Not just your name and address, but your whole life history, family and friends, and entire education are lost to the epic blackout. Using pure thought, you are nevertheless able to deduce some facts about the world, such as the fact that you were probably drinking Tequila last night.

The first thing you notice about the world around you is that it can be separated into objects distinct from yourself. These objects all possess properties: they have colour, weight, smell, texture. For instance, the leftover pizza is off-yellow, smells like sardines and sticks to your face (you run to the bathroom).

While bending over the toilet for an extended period of time, you notice that some properties can be easily measured, while others are more intangible. The toilet seems to be less white than the sink, and the sink less white than the curtains. But how much less? You cannot seem to put a number on it. On the other hand, you know from the ticking of the clock on the wall that you have spent 37 seconds thinking about it, which is exactly 14 seconds more than the time you spent thinking about calling a doctor.

You can measure exactly how much you weigh on the bathroom scale. You can also see how disheveled you look in the mirror. Unlike your weight, you have no idea how to quantify the amount of your disheveled-ness. You can say for sure that you are less disheveled than Johnny Depp after sleeping under a bridge, but beyond that, you can’t really put a number on it. Properties like time, weight and blood-alcohol content can be quantified, while other properties like squishiness, smelliness and disheveled-ness are not easily converted into numbers.

You have rediscovered one of the first basic truths about the world: all that we know comes from our experience, and the objects of our experience can only be compared to other objects of experience. Some of those comparisons can be numerical, allowing us to say how much more or less of something one object has than another. These cases are the beginning of scientific inquiry: if you can put a number on it, then you can do science with it.

Rulers, stopwatches, compasses, bathroom scales — these are used as reference objects for measuring the `muchness’ of certain properties, namely, length, duration, angle, and weight. Looking in your wallet, you discover that you have exactly 5 dollars of cash, a receipt from a taxi for 30 dollars, and you are exactly 24 years old since yesterday night.

You reflect on the meaning of time. A year means the time it takes the Earth to go around the Sun, or approximately 365 and a quarter days. A day is the time it takes for the Earth to spin once on its axis. You remember your school teacher saying that all units of time are defined in terms of seconds, and one second is defined as 9192631770 oscillations of the light emitted by a Caesium atom. Why exactly 9192631770, you wonder? What if we just said 2 oscillations? A quick calculation shows that this would make you about 110 billion years old according to your new measure of time. Or what about switching to dog years, which are 7 per human year? That would make you 168 dog years old. You wouldn’t feel any different — you would just be having a lot more birthday parties. Given the events of last night, that seems like a bad idea.
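That quick calculation, spelled out as a few lines of Python (just checking the arithmetic in the paragraph above):

```python
# Redefining the second as 2 Caesium oscillations instead of 9192631770
# multiplies every count of elapsed time by 9192631770 / 2.
age_in_years = 24
oscillations_per_new_second = 2
cesium_oscillations_per_second = 9_192_631_770

new_age = age_in_years * cesium_oscillations_per_second / oscillations_per_new_second
print(new_age)   # about 1.1e11 "years" -- roughly 110 billion, as claimed
```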

You are twice as old as your cousin, and that is true in dog years, cat years, or clown years [2]. Similarly, you could measure your height in inches, centimeters, or stacked shot-glasses — but even though you might be 800 rice-crackers tall, you still won’t be able to reach the aspirin in the top shelf of the cupboard. Similarly, counting all your money in cents instead of dollars will make it a bigger number, but won’t actually make you richer. These are all examples of passive transformations of units, where you imagine measuring something using one set of units instead of another. Passive transformations change nothing in reality: they are all in your head. Changing the labels on objects clearly cannot change the physical relationships between them.

Things get interesting when we consider active transformations. If a passive transformation is like saying the length of your coffee table is 100 times larger when measured in cm than when measured in meters, then an active transformation would be if someone actually replaced your coffee table with a table 100 times bigger. Now, obviously you would notice the difference because the table wouldn’t fit in your apartment anymore. But imagine that someone, in addition to replacing the coffee table, also replaced your entire apartment and everything in it with scaled-up models 100 times the size. And imagine that you also grew into a giant 100 times your original size while you were sleeping. Then when you woke up, as a giant inside a giant apartment with a giant coffee table, would you realise anything had changed? And if you made yourself a giant cup of coffee, would it make your giant hangover go away?

Or if you woke up as a giant bug?

We now come to one of the deepest principles of physics, called Bridgman’s Principle of absolute significance of relative magnitude, named for our old friend Percy Bridgman. The Principle says that only relative quantities can enter into the laws of physics. This means that, whatever experiments I do and whatever measurements I perform, I can only obtain information about the relative sizes of quantities: the length of the coffee table relative to my ruler, or the mass of the table relative to the mass of my body, etc. According to this principle, actively changing the absolute values of some quantity by the same proportion for all objects should not affect the outcomes of any experiments we could perform.

To get a feeling for what the principle means, imagine you are a primitive scientist. You notice that fruit hanging from trees tends to bob up and down in the wind, but the heavier fruits seem to bounce more slowly than the lighter fruits (for those readers who are physics students, I’m talking about a mass on a spring here). You decide to discover the law that relates the frequency of bobbing motion to the mass of the fruit. You fill a sack with some pebbles (carefully chosen to all have the same weight) and hang it from a tree branch. You can measure the mass of the sack by counting the number of pebbles in it, but you still need a way to measure the frequency of the bobbing. Nearby you hear the sound of water dripping from a leaf into a pond. You decide to measure the frequency by how many times the sack bobs up and down in between drips of water. Now you are ready to do your experiment.

You measure the bobbing frequency of the sack for many different masses, and record the results by drawing in the dirt with a stick. After analysing your data, you discover that the frequency f (in oscillations per water drop) is related to the mass m (in pebbles) by a simple formula:

f = k / √m          (1)

where k stands for a particular number, say 16.8. But what does this number really mean?

Unbeknownst to you, a clever monkey was watching you from the bushes while you did the experiment. After you retire to your cave to sleep, the monkey comes out to play a trick on you. He carefully replaces each one of your pebbles with a heavier pebble of the same size and appearance, and makes sure that all of the heavier pebbles are the same weight as each other. He takes away the original pebbles and hides them. The next day, you repeat the experiment in exactly the same way, but now you discover that the constant k has changed from yesterday’s value of 16.8 to the new value of 11.2. Does this mean that the law of nature that governs the bobbing of things hanging from the tree has changed overnight? Or should you decide that the law is the same, but that the units that you used to measure frequency and mass have changed?

You decide to apply Bridgman’s Principle. The principle says that if (say) all the masses in the experiment were changed by the same proportion, then the laws of physics would not allow us to see any difference, provided we used the same measuring units. Since you do see a difference, Bridgman’s Principle says that it must be the units (and not the law itself) that have changed. `These must be different pebbles’ you say to yourself, and you mark them by scratching an X onto them. You go out looking for some other pebbles and eventually you find a new set of pebbles which give you the right value of 16.8 when you perform the experiment. `These must be the same kind of pebbles that I used in the original experiment’ you say to yourself, and you scratch an O on them so that you won’t lose them again. Ha! You have outsmarted the monkey.
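In fact, the law itself tells you exactly how much heavier the monkey’s pebbles were. Here is a small Python sketch of that inference, assuming the law has the form f = k/√m from equation (1) and that only the per-pebble mass changed overnight:

```python
# If every pebble is secretly s times heavier, the same *count* of pebbles
# hides s times more real mass, so the fitted constant drops from k to
# k / sqrt(s).  Two measured values of k therefore pin down s.
k_before, k_after = 16.8, 11.2

s = (k_before / k_after) ** 2
print(s)   # about 2.25: each of the monkey's pebbles is 2.25 times heavier
```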

Notice that as long as you use the right value for k — which depends on whether you measure the mass using X or O pebbles — then the abstract equation (1) remains true. In physics language, you are interpreting k as a dimensional constant, having the dimensions of frequency times √mass. This means that if you use different units for measuring frequency or mass, the numerical value of k has to change in order to preserve the law. Notice also that the dimensions of k are chosen so that equation (1) has the same dimensions on each side of the equals sign. This is called a dimensionally homogeneous equation. Bridgman’s Principle can be rephrased as saying that all physical laws must be described by dimensionally homogeneous equations.

Bridgman’s Principle is useful because it allows us to start with a law expressed in particular units, in this case `oscillations per water-drop’ and `O-pebbles’, and then infer that the law holds for any units. Even though the numerical value of k changes when we change units, it remains the same in any fixed choice of units, so it represents a physical constant of nature.

The alternative is to insist that our units are the same as before (the pebbles look identical after all). That means that the change in k implies a change in the law itself, for instance, it implies that the same mass hanging from the tree today will bob up and down more slowly than it did yesterday. In our example, it turns out that Bridgman’s Principle leads us to the correct conclusion: that some tricky monkey must have switched our pebbles. But can the principle ever fail? What if physical laws really do change?

Suppose that after returning to your cave, the tricky monkey decides to have another go at fooling you. He climbs up the tree and whispers into its leaves: `Do you know why that primitive scientist is always hanging things from your branch? She is testing how strong you are! Make your branches as stiff and strong as you can tomorrow, and she will reward you with water from the pond’.

The next day, you perform the experiment a third time — being sure to use your `O-pebbles’ this time — and you discover again that the value of k seems to have changed. It now takes many more pebbles to achieve a given frequency than it did on the first day. Using Bridgman’s Principle, you again decide that something must be wrong with your measuring units. Maybe this time it is the dripping water that is wrong and needs to be adjusted, or maybe you have confidence in the regularity of the water drip and conclude that the `O-pebbles’ have somehow become too light. Perhaps, you conjecture, they were replaced by the tricky monkey again? So you throw them out and go searching for some heavier pebbles. You find some that give you the right value of k=16.8, and conclude that these are the real `O-pebbles’.

The difference is that this time, you were tricked! In fact the pebbles you threw out were the real `O-pebbles’. The change in k came from the background conditions of the experiment, namely the stiffness in the tree branches, which you did not consider as a physical variable. Hence, in a sense, the law that relates bobbing frequency to mass (for this tree) has indeed changed [3].

You thought that the change in the constant k was caused by using the wrong measuring units, but in fact it was due to a change in the physical constant k itself. This is an example of a scenario where a physical constant turns out not to be constant after all. If we simply assume Bridgman’s Principle to be true without carefully checking whether it is justified, then it is harder to discover situations in which the physical constants themselves are changing. So, Bridgman’s Principle can be thought of as the assumption that the values of physical constants (expressed in some fixed units) don’t change over time. If we are sure that the laws of physics are constant, then we can use the Principle to detect changes or inaccuracies in our measuring devices that define the physical units — i.e. we can leverage the laws of physics to improve the accuracy of our measuring devices.

We can’t always trust our measuring units, but the monkey also showed us that we can’t always trust the laws of physics. After all, scientific progress depends on occasionally throwing out old laws and replacing them with more accurate ones. In our example, a new law that includes the tree-branch stiffness as a variable would be the obvious next step.

One of the more artistic aspects of the scientific method is knowing when to trust your measuring devices, and when to trust the laws of physics [4]. Progress is made by `bootstrapping’ from one to the other: first we trust our units and use them to discover a physical law, and then we trust in the physical law and use it to define better units, and so on. It sounds like a circular process, but actually it represents the gradual refinement of knowledge, through increasingly smaller adjustments from different angles. Imagine trying to balance a scale by placing handfuls of sand on each side. At first you just dump about a handful on each side and see which is heavier. Then you add a smaller amount to the lighter side until it becomes heavier. Then you add an even smaller amount to the other side until it becomes heavier, and so on, until the scale is almost perfectly balanced. In a similar way, switching back and forth between physical laws and measurement units actually results in both the laws and measuring instruments becoming more accurate over time.
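The sand-balancing procedure can even be written down as a little algorithm — a toy Python sketch with arbitrary numbers, just to show that alternating ever-smaller corrections really does converge:

```python
# Successive refinement: always add sand to the lighter pan, with each
# handful half the size of the last.  Early overshoots are corrected by
# later, smaller handfuls, so the imbalance shrinks toward zero.
left, right = 0.0, 37.0     # arbitrary starting imbalance
handful = 20.0

for _ in range(40):
    if left < right:
        left += handful
    else:
        right += handful
    handful /= 2            # each correction is smaller than the last

print(abs(left - right))    # a tiny residual imbalance
```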

______________

[1] It is a shame that Craig’s work remains incomplete, because I think physicists could benefit from a re-examination of the principles of dimensional analysis. Simplified dimensional arguments are sometimes invoked in the literature on quantum gravity without due consideration for their meaning.

[2] Clowns have several birthdays a week, but they aren’t allowed to get drunk at them, which kind of defeats the purpose if you ask me.

[3] If you are uncomfortable with treating the branch stiffness as part of the physical law, imagine instead that the strength of gravity actually becomes weaker overnight.

[4] This is related to a deep result in the philosophy of science called the Duhem-Quine Thesis.
Quoth Duhem: `If the predicted phenomenon is not produced, not only is the questioned proposition put into doubt, but also the whole theoretical scaffolding used by the physicist’.

PRINCIPLES AS TOOLS
(Not to be confused with using Principals as tools, which is what happens if your school Principal is a tool because he never taught you the difference between a Principal and a principle. Also not to be confused with a Princey-pal, who is a friend that happens to be a Prince).

`These principles are the boldly generalized results of experiment; but they appear to derive from their very generality a high degree of certainty. In fact, the greater the generality, the more frequent are the opportunities for verifying them, and such verifications, as they multiply, as they take the most varied and most unexpected forms, leave in the end no room for doubt.’ -Poincaré

One of the great things Einstein did, besides doing physics, was trying to explain to people how to do it as well as he did. Ultimately he failed, because so far nobody has managed to do better than him, but he left us with some really interesting insights into how to come up with new physical theories.

One of these ideas is the concept of using `principles’. A principle is a statement about how the world works (or should work), stated in ordinary language. They are not always called principles, but might be called laws, postulates or hypotheses. I am not going to argue about semantics here. Just consider these examples to get a flavour:

The Second Law of Thermodynamics: You can’t build an engine which does useful work and ends up back in its starting position without producing any heat.

The Principle of Relativity: It is impossible to tell by local experiments whether or not your laboratory is moving.

And some not strictly physics ones:

Shirky’s law: Institutions will try to preserve the problem to which they are the solution.

Murphy’s law: If something can go wrong, it will go wrong.

Stigler’s law: No scientific discovery is named after its original discoverer (this law was actually discovered by R.K. Merton, not Stigler).

Parkinson’s law: Work always expands to fill up the time allocated to doing it.
(See Wikipedia’s list of eponymous laws for more).

You’ll notice that principles are characterised by two main things: they ring true, and they are vague. Both of these properties are very important for their use in building theories.

Now I can practically hear the lice falling out as you scratch your head in confusion. “But Jacques! How can vagueness be a useful thing to have in a Principle? Shouldn’t it be made as precise as possible?”

No, doofus. A Principle is like an apple. You know what an apple is right?

Well, you think you do. But if I were to ask you, what colour is an apple, how sweet is an apple, how many worms are in an apple, you would have to admit that you don’t know, because the word “apple” is too vague to answer those questions. It is like asking how long is a piece of string. Nevertheless, when you want to go shopping, it suffices to say “buy me an apple” instead of “buy me a Malus domestica, reflective in the 620-750 nanometer range, ten percent sugar, one percent Cydia pomonella“.

The only way to make a principle more precise is within the context of a precise theory. But then how would I build a new theory, if I am stuck using the language of the old theory? I can make the idea of an apple more precise using the various scientifically verified properties that apples are known to have, but all of that stuff had to come after we already had a basic vague understanding of what an “apple” was, e.g. a kind of round-ish thing on a tree that tastes nice when you eat it.

The vagueness of a principle means that it defines a whole family of possible theories, these being the ones that kind of fit with the principle if you take the right interpretation. On one hand, a principle that is too vague will not help you to make progress, because it will be too easy to make it fit with any future theory; on the other hand, a principle that is not vague enough will leave you stuck for choices and unable to progress.

The next aspect of a good principle is that it “rings true”. In other words, there is something about it that makes you want it to be true. We want our physical theories to be intuitive to our soft, human brains, and these brains of ours have evolved to think about the world in very specific terms. Why do you think physics seems to be all about the locations of objects in space, moving with time? There are infinitely many ways to describe physics, but we choose the ones we do because of the way our physical senses work, the way our bodies interact with the world, and the things we needed to do in order to survive up to this point. What is the principle of least action? It is a river flowing down a mountain. What is Newtonian mechanics? It is animals moving on the plains. We humans need to see the world in a special way in order to understand it, and good principles are what allow us to shoehorn abstract concepts like thermodynamics and gravitational physics into a picture that looks familiar to us, that we can work with.

That’s why a good principle has to ring true — it has to appeal to the limited imaginative abilities of us humans. Maybe if we were different animals, the laws of physics would be understood in very different terms. Like, the Newtonian mechanics of snakes would start with a simple model of objects moving along snake-paths in two dimensions (the ground), and then go from there to arbitrary motions and higher dimensions. So intelligent snakes might have discovered Fourier analysis way before humans would have, just because they would have been more used to thinking in wavy motions instead of linear motions.

So you see, coming up with good principles is really an art form, that requires you to be deeply in touch with your own humanity. Indeed, principle-finding is part of the great art of generating hypotheses. It is a pity that many scientists don’t practice hypothesis generation enough to realise that it is an art (or maybe they don’t practice art enough?) It is also ironic that science tries so hard to eliminate the human element from the theories, when it is so apparent in the final forms of the theories themselves. It is just like an artist who trains so hard to hide her brush strokes, to make the signature of her hand invisible, even though the subject of the painting is her own face.

Ok, now that we know what principles are, how do we find them? One of the best ways is by the age-old method of Induction. How does induction work? It really deserves its own post, but here it is in a nutshell. Let’s say that you are a turkey, and you observe that whenever the farmer makes a whistle, there is some corn in your bowl. So, being a smart turkey, you might decide to elevate this empirical pattern to a general principle, called the Turkey Principle: whenever the farmer whistles, there is corn in your bowl. BOOM, induction!

Now, what is the use of this principle? It helps you to narrow down which theories are good and which are bad. Suppose one day the farmer whistles but you discover there is no corn in the bowl, but rather rice. With your limited turkey imagination, you are able to come up with three hypotheses to explain this. 1. There was corn in the bowl when the farmer whistled, but then somebody came along and replaced it with rice; 2. the Turkey Principle should be amended to the Weak Turkey Principle, which states that when the farmer whistles, food, but not necessarily corn, will be in the bowl; 3. the contents of the bowl are actually independent of the farmer’s whistling, and the apparent link between these phenomena is just a coincidence. Now, with the aid of the Principle, we can see that there is a clear preference for hypothesis 1 over 2, and for 2 over 3, according to the extent that each hypothesis fits with the Turkey Principle.

This example makes it clear that deciding which patterns to upgrade to general principles, and which to regard as anomalies, is again a question of aesthetics and artistry. A more perceptive turkey might observe that the farmer is not a simple mechanistic process, but a complex and mysterious system, and therefore may not be subject to such strong constraints with regards to his whistling and corn-giving behaviour as are implied by the Turkey Principle. Indeed, were the turkey perceptive enough to guess at the farmer’s true motives, he might start checking the tool shed to see if the axe is missing before running to the food bowl every time the farmer whistles. But this turkey would no doubt be working on hypotheses of his own, motivated by principles of his own, such as the Farmer-is-Not-to-be-Trusted Principle (in connection with the observed correlation of turkey disappearances and family dinner parties).

An example more relevant to physics is Einstein’s Equivalence Principle: that no local experiment can determine whether the laboratory is in motion, or is stationary in a gravitational field. The principle is vague, as you can see by the number of variations, interpretations, and Weak and Strong versions that exist in the literature; but undoubtedly it rings true, since it appears to be widely obeyed in all but the most esoteric phenomena, and it gels nicely with the Principle of Relativity. While the Equivalence Principle was instrumental in leading to General Relativity, it is a matter of debate how it should be formulated within the theory, and whether or not it is even true. Much like hammers and saws are needed to make a table, but are not needed after the table is complete, we use principles to make theories and then we set them aside when the theory is complete. The final theory makes predictions perfectly well without needing to refer to the principles that built it, and the principles are too vague to make good predictions on their own. (Sure, with enough fiddling around, you can sit on a hammer and eat food off a saw, but it isn’t really comfortable or easy).

For more intellectual reading on principle theories, see the SEP entry on Einstein’s Philosophy of Science, and Poincaré’s excellent notes.

`When I think of the formal scientific method an image sometimes comes to mind of an enormous juggernaut, a huge bulldozer — slow, tedious; lumbering, laborious, but invincible. […] There’s no fault isolation problem in motorcycle maintenance that can stand up to it. When you’ve hit a really tough one, tried everything, racked your brain and nothing works, and you know that this time Nature has really decided to be difficult, you say, “Okay, Nature, that’s the end of the nice guy,” and you crank up the formal scientific method.’ -Robert Pirsig

The first time I learned the Scientific Method was in high school. I was told to keep a logbook, in which I had to record the hypothesis to be tested, the apparatus used, the method of using said apparatus, the results, the discussion and finally the conclusion. It was incredibly boring. Also, in high school, it was pointless because you already knew what the outcome was going to be.

Unfortunately, this state of affairs remains basically true all the way through undergraduate studies in physics at University. Once again, it is tacitly understood that you are there to gain knowledge – pre-existing knowledge that can be found in a textbook — which was gained through the infallible Scientific Method. So it was quite a shock when I happened to take a History and Philosophy of Science course (purely optional for physics majors) and there learned that the Scientific Method simply did not exist.

Or rather, my mental image of the Scientific Method as a hard-and-fast list of rules, handed down through generations of scientists like the Hippocratic oath or the Ten Commandments, was a complete fiction. Instead, hordes of slavering philosophers clawed at each other, trying to define this mysterious procedure by which humans gained knowledge, that has come to be called `science’. Oddly enough, very few scientists seemed to be troubled by this, being too busy actually doing science to really worry about whether what they were doing was well-defined or not. In fact, the act of doing science comes so naturally to us that we frequently do not think to question how it is that we are able to make successful deductions about the world.

For example, suppose you notice that the rooster crows every morning just after the sun rises. You would probably deduce that the appearance of the sun caused the rooster to crow. However, suppose I told you that I had a big machine, and that there was a particular cog in the machine that would turn just before a bell rang. Since every cause should precede its effect, you could deduce that either the turning of the cog causes the bell to ring, or else there is some other component that is a common cause of both the cog turning and the bell ringing. Beyond that, we can say nothing about their relationship. So why do we not similarly think that there might be a common event that causes the sun to rise, and also makes the rooster crow?

The answer is probably that our brains have evolved to be naturally good at making deductions about the world, taking into account previous experience and the results of our interactions with the world. Our observations of the rooster and the sun take place in a larger context, in which we know quite a lot of stuff about the behavior of roosters and the sun relative to other things, and we have built up a mental model of the world in which the rising sun triggers the rooster’s call. It is this very same mental model-building that we employ when we try to understand the natural world through science. We gather information, and then make deductions, partly using our existing intuitions and knowledge about the world, and partly using pure logic and statistics. The problem is thrown into particularly sharp relief when we try to build artificial intelligences (AIs) that can do science and make deductions about the world. The trouble is that our AIs do not have the benefit of millions of years of evolution built into them like we do, and so we have to tell them how to make sense of the world from scratch. If everything looked like cogs and levers to you, how would you make deductions about cause and effect? [1]

One of the most famous philosophers of science was Karl Popper. Popper argued that a key criterion of science is the fact that its hypotheses are falsifiable. In particular, whatever you might guess about the turning cog and the ringing bell, you should be able to do an experiment where you turn the cog and see whether or not the bell rings, and thereby eliminate one of your hypotheses. Unfortunately, this criterion is not good enough. For example, I can claim that there is a Bogeyman in my closet. This is clearly falsifiable – I just have to look inside my closet to determine whether or not the Bogeyman is present. However, it would not be correct to call this a scientific hypothesis, because there is absolutely no reason to think that there should be a Bogeyman there in the first place.

Thomas Kuhn took a different approach and tried to define science as a sort of social phenomenon with special characteristics. He argued that most science is more like puzzle-solving, where the goal is not to discover new rules by making hypotheses, but rather to resolve well-defined puzzles within an existing framework of rules that everybody agrees upon. In Kuhn’s paradigm, it is widely accepted that Bogeymen do not exist, so there is no Bogeyman puzzle to be solved.

Even physicists have got into the mix. David Deutsch has argued that we should prefer theories that are harder to alter in the face of new information. He points out that, having apparently falsified the Bogeyman theory, one could rescue it by claiming that the Bogeyman was invisible. This too could be falsified, if the poking of a pointy stick into the closet failed to elicit a response from the alleged Bogeyman, but it is clear that the vagueness in definition of the “Bogeyman” would always leave a possible way out for a theorist who did not want to accept the falsification. To avoid this, one should always prefer theories that are less amenable to variation. If I said instead that there was a giant diamond in my closet, then while it seems just as implausible as the Bogeyman Hypothesis, it is much more scientifically valid because a giant diamond has certain incontrovertible properties that cannot be amended in light of falsification (for example, diamonds are visible to the human eye, so if you don’t see it, it just ain’t there).

While there is no clear consensus on what exactly constitutes the scientific method, there are a few things that seem to be true about it. First, it is unlikely that one can characterize science by just a single criterion like Popper’s idea of falsifiability; a short list of characteristics is likely to do much better. Secondly, if you are not trying to fool anybody and you have a genuinely burning urge to discover the truth, and if in addition you are more or less rational and logical in your approach, then you will almost inevitably be following something like the scientific method. And finally, when in doubt, read detective stories. We all understand how Sherlock Holmes catches the bad guys and gets to the bottom of things: he gathers the facts and makes deductions, and whatever is left – “no matter how improbable” – is the truth. This process of information gathering and logical deduction that pervades detective fiction is also at the heart of the scientific method. And if you really want to see it laid out plain, you could hardly find a better reference than Robert Pirsig’s description, in Zen and the Art of Motorcycle Maintenance, of how a mechanic uses the scientific method to fix a motorcycle [2]. Here’s an excerpt:

“The real purpose of scientific method is to make sure Nature hasn’t misled you into thinking you know something you don’t actually know. There’s not a mechanic or scientist or technician alive who hasn’t suffered from that one so much that he’s not instinctively on his guard. That’s the main reason why so much scientific and mechanical information sounds so dull and so cautious. If you get careless or go romanticizing scientific information, giving it a flourish here and there, Nature will soon make a complete fool out of you. It does it often enough anyway even when you don’t give it opportunities. One must be extremely careful and rigidly logical in dealing with Nature: one logical slip and an entire scientific edifice comes tumbling down. One false deduction about the machine and you can get hung up indefinitely.”

[1] Michael Nielsen has a neat introduction to the AI community’s answer to this question.

Ready to have your mind blown in 30 seconds? Check out the classic paradox of Aristotle’s Wheel. (Actually, it is more like Arist-NOT-le’s Wheel because it probably wasn’t his idea.) If you think you understand the resolution of the paradox, try resolving it again for wheels composed of discrete sets of points, representing the atoms in the wheel, for example.
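For anyone who wants the arithmetic behind the paradox laid out, here is a minimal sketch (the radii are chosen arbitrarily). The key point is that when the outer circle rolls one full turn without slipping, the rigidly attached inner circle is dragged the same horizontal distance, which is longer than its own circumference, so it must slip against its track:

```python
import math

# Aristotle's wheel: two rigidly joined concentric circles, radii R > r,
# rolling one full turn on the outer rim without slipping.
R, r = 1.0, 0.5
distance = 2 * math.pi * R           # both circles advance this far
inner_circumference = 2 * math.pi * r

# The inner circle advances farther than its own circumference unrolls,
# so it slips by 2*pi*(R - r) per revolution.
slip = distance - inner_circumference
print(distance, inner_circumference, slip)

# Equivalently: with angular speed w and centre speed v = w*R, the
# inner contact point moves at v - w*r = w*(R - r), nonzero for r < R.
```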

Source: Wolfram MathWorld.

The controlled use of the wheel, much like the controlled use of fire, is one of those developments in human history that is taken as a benchmark in our path to sentience. While both wheels and fire are simple enough that they arise spontaneously in nature (think of boulders rolling down a hill), their mechanisms are subtle enough that it takes a leap of insight to learn how to create them at will and use them as tools. Consider the big stone wheels of the car in “The Flintstones”. They are not so different to rocks that you might find lying around, but how different they look when attached to the frame of a car! Now they no longer seem like simple rocks, but an amazing device that allows us to travel around while expending less energy. Even those of us who do advanced quantum mechanics like to sit back occasionally and just bask in the elegance of basic physics, like rolling wheels.

Fred’s sweet ride.

My personal tribute to the wheel is the following paradox, which came up in discussion while wandering drunkenly with friends through the hills of Hohenruppersdorf. Imagine a perfect wheel in a vacuum, of some radius and width, on a flat surface with some coefficient of friction. The wheel has some initial angular velocity, as might be imparted by applying a horizontal push to the top of the wheel. Suppose that the friction between the wheel and the ground is such that the wheel rolls without slipping.

Source: The Internets

We now make the following observations:

(1) In the rest frame of the flat ground, the point of contact between the rolling wheel and the ground is always stationary. Since we are considering an ideal system, the rolling motion does not dissipate any energy. Therefore, the wheel will continue to roll forever on the flat surface with the same kinetic energy that it started with.

(2) It is obvious that there is some nonzero friction between the wheel and the ground; if the surface were completely frictionless, the wheel would not be able to roll at all without slipping. In fact, it would simply spin on the spot forever.

(3) Friction dissipates energy as heat. Therefore, the wheel should be losing energy as it rolls, causing it to eventually come to a stop. But this contradicts (1), implying a paradox.

Clearly, at least one of the above statements is wrong, under our assumptions. But which one is it? They all sound quite reasonable. Just for fun, I’m going to let you ponder it for at least a day before I post the answer. Enjoy!
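While you ponder, the kinematics behind observation (1) is easy to verify for yourself. Under the rolling-without-slipping constraint v = ωR, the material point of the wheel touching the ground is instantaneously at rest (the radius and angular speed below are arbitrary):

```python
# Sanity check on observation (1) for an ideal rigid wheel.
R = 0.3   # wheel radius in metres (arbitrary)
w = 10.0  # angular speed in rad/s (arbitrary)
v = w * R  # rolling-without-slipping constraint on the centre speed

# Velocity of the contact point = centre velocity forward (v) plus the
# rim's rotational velocity at the bottom, which points backward (w*R).
v_contact = v - w * R
print(v_contact)  # 0.0: the contact point is instantaneously at rest
```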

Edit: The Solution (probably)!

Ok, enough suspense. As elkement pointed out in the comments, any realistic wheel must dissipate some energy and hence eventually stop; indeed, the very assumption of a perfectly flat surface with friction might be suspect, since friction originates in microscopic irregularities and the electromagnetic interactions of particles. However, even in a realistic scenario, a wheel that is skidding along the ground will dissipate vastly more energy than a wheel that is rolling along the ground. This is exactly why a circular wheel is much better than a square wheel. So in a realistic scenario, the question becomes instead: why is the energy dissipated by rolling friction so much less than for sliding friction?

The answer lies in distinguishing two types of friction: static friction and sliding friction. Imagine you have two heavy pillars which have fallen inward and are resting on each other. Taken as an ideal system, these two pillars do not dissipate any energy. Realistically, they will dissipate a tiny amount of energy due to micro-movements of their atoms under the stress of supporting each other, but we can ignore this. The main point is that, even though virtually no energy is dissipated, there must be significant static friction between the base of the pillars and the ground, in order to keep the bottoms of the pillars from sliding outwards and collapsing the structure. If the pillars were on a low-friction surface like ice, then the structure would be unstable: the horizontal force exerted on the ground by the pillars would exceed the maximum static friction force, and the pillars would slide apart, dissipating lots of heat in the process due to the resulting sliding friction. But as long as the static friction is high enough to prevent slipping, there is no sliding friction and hence there is (almost) no dissipation of energy.
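The pillar example can be made quantitative with a toy force balance. By symmetry, each pillar behaves like a uniform rod leaning against a frictionless vertical plane between them, for which the friction required at the base is W/(2 tan θ); the structure stands only while that stays within the static friction budget μ_s·N = μ_s·W. The angles and coefficients below are illustrative assumptions:

```python
import math

def stays_put(theta_deg, mu_s):
    """Return True if a uniform pillar leaning at theta_deg above the
    horizontal, braced against a frictionless symmetry plane, is held
    by static friction at its base (required force <= mu_s * weight)."""
    theta = math.radians(theta_deg)
    needed = 1.0 / (2.0 * math.tan(theta))  # friction force / weight
    return needed <= mu_s

print(stays_put(60.0, 0.5))   # steep pillars on rough ground: stable
print(stays_put(60.0, 0.05))  # same pillars on ice: they slide apart
```

Note that `stays_put` returning True involves no energy dissipation at all: the friction force is present but does no work, which is exactly the point of the pillar analogy.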

Pretty much exactly the same logic applies to the rolling wheel. Here, the suspect assertion is the statement (3). In the ideal case, the static friction that keeps the wheel from slipping does not dissipate energy (and realistically, only very little energy compared to when there is sliding). So we throw out (3). The ideal wheel rolls forever, even though there is constant (static) friction between it and the ground. Prove me wrong!