There isn’t much in this post that wasn’t in my original article. I write this to summarise the important bits.

“Barnes does not challenge my basic conclusions.” Not even close. Re-read.

“Barnes seems to want me to reduce this to maybe 1-5 percent.” Nope. I didn’t say or imply such a figure anywhere in my article. On the contrary, the cosmological constant alone gives a fine-tuning of 1 part in 10^120. The Higgs vev is fine-tuned to 1 part in 10^17. The triple alpha process plausibly puts percent-level constraints on the fine-structure constant. The “famous fine-tuning problem” of inflation (Turok, 2002) is similarly extreme. The fine-tuning implied by entropy is 1 in 10^(10^123) according to Penrose. For more examples, see my article. Or just pull a number out of nowhere and attribute it to me.

“He fails to explain why my simplifications are inadequate for my purposes.” Red herring. My issue is not oversimplification. I do not criticise the level of sophistication of Stenger’s arguments (with one exception – see my discussion of entropy in cosmology below). Stenger’s arguments do not fail for a lack of technical precision. Neither does the technical level of my arguments render them “irrelevant”.

Point of View Invariance (PoVI)

A major claim of my response (Section 4.1) to FoFT is that Stenger equivocates on the terms symmetry and PoVI. They are not synonymous. For example, in Lagrangian dynamics, PoVI is a feature of the entire Lagrangian formalism and holds for any Lagrangian and any (sufficiently smooth) coordinate transformation. A symmetry is a property of a particular Lagrangian, and is associated with a particular (family of) coordinate transformations. All Lagrangians are PoVI, but only certain, special Lagrangians – and thus only certain, special physical systems – are symmetric. Stenger replies:

“PoVI is a necessary principle, but it does not by itself determine all the laws of physics. There are choices of what transformations are considered and any models developed must be tested against the data. However, it is well established, and certainly not my creation, that conservation principles and much more follow from symmetry principles.”

Note how a discussion of PoVI segues into a discussion of symmetry with no attempt to justify treating the two as synonymous, or giving an argument for why one follows from the other.

Of course conservation principles follow from symmetry principles – that’s Noether’s theorem. It’s perfectly true that “if [physicists] are to maintain the notion that there is no special point in space, then they can’t suggest a model that violates momentum conservation”. The issue is not the truth of the conditional, but the necessary truth of the antecedent. Physicists are not free to propose a model which is time-translation invariant and fails to conserve energy [1]. But we are free to propose a model that isn’t time-translation invariant without fear of subjectivity.
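
For the record, the time-translation case runs as follows (a standard textbook derivation, not a quote from either book):

```latex
% Noether's theorem, time-translation case. For L(q, \dot{q}, t),
% define the energy function
E \;=\; \dot{q}\,\frac{\partial L}{\partial \dot{q}} \;-\; L .
% Along solutions of the Euler-Lagrange equation,
\frac{dE}{dt}
  \;=\; \dot{q}\left[\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
        - \frac{\partial L}{\partial q}\right]
        - \frac{\partial L}{\partial t}
  \;=\; -\,\frac{\partial L}{\partial t} .
```

Every Lagrangian can be fed into this machinery; only those with no explicit time dependence yield a conserved E. Conservation is a property of the particular Lagrangian, not of the formalism.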

And we have! Stenger says: “But no physicist is going to propose a model that depends on his location and his point of view.” This is precisely what cosmologists have been doing since 1922. The Lagrangian that best describes the observable universe as a whole is not time-translation invariant. It’s right there in the Robertson-Walker metric: a(t). The predictions of the model depend on the time at which the universe is observed, and thus the universe does not conserve energy. Neither does it wallow in subjectivity.
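
For concreteness, here is the metric in question (standard cosmology, not a gloss on either book):

```latex
% The Robertson-Walker line element:
ds^2 \;=\; -dt^2 \;+\; a(t)^2\left[\frac{dr^2}{1-kr^2} \;+\; r^2\, d\Omega^2\right].
% The scale factor a(t) breaks time-translation invariance. Photons
% redshift accordingly, E_\gamma \propto 1/a(t), and the lost energy
% is not transferred anywhere -- it is simply not conserved.
```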

Watch closely as Stenger gives the whole game away:

“… much of existing, empirically verified physics follows from a principle in which physicists force themselves to construct their models to be independent of the observer’s point of view. If, someday, experiment shows a violation of this principle, then we will have to discard it.”

Fine-tuning compares the set of life-permitting laws with the set of possible laws. Stenger’s argument can only be successful if it shows that symmetries restrict the set of possible laws. He must convince us that symmetry violations are not possible. It is to admit failure, then, to acknowledge that symmetry principles can be overturned by experiment. They are contingent. There are possible universes in which they do not hold, and that is all fine-tuning needs.

Stenger makes the same admission with respect to gauge invariance. “Barnes objects to my association of gauge invariance with PoVI, but gives no reason. Instead, he quotes various authors to the effect that gauge invariance could be wrong. Of course, it could be wrong.” It could be wrong! Gauge invariance is a contingent fact.

(Incidentally, the reasons I don’t give are on pages 14-15. Also, don’t confuse two different senses of “could be wrong”. I am not arguing that gauge invariance does not hold in this universe. There are possible universes that are not gauge invariant. Our universe is probably not one of them. It’s like the difference between “Bob could be a sailor, I just don’t know” and “Bob could have been a sailor, but instead became a plumber”.)

Gravitational Fictions

Our respective philosophies of science are irrelevant. I argue in Section 4.9 that fine-tuning claims can be understood and affirmed by realist, instrumentalist and every philosophy in between. Fine-tuning starts by asking: “what if the universe were different?” If the universe were different, we would (ex hypothesi) make different observations and propose different laws to account for them. Changing the laws and constants of nature on paper can be thought of as a convenient way of specifying which other universe (or set of observations) we are talking about. No commitment on the ontological status of the mathematical form of the laws of nature is required. I don’t have firm views on the philosophy of science. There is nothing in my article that defends Platonic realism.

FoFT claims that if there were no gravity then there would be no universe, and that physicists must put gravity into any model of the universe that contains separate masses. This isn’t a question of interpretation. It’s false. Take general relativity and set G = 0. All its PoVI properties remain, spacetime is Minkowskian, and you can fill your universe with matter to your heart’s content and yet there will be no gravity. It’s not our universe, but it is a possible universe. Further, physicists have proposed many different models for gravity which are observationally distinguishable (in principle) and yet are all PoVI: Newtonian gravity, general relativity, Brans-Dicke gravity, Einstein–Cartan gravity, Lovelock gravity, … (Wikipedia lists over 30). There is nothing to stop us asking the question: “what properties must gravity have in order for a universe to be life-permitting?”
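
To spell out the G = 0 universe (a sketch, using the standard form of the field equations):

```latex
% Einstein's field equations:
G_{\mu\nu} \;=\; \frac{8\pi G}{c^4}\, T_{\mu\nu} .
% Setting G = 0 decouples geometry from matter: G_{\mu\nu} = 0 whatever
% T_{\mu\nu} is, and Minkowski space, g_{\mu\nu} = \eta_{\mu\nu}, is a
% solution however much matter the universe contains. The formalism --
% and its PoVI -- is untouched; only gravity is gone.
```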

Entropy and Cosmology

Stenger says: “Assume our universe starts out at the Planck time as a sphere of Planck dimensions”. That’s precisely the assumption I disputed – see the second last paragraph on page 25. The observable universe wasn’t Planck sized at the Planck time.

Secondly, as I mentioned above, this is the only place where I criticised the sophistication of Stenger’s arguments in FoFT. I gave my reasons in Section 4.3. For example, Bousso (2002) says that “a naive generalisation of the spherical entropy bound is unsuccessful. . . . [T]he idea that the area of surfaces generally bounds the entropy in enclosed spatial volumes has proven wrong. . . . [A] general entropy bound, if found, is no triviality”. Simplifications are fine, so long as we have good reason to believe that they capture the essentials of a more precise calculation. We can have no such confidence in Stenger’s argument. No one knows how to do the precise calculation, as there is no consensus on how to correctly apply the Bekenstein limit to cosmology. Even simple self-gravitating systems present major unsolved problems for statistical physics, problems that become immeasurably more difficult when the system in question is the entire universe in the quantum-gravity regime. These are not issues that can be ignored. Even Stenger’s simplified calculation is flawed: it uses the Hubble sphere instead of a horizon (the particle horizon cannot even be defined at the Planck time), and it assumes homogeneity and isotropy without justification. It’s a failed solution, abandoned by cosmologists decades ago.
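
To give a feel for the numbers at stake — and for why the choice of bounding surface matters — here is a toy calculation of the spherical (Bekenstein–Hawking) entropy bound, S = A/4 in Planck units. The radii are round illustrative values, not a substitute for the precise cosmological calculation, which (as argued above) no one knows how to do.

```python
import math

PLANCK_LENGTH = 1.616e-35  # metres

def spherical_entropy_bound(radius_m: float) -> float:
    """Bekenstein-Hawking bound S = A / (4 l_P^2), in units of k_B,
    for a sphere of the given radius."""
    area = 4.0 * math.pi * radius_m**2
    return area / (4.0 * PLANCK_LENGTH**2)

# A Planck-radius sphere: maximum entropy ~ pi, i.e. a few bits at most.
print(spherical_entropy_bound(PLANCK_LENGTH))      # ~3.14

# A sphere of roughly today's Hubble radius (~1.4e26 m, illustrative):
print(f"{spherical_entropy_bound(1.4e26):.1e}")    # ~2e122
```

The entire argument turns on which surface these 122 orders of magnitude are attached to, which is exactly where the unsolved problems lie.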

Carbon and Oxygen Synthesis In Stars

Read page 41 again. Weinberg’s argument is inconclusive.

Expansion rate of the universe

“Barnes … goes into detail on the problems of inflation, showing that it could be wrong.” That is exactly what I didn’t say. In fact, I went to great lengths to make it clear that I wasn’t saying that. Section 4.4.1 spent three paragraphs asking “did inflation happen?”, concluding that the case is impressive but circumstantial. Section 4.4.2 then spent four pages asking a different question: “Is inflation itself fine-tuned?” The difference between these two questions is very important.

Suppose that I were defending this claim: in the space of all possible arrangements of metal and plastic, the subset of drivable cars is extremely small. Cars are fine-tuned. Ah yes, you reply, but Alan has hypothesised that cars are produced by a mechanism known as a car factory. Is there good evidence that my car was made by a car factory? Yes. Does that fact account for the fine-tuning of my car? No, because the car factory is at least as fine-tuned as my car. In the space of all possible arrangements of metal and plastic, the subset of working car factories is also extremely small.

Explaining one fine-tuned fact using another fine-tuned fact, even if true, doesn’t solve the fine-tuning problem. Stenger seems unable or unwilling to ask whether inflation is fine-tuned. In fact, at no point in his book does he ask whether any of his “solutions” are as fine-tuned as the problems they are supposed to solve. I’ll return to this point later.

Gravity and the Masses of Particles

Stenger: “I then propose a plausible explanation for this low mass, namely, in the standard model the masses are intrinsically zero and their observed masses are the result of small corrections, such as the Higgs mechanism.”

A perfect example of attacking a fine-tuned fact with a fine-tuned explanation. Here is what I said in my article: “It is precisely the smallness of the quantum corrections wherein the fine-tuning lies. If the Planck mass is the “natural” [Foft 175] mass scale in physics, then it sets the scale for all mass terms, corrections or otherwise. Just calling them “small” doesn’t explain anything.” The Higgs quantum corrections must be fine-tuned for the universe to be life-permitting. Once again, Stenger does not ask whether his solution is fine-tuned. It is not enough that the explanation is plausibly true in our universe. It must make a life-permitting outcome more probable to make a difference to fine-tuning [2].
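
For readers unfamiliar with the hierarchy problem, the textbook statement (my sketch, not a quote from either author) is:

```latex
% The physical Higgs mass receives quantum corrections of order the cutoff:
m_{H,\mathrm{phys}}^2 \;=\; m_{H,\mathrm{bare}}^2 \;+\; \delta m^2,
\qquad \delta m^2 \sim \Lambda^2 .
% If the natural cutoff is the Planck scale, \Lambda \sim M_{\mathrm{Pl}}
% \approx 10^{19}\,\mathrm{GeV}, then keeping m_{H,\mathrm{phys}} \sim
% 10^{2}\,\mathrm{GeV} requires the two terms on the right to cancel to
% roughly one part in (M_{\mathrm{Pl}}/m_H)^2 \sim 10^{34}.
```

Calling the corrections “small” names the fine-tuning; it does not explain it.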

Charge Neutrality

Stenger has already admitted that gauge invariance doesn’t hold in all possible universes, and so we are free to consider universes in which it does not hold without fear of contradiction or subjectivity. Further, even if gauge invariance (and thus charge conservation) holds, it doesn’t imply that the net charge of the universe is zero, only that it is constant.
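
The last point is just the continuity equation plus Gauss’s law (a standard one-line argument):

```latex
% Charge conservation follows from \partial_\mu J^\mu = 0:
\frac{dQ}{dt} \;=\; \frac{d}{dt}\int_V J^0\, d^3x
  \;=\; -\oint_{\partial V} \mathbf{J}\cdot d\mathbf{A} \;\to\; 0
  \quad \text{as the surface is taken to infinity.}
% So the total charge Q is constant. Nothing in this argument sets Q = 0;
% a universe with conserved net charge Q \neq 0 satisfies it equally well.
```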

MonkeyGod

No … that’s best left to real cosmologists. I do, however, expect that when a model contains 8 equations, it will not botch 6 of them. I expect it to at least acknowledge the assumptions that it makes, and perhaps even attempt to justify them, especially when in the absence of such justification the model is worthless. I expect it to identify and attempt to correct any biases in its assumptions, such as the obvious selection effect that results from taking a region of parameter space around a known example of the phenomenon in question as being representative of the entire space. I expect it to understand the difference between the range of possible values and the range of values consistent with experiments in our universe.

Further, I expect simplifications to be just that. Einstein (supposedly) said that we must endeavour to make things as simple as possible but no simpler. The whole idea of a simplification is to neglect those features of the scenario that are least significant. Neglecting the mass of the Earth in a model of the solar system is a simplification. Neglecting the mass of the Sun is not.

Of all the constraints that a life-permitting universe must satisfy, Stenger has chosen to neglect many of the most significant. As I said in my article, “There are no cosmological limits, from big bang nucleosynthesis or from galaxy and star formation. The stability of hydrogen to electron capture, the stability of the proton against decay into a neutron, the limit on for stable structures, electron-positron pair instability for large , stellar stability, the triple-alpha process, and the binding and unbinding of the diproton and deuteron are not included … these are amongst the tightest limits in parameter space.” To ignore the most significant factors in a calculation is not simplification. It doesn’t even have the dignity to be an oversimplification. It’s not a toy model. It’s just incorrect.

Derived, Fundamental and Fine-tuned

“Barnes says, “to show (or conjecture) that a parameter is derived rather than fundamental does not mean that it is not fine-tuned.” Right. And the fact that we can’t prove that Bertrand Russell’s teapot is not orbiting the sun between Mars and Jupiter does not mean it is.”

Drivel. Stenger apparently believes that I am defending the principle: “Absence of evidence is evidence of existence”. Needless to say, I am not. The statement of mine quoted above is a mathematical fact. Suppose we have a probability space with probability measure P, and outcomes parameterised by x in a space X. Within this space, we specify a subset of interest L ⊆ X. This subset is small if P(L) ≪ 1. Now suppose that we discover that the parameters x are not fundamental, but derived from a set of more fundamental parameters y in a space Y, via x = f(y). We can form a new probability space in terms of the y, and ask whether the subset of interest – now f^-1(L), the set of y that give an x in L – is still small. My claim is that it is not necessarily true that P(f^-1(L)) is large. In fact, the change of parameterisation will have to dramatically inflate the subset of interest, or severely curtail the space as a whole, for the smallness of L not to imply the smallness of f^-1(L).

If that was too mathematical, let me rephrase the example I gave in Section 4.8.3. Suppose Bob sees Alice throw a dart and hit the bullseye. “Pretty impressive, don’t you think?”, says Alice. “Not at all”, says Bob, “the point-of-impact of the dart is a derived parameter. The more fundamental parameter is the velocity with which the dart left your hand (i.e. throwing speed and direction). Thus no fine-tuning is needed.” This conclusion obviously does not follow. All Bob has done is exchange the fine-tuning of the impact point for the fine-tuning of the initial velocity. This is true even though the initial velocity of the dart (plus Newtonian mechanics) explains the point of impact. Bob cannot claim that “as long as no one can disprove this explanation, I win the argument.”
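
The dart example can be made quantitative with a toy Monte Carlo. All numbers here — launch speeds, target distance, bullseye size — are invented for illustration; the point is only that reparameterising from impact point to launch velocity leaves the bullseye-hitting set just as small.

```python
import math
import random

G = 9.8           # m/s^2
TARGET = 20.0     # distance to the bullseye (m) -- illustrative
BULLSEYE = 0.2    # bullseye half-width (m) -- illustrative

def impact_point(speed: float, angle: float) -> float:
    """Range of a projectile launched at `speed` (m/s) and `angle` (rad):
    the 'derived' parameter, computed from the 'fundamental' ones."""
    return speed**2 * math.sin(2.0 * angle) / G

random.seed(0)
trials = 200_000
hits = sum(
    abs(impact_point(random.uniform(5.0, 25.0),
                     random.uniform(0.0, math.pi / 2)) - TARGET) < BULLSEYE
    for _ in range(trials)
)
frac = hits / trials
# The bullseye-hitting region of *velocity* space is just as small as the
# bullseye itself: exchanging parameters has not removed the fine-tuning.
print(f"fraction of launch velocities that hit the bullseye: {frac:.4f}")
```

Bob has explained the impact point, but the fraction of launch velocities that succeed is tiny; the fine-tuning has simply moved to the more fundamental parameters.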

We see this point again and again in the fine-tuning of the universe for intelligent life. The fine-tuning of the proton and neutron masses implies fine-tuning constraints on the quark masses, which in turn imply constraints on the Higgs vev and Yukawa parameters. If (broken) supersymmetry holds, then the constraints on the Higgs vev imply constraints on the supersymmetry-breaking scale. The fine-tuning of the standard model coupling constants plausibly implies constraints on the parameters of GUTs (Section 4.8.2). The constraints on the initial expansion rate, density and perturbations of the universe imply constraints on the inflaton potential, coupling and initial conditions. The constraints on the cosmological constant still apply to quintessence models. The fact that a parameter may be derived does not mean that its fine-tuning will automatically go away [3]. Derived (Edit: 4/7/2012; this originally read “fundamental”) parameters can be fine-tuned.

Conclusion

Stenger’s two basic conclusions fail. Showing that “plausible explanations, consistent with existing knowledge, can be made for the observed values of [fine-tuned] parameters”, like showing that the impact point of a dart is explained by its initial velocity, doesn’t even address fine-tuning. Stenger has given us no reason to think that the life-permitting region is larger, or the space of possibilities smaller, than has been calculated in the fine-tuning literature.

Stenger’s second conclusion, that “plausible ranges for the other parameters exist that are far from infinitesimal, contrary to what is claimed in the theistic literature” is meaningless. Any non-zero range is larger than an infinitesimal range. No reference to the theistic literature is given, here or in FoFT. At least some of the theist literature accurately represents the scientific literature [4], something that Stenger has failed to do.

Postscript

Large red letters on Stenger’s homepage inform us that “No reputable physicist or cosmologist has disputed this book”. I guess that makes me a disreputable cosmologist. In the meantime, a shortened version of my paper has been accepted for publication by Publications of the Astronomical Society of Australia. The fate of Stenger’s paper ‘A Case Against the Fine-Tuning of the Cosmos’, submitted to the “Journal of Cosmology”, is unknown.

In any case, if you’d rather decide this issue by a show of hands rather than good arguments, then let’s play pick the odd one out of these non-theist scientists.

Wilczek: life appears to depend upon delicate coincidences that we have not been able to explain. The broad outlines of that situation have been apparent for many decades. When less was known, it seemed reasonable to hope that better understanding of symmetry and dynamics would clear things up. Now that hope seems much less reasonable. The happy coincidences between life’s requirements and nature’s choices of parameter values might be just a series of flukes, but one could be forgiven for beginning to suspect that something deeper is at work.

Hawking: “Most of the fundamental constants in our theories appear fine-tuned in the sense that if they were altered by only modest amounts, the universe would be qualitatively different, and in many cases unsuitable for the development of life. … The emergence of the complex structures capable of supporting intelligent observers seems to be very fragile. The laws of nature form a system that is extremely fine-tuned, and very little in physical law can be altered without destroying the possibility of the development of life as we know it.”

Rees: Any universe hospitable to life – what we might call a biophilic universe – has to be ‘adjusted’ in a particular way. The prerequisites for any life of the kind we know about — long-lived stable stars, stable atoms such as carbon, oxygen and silicon, able to combine into complex molecules, etc — are sensitive to the physical laws and to the size, expansion rate and contents of the universe. Indeed, even for the most open-minded science fiction writer, ‘life’ or ‘intelligence’ requires the emergence of some generic complex structures: it can’t exist in a homogeneous universe, not in a universe containing only a few dozen particles. Many recipes would lead to stillborn universes with no atoms, no chemistry, and no planets; or to universes too short-lived or too empty to allow anything to evolve beyond sterile uniformity.

Linde: the existence of an amazingly strong correlation between our own properties and the values of many parameters of our world, such as the masses and charges of electron and proton, the value of the gravitational constant, the amplitude of spontaneous symmetry breaking in the electroweak theory, the value of the vacuum energy, and the dimensionality of our world, is an experimental fact requiring an explanation.

Susskind: The Laws of Physics … are almost always deadly. In a sense the laws of nature are like East Coast weather: tremendously variable, almost always awful, but on rare occasions, perfectly lovely. … [O]ur own universe is an extraordinary place that appears to be fantastically well designed for our own existence. This specialness is not something that we can attribute to lucky accidents, which is far too unlikely. The apparent coincidences cry out for an explanation.

Guth: in the multiverse, life will evolve only in very rare regions where the local laws of physics just happen to have the properties needed for life, giving a simple explanation for why the observed universe appears to have just the right properties for the evolution of life. The incredibly small value of the cosmological constant is a telling example of a feature that seems to be needed for life, but for which an explanation from fundamental physics is painfully lacking.

Smolin: Our universe is much more complex than most universes with the same laws but different values of the parameters of those laws. In particular, it has a complex astrophysics, including galaxies and long lived stars, and a complex chemistry, including carbon chemistry. These necessary conditions for life are present in our universe as a consequence of the complexity which is made possible by the special values of the parameters.

Guess who?: The most commonly cited examples of apparent fine-tuning can be readily explained by the application of a little well-established physics and cosmology. . . . [S]ome form of life would have occurred in most universes that could be described by the same physical models as ours, with parameters whose ranges varied over ranges consistent with those models. … . My case against fine-tuning will not rely on speculations beyond well-established physics nor on the existence of multiple universes.

____________________

Footnotes

1. … and obeys the action principle.

2. Stenger says: “I explicitly attribute the mass differences of the d and u quarks to the electromagnetic force (Fallacy p. 178).” Still not correct, I’m afraid. Walker-Loud et al. explain: “There are two sources of [the proton-neutron mass difference] in the standard model, the masses of the up and down quarks as well as the electromagnetic interactions between quarks.” Gasser and Leutwyler say, “If the electromagnetic interaction is turned on, the quarks start emitting and absorbing photons. … A cloud of virtual photons surrounding a bound state of quarks contributes to the mass of the state.” Miller et al. says “If charge symmetry were exact, the proton and the neutron would have the same mass. … The electrostatic repulsion between quarks should make the proton heavier. But the mass difference between the quarks wins over their electrostatic repulsion”. The EM mass-energy of a proton is attributed to the virtual photon cloud, not the quarks, just as the QCD contribution is attributed to the gluons. The bare quark masses and the EM self-energy are separate contributions to the proton mass. (See also footnote 39 of my paper.) Note that nothing of relevance to fine-tuning hangs on this point.

3. As I noted at the end of Section 4.2.2, these aren’t independent constraints. We must not, for example, simply multiply the probability of a life-permitting proton mass and the probability of a life-permitting up-quark mass. The fine-tuning of a derived parameter is not further fine-tuning, beyond the fine-tuning of the fundamental parameters.


60 Responses

I know this is a basic question, but I always get hung up on this point and perhaps you can explain it. You said, “Fine-tuning compares the set of life-permitting laws with the set of possible laws.” I don’t understand how we determine a sample space for the set of possible laws that’s justified. What I always wonder is if there is some underlying necessity truncating the range, possibly even to just one possibility in some cases.

This could just be my ignorance of the subject, but it seems like a live option based on my limited knowledge.

It’s a good question and probably worth a separate post. The short answer is that a universe is possible if it is free from contradiction. For a universe specified by its laws, constants and initial conditions, all we need to know is that its mathematical formulation is self-consistent.
* For initial conditions, this is easy – the laws of nature, expressed mathematically, define a space of solutions. Each solution is a possible description of a physical system, differing in their initial conditions.
* For fundamental constants, there is a range of values inside of which the logical consistency of the laws is unaffected. For example, masses make sense from zero to the Planck scale. Negative masses make no sense; masses above the Planck scale would need a theory of quantum gravity to handle. For all we know, there might not be any such thing as mass above the Planck scale.
* For the laws themselves, we can check mathematical consistency as we would with the laws that (we hope) describe our universe. It’s not trivial, but there is no problem in principle.

If the “underlying necessity” is not logical necessity, then what? Physical necessity? Fine-tuning is about changing the laws of nature i.e. changing what is physically possible and/or necessary. If there is a metalaw standing above the laws of nature as we know them, then fine-tuning asks what properties that law needs to have in order for a universe described by that law to be life-permitting. A good example is string theory. Even if string theory contains no free parameters, there are a huge number of solutions characterised by hundreds of parameters. The metalaw exchanges constants for initial conditions. We still have a parameter space to explore and a life-permitting range to identify.

It is interesting to find such a comprehensive counter, on his own turf, to Stenger’s “The Fallacy of Fine-Tuning: Why the Universe Is Not Designed for Us”.

However, the physical parameters are but the tip of the iceberg. There is actually a much greater body of evidence to support fine tuning to be found in fields of science far better established than cosmology.

After all, perhaps the earliest proponent of fine-tuning was the biochemist Lawrence Henderson. In “The Fitness of the Environment”, published in 1913, he observed that “the whole evolutionary process, both cosmic and organic, is one, and the biologist may now rightly regard the universe in its very essence as biocentric”.

The most recent part of this evolutionary continuum is that most familiar to us and of which we have the best knowledge: the autonomous evolution of technology within the medium of the collective imagination of our species.

Secondly, the assumption that IF fine tuning is a valid phenomenon THEN it favors theism is flawed.

Because it is predicated on the very common and entirely intuitive belief that it suggests a “designer”.

But it can be very plausibly argued that, except in a very trivial sense, the concept of a “designer” is but an anthropocentric conceit for which there is no empirical basis.

An objective examination of the history of science and technology bears this out.

To quickly put this counter-intuitive view into focus, would you not agree that the following statement has a sound basis?

We would have geometry without Euclid, calculus without Newton or Leibniz, the camera without Johann Zahn, the cathode ray tube without JJ Thomson, relativity (and quantum mechanics) without Einstein, the digital computer without Turing, the Internet without Vinton Cerf.

The list can, of course, be extended indefinitely.

This broad evolutionary model, extending well beyond the field of biology, is outlined, very informally, in “The Goldilocks Effect: What Has Serendipity Ever Done For Us?”, which is a free download in e-book formats from the “Unusual Perspectives” website.

Of course it is possible to fully accept science and spirituality together if you are daft enough. And plenty are.
However, science is based in empiricism which has, as its fundamental tenet the interpretation of nature based on observational/experimental evidence.
Since all spirituality is based on hearsay and mere speculation there is an essential incompatibility between these ways of thought.
But the irrational are more than capable of such syntheses.

I repeat my previous criticism, which you still don’t seem to address.

The claims of physicists aside, we simply have no idea what the set of possible universes is. Calling a universe “possible” doesn’t mean it is, for one thing. All the existing arguments seem to take our current universe and just tweak various parameters that interest physicists, and ooh and aah when they seem to make carbon-based life impossible.

But in this game, you’re never coming up with (for example) a discrete universe like Conway’s “game of life”. So if a “possible universe” means nothing more than something we can dream up, you didn’t consider this one. And since we don’t have a good and universally-accepted definition for what life is, you’re not really measuring which universes support life and which don’t.

An interesting example of this deficiency of your argument is the paper of John Koza, “Spontaneous Emergence of Self-Replicating and Evolutionarily Self-Improving Computer Programs”, which appeared in ALIFE 1994. If your set of possible universes doesn’t include this one, maybe you haven’t examined everything in the set of possible universes.

I think I answered that in the second comment, though I’m still not 100% sure what the objection is.

A “universe” is possible if it is self-consistent. So it’s not just a matter of declaring that a universe is possible. We investigate its mathematical description to see if it is free from contradiction. In most cases, the case for logical consistency is as strong as one could possibly hope for – the equations in question are known to describe our universe, and only differ in the constants we plug into them. The mathematical consistency of the equations is not dependent on the value of these constants (within limits – for example, a particle with mass larger than the Planck mass is probably not possible, since our concept of mass simply doesn’t make sense in such a regime). The same is true for initial conditions – the laws of nature describe a set of solutions, subject to the same laws but with different boundary conditions. (E.g. the set of orbits around the sun consistent with Newtonian gravity). Thus, the laws of nature carry with them a set of physically possible universes. Contrary to your assertion, we have a very good idea of what the set of possible universes is.

As a concrete example, consider the equations that describe the cosmic microwave background (CMB). These equations contain a term describing the density of baryonic (a.k.a. ordinary) matter, Om_b. We can solve the equations for a range of values of Om_b, as shown here: http://ned.ipac.caltech.edu/level5/March10/Garrett/Figures/figure3.jpg
If we want to know which one describes our universe, we have to go and measure the CMB. There isn’t the slightest trace of logical consistency helping us out.

Perhaps you think that a universe could turn out not to be possible for some other reason, other than logical consistency. Well, what? If that other reason is another, deeper physical law, then fine-tuning asks what properties that law needs to have in order for a universe described by that law to be life-permitting. A good example is string theory. Even if string theory contains no free parameters, there are a huge number of solutions characterised by hundreds of parameters. The metalaw exchanges constants for initial conditions. We still have a parameter space to explore and a life-permitting range to identify.

Regarding the game of life and such, I did have a few comments in my paper. (You didn’t read it, did you? I can’t blame you … I did ramble on.) Here’s page 18: “John Conway’s marvellous `Game of Life’ uses very simple rules, but allows some very complex and fascinating patterns. In fact, one can build a universal Turing machine. Yet the simplicity of these rules didn’t come for free. Conway had to search for it (Guy, 2008, pg. 37): “His discovery of the Game of Life was effected only after the rejection of many patterns, triangular and hexagonal lattices as well as square ones, and of many other laws of birth and death, including the introduction of two and even three sexes. Acres of squared paper were covered, and he and his admiring entourage of graduate students shuffled poker chips, foreign coins, cowrie shells, Go stones, or whatever came to hand, until there was a viable balance between life and death.” It seems plausible that, even in the space of cellular automata, the set of laws that permit the emergence and persistence of complexity is a very small subset of all possible laws. Note that the question is not whether Conway’s Life is unique in having interesting properties. The point is that, however many ways there are of being interesting, there are vastly many more ways of being trivially simple or utterly chaotic.” Actually, I’d quite like to hear your views on this.
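
For what it’s worth, the structure Conway was searching for is easy to exhibit. A minimal sketch of a Life-like cellular automaton, with the rule left as a parameter so you can see how special Conway’s B3/S23 choice is (most other choices kill or explode every pattern):

```python
from itertools import product

def step(live, born=frozenset({3}), survive=frozenset({2, 3})):
    """One generation of a Life-like cellular automaton on an unbounded
    grid. `live` is a set of (row, col) cells; defaults give Conway's
    B3/S23 rule."""
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                key = (r + dr, c + dc)
                counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items()
            if (n in survive and cell in live)
            or (n in born and cell not in live)}

# The glider: under B3/S23 it reproduces itself shifted by (1, 1)
# every 4 generations -- persistent, propagating structure.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(r + 1, c + 1) for (r, c) in glider})  # True
```

Try `step(glider, born=frozenset({2}))` or other one-character rule changes and the pattern typically dies or blows up into chaos within a few generations, which is exactly the “viable balance between life and death” Conway had to hunt for.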

The fact that we’ve only considered universes like ours supports the case. We know that our universe is life-permitting, so to search parameter space in the vicinity of our universe is to bias our search *in favour* of regions of parameter space that contain life. That the life-permitting set is very small in our neighbourhood is good reason to think that the trend will not magically reverse itself just beyond the boundary of our knowledge. (I said that on page 7).

“We don’t have a good and universally-accepted definition for what life is.” True but irrelevant. Consider a universe whose matter density was smaller by 1 part in 10^15 when the universe was one second old. Such a universe expands too fast for structure to form. The universe contains only a diffuse gas of hydrogen and helium (which together have one chemical reaction: H + H -> H_2), diluting as the universe expands. 13 billion years later, each atom will only encounter another atom every trillion years, and even then the two will just bounce off each other and head back out for trillions more years of isolated non-interaction. It’s a very simple universe, and we can reasonably conclude that nothing that happens in that universe deserves to be called “life”. In short, these cases are usually so clear cut that the finer points of the definition of life are irrelevant.

“Jerry Coyne has said that the notion of the laws of physics being fine-tuned by a deity is anti-scientific. Is this true?”

Of course it is true, since there is no empirical basis for the assumption of the existence of any kind of deity.

The argument that “fine tuning” implies a designer is flawed because the reality is that except in the most trivial everyday sense there ARE no designers.

The phenomena we observe can be described as an evolutionary process without recourse to the anthropocentric notions that derive from our genetic and cultural legacy.

We do, of course, have discoverers, those who happen to be the right types, in the right place at the right time, who pick the low-hanging fruit.

But objectively, we have to interpret science and technology as evolving autonomously within the collective imagination of our species. A stochastic process given directionality by the dynamically changing prevailing conditions.

If you happen to find this notion hard to swallow, as most at present do, then perhaps considerations of the following kind will help bring it into focus:

Do you honestly believe that without Faraday we would have no electric motors or transformers, no mathematical understanding of the electromagnetic field without Maxwell, no steam engines without Stephenson, without Marie Curie we would know nothing of radium, we would have no radio without Marconi?

Or that without Steve Jobs we would not have computers with GUIs and pointing devices and other gross features not too far removed from the Apple Mac?

The broad evolutionary model which accommodates such considerations is very informally presented in “The Goldilocks Effect: What Has Serendipity Ever Done For Us?” (free download in e-book formats from my “Unusual Perspectives” website)

I would say the argument is still valid. More pointedly, “sodium perxenate” or “silicone rubbers” could be similarly substituted. After all, “intelligent” life, by which I presume you imply a species such as ours, is merely one small cog in a great evolutionary machine.

cf “The Goldilocks Effect: What Has Serendipity Ever Done For Us?”, a free download in e-book formats from the “Unusual Perspectives” website.

There is no equivalent of the anthropic principle for cyanide. It is not true that “if observers observe anything, they observe conditions that permit the existence of cyanide.” A multiverse, for example, would not explain why an observer sees a universe that contains cyanide, since there is no necessary connection between “observer” and “cyanide”.

This means that, while universes that contain cyanide are rare in the space of possible universes, we do not glimpse any non-ad-hoc hypothesis that would make such a universe more likely to be effected.

Yes, that makes a lot of sense. One cannot make an anthropic argument for cyanide; and the multiverse may explain why observers find themselves in a universe friendly to life, whereas the same argument can’t be made for cyanide.
But none of that affects the prior probability of a universe with cyanide, and this is what concerns me. I guess what I’m thinking is that the problem of fine-tuning should apply equally to cyanide as to life, but a potential solution such as the multiverse does not apply equally. Would you agree with that?

“The fact that we’ve only considered universes like ours supports the case. We know that our universe is life-permitting, so to search parameter space in the vicinity of our universe is to bias our search *in favour* of regions of parameter space that contain life. That the life-permitting set is very small in our neighbourhood is good reason to think that the trend will not magically reverse itself just beyond the boundary of our knowledge.”
It seems very much that you admit here that fine-tuning claims are based on setting the range of possible universes in such a way that the conclusion of tuning is guaranteed.
Claiming that the rarity of life in parameter space close to our universe’s shows the same must be true for parameter spaces completely different is a non sequitur. You do not have enough information to make such a claim; it is nothing but guesswork.

How do things work out in universes with a number of classes of force carrying particles of anything between 0 and 10^100? Or with anything from 2 to 10^100 spatial dimensions? Etc. Anything goes as long as we do not know the underlying mechanisms that set the parameters for universes. If anything goes it is impossible for us to claim extremely detailed knowledge about every possible way things could be.
Fine tuning claims require knowledge vastly superior to what cosmology can offer today and into the foreseeable future.
With that I have to conclude that fine-tuning claims of knowledge are arguments from ignorance. Fine-tuning is a hypothesis, it is not knowledge. Just like the multiverses or Gods are hypotheses, not knowledge as long as no further empirical advancements come about.

@cognosium
You can only make that claim if you have exhaustive knowledge of what universes are possible, without any omission of possibilities (including those that cannot be deduced from this universe), and can deductively show the stability and complexity calculations for all of them and give an exact percentage of universes that meet the criteria. As all the supercomputers on the planet combined would not be able to finish such calculations before the heat death of the universe, I seriously doubt Barnes has actually made these calculations.
Therefore he is not justified in making knowledge claims. He can only hypothesize.

I think that it is speculative.
The level of perfection visualization will achieve cannot be inferred from projecting results of the past into the future. There is no guarantee that the level of improvement can be sustained. That is my central problem with many arguments. You need very solid information to extend your knowledge of the domain you have a level of X certainty in to a totally different domain where the level of certainty is unknown. The tuning argument does just that. It assumes that the unlikeliness of life in the universes similar to ours can be extrapolated to universes very unlike ours. That is completely unjustified. It is wishful thinking.
Unfortunately Barnes fails to defend his argument against this criticism which seems suspicious to me. Does he have any valid rebuttal to this criticism?

This question is not directly relevant to the Stenger argument, but I couldn’t find a better place to post it. I was listening to the interview with Luke on the Pale Blue Dot podcast. It’s a great, funny and informative chat, but it seemed to me that there’s a misunderstanding of the implications of the evolutionary universe hypothesis.

Barnes seems to take this argument to imply that the conditions necessary for life are the same as the conditions most conducive to black-hole formation. He rightly argues that this would seem to be a remarkable coincidence. Why should fine-tuning for black holes be connected to fine-tuning for life? But this is not what the argument implies. It does imply that universes that have more black holes will “reproduce” more rapidly, so there will be more of them, but this doesn’t mean that there won’t also be less-black-hole-productive universes. Remember that there is no selection pressure: “unproductive” universes are not killed off in some kind of competition for resources, they continue to exist. So it’s perfectly possible that we are in one of the many less-productive universes, which is not fine-tuned for black holes, but which happens to be fine-tuned for life. (Or rather, gives the appearance of having been fine-tuned for life.)

Another way of putting this is that we don’t need to be in the “most evolved” universe – we may well be in one of the countless dud universes that popped up along the way.

Good point and well made. Let me start by admitting that I haven’t read Smolin’s book – I’ve only heard a lecture and read a short article. I’m a tad confused, since what I’d do with Smolin’s proposal is try to find the physical parameters which strike the optimal balance between making black holes and making life. Suppose there were two types of universes, A and B. A makes 10 new universes per second (via black holes), and contains 10 observers. B makes 1000 new universes per second, and contains one observer. Then, the typical observer would expect to be in a type B universe. This seems a pretty straightforward anthropic conclusion.
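To make the arithmetic concrete, here is a toy calculation of my own (the reproduction rates, observer counts, and the assumption that offspring inherit their parent’s type are all illustrative, not anything Smolin specifies):

```python
def observer_fractions(steps):
    """Fraction of observers in type-A vs type-B universes after
    `steps` rounds of reproduction, in the toy model above."""
    n_a, n_b = 1, 1          # one universe of each type to start
    for _ in range(steps):
        n_a *= 10            # each A universe spawns 10 offspring per step
        n_b *= 1000          # each B universe spawns 1000 offspring per step
    obs_a, obs_b = 10 * n_a, 1 * n_b   # A hosts 10 observers, B hosts 1
    total = obs_a + obs_b
    return obs_a / total, obs_b / total

for steps in (0, 1, 3):
    print(steps, observer_fractions(steps))
```

After a single round, the B universes already host roughly 91% of all observers, and the fraction rapidly approaches 1: the typical observer finds themselves in a type-B universe, as the anthropic reasoning above suggests.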

However, Smolin doesn’t seem to take this route, since he rejects the anthropic principle as unscientific (http://adsabs.harvard.edu/abs/2004hep.th….7213S). So I’m not sure how he calculates what a typical observer would expect to observe. My (perhaps naive) expectation is that most observers will be lone observers clinging to existence in a universe packed with black holes. Our universe has black holes, but it seems obvious to me that we could have more black holes without affecting life very much.

I put the point a bit too crudely in the interview. I really should get around to reading Smolin’s book.

Thanks for your reply. I also haven’t read Smolin’s book – I should have made that clear – having only studied this material through secondary sources during a Philosophy degree. So I might be misrepresenting his argument or misunderstanding something.

But you say that what you’d do with his proposal is “to try to find the physical parameters which strike the optimal balance between making black holes and making life”. I don’t understand why this is necessary. It seems to me that his idea implies such an abundance of universes, of all stripes, that our universe becomes a likely occurrence. There will also be universes that balance the requirements for life and black holes more optimally, but it isn’t necessary that our universe is one of those.

You ask, given Smolin’s multiverse, how we can calculate what a typical observer would be likely to observe. I take the question to be, given the existence of the multiverse, what is the most likely position for any observer to be in? It may be that the most likely scenario is what you describe: that he/she/it is a single observer seeing a universe chock-a-block with black holes. But again, I don’t see why this is apropos. The relevant question is not, what kind of universe is an observer likely to see; it is, how likely is the existence of our universe? Given the existence of a Smolin multiverse, how likely is it that our particular universe, with x number of black holes and y number of living beings, will arise? And there the answer is, pretty likely.

I have greatly appreciated both your arXiv paper on fine tuning, and this post. I reviewed Stenger on fine tuning for “Science and Christian Belief” and came to the same conclusion about it as you; but my background knowledge is far less detailed than yours, so it is very encouraging to have my own conclusions so clearly vindicated! Thanks! Paul Wraight.

Why don’t you review that?
If that paper cannot be proven wrong, the entire cosmological constant fine-tuning claim, one of the pillars of the tuning argument, is no longer a scientific claim but only wishful thinking.

1. Perhaps you’d like to begin by explaining how one can start with the Wheeler–DeWitt equation (WDWE) and derive the fact that “a wave function in a gravitational theory with a negative cosmological constant can predict an ensemble of asymptotically classical histories which expand with a positive effective cosmological constant.” What assumptions are you making? What mathematical techniques are you using to solve the equation? Or did you just pull a paper at random off arXiv and announce that it had solved “arguably the most severe theoretical problem in high-energy physics today, as measured by both the difference between observations and theoretical predictions, and by the lack of convincing theoretical ideas which address it” (http://www.amazon.com/Standard-Model-Primer-Cliff-Burgess/dp/0521860369)?

2. It’s a tad strange that the paper itself doesn’t claim to have solved the cosmological constant problem. Have you seen something that Hartle, Hawking and Hertog missed? The paper claims to have shown that universes with negative cosmological constant could nevertheless behave as if they have a positive cosmological constant: “The asymptotic expansions suffice to show that quantum states obeying the WDWE in a negative Λ theory imply an ensemble of classical histories which expand, driven by an ‘effective’ positive cosmological constant equal to −Λ.” This makes almost no difference to fine-tuning. If we consider Λ between plus/minus the Planck scale, then the life-permitting range is roughly between −10^−120 and 10^−120. The range is roughly symmetric around zero, so being able to swap −Λ for Λ makes almost no difference.

3. In my review of the fine-tuning of the cosmological constant: http://arxiv.org/pdf/1112.4647v2.pdf, section 4.6, you’ll find the following quotes:
* “There is not a single natural solution to the cosmological constant problem. … [With the discovery that Λ > 0] The cosmological constant problem became suddenly harder, as one could no longer hope for a deep symmetry setting it to zero.” (Arkani-Hamed et al., 2005)
* “Throughout the years many people . . . have tried to explain why the cosmological constant is small or zero. The overwhelming consensus is that these attempts have not been successful.” (Susskind, 2005, pg. 357)
* “No concrete, viable theory predicting Λ = 0 was known by 1998 [when the acceleration of the universe was discovered] and none has been found since.” (Bousso, 2008)
* “There is no known symmetry to explain why the cosmological constant is either zero or of order the observed dark energy.” (Hall & Nomura, 2008)
* “As of now, the only viable resolution of [the cosmological constant problem] is provided by the anthropic approach.” (Vilenkin, 2010)

Perhaps you’d like to contact all of these scientists, and the peer-reviewed journals in which these quotes appeared (Susskind excepted), and explain how their views are wishful thinking. You seem to be labouring under the assumption that the fine-tuning of the universe for intelligent life is an invention of theologians, a God-botherer conspiracy, not taken seriously by real scientists. Think again.

I’m not nearly as smart and informed in all of this as you, but I did really appreciate your rebuttal of Stenger’s book. I’m not a theist or agnostic, but while reading his book, his claims just didn’t sound honest in debunking what seems to me a pretty clear question of fine-tuning, regardless of its cause. The “gravity is a fiction” bit really rubbed me wrong. Then, to read his response to you and see that part of his claim was that he wasn’t trying to be hyper-technical and (I thought he was making the case) writing more for the non-scientist curious/layperson just made me scoff. I read his book and was lost for good portions of it, hanging on to the bare outline of what he was trying to say. But for the layperson it was not, which was a big disappointment.

[…] those biased in favour of a position believe its claims. I’ve come across this in the context of the fine-tuning of the universe for intelligent life. This subject is so popular with theists that I often encounter the claim that it is the product of […]

It’s actually not as easy as I’d expected. My most promising idea so far is “blockbuster movie starring Christoph Waltz as Victor Stenger and Scarlett Johansson as the 7.65 MeV carbon resonance”, but I am the first to admit that certain details still remain to be filled in.

Ok, you haven’t really re-started the discussion, but now that you’ve sort-of come back to it, I’d still be very keen to hear your reply to my follow-up question of July 14, 2012.

To summarise, I think you’ve misunderstood the implications of the evolutionary universe hypothesis. Properly understood, that hypothesis does not require that a maximally black-hole producing universe also be fine-tuned for life, as you suggest. Rather, many fine-tuned universes will inevitably arise as by-products of the evolution of universes. Without any selection pressure to kill them off, there’s no reason to suppose that fine-tuning and black hole production need to be maximised in any individual universe.

I recently gave an intro to Bayes theorem and probability theory: https://letterstonature.wordpress.com/2013/10/26/10-nice-things-about-bayes-theorem/ . Given any theory about the world, a relevant question to ask is: If all I knew was that this theory was true (and some generic background information), how probable is it that I would observe what I actually observe? This is known in the business as the likelihood, i.e. the probability of the data given the theory. For example, if I knew that Bob was cheating at poker, how likely is it that he would have been dealt 4 aces?
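The poker example can be computed directly. The hand probability is real combinatorics; the cheating probability of 0.5 is a number I’ve made up purely to illustrate a likelihood comparison:

```python
from math import comb

# Probability of being dealt 4 aces in a 5-card hand under a fair deal:
# choose all 4 aces and any 1 of the remaining 48 cards, out of C(52, 5) hands.
p_fair = comb(4, 4) * comb(48, 1) / comb(52, 5)
print(p_fair)  # about 1.85e-5

# Hypothetical: suppose a cheating Bob secures 4 aces half the time.
p_cheat = 0.5

# The likelihood ratio says how strongly this hand favours "cheating" over "fair".
print(p_cheat / p_fair)
```

The same structure carries over to the multiverse case: the data (our observations) are weighed by how probable each competing hypothesis makes them.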

The important thing is this: the predictions of a theory must be taken seriously. One mustn’t simply focus on the successful predictions.

When it comes to the multiverse in general (and to Smolin’s multiverse in particular), we must ask: if I knew that I was in this particular multiverse, how probable is it that I would observe what I actually observe? (I can show this from Bayes theorem: see my talk here http://youtu.be/oiPMqWi-Myc?t=30m31s)

What this amounts to is asking: of all the observers in the multiverse, what fraction observe a universe like this one? To the extent that I observe features of this universe that are unlikely to be observed by a typical observer in the multiverse, then that counts against the multiverse hypothesis. It reduces its likelihood.

Thus, if Smolin’s multiverse predicts that the vast majority of observers will observe a universe which maximises black hole production, then the fact that we don’t observe such a universe is a strike against the theory. This is true, even if there will be many universes in which black hole production is not maximized. We must take the predictions of the model seriously.

Thank you for your reply. I’m somewhat familiar with Bayes’ theorem, coming from a philosophy background, but I appreciate the thorough answer.

But isn’t this reasoning problematic, in the light of Carter’s weak anthropic principle? You’ve summarised it elsewhere much better than I could, but here goes anyway:

So, you ask, of all the observers in the multiverse, what fraction observe a universe like our particular one? Of course, an observer of the type we are, a natural being not outside of the universe, can only ever be situated in a live universe. In other words, for any natural observer the probability of observing a dead universe is 0. The probability of *failing* to observe a dead universe is 1. How then, can the failure to observe a dead universe count as evidence for anything?

You’ve probably written some kind of reply to this before, but I couldn’t find anything in my brief search.

Apologies for a long post on a defunct thread. But five years later, another poster’s response to your post has brought me back to this discussion. And I confess that I still find fault in this key step in your argument: “…if Smolin’s multiverse predicts that the vast majority of observers will observe a universe which maximises black hole production…” I still think that this overlooks an important implication of Smolin’s argument (even if it isn’t in his original formulation).

Imagine that the multiverse is a branching tree, with one initial universe giving rise to others, which in turn give rise to yet more. The universes with the most black holes are analogous to the nodes of the tree with the most branches emerging from them. Then, it follows that the densest parts of the tree – the areas that are the most densely populated with branches – can be traced back to the most productive nodes. Similarly, the areas of the multiverse tree with the most universes can be traced back to “node universes” that have the most black holes.

So most observers of such a multiverse would be in that most dense area of the tree, just because that’s where most universes are – but, importantly, they wouldn’t necessarily be in the “node universes” themselves.

The premise I quoted assumes that all the universes that emerge from those node universes will only get more and more fine-tuned for black holes. It assumes that all the universes which branch off from the node universe will inherit that universe’s laws, and in particular that they will inherit the way that it maximises black hole production. But in real biological evolution there is also the possibility of harmful mutation – not every genetic change is beneficial. In fact the vast majority are not. And in Smolin’s evolution, in which there is no “pruning” of the tree through extinction, the assumption is even more wrong. The universes which branch off the node universe are likely to be *less* productive of black holes than the node universe, as well as varying in other ways.

As we get further from the productive node, in the absence of selection pressure we should expect to find that universes get less and less black-hole-productive, and more and more varied. (It’s not so much a tree as a thing which sprawls chaotically in every direction – like a rhizome.) Now, assume that we have to get a long way from the most black-hole-productive node universes in order to find a universe which is hospitable to life. That seems a fair assumption – after all, the entire premise of this discussion is that life is only possible in a universe with a particular, narrow range of physical constants. Smolin’s account is not meant to abandon that observation.

So, contra your assertion, we should rather say that Smolin’s theory predicts that observers will inhabit a universe that has laws within the range that supports life, like ours; and that it would not necessarily be a universe which is maximally productive of black holes but would be the descendant of such a productive universe. The latter is impossible to observe, of course, but the fact that we observe the former is no strike against Smolin’s theory.

What Smolin’s picture does for us is explain how fine-tuned universes can emerge from a physical process which is being driven by an entirely *independent* process of “evolution”. (A bad word, really, given the aforementioned lack of selective pressure. In Smolin’s multiverse, everything reproduces, but black-hole-rich universes reproduce more than poor ones.)
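Purely as an illustration of the branching picture described above, here is a toy simulation of my own devising. Every number in it is arbitrary: universes spawn offspring in proportion to a black-hole “productivity” b, offspring mutate b by ±1, nothing is ever pruned, and a made-up narrow band of b values is declared life-permitting:

```python
import random

random.seed(0)  # reproducible toy run

def grow(generations=4, b0=5, b_max=8):
    """Branching multiverse: each universe with productivity b spawns b
    offspring per generation; each offspring mutates b by -1, 0 or +1.
    Old universes persist -- there is no selection pressure."""
    universes = [b0]
    for _ in range(generations):
        children = []
        for b in universes:
            for _ in range(b):
                children.append(max(0, min(b_max, b + random.choice([-1, 0, 1]))))
        universes = universes + children
    return universes

pop = grow()
life_band = {2, 3}  # hypothetical life-permitting productivities
print("universes:", len(pop))
print("life-permitting:", sum(1 for b in pop if b in life_band))
print("maximum productivity present:", max(pop))
```

The counts it prints depend entirely on the arbitrary choices above; the point is only the structure of the argument: reproduction favours high-productivity lineages, but unpruned, mutated descendants accumulate away from the most productive nodes as well.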

Perhaps you’ve addressed something like this issue in the time that’s passed – I’d be very keen to read it if so.

I haven’t seen it in a journal, but I think a version of it appears in “Debating Christian Theism”, edited by J.P. Moreland. Robin Collins responds there.

It was Geraint who recommended my article to be published in PASA. I think he would stand by this part of his review: “in places some elements of the fine-tuning argument were brushed aside with little real justification.” I might ask him later today…

[…] and I can only think of a handful that oppose this conclusion, and piles and piles that support it. Here are some quotes from non-theist scientists. For example, Andrei Linde says: “The existence […]

[…] most eminent cosmologists and theoretical physicists who support his conclusions on fine-tuning. In In Defence of The Fine-Tuning of the Universe for Intelligent Life, Barnes provides quotes from 7 eminent cosmologists (including Rees, Susskind and Smolin quoted […]

[…] Originally Posted by davidbilby Jimfit's "answer" here verbatim stolen from Q: If quantum mechanics says everything is random, then how can it also be the most accurate theory ever? | Ask a Mathematician / Ask a Physicist What a funny kind of Christian you are JimFit, one who believes wholeheartedly in presenting other people's words as if a) you understand them b) you actually wrote them yourself. What does your Bible have to say on that subject, for example, Exodus 20 verse 15, Jeremiah 23 verse 30? What a funny kind of debate, one person (richardparker) honestly bringing up points and another (you) simply googling and copy-pasting the most apparently relevant text, but lacking the knowledge, you can't even judge relevance accurately… The distinction here is in the definition of "random", by the way, which your answer fails to grasp because it was given by someone else to somebody else's (different) question… After your failure to show me that the cosmological constants is not a constant you now reply with cheap responses. First reply to this then we can talk. In Defence of The Fine-Tuning of the Universe for Intelligent Life | Letters to Nature […]

Luke, I would like to know what your answer is to this quote: “The rate of expansion of the universe would automatically become very close to the critical rate determined by the energy density of the universe. This could then explain why the rate of expansion is still so close to the critical rate, without having to assume that the initial rate of expansion of the universe was very carefully chosen.”

OK.
1. Stenger quotes that passage from Hawking, and then says “Let me show how that comes about.” He then says absolutely nothing about cosmic inflation, which is what Hawking was talking about. Baffling.

2. Inflation explains why our universe is close to critical. Good. Now ask: is inflation itself fine-tuned? Yes! I discuss this in sections 4.4 and 4.5 of my paper (http://arxiv.org/abs/1112.4647). I say: “we do not have a physical model [of inflation]”, and even if we had such a model, “although inflationary models may alleviate the “fine tuning” in the choice of initial conditions, the models themselves create new “fine tuning” issues with regard to the properties of the scalar field” (Hollands & Wald, 2002b).

I am pleased to find this discussion is still active. (You will find my previous comments above.)

However, unfortunately, both Stenger and Luke Barnes hide behind the language of mathematics and the associated Platonist tendencies which this very limited viewpoint often entails.

For the benefit of those capable of broader and more realistic empirical analysis a discussion is presented in my latest work “The Intricacy Generator: Pushing Chemistry and Geometry Uphill”. This is now in print and should be available from Amazon and other usual outlets next week

Stenger, in his book, denies being a Platonist. In his reply to my critique, he calls me a Platonic realist. Above, I deny the charge (I don’t have firm views on the matter) and note that fine-tuning is independent of such issues. So you’ve got an accusation to uphold – neither of us claims to be a Platonist, and I don’t think it would matter if I was.

The fact that our models contain quantifiers that are “fine tuned” is not sufficient for me to believe that the Universe necessarily is. We are now well aware, thanks to several decades of progress in cognitive science, that we make unconscious ontological commitments to metaphors because that is simply the way we humans think. But I try my best to make myself aware of these presupposed metaphors whilst thinking about things, including the World Is a Machine metaphor and the God Is (Was) a Mathematician metaphor. A Platonist in these regards is someone who actually believes the world is fine-tuned (in which case the world and our models are isomorphic), and who may further believe that mathematics inheres in rather than merely describes the world. Wringing one’s hands (or one’s brain) over both of these problems, as if metaphors didn’t exist, is an instance of naive realism in science, aka Platonism.

[…] for life. “It isn’t,” declares Krauss dogmatically, without even hinting at the fact that the vast majority of cosmologists (see Barnes’ Postscript) would disagree with him. He goes on to say that life is fine-tuned for […]

[…] science upon which Craig wants to make his case is sound, in my opinion. And the opinion of many other scientists, believing and non-believing alike. The pressing question is: what, if anything, should we conclude […]