Christmas Tripe – A Fine-Tuned Critique of Richard Carrier (Part 3)

I thought I was done with Richard Carrier’s views on the fine-tuning of the universe for intelligent life (Part 1, Part 2). And then someone pointed me to this. It comes in response to an article by William Lane Craig. I’ve critiqued Craig’s views on fine-tuning here and here. The quotes below are from Carrier unless otherwise noted.

[H]e claims “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range,” but that claim has been refuted–by scientists–again and again. We actually do not know that there is only a narrow life-permitting range of possible configurations of the universe. As has been pointed out to Craig by several theoretical physicists (from Krauss to Stenger), he can only get his “narrow range” by varying one single constant and holding all the others fixed, which is simply not how a universe would be randomly selected. When you allow all the constants to vary freely, the number of configurations that are life permitting actually ends up respectably high (between 1 in 8 and 1 in 4: see Victor Stenger’s The Fallacy of Fine-Tuning).

I’ve said an awful lot in response to that paragraph, so let’s just run through the highlights.

“Refuted by scientists again and again”. What, in the peer-reviewed scientific literature? I’ve published a review of the scientific literature, 200+ papers, and I can only think of a handful that oppose this conclusion, and piles and piles that support it. Here are some quotes from non-theist scientists. For example, Andrei Linde says: “The existence of an amazingly strong correlation between our own properties and the values of many parameters of our world, such as the masses and charges of electron and proton, the value of the gravitational constant, the amplitude of spontaneous symmetry breaking in the electroweak theory, the value of the vacuum energy, and the dimensionality of our world, is an experimental fact requiring an explanation.” [emphasis added.]

“By several theoretical physicists (from Krauss to Stenger)”. I’ve replied to Stenger. I had a chance to talk to Krauss briefly about fine-tuning but I’m still not sure what he thinks. His published work on anthropic matters doesn’t address the more general fine-tuning claim. Also, by saying “from” and “to”, Carrier is trying to give the impression that a great multitude stands with his claim. I’m not even sure if Krauss is with him. I’ve read loads on this subject and only Stenger defends Carrier’s point, and in a popular (ish) level book. On the other hand, Craig can cite Barrow, Carr, Carter, Davies, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, and Wilczek. (See here). With regards to the claim that “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range”, the weight of the peer-reviewed scientific literature is overwhelmingly with Craig. (If you disagree, start citing papers).

“He can only get his “narrow range” by varying one single constant”. Wrong. The very thing that got this field started was physicists noting coincidences between a number of constants and the requirements of life. Only a handful of the 200+ scientific papers in this field vary only one variable. Read this.

“1 in 8 and 1 in 4: see Victor Stenger”. If Carrier is referring to Stenger’s program MonkeyGod, then he’s kidding himself. That “model” has 8 high school-level equations, 6 of which are wrong. It fails to understand the difference between an experimental range and a possible range, which is fatal to any discussion of fine-tuning. Assumptions are cherry-picked. Crucial constraints and constants are missing. Carrier has previously called MonkeyGod “a serious research product, defended at length in a technical article”. It was published in a philosophical journal of a humanist society, and a popular level book, and would be laughed out of any scientific journal. MonkeyGod is a bad joke.

And even those models are artificially limiting the constants that vary to the constants in our universe, when in fact there can be any number of other constants and variables.

In all the possible universes we have explored, we have found that only a tiny fraction would permit the existence of intelligent life. There are other possible universes that we haven’t explored. This is only relevant if we have some reason to believe that the trend we have observed until now will be miraculously reversed just beyond the horizon of what we have explored. In the absence of such evidence, we are justified in concluding that the possible universes we have explored are typical of all the possible universes. In fact, by beginning in our universe, known to be life-permitting, we have biased our search in favour of finding life-permitting universes.

It [is] completely impossible for any mortal to calculate the probability of a life-bearing universe from any randomly produced universe.

Nope. For a given possible universe, we specify the physics. So we know that there are no other constants and variables. A universe with other constants would be a different universe.

And we needn’t merely conjecture their innumerability: leading cosmological theories already entail, even from a single simple beginning, the formation of innumerable differently-configured regions of the universe. This is the inevitable consequence of Chaotic Inflation Theory, for example, the most popular going theory in cosmological physics today. But will Craig tell his readers that? No.

How does a historian come to think that he can crown a theory “the most popular going theory in cosmological physics today” without giving a reference? He has no authority on cosmology – no training, no expertise, no publications, and a growing pile of physics blunders.

In any case, the claim is wrong. The most popular theory in cosmological physics today is the Lambda-CDM model, the standard model of cosmology. Inflationary theory is an extension of the standard model, describing a period of accelerating expansion in the very early universe. Chaotic inflation is one of a huge number of versions of inflation (old, new, hybrid, hilltop, eternal, exponential, natural … ten years ago Shellard could list 111 inflationary models). A multiverse is not inevitable given inflation, and certainly not part of the standard model of cosmology. George Ellis says:

A multiverse is implied by some forms of inflation but not others. Inflation is not yet a well defined theory and chaotic inflation is just one variant of it. …the key physics involved in chaotic inflation (Coleman-de Luccia tunnelling) is extrapolated from known and tested physics to quite different regimes; that extrapolation is unverified and indeed unverifiable.

The number of different regions created by chaotic inflation is not necessarily infinite. They are not necessarily “differently-configured” either – that depends on the details of the model, in particular how the symmetries of particle physics are broken. Inflationary multiverses, as with all multiverses, must deal with the problem of Boltzmann brains – even if the multiverse exists, it may not explain fine-tuning. And most importantly, inflation itself seems to need to be fine-tuned in order to start, last, end, reheat the universe and create the right perturbations. “Although inflationary models may alleviate the “fine tuning” in the choice of initial conditions, the models themselves create new “fine tuning” issues with regard to the properties of the scalar field” (Hollands & Wald, 2002).

Most configurations of constants produce either a collapsing universe (which re-explodes, by crunch or bounce, rolling the dice all over again, so those configurations must be excluded from any randomization ratio) or a universe that accelerates its expansion until it rips apart (as its energy density approaches infinity, which results in another Big Bang, rolling the dice all over again, so those configurations must also be excluded from any randomization ratio) or a universe in between (most of which are life friendly).

You don’t “re-explode” from a big crunch. You crunch. That’s why it’s called a crunch. Spacetime ends. Game over. A big rip is the same – the end of spacetime. You don’t start over. The bounce in bouncing models is usually inserted by hand, so it is somewhat arbitrary what changes from cycle to cycle. Also, crunch vs. rip isn’t a matter of a single variable, so there is no “in between”. (Roughly, crunch depends on energy density, rip on equation of state). If by “in between”, Carrier means just “not bounce, crunch or rip” then the “most configurations” claim at the start of the quote above is vacuous. In any case, it is certainly not true that most universes “in between” are life friendly. They face the fine-tuning of their initial energy densities, inhomogeneity, entropy, cosmological constant, inflation parameters (if they inflate) etc.
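To make the “crunch depends on energy density, rip on equation of state” point concrete, here is a toy sketch (not a serious cosmological calculation): a Friedmann model in units where 8πG/3 = 1, with every density, curvature and equation of state invented purely for illustration.

```python
# Toy Friedmann integrator: a'' = -(a/2) * sum_i rho_i (1 + 3 w_i),
# with the Friedmann constraint H^2 = rho_tot - k/a^2 fixing a'(0).
# Units: 8*pi*G/3 = 1, a(0) = 1. All parameter values are illustrative.

def fate(rho_m, rho_de, w, k, dt=1e-3, t_max=50.0):
    """Report whether a toy universe recollapses ("crunch") or keeps expanding."""
    a = 1.0
    adot = a * (rho_m + rho_de - k) ** 0.5   # assume initially expanding
    t = 0.0
    while 0.05 < a < 100.0 and t < t_max:
        # rho + 3p, summing matter (w = 0) and dark energy (equation of state w)
        rho_plus_3p = rho_m / a**3 + rho_de * a**(-3 * (1 + w)) * (1 + 3 * w)
        adot += -0.5 * a * rho_plus_3p * dt  # acceleration equation
        a += adot * dt
        t += dt
    return "crunch" if a <= 0.05 else "expands"

# Closed, matter-dominated universe: curvature wins, turnaround, recollapse.
print(fate(rho_m=1.0, rho_de=0.0, w=0.0, k=0.5))   # crunch
# Flat universe with phantom energy (w < -1): in GR this ends in a big rip.
print(fate(rho_m=0.3, rho_de=0.7, w=-1.5, k=0.0))  # expands
# Flat LCDM-like universe (w = -1): expands forever; neither crunch nor rip.
print(fate(rho_m=0.3, rho_de=0.7, w=-1.0, k=0.0))  # expands
```

Notice that varying the density/curvature (first case) controls the crunch, while only pushing w below −1 (second case) produces the runaway expansion that, in GR, diverges in finite time as a rip. They are different knobs, so there is no single “in between” dial.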

For we can logically deduce the existence of innumerable universes from positing the single simplest entity imaginable at the beginning of it all: a lawless singular point of space-time with no properties other than the absence of all logically impossible states.

By what criteria is that the simplest entity imaginable? If the point is lawless, why does it evolve into something else? How does it evolve? What evolves? What defines the state space? If it is a singular point, how are there now many spacetime points? Why are they arranged in a smooth manifold? Why spacetime? What if space and time aren’t fundamental? It’s not clear that a lawless physical state makes any sense. Even if it does, if it’s lawless, why do we observe a law-like universe? If one invokes the anthropic principle and supposes that life requires a law-like environment, then you’ve got a problem, neatly explained by Paul Davies:

The many-universes theory, in its strongest form, proposes that there are no laws – just chaos. In a very tiny fraction of the universes, law-like regularities appear purely by chance – as a result of statistical freaks, such as getting heads from a coin toss a million times in a row, or dealing a perfect suit at cards. Such events, which may be so unusual as to give the appearance of a rule, are in fact just accidents that are bound to occur somewhere if you have untold zillions of trials. But if the only thing that selects the ordered universes from the chaotic ones is the existence of life, then in a universe like ours (that contains life) you would only expect to see a level of order to a degree that is just necessary for life to be maintained. Any additional order, over and above this minimal level required for life, would be exceedingly improbable, for the same reason that tossing a billion heads in a row is exceedingly less likely than merely tossing a million heads. However, there are many law-like aspects of our universe which are such that, if those laws were to fail or falter just a bit, would not constitute a threat to life. … If the law of conservation of electric charge, which is a very fundamental law of physics, should falter – if it should get a little bit wobbly, so to speak – then what would be the consequences for life of small fluctuations in the magnitude of atomic charges? Well, as far as chemistry and biology are concerned, absolutely nothing.

The problem with laws emerging from lawlessness is that we have good reason to believe that it is false. Perhaps the quote from Carrier’s article links to the writing of a professional cosmologist who will explain all these things to us! Nope. It’s another one of his blog posts.

This universe is 99.99999 percent composed of lethal radiation-filled vacuum, and 99.99999 percent of all the material in the universe comprises stars and black holes on which nothing can ever live, and 99.99999 percent of all other material in the universe (all planets, moons, clouds, asteroids) is barren of life or even outright inhospitable to life.

Fine-tuning doesn’t claim that this universe has the maximum amount of life per unit volume (or baryon, or whatever). So this argument is irrelevant. John Leslie says it well:

The issue here is not the rarity or otherwise of living beings in our universe. It is instead whether living beings could evolve in a universe just slightly different in its basic characteristics. The main evidence for multiple universes or for God is the seeming fact that tiny changes would have made our universe permanently lifeless. How curious to argue that the frozen desert of the Antarctic, the emptiness of interstellar space, and the inferno inside the stars are strong evidence against design! As if the only acceptable sign of a universe’s being God-created would be that it was crammed with living beings from end to end and from start to finish! As if God could only create a single universe so that he would need to ensure that it was well packed! (Universes).

Further, we understand why a life-permitting universe might be large and diffuse. That argument is in Barrow and Tipler’s 1986 book, so there is no excuse for not discussing it. A shortened version can be found here.

The universe we observe (including all its apparent fine tuning) has a probability of 100% if there is no god.

See my last post for an explanation of where this claim fails. In short, it’s a likelihood, not a posterior. Once again, Carrier manages to shed no light whatsoever on the topic at hand. He manages to accuse Craig of ignoring critics, whilst himself ignoring the entire scientific literature and any scientist who doesn’t agree with him. Which basically leaves Stenger. It seems those two were made for each other.

As I said at the end of my last post: there are some very good atheist critiques of the argument from fine-tuning out there. Read those. Avoid Carrier’s tripe.

I think there is a misconception here that is very common in the literature, and that I may have mentioned before. It’s a misconception made by both Richard Carrier and George Ellis, concerning how you get from inflationary cosmology to a multiverse. Both seem to make the mistake of seeing the main link through Linde’s chaotic inflation model.
Chaotic inflation and eternal inflation are not the same thing. Vilenkin wrote a good paper explaining the difference; you can read it here: http://arxiv.org/pdf/gr-qc/0409055.pdf
Some of the people Vilenkin gives as examples of getting it wrong are George Ellis, Martin Rees and Max Tegmark.
The important point to note is that whilst it’s true that there are a large number of inflationary models that are not necessarily chaotic, it does not appear to be true that there are a large number of inflationary models that are not eternal. You only need eternal inflation to generate a multiverse; chaotic inflation is not needed.
According to inflation’s chief proponents (Guth and Vilenkin) and its chief critics (Steinhardt and Turok), inflation is generically eternal; hence if inflation is true then a multiverse exists. Yes, there are a lot of models, but if they are all (or at least the vast majority) eternal then the argument doesn’t work. Now, I will have to take Guth et al.’s word for it that they are all eternal. Maybe they have done their sums wrong. But I think Ellis misrepresents the state of the field here. If the key players in the inflation debate all think inflation is generically eternal, then that’s an important fact. Chaotic inflation is irrelevant here. Yes, chaotic inflation is eternal, but eternal inflation is not necessarily chaotic.
Also, inflation may not need fine-tuning in the presence of super-inflation from a non-singular bounce. See Ashtekar and Sloan: http://arxiv.org/abs/1103.2475
To say that you don’t re-explode from a crunch is true in GR, but who believes GR is applicable at these high densities?
I don’t think anyone believes that, do you?
Also, the statement “The bounce in bouncing models is usually inserted by hand, so it is somewhat arbitrary what changes from cycle to cycle” seems questionable. Yes, in GR it may be true, but again, who trusts GR here?
What has been a significant development in the last ten years is the rise of the “non-singular bounce”. These have been found by applying contemporary models of quantum gravity to the big bang/big crunch. You can see this in string theory: http://arxiv.org/abs/hep-th/0312182
in LQC: http://arxiv.org/abs/1108.0893
and in Horava-Lifshitz gravity: http://arxiv.org/abs/0904.2835
These are the only models of quantum gravity that, as far as I know, are still discussed in the literature and have been applied to the big bang. Given that a bounce may be common to all of them, it’s hard to argue they are put in by hand. I’m only familiar with the bounce in LQC and not the other two cases, but there it is a direct result of the quantisation process; it is not put in by hand. As far as I know, other models of quantum gravity have not been applied to the big bang. I spoke to someone working on Causal Dynamical Triangulations, and they say they are not ready for that yet. If you know of other examples, I would like to hear them.
Whether the big rip produces another big bang or not depends on the dynamics of the rip. Baum and Frampton did publish in Physical Review an argument saying it does produce more big bangs: http://cds.cern.ch/record/990660/files/?docname=0610213&version=all
They have to introduce a turnaround point, but they argue it’s well motivated.

Lastly, I want to differ significantly from the quote given by John Leslie, where he said:
“The main evidence for multiple universes or for God is the seeming fact that tiny changes would have made our universe permanently lifeless.”
This seems to be written as if ignorant of the actual scientific literature. The multiverse from eternal inflation doesn’t care about life; according to Guth, it’s a direct result of how inflation is thought to end. Whether a universe produces life or not makes no more difference than whether it produces jelly doughnuts or not.
I suggest Guth’s paper here: http://arxiv.org/abs/hep-th/0702178

My main point, which I assume you agree with, is that inflation is not part of the standard model of cosmology, and so predictions of multiverses cannot be part of standard cosmology.

I think Ellis’ comment applies more widely, though he applies it to chaotic inflation. A single inflating universe fits all the data. Any mechanism for creating the multiple universe domains relies on untested and possibly untestable physics.

That said, I’ve found that these terms are used somewhat loosely. Chaotic inflation often means “the one with the parabolic phi^2 potential” in textbooks. I hadn’t seen Vilenkin’s article so I’ll have to give that a read.

“You only need eternal inflation to generate a multiverse”. Surely the claim is that “you only need eternal inflation to generate multiple universe domains”. That would take care of cosmological parameters like Lambda, Q etc. You’ll need some jiggery-pokery to ensure that each domain gets different particle physics. Plausible but hardly tested physics.

I take a crunch to be a GR phenomenon. Anything else is some kind of bounce. The same goes with big rips – I’m thinking of the GR version. Anything else is something else.

Regarding bounces, if GR says “no!” then any bounce must be a quantum gravity phenomenon. There are singularities in string theory, so I don’t think one can say that one generically expects a bounce in quantum gravity. In the absence of a theory of quantum gravity, I think we can formulate toy models of a bounce. I don’t think that these are much more than “here’s something that could happen” but I’m willing to be corrected. Again, thanks for the references.

Regarding the Leslie quote, he’s writing in 1990, when that statement may have been true. I don’t think that’s a criticism – evidence is evidence. Perhaps “main” is too strong for 1990, and certainly too strong today. However, inflation must worry about life, or at least observers, for the same reason that any scientific investigation must worry about selection effects. If a multiverse failed to produce life then it couldn’t be our multiverse. Guth’s discussion of the Youngness paradox makes this point very clear.

“The Barnes pieces don’t even respond to my argument. You would know that if you read my argument yourself. This is a fallacy called red herring: make a series of completely irrelevant points, around selective quotations of an argument, and claim to have answered that argument. Did you really fall for it? (His second article relies more on the fallacy called straw man: selective quoting, and irrelevant claims made against things I didn’t say or that ignore what I did say.)”

Selective quotations: look how long my quotations are! I had to type those bastards out. I’ve all but quoted the entire relevant section of the article.

Red herrings and straw men: He claims in his article that he will prove his claim “with … logical certainty”. Where is the mathematical formalism? For the fine-tuning argument, it’s in footnotes 22 and 23. If I answer that, then I’ve answered the central argument of the article. And I did here (https://letterstonature.wordpress.com/2013/12/15/what-chance-looks-like-a-fine-tuned-critique-of-richard-carrier-part-2/) under the heading “Bayes’ Theorem Omits Redundancies”. I’ve responded to his central claim, and shown that many of his examples (e.g. the poker game, the firing squad, the lottery) draw incorrect conclusions, demonstrating his lack of competence in probability theory.

Look at this blog post. An entire paragraph quoted, analysed sentence by sentence, argued to be false. If I’m wrong then show where. If there are straw men then make that case. Which comments are red herrings and why? Put up or shut up.

“Just because you claim to be responding to x, doesn’t mean you actually did. That’s my point. You can make all sorts of claims you want, it doesn’t make them correct. You just ignore the things we say, quote mine, and handwave about a different point. Over and over again. And then claim to have proved something.

A classic example is you wasting tons of time on a footnote that explicitly says the conclusion reached there is ignored in the article! No shit. Yet you seem to think addressing it relates to the article!

Another classic example is building a straw man (“getting dealt a royal flush twenty times in a row”), show it is absurd, then declare what I said false, even though what I said would have gotten the same conclusion (had I used the same premise). That’s a straw man. Then in the process you ignore the point I actually made with the example you incorrectly think I’m challenging.

All this will be obvious to anyone of sense who reads my actual work in context, and not your quote-mined, straw-manned shreddery of it.”

My reply is still “awaiting moderation” over at his blog, so I’ll post it here:

You [Carrier] are either reading the wrong footnote or lying. The sections in question … From page 294 (of my edition):
____________________

[We] will only ever find [ourselves] in a finely tuned universe whether it was designed or not. The fact of their universe being finely tuned can never tell them anything about how it got that way … This conclusion cannot be rationally denied: if only finely tuned universes can produce life, then if intelligent observers exist (and we can see that they do) then the probability that their universe will be finely tuned will be 100 percent (22).

This is a summary of your main reply to the fine-tuning argument in your article, and in the blog post above, starting with the sentence “… we will only ever find ourselves in universes like ours …”. Your article goes on:
____________________

Always. Regardless of whether a “finely tuned universe” is a product of chance, and regardless of how improbable a chance it is (23).

[Skipping to footnote 23 … page 409]:
This is undeniable: if only a finely tuned universe can produce life, then by definition P(finely tuned universe | intelligent observers exist) = 1, because of [two justifications. I agree with this section]. Collins concedes that if we include in b “everything we know about the world, including our existence”, then p(L | ~God & A life-bearing universe is observed) = 100 percent (Collins, Blackwell). He thus desperately needs to somehow “not count” such known facts. That’s irrational.
____________________

The remainder of footnote 23 is a reply to Collins’s argument, discussing how the probabilities are dependent on what one puts in b. It doesn’t refer back to the main text. You say that *his* calculation is irrelevant, not yours. I’m happy to write out the rest of the footnote if required. [Also, by the axioms of probability theory, there cannot be any dependence of the posterior on the background/evidence split. So that whole discussion is wrong. See here: https://letterstonature.wordpress.com/2013/11/17/bayes-theorem-what-is-this-background-information/ . But I digress.]
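A quick numeric sketch of that bracketed claim (toy numbers throughout, and assuming the two facts are conditionally independent given the hypothesis): the posterior comes out the same whichever fact we file under “background”.

```python
# Whether a fact is conditioned on first ("background" b) or second
# ("evidence" e), Bayes' theorem gives the same posterior p(T | e.b).
# All likelihoods below are made up to illustrate the point.

def update(prior, like_T, like_notT):
    """One Bayesian update of p(T) on a single fact."""
    return like_T * prior / (like_T * prior + like_notT * (1.0 - prior))

p_T = 0.3                # made-up prior on hypothesis T
e_T, e_notT = 0.8, 0.2   # made-up likelihoods of fact e given T, ~T
b_T, b_notT = 0.6, 0.5   # made-up likelihoods of fact b given T, ~T

post_b_first = update(update(p_T, b_T, b_notT), e_T, e_notT)  # b as background
post_e_first = update(update(p_T, e_T, e_notT), b_T, b_notT)  # e as background

print(abs(post_b_first - post_e_first) < 1e-12)   # True: the split is irrelevant
```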

The purpose of these long quotations of your article is to show that:
a) The footnotes do in fact claim to be a formalisation of the main point of your article regarding fine-tuning. It is exactly the point with which to conclude your fine-tuning comments in this blog post. It is your “100 percent expected on atheism” slogan.
b) Nowhere in the footnotes does it say that the conclusion is ignored in the article. (Were you thinking of footnote 20, regarding the multiverse? I addressed that footnote in part 1 of my critique only in so far as it revealed an inconsistency in your approach to probability theory. Regarding your response to fine-tuning, it’s 22 and 23 that I’m interested in.)

If I’ve missed something, quote me a line number and a page number from your article. Write out the quotation. Show the context. Let’s see it.

The “twenty royal flushes” example is not an attempt to represent your argument, so cannot be said to misrepresent your argument, so cannot be a straw man. Rather, it is a counterexample, a reductio ad absurdum. As follows:
1. If your principle “if the evidence looks exactly the same on either hypothesis, there is no logical sense in which we can say the evidence is more likely on either hypothesis” were valid, then twenty royal flushes in a row is not evidence of cheating.
2. Twenty royal flushes in a row is evidence of cheating.
3. Thus your principle is not valid.

Further, since a lot of your article stands by this principle, a lot of your article falls with this principle.

You’ve admitted premise 2. So our disagreement is over premise 1. I deny that you “would have gotten the same conclusion had I used the same premise”. Twenty royal flushes in a row “looks exactly the same on either hypothesis [fair or cheating]” … it’s just cards on a table, either way. Thus you should conclude that “there is no logical sense in which we can say the evidence is more likely on either hypothesis”. That seems a straightforward consequence of your reasoning. Where is the misrepresentation? Where is the straw man?

The correct principle is this: if the probability of the evidence (the likelihood) is the same on either hypothesis, then the evidence does not help us choose between the hypotheses (i.e. posterior = prior). This follows from Bayes’ theorem: https://letterstonature.wordpress.com/2013/10/26/10-nice-things-about-bayes-theorem/. But that principle is not the same as yours, and would make “how improbable a chance [a finely tuned universe] is” very relevant to fine-tuning and the probability of NID.
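To put numbers on that principle (a toy calculation; the prior and the likelihoods are invented, apart from the odds of being dealt a royal flush):

```python
# Bayes' theorem: equal likelihoods leave the posterior equal to the prior;
# wildly unequal likelihoods (twenty royal flushes) swamp any reasonable prior.

def posterior(prior_h, like_h, like_not_h):
    """p(H | E) from prior p(H) and likelihoods p(E | H), p(E | ~H)."""
    return like_h * prior_h / (like_h * prior_h + like_not_h * (1.0 - prior_h))

prior_cheat = 0.01   # made-up prior: cheating is rare

# Equal likelihoods: the evidence cannot choose between the hypotheses.
print(posterior(prior_cheat, 0.5, 0.5))      # 0.01 -- posterior = prior

# Twenty royal flushes dealt fairly: (4 / 2,598,960) per hand, twenty times.
p_fair = (4.0 / 2598960.0) ** 20             # ~1e-116
print(posterior(prior_cheat, 1.0, p_fair))   # ~1.0 -- cheating near-certain
```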

This is the same scam you keep pulling. Change what you are talking about, as if that responds to what someone said.

I was talking about your critique of note 20. In which I conclude “we don’t need this hypothesis, so I will proceed without it.” Your method is to ignore my actual arguments, make bogus claims of inconsistency, do the math wrong, and then claim my arguments fails. Arguments you never actually address. Yet repeatedly claim to have.

You make completely different errors in your treatment of note 23. It’s simply impossible to observe a lifeless universe, so p(~e) = 0, which entails P(e)= 1 for all observers. You have no valid argument against that. And all your handwaving to try and avoid that conclusion is just desperate.

As to the royal flush’s example, it can’t be a counterexample when I would have gotten the same result you did using that changed premise. That only confirms the method I used. That you don’t even grasp that is precisely my point, and why you can’t even understand what my arguments are.

* “P(e)= 1 for all observers. You have no valid argument against that.” Here it is. Again.

1. From Bayes’ theorem one can show that, for any A, B, C, T:

p(B|A) = 1 does not imply that p(T|A.B.C) = p(~T|A.B.C) (1)

2. Applying this to fine tuning, let

f = finely tuned universe
o = observers exist

then, from (1)

p(f | o) = 1 does not imply p(NID | f.o.b) = p(~NID | f.o.b)

3. Thus, I can admit that “p(finely tuned universe | observers exist) = 1” and still conclude that

p(NID | f.o.b) >> p(~NID | f.o.b).

There it is. That’s my argument: “P(e)= 1 for all observers” does not prove that “the fact of [our] universe being finely tuned can never [tell us] anything about how it got that way”. The proof of (1) is in my part 2, in the section “The Firing Squad Machine” and following. If the argument is invalid, then show me where.
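If a numerical instance of (1) helps, here is one; every probability below is invented purely to exhibit the logical point, with T = NID, A = o, B = f.

```python
# Toy model in which p(f | o) = 1 under both hypotheses, yet the
# posterior on T is nowhere near 50/50. All numbers are made up.

p_T = 0.5              # made-up prior on T (e.g. NID)
p_o_T = 0.9            # made-up likelihood p(o | T)
p_o_notT = 1e-3        # made-up likelihood p(o | ~T)
p_f_given_o = 1.0      # fine-tuning is certain given observers, on either hypothesis

# p(T | f.o) via Bayes' theorem: the certain factor p(f | o) cancels out.
num = p_o_T * p_f_given_o * p_T
den = num + p_o_notT * p_f_given_o * (1.0 - p_T)
print(num / den)       # ~0.9989: p(f | o) = 1 did not force equal posteriors
```

The certain factor contributes equally to both terms and cancels; what remains is the ratio of the likelihoods of observers, which is exactly the quantity the “100 percent” slogan ignores.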

* Footnote 20: You want to talk about that, fine. I addressed that footnote in part 1 of my critique only in so far as it revealed an inconsistency in your approach to probability theory. That you don’t discuss the multiverse further in the text is irrelevant to my point. Your discussion of the multiverse doesn’t treat probabilities as frequencies, going against your own interpretation of probabilities. I never claimed that it was your main argument. (I said 130 words about that footnote, 4% of the post. Hardly “tons”.)

* “[You] Do the math wrong”. Them’s fighting words to a physicist. You’d better be able to back them up. Where?

* “I would have gotten the same result you did using that changed premise”. I understand that you think that, but you haven’t argued for it. Why is that sentence true? I don’t think it is, for this reason: Twenty royal flushes in a row “looks exactly the same on either hypothesis [fair or cheating]” … it’s just cards on a table, either way. Thus you should conclude that “there is no logical sense in which we can say the evidence is more likely on either hypothesis”. Where have I gone wrong?

Hi Barnes, with respect to Carrier’s reply concerning his discussion of Collins, isn’t it the case that he just means that you can’t put L (life exists) as evidence, because that would change the posterior? I agree that switching information from data to background information doesn’t change the posterior, but I think he’s saying that when you move information from your background or data to the evidence side, the posterior does change. So that when you take “L” from the background/data and put it as part of the evidence, the posterior would change, but that probability is no longer relevant for us, because L should stay as background knowledge. Any thoughts?

Huh? No one argues that. This is the kind of nonsense I am talking about. You don’t even understand my argument, and aren’t even replying to it.

“p(f | o) = 1 does not imply p(NID | f.o.b) = p(~NID | f.o.b)”

Ditto.

“3. Thus, I can admit that “p(finely tuned universe | observers exist) = 1″ and still conclude that

p(NID | f.o.b) >> p(~NID | f.o.b).”

And you “can” conclude that God is a complex fungus and lives at 32 Privet drive, Essex, UK.

The fact that a conclusion is logically compatible with a fact does not make that conclusion sound.

“I never claimed that it was your main argument.”

You also never mentioned that the note itself says its result is not used in my argument.

(You also don’t there mention that that note’s argument has a formal demonstration online, but that’s less misleading.)

“I understand that you think that, but you haven’t argued for it.”

You did. Your argument is identical to what mine would be had I started with the same premise.

It’s your inability to see that (repeatedly) that makes your rebuttal clueless.

“Twenty royal flushes in a row “looks exactly the same on either hypothesis [fair or cheating]””

But is an event that belongs to a different reference class.

(You don’t seem to realize that all prior probabilities are the posterior probabilities of previous equations, thus all reference classes can be converted to likelihood ratios and vice versa. Your inability to grasp that defines your entire line of argument here, and makes it look like you don’t know what you are talking about.)

* (I take it that P(e)= 1 and p(f | o) = 1 are synonymous in this context.) Does your essay argue this:

“p(f | o) = 1 for all observers” shows that “the fact of [our] universe being finely tuned can never [tell us] anything about how it got that way”.

That seems to be the argument on pages 293-4. It’s just after you make that conclusion at the end of page 293 that you say near the top of page 294 that “This conclusion cannot be rationally denied … the probability that [intelligent life’s] universe will be finely tuned will be 100 percent.” Then come footnotes 22 and 23.

* If fine-tuning doesn’t tell us anything about NID vs ~NID, then the evidence leaves those hypotheses with roughly equal probabilities. Formally, we would write this as: p(NID | f.o.b) ≈ p(~NID | f.o.b).

* If that’s not your argument, then what are you trying to prove from p(f | o) = 1? After all, that’s a likelihood, not a posterior. What effect does p(f | o) = 1 have on the probability of NID? p(f | o) = 1, therefore what? Give your answer in probabilistic notation.
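For concreteness, here is a minimal sketch of the odds form of Bayes' theorem (the numbers are placeholders), which shows why a likelihood of 1 on its own settles nothing: if both hypotheses assign the evidence probability 1, the Bayes factor is 1 and the posterior odds are just the prior odds.

```python
def posterior_odds(prior_odds, likelihood_h, likelihood_alt):
    """Odds form of Bayes' theorem: posterior odds = prior odds * Bayes factor."""
    return prior_odds * (likelihood_h / likelihood_alt)

# If p(f | NID.o.b) = p(f | ~NID.o.b) = 1, conditioning on f changes nothing:
unchanged = posterior_odds(prior_odds=0.5, likelihood_h=1.0, likelihood_alt=1.0)
```

The posterior depends on the prior odds and on *both* likelihoods, never on one likelihood alone.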

* “You also never mentioned that the note itself says its result is not used in my argument.”

I wasn’t critiquing your argument in that section, or even in that post, so that fact is irrelevant. It does nothing to change my point, or to salvage your inconsistency with finite frequentism. Heck … I’ll edit the post to put it in, if you’re that worried. It changes nothing.

* You say that “your argument is identical to mine” and then argue *against* the main premise of my argument, that Twenty royal flushes in a row “looks exactly the same on either hypothesis [fair or cheating]”. Make up your mind.

* “But is an event that belongs to a different reference class.”

What event? Different to what? There is only one event in question – twenty royal flushes.

Your principle “if the evidence looks exactly the same on either hypothesis, there is no logical sense in which we can say the evidence is more likely on either hypothesis” doesn’t mention reference classes. All that matters, according to you, is whether the evidence looks the same. Why now bring up reference classes? (Also, if you’d read part 1 you’d have read my critique of the whole idea of reference classes as infinitely plastic and thus useless. This is not my invention. It is so standard it has its own wikipedia page: http://en.wikipedia.org/wiki/Reference_class_problem .)

Learn to use probability terminology correctly, please. Don’t talk about the probability of an equation.

In any case, false. The posterior is the probability of a theory with respect to everything we know, EB. The prior is the probability of T with respect to B only. The prior *might* be the posterior probability with respect to what we knew at some previous time (i.e. before we knew E), but this is not required by Bayes theorem. Bayes theorem is an identity; it knows nothing of mere labels like “prior” and “posterior”. This is important, so I’ll quote from Jaynes’ “Probability Theory”, page 87.

“But we caution that the term ‘prior’ is another of those terms from the distant past that can be inappropriate and misleading today. In the first place, it does not necessarily mean ‘earlier in time’. Indeed, the very concept of time is not in our general theory (although we may introduce it in a particular problem). The distinction is a purely logical one; any additional information beyond the immediate data D of the current problem is by definition ‘prior information’. …

Old misconceptions about the origin, nature and proper functional use of prior probabilities are still common among those who continue to use the archaic term ‘a-priori probabilities’. The term ‘a-priori’ was introduced by Immanuel Kant to denote a proposition whose truth can be known independently of experience; which is most emphatically what we do not mean here. X denotes simply whatever additional information the robot has beyond what we have chosen to call the data.”

* “all reference classes can be converted to likelihood ratios and vice versa”.

According to you, then, a scientist can convert this into reference classes. What are the corresponding reference classes? Is he counting the number of universes in which the perihelion shift occurs due to general relativity, and comparing to the number where it occurs due to Newtonian gravity? Does this calculation bring into existence universes governed by GR and NG? After all, “probability measures frequency”, or so you say. When a scientist compares GR with NG, what are the reference classes?

Bayesians don’t usually talk about reference classes. That’s more of a frequentist thing. What probability textbook did you learn from? (Not a rhetorical question.)
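To illustrate what a scientist comparing GR with Newtonian gravity (NG) actually computes (the error bar here is invented for illustration, and the predictions are rounded): a likelihood for the observed anomalous perihelion shift of Mercury under each theory's point prediction, with no reference class of universes anywhere in the calculation.

```python
import math

def log_gaussian_likelihood(observed, predicted, sigma):
    """Log-likelihood of an observation given a point prediction with Gaussian error."""
    return -0.5 * ((observed - predicted) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

observed = 43.0   # anomalous perihelion shift of Mercury, arcsec/century (rounded)
sigma = 0.5       # illustrative, not historical, measurement uncertainty

# GR predicts ~43 arcsec/century; Newtonian gravity predicts ~0.
log_bayes_factor = (log_gaussian_likelihood(observed, 43.0, sigma)
                    - log_gaussian_likelihood(observed, 0.0, sigma))
# GR is favoured by a factor of about e^3698 -- no universes were counted.
```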

Hi Luke, thanks for the kind words. Apologies for not replying sooner; I have been on holiday.
Is inflation part of standard cosmology? I don’t think that’s such an easy question to answer, and it requires some discussion. I think most textbooks I have include inflation in their narrative of the universe. I know you use Liddle; I don’t have it to hand as I type this, but I seem to remember it having a whole chapter on inflation, am I right?
NASA includes inflation in their timeline: http://map.gsfc.nasa.gov/media/060915/index.html
The wiki timeline includes it: http://en.wikipedia.org/wiki/Chronology_of_the_universe
Even when you talk to critics of inflation such as Roger Penrose and Paul Steinhardt, they will generally complain that inflation is widely accepted as a fact by most when they think it should not be. A good example: Steinhardt said in his critical article on inflation in Scientific American: http://www.physics.princeton.edu/~steinh/0411036.pdf
“The idea is so compelling that cosmologists, including me, routinely describe it to students, journalists and the public as an established fact.”

Nevertheless there are serious doubters of inflation, the obvious examples being the aforementioned authors, but of course there are others.
What I would say is that most people recognise that the standard big bang picture has many difficult problems which they believe can be solved by inflation. Furthermore, the NASA WMAP and ESA Planck teams pretty much endorsed inflation.
On the other hand, no one seriously challenges the idea that black holes exist, but they do challenge inflation; such challenges include CCC, ekpyrotic models, VSL, etc. These are however minority views. So I’m inclined to go against you and say inflation is part of standard cosmology. But it’s not quite as standard as other elements. Does that make sense? Perhaps the problem with the phrase is that it’s too black and white. Something either is or is not standard cosmology. Perhaps inflation is in a grey area between the two.

To say that a single inflationary epoch fits the data may be true, but it ignores all the dynamics of how the inflaton field evolves. People who study inflation can’t just ignore these things. Did you watch Guth’s presentation at the early universe conference at PI?
I think the case he makes is quite strong: http://streamer.perimeterinstitute.ca/Flash/baee3047-333f-4f20-b980-a3bd73526eaf/viewer.html
What Guth is saying is that in order for inflation to solve the standard problems, the inflaton field has to evolve in such a way that the expansion of the field happens faster than the decay rate. As long as you have that (exponential expansion faster than exponential decay), you have eternal inflation, and therefore eternal inflation is far more generic than Ellis implies. Ellis seems to be saying there are many, many versions of inflation and we don’t have to take chaotic inflation as the right one. But this ignores that chaotic inflation isn’t needed for eternal inflation, and according to Guth almost all of these models are eternal. Now I have to say I have to take his word for that. But I do observe that the PI conference was an interesting one, organised by Turok, a chief critic of inflation, and hosting many of the leading pro- and anti-inflation commentators. As far as I can see, the critics and the supporters all agreed with this point (inflation is generically eternal). Some thought it a strength of the model, others a weakness. But they all seemed to agree it was a feature of the model.
So what I would say is that to write eternal inflation off as an unnecessary add-on to standard inflation is way too simplistic and possibly just plain wrong.
I am not saying inflation is fact and eternal inflation is an inevitable outcome, therefore the multiverse is a fact.
Rather something a bit softer: inflation is the dominant paradigm for the early universe. It has good evidence in favour of it (e.g. ns = 0.96) but the evidence is not so overwhelming that we can’t consider alternatives such as CCC, VSL, etc. The alternatives to inflation all seem to be cyclic cosmologies. Eternal inflation seems generic according to both critics and supporters of inflation. Maybe they all have it wrong; I never believe a result only because theoreticians tell me it’s true. But nor do I write them off too lightly, which I think is a mistake Ellis makes.

I agree eternal inflation gives you multiple domains; whether they all have different constants in them remains to be seen. I would say it means we know the dice were rolled many times. Whether the dice have different numbers on each side is not so certain. If you believe the constants of nature can take on different values to what we observe, then domains with different values seem more plausible.

The Harvard links you include I couldn’t open I’m afraid.

Of course a bounce is a quantum gravity phenomenon. My take is LQC is definitely giving you a bounce, Hořava gravity is too, and string theory has some differing opinions. Certainly there are those arguing for a bounce as a result from string theory as well, but I don’t think they are unanimous. So the bounce is looking like a good prospect at the moment.

To say that the big crunch doesn’t bounce is a statement loaded with assumptions; it assumes GR is still applicable in what is expected to be the quantum gravity realm, an assumption I think very few people take seriously. So I would say you can’t make these statements as if they are facts; better to explain what your assumptions are and why you think they hold. Don’t you agree?

ArnaudAntoineA. I opened your link but didn’t see a reference to any papers showing what your models do when they are applied to the big bang. The wiki list of quantum gravity candidates is here: http://en.wikipedia.org/wiki/Quantum_gravity#Other_approaches
Fermionic is not on there, so I have to say I’m suspicious of claims that this is a serious candidate, but I’m happy to be proved wrong by proper references. But something more than a blog post would be helpful.

* Copy and paste the entire Harvard links into the address window and they should work. I’ll try to correct the links.

* Is inflation part of standard cosmology? The sociological answer to this question is interesting, but not what I’m getting at. Certainly, inflation deserves to be in any cosmology textbook. I taught it in my intro to cosmology course last year.

I’m more interested in its status as a theory. In particular, what needs to be assumed in a cosmological theory in order to explain the data? Given the standard cosmological data: CMB, nucleosynthesis, galaxy formation, large scale structure (including redshifts), cluster data, etc., one needs to posit small, adiabatic, nearly-scale-invariant, Gaussian fluctuations in a very-nearly-flat FLRW model (containing dark energy, dark matter, baryons and radiation). That is Lambda CDM. Inflation is an extension.

This is not to discount inflation. Its successes are impressive. But it is trying to explain initial conditions, and so in a sense is a degree of separation away from the cosmological data. It’s an attempt to reduce the number of assumptions (and the fine-tuned-ness of those assumptions) in the standard model. A noble cause, no doubt. But it is not directly required by the data. Here’s what I said in my fine-tuning paper:

In spite of this, inflation does provide some robust predictions, that is, predictions shared by a wide variety of inflationary potentials. The problem is that these predictions are not unique to inflation. Inflation predicts a Gaussian random field of density fluctuations, but thanks to the central limit theorem this is nothing particularly unique (Peacock, 1999, pg. 342, 503). Inflation predicts a nearly scale-invariant spectrum of fluctuations, but such a spectrum was proposed for independent reasons by Harrison (1970) and Zel’dovich (1972) a decade before inflation was proposed. Inflation is a clever solution of the flatness and horizon problem, but could be rendered unnecessary by a quantum-gravity theory of initial conditions. The evidence for inflation is impressive but circumstantial.

* Big crunch: I think this is a terminology thing. A big crunch refers to an end of spacetime, an a(t) = 0 singularity. They happen in GR. They might happen in quantum gravity. A universe which GR predicts will crunch might avoid the crunch due to quantum gravity effects. In which case, the crunch is avoided. Something else happens.

I’m not saying that if GR predicts a crunch, a crunch will happen. I’m not saying that contracting universes never bounce. I’m simply saying that the term “big crunch” refers to a future singularity, the end of the universe. If the universe bounces, then it doesn’t crunch. Let’s keep our terminology clear.

GGDFan777: in the usual probabilistic terminology, “data” = “evidence”. The terms are interchangeable. So I don’t understand what you mean by “I agree that switching information from data to background information doesn’t change the posterior, but I think he’s saying that when you put information from your background or data to the evidence side, the posterior does change.”

In any case, when you move anything from evidence (or data) to background, you merely rearrange the same problem. It’s like rearranging the order of 1 + 2 + 3 as (1 + 2) + 3 or 1 + (2 + 3). They’re the same problem. Thus, it cannot be true that one version is relevant and the other irrelevant. They are mere calculating tools. The posterior is what really matters.
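Here is a toy numerical check of that rearrangement point (the joint distribution is arbitrary, invented for illustration): conditioning on E1 and E2 together as "evidence", or folding E1 into the "background" first and then conditioning on E2, gives the identical posterior.

```python
# Arbitrary joint distribution over (H, E1, E2); worlds are triples of truth values.
joint = {
    (True,  True,  True):  0.20, (True,  True,  False): 0.05,
    (True,  False, True):  0.10, (True,  False, False): 0.05,
    (False, True,  True):  0.10, (False, True,  False): 0.15,
    (False, False, True):  0.20, (False, False, False): 0.15,
}  # probabilities sum to 1

def prob(event, dist):
    """Probability of an event (a predicate on worlds) under a distribution."""
    return sum(p for w, p in dist.items() if event(w))

def condition(dist, event):
    """Fold an event into the background: restrict to its worlds and renormalise."""
    z = prob(event, dist)
    return {w: p / z for w, p in dist.items() if event(w)}

H, E1, E2 = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])

# Treat E1 and E2 both as evidence:
direct = prob(H, condition(joint, lambda w: E1(w) and E2(w)))

# Move E1 into the background first, then treat E2 as evidence:
staged = prob(H, condition(condition(joint, E1), E2))
```

The split into evidence and background is mere bookkeeping; the posterior is the same.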

If I’m understanding Carrier correctly (which is hard to do sometimes), he is responding to Collins’ argument, where Collins uses the “Likelihood Principle”. Collins argues:

“(1) Given the fine-tuning evidence, LPU (life permitting universe) is very, very epistemically unlikely under NSU (naturalistic single universe hypothesis), that is, P(LPU|NSU & k′) << 1, where k′ represents some appropriately chosen background information, and << represents much, much less than (thus making P(LPU|NSU & k′) close to zero).

(2) Given the fine-tuning evidence, LPU is not unlikely under T: that is, ~P(LPU|T & k′) << 1.

(3) T was advocated prior to the fine-tuning evidence (and has independent motivation).

(4) Therefore, by the restricted version of the Likelihood Principle, LPU strongly supports T over NSU.”

Now Collins himself later writes:

“As we mentioned in Section 1.3, we cannot simply take k′ to be our entire background information k, since k includes the fact that we exist, and hence entails LPU. To determine what to include in k′, therefore, we must confront what is called the ‘problem of old evidence.’

The much-discussed problem is that if we include known evidence e in our background information k, then even if an hypothesis h entails e, it cannot confirm h under the Likelihood Principle, or any Bayesian or quasi-Bayesian methodology, since P(e|k & h) = P(e|k & ~h).”

Collins has his own response to this problem, but it is this reasoning that Carrier is objecting to, I think. Your approach to the fine-tuning argument has a different form, I believe, so I don’t think Carrier’s objection applies to your version of the argument. Anyway, that’s how I understand the discussion. Any thoughts?

Hmm, the more I read Carrier’s objection, the more I get confused. Before discussing Collins’ objection he first states in footnote 23:

“if only a finely tuned universe can produce life, then by definition P(FINELY TUNED UNIVERSE|INTELLIGENT OBSERVERS EXIST) = 1, because of (a) the logical fact that “if and only if A, then B” entails “if B, then A” (hence “if and only if a finely tuned universe, then intelligent observers” entails “if intelligent observers, then a finely tuned universe,” which is strict entailment, hence true regardless of how that fine-tuning came about; by analogy with “if and only if colors exist, then orange is a color” entails “if orange is a color, then colors exist”; note that this is not the fallacy of affirming the consequent because it properly derives from a biconditional), and because of (b) the fact in conditional probability that P(INTELLIGENT OBSERVERS EXIST) = 1 (the probability that we are mistaken about intelligent observers existing is zero, a la Descartes, therefore the probability that they exist is 100 percent) and P(A and B) = P(A|B) × P(B), and 1 × 1 = 1.”

But I think there are a couple of problems here.
1. I think he is confusing “finely tuned universe” with “life permitting universe”. Given that intelligent observers exist, it’s true that the universe they exist in is life permitting, but it doesn’t necessarily follow that it had to be fine-tuned in the relevant sense: that of the total range the constants could have fallen into, only a very small part results in life permitting conditions. So his “P(A|B)” isn’t necessarily 1 when “finely tuned universe” is used in the relevant sense; for all we knew, the range of constants that resulted in life permitting conditions was large.

2. His second point, that P(INTELLIGENT OBSERVERS EXIST) = 1, is also not accurate. Let “B” = “Intelligent observers exist”. According to Carrier:

P(B) = 1 – P(not B) = 1 – 0 = 1
But when he says “the probability that we are mistaken about intelligent observers existing is zero”, that sentence has built into it that we exist but are also mistaken about the fact that we exist. So I think he is actually talking about P(not-B|B), which of course is 0, but that is not the same as P(not B).

P(B) is not the probability that we observe that we exist. It is the probability that intelligent observers exist instead of something else. Therefore, P(B) is in general not equal to 1, even after you have observed that you exist. What is equal to 1 is P(B|B) but that’s trivial and not relevant it seems to me.
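A toy illustration of the distinction (the prior over universe types is invented, and observers are assumed to require a life-permitting universe): P(B) depends on the model and is generally less than 1, while conditioning on B trivially yields life-permitting conditions.

```python
# Invented prior over universe types; observers only arise in life-permitting universes,
# so the world ("hostile", "observers") gets probability zero and is omitted.
prior = {
    ("life_permitting", "observers"):    0.05,
    ("life_permitting", "no_observers"): 0.15,
    ("hostile",         "no_observers"): 0.80,
}

def B(w):  # intelligent observers exist
    return w[1] == "observers"

def F(w):  # the universe is life-permitting
    return w[0] == "life_permitting"

p_B = sum(p for w, p in prior.items() if B(w))                         # model-dependent, not 1
p_F_given_B = sum(p for w, p in prior.items() if B(w) and F(w)) / p_B  # the anthropic point
```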

Either way, I do believe in the general anthropic principle: if we exist, the universe has life permitting conditions.

Carrier says that P(e)=1. That is true if the evidence is a life-supporting universe, but that doesn’t necessarily imply fine tuning. A life-supporting universe which is achieved through fine tuning may be what you expect on the theory of design but not otherwise. In the absence of design you would expect that fine tuning wasn’t necessary for a life-supporting universe.

In fact, that is exactly what Victor Stenger argues. He doesn’t think the Universe is particularly fine tuned. If he is right then the probability of the evidence is high on ~D. If he is wrong then the probability of the evidence is very low on ~D.

Hi Luke, I think I agree with a lot of what you said.
On the issue of the crunch, I think in many cases cosmological terminology is not as clear as we would like. For example, I have heard many quantum gravity researchers say there was not a big bang but there was a big bounce. But of course the term “big bang” doesn’t just refer to the universe-with-no-prior-history idea; it can also just refer to the fact that the universe has been expanding for 13.8 billion years. Hence there could be a big bang and a big bounce; both can be true if we use a particular definition. I think the big crunch can be seen in a similar way: defined as the end of spacetime following a collapse, as is the case in GR, or just defined as the collapse itself, with no comment on what happens after the collapse, if there is an after. So to say there is a bounce following a crunch is not necessarily a contradiction or wrong. I think the important point to note is that if the universe were to collapse, the end of that collapse may be a singularity or it may be a bounce; at the moment it’s an open question. I think we will both agree with that statement, I hope.

One thing I’m not sure about: I know Harrison and Zel’dovich proposed a scale-invariant spectrum, but did they predict the red tilt in the power spectrum? I think Harrison–Zel’dovich predicted ns = 1 rather than 0.96, but maybe I’m wrong; please correct me if so. I believe this is what the WMAP team highlighted in their endorsement of inflation.

You comment inflation “is a clever solution of the flatness and horizon problem, but could be rendered unnecessary by a quantum-gravity theory of initial conditions. The evidence for inflation is impressive but circumstantial.”
I agree with this, but I think we need to look at some of those models and see how else we can get a solution to these problems. When I look at the literature I see people proposing either inflationary models or cyclic models to solve these problems. In other words, in all cases the dice gets rolled more than once. If you reject both inflationary cosmology and cyclic cosmology, I don’t see how you get an explanation for these problems. The universe is flat to a very high degree of accuracy, there is a horizon problem, and maybe there’s a monopole problem too (but I have to believe GUT theories for that). To me the first two at least represent data, and they are not explained in Lambda CDM without inflation or some other solution such as the ekpyrotic model or VSL or whatever.
Maybe it depends on what you mean by “needed to explain the data”; of course you could just attribute anything to initial conditions, but I’m not sure that really counts as an explanation.

Incidentally, some of the big guns have been firing on the issue of Planck and inflation; not sure if you saw these papers.
Steinhardt and friends published this: http://arxiv.org/abs/1304.2785
And Guth and friends published this in reply just over a week ago: http://arxiv.org/abs/1312.7619
What I note is that both sides in the inflation debate agree on the generic nature of eternal inflation.

According to Carrier, the prior probability of design is less than 0.5 so if D and ~D explain the evidence equally well then ~D wins. But he goes further than that. In his opinion ~D explains the evidence better. See his reply to someone else in the thread – comment 30.1.
