Monday, June 03, 2019

The multiverse hypothesis: Are there other universes besides our own?

You are one of several billion people on this planet. This planet is one of some
hundred billion planets in this galaxy. This galaxy is one of some hundred
billion galaxies in the universe. Is our universe the only one? Or are there
other universes?

In the past decades, the idea that our universe is only one of many has become
popular among physicists. If there are several universes, their collection is
called the “multiverse”, and physicists have a few theories for this that I
want to briefly tell you about.

1. Eternal Inflation

We do not know how our universe was created and maybe we will never know. But
according to a presently popular theory, called “inflation”, our universe was
created from a quantum fluctuation of a field called the “inflaton”. In this case,
there would be infinitely many such fluctuations giving rise to infinitely many
universes. This process of universe-creation never stops, which is why it is called
eternal inflation.

These other universes may contain the same matter as ours,
but in different arrangements, or they may contain different types of matter.
They may have the same laws of nature, or entirely different laws. Really,
pretty much anything goes, as long as you have space, time, and matter.

2. The String Theory Landscape

The string theory landscape came out of the realization that string theory does
not, as originally hoped, uniquely predict the laws of nature we observe.
Instead, the theory allows for many different laws of nature that would give
rise to universes different from our own. The idea that all of them exist goes together
well with eternal inflation, and so, the two theories are often lumped
together.

3. Many Worlds

Many Worlds is an interpretation of quantum mechanics. In quantum mechanics, we
can make predictions only for probabilities. We can say, for example, a
particle goes left or right, each with 50% probability. But then, when we
measure it, we find it either left or right, and then we know where it is with
100% probability. So what happened to the other option?
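A toy simulation makes the statistics concrete (this is only an illustration of probabilistic predictions versus definite outcomes, not an endorsement of any interpretation; the probability value, trial count, and seed are arbitrary):

```python
import random

def measure_positions(p_left=0.5, trials=10_000, seed=1):
    """Simulate repeated position measurements on identically
    prepared particles. Before each measurement, only the
    probabilities are known; each measurement yields one
    definite outcome ("left" or "right")."""
    rng = random.Random(seed)
    lefts = sum(rng.random() < p_left for _ in range(trials))
    return lefts / trials

# The observed frequency approaches the predicted probability,
# but every single run still gives one definite answer.
print(f"fraction found left: {measure_positions():.3f}")
```

In the many-worlds reading, both outcomes of every trial are realized, each in its own branch; we simply only ever see one of them.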

The most common attitude you find among physicists is: who cares? We are here
and that’s what we have measured, now let’s move on.

The many worlds interpretation, however, postulates that all possible outcomes
of an experiment exist, each in a separate universe. It’s just that we happen
to live in only one of those universes, and never see the other ones.

4. The Simulation Hypothesis

Video games are getting better by the day, and it’s easy to imagine that maybe
one day they will be so good we can no longer tell apart the virtual world and
the real world.

This brings up the question of whether we already live in a virtual world, one
programmed by some being more intelligent than us and technologically ahead.
If that is so, there is no reason to think that our
universe is the only simulation that is going on. There may be many other
universe simulations, programmed by superintelligent beings. This, too, is a
variant of the multiverse.

5. The Mathematical Universe

Finally, let me briefly mention the idea, popularized by Max Tegmark, that all
of mathematics exists, and that we merely observe a very small part of it. It is this small part of mathematics that we call our universe.

Are these theories science? Or are they fiction? Let me know what you think.

You're right, the simplest explanation is not in fact always the correct one. General Relativity is certainly more complex than Newton's laws. However: it is the simplest explanation that accounts for the evidence (also the most elegant) and therefore satisfies Occam's Razor.

Nevertheless: “entities should not be multiplied without necessity." And there has never in the history of science been a theory that multiplies entities to a greater extent than the multiverse -- aside from the closely related "many worlds" theory, which is just as bad. While the simplest theory may not always be "correct", a theory that tosses all hope of simplicity to the winds would never satisfy Occam, trust me.

Our current understanding of the evolution of the universe, however incomplete, is far simpler (actually infinitely simpler) and thus infinitely preferable. Newton's theory was also incomplete, but it sure beat the hell out of Ptolemy's epicycles.

The multiverse was clearly concocted as a fudge to explain away all the many weaknesses of both cosmic inflation and string theory. It's precisely the sort of "saving hypothesis" Occam's Razor was designed to protect science from.

"The multiverse was clearly concocted as a fudge to explain away all the many weaknesses of both cosmic inflation and string theory."

Wrong. Everett's multiverse hypothesis was developed decades before either string theory or cosmic inflation were around.

"And there has never in the history of science been a theory that multiplies entities to a greater extent than the multiverse -- aside from the closely related "many worlds" theory, which is just as bad."

Wrong. The 'entities' that Occam's Razor refers to are the elements of an explanation, not the objects produced under that explanation.

Mongo: "Everett's multiverse hypothesis was developed decades before either string theory or cosmic inflation were around."

Everett was not responsible for the multiverse hypothesis, though his "many worlds" idea is related. The multiverse was developed as a response to certain issues that arose much later, due to difficulties stemming from Guth's theory of cosmic inflation (itself a fudge designed to explain away certain difficulties associated with the Big Bang).

Everett concocted the "many worlds" hypothesis as part of a very different attempt to tame quantum physics by resolving certain paradoxes associated with the Copenhagen interpretation, notably the measurement paradox. Both "many worlds" and "multiverse" are comical examples of how absurdly far afield one must go to shore up a failed theory.

Mongo: "The 'entities' that Occam's Razor refers to are the elements of an explanation, not the objects produced under that explanation."

Where does that come from? Occam made no such distinction and no such distinction is necessary. “Entities should not be multiplied without necessity" is clear enough. Both multiverse and many worlds multiply entities endlessly, in clear violation of Occam's principle. Sure, it's easy to state an absurd theory in simple terms: flying objects that can't be readily identified must mean we are being visited by aliens from outer space. What could be simpler? It's in the attempt to support such a hypothesis that Occam's Razor becomes relevant.

The basic flaw in all these ideas is that spacetime is assumed to exist before the beginning, but there is no reason to assume this. There is, however, good reason to assume otherwise: that spacetime, or time, did not pre-exist, and that the beginning was the beginning of time.

What is time, then? Time is energy, clearly, since all forms of energy, including kinetic energy and mass, dilate time (inertial or gravitational time dilation). Energy is provided either for moving through time or through space, but this field of energy is finite. If you use it all up via motion ("c") or via mass (black hole), then time stops altogether.

Also, space is emergent from the passing of time, as both SR and GR demonstrate. Slow your clock by speed and distances effectively reduce (in the direction of motion); at "c", distance or space has shrunk to zero. Slow your clock by mass energy and distances or space again shrink to zero volume at the event horizon, forming an effective spherical hologram around the surface.

So, if time suddenly came into action at the beginning, then inflation would be the natural and predictable result of the commencement and increasing rate of time.

Mass and time are forms of energy. If time were the first step, then matter (particles) must have emerged from this field, which is why time loses energy in the presence of mass (energy).

There is much more to this, of course, and a full explanation of these ideas can be found in the book "The Binary Universe", if you're at all interested in a much more sensible and causally based idea of the Big Bang. Or maybe you will react emotionally to some "jumped-up nobody" telling you what we've missed?

Let's face it, these physicists are not very good so far at sensible speculating.

A common objection to any multiverse hypothesis is that it violates Occam's Razor. This is an invalid objection, since Occam's Razor says that the simplest CAUSE is to be preferred, not the simplest RESULT.

I personally support 2.5 of the above 5 options:

The Quantum Multiverse (I prefer the Decoherence Theory version) is the simplest interpretation of QM, since no state reduction is required.

I am also a fan of Max Tegmark's Mathematical Universe, since it's the only explanation of physical existence I am aware of that seems plausible, given a starting point of absolutely nothing (other than the laws of logical inference and mathematics).

The Simulation Hypothesis would therefore be half-true, in the sense that the Universe itself would be a "simulation", running on no platform outside itself.

I do not know what simplicity of a cause is supposed to be. We can speak about the simplicity of explanations, which is quantifiable by the extent to which they reduce the complexity of collected data. If you have a definition of simplicity of a cause, please let me know.

The original version of Occam's razor is "entities should not be multiplied unnecessarily". What exactly these entities are appears to be open to debate. I interpret it to mean "causes" or "explanations" rather than "results". A classic example is evolution, a relatively simple explanation (or cause) that results in millions of species of insect.

1. Maybe. I used to be an inflation sceptic, but am less sceptical than I was. This might be correct.

2. I'm rather sceptical of string theory.

3. Many worlds (interpretation of quantum mechanics): I used to be a sceptic, but David Deutsch might have convinced me.

4. Perhaps; difficult to really rule out. However, if it is true, the fact that there is probably more than one simulation is one of the less interesting aspects.

5. This is essentially an extreme form of Platonism. It might be right. Olaf Stapledon described something similar in Star Maker more than eighty years ago. Though more relevant to point 1 above, here's your daily dose of Zen, courtesy of Stapledon: "When the cosmos wakes, if ever she does, she will find herself not the single beloved of her maker, but merely a little bubble adrift on the boundless and bottomless ocean of being."

For clarity, Max Tegmark's levels of multiverse are:

I: Our universe, including stuff beyond various horizons (i.e. Max's Level-I multiverse is what some people called the universe, and what Max calls the universe, some call the observable universe).

II: Something like the eternal-inflation landscape, though such a plurality of worlds does not depend on inflation; there could simply be more worlds.

III: Many worlds of quantum mechanics

IV: The mathematical universe

Readers here might like my review of Max's book on this, or at least like to read it: http://www.astro.multivax.de:8000/helbig/research/publications/info/our_mathematical_universe.html

I'll see if I can find Stapledon anticipating Max's mathematical universe. Here he is anticipating the many worlds of quantum mechanics:

"In one inconceivably complex cosmos, whenever a creature was faced with several possible courses of action, it took them all, thereby creating many distinct temporal dimensions and distinct histories of the cosmos. Since in every evolutionary sequence of the cosmos there were very many creatures, and each was constantly faced with many possible courses, and the combination of all their courses were innumerable, an infinity of distinct universes exfoliated from every moment of every temporal sequence in this cosmos."

This is pleasant, shallow literature. But the idea of the many worlds of QM, if I understand it well – that all possibilities must be actualized – seems really ugly to me. Can you tell me why many people seem to hate the idea of self-determination? The fact that possibilities exist – or that possibilities ask to be included in the category of existence – does not entail at all that all possibilities must find actual forms! Throwing away the problem of actualization and evolution like this seems a bit nasty. And among known actual forms, some don’t last more than a nanosecond or even less. If I ask the question «possibilities of what?», so far my answer is «possibilities of form». There’s a selection process, a ‘choice’ is made about definite forms. There’s enough room so that the past does not entirely determine the present. This and not that, here and not there. Contingencies probably make up half of the universe. It is perhaps this idea of contingency that some people don’t like, preferring to think of their own existence as something necessary. Possibilities are not obligations!

With several of these, if we're supposed to take them seriously, I (amateur only) start to wonder about thermodynamics/entropy and conservation laws.

For example, is it plausible to even once split a universe into two identical copies (except for a single quantum level event)? Where does the energy for an infinitude of inflated universes come from? etc.

You have a somewhat mistaken impression of what many worlds is about. One does not actually create two universes from one. Instead, you take each of the possibilities and continue within them. If you were to calculate the total energy, you would have to take into account that each of these universes had only a small chance to come into existence.

Many worlds is identical to usual quantum mechanics. As the name already says, it's an interpretation. All the conservation laws work exactly the same way as they always do.

Especially if the total energy in a specific universe sums to zero (which it probably does, otherwise the creation of even one universe from nothing appears to be impossible). If each individual universe has zero total energy, then any number of them can be created with no energy cost. It seems to me that a multiverse is no more energetically demanding to create than a single universe -- both of them requiring no net energy input.

I'm sure there are deeper mathematics where it makes sense, but even if the probabilities are small we can still observe events actually occurring all the time. Rather a lot of events, I'm sure you would agree. So I'm leery of making such a "physical interpretation", if you will permit the expression. (I can accept that it's a useful as-if way to view things.)

PS. To answer your concluding question, I view these more as idle speculation than actual theories (some of them closer to the dorm room than others), but I of course would not mind any forthcoming evidence on their validity.

I find it interesting that Max Tegmark's "Mathematical Universe" suggests an infinite number of cosmoses, but he also writes:

In "Infinity Is a Beautiful Concept – And It’s Ruining Physics": "Our challenge as physicists is to discover ... infinity-free equations describing ... the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it." - http://blogs.discovermagazine.com/crux/2015/02/20/infinity-ruining-physics/

I think all of these are useless speculation. I won't say they don't "explain" anything, but to my knowledge they don't predict anything in our universe that cannot be explained in any other way.

To the extent that is the case, what is the point? They are akin to a religion in that sense: a belief in something that can result in everything we see but is then, perforce, incapable of predicting anything we see, because it cannot rule out anything possible with an accuracy better than plain statistics, which requires no supernatural (or alternate-universe) hypothesis.

(Or the same thing goes for postdiction, in sciences like astronomy, paleontology, geology, forensics, etc.)

A hypothesis must do better than chance in accord with some distribution that doesn't include the hypothesis; otherwise I don't consider it science.

1. Eternal Inflation. Doesn't predict anything about our universe. Maybe it "explains" the setting of various constants in our physics, but it doesn't predict them; it just claims they happened by chance in accord with some unknown distribution. That is anti-science: it offers a solution that, if accepted, forestalls any future investigation into reasons why the constants are what they are.

2. The String Theory Landscape. A simple dodge instead of just admitting your theory doesn't work; claim it works too well to be limited to just this universe! We are just one of 10^500, and we don't know which one! Also, same reasoning as #1.

3. Many Worlds. Explains everything by predicting everything, like God. Which explains nothing: the eigenstate any version of me (in the many worlds) will experience after a wavefunction collapse is still unpredictable to me, their progenitor.

It's like they teach you in writing, if you emphasize everything (bolding, underlining, adding stars and exclamation points), you have emphasized nothing. If your hypothesis predicts everything according to the distributions we know, you have predicted nothing we did not already know, so your hypothesis doesn't help anything.

4. The Simulation Hypothesis. Is not useful for predicting anything in our universe that cannot be predicted by statistics without the hypothesis. A useless speculation and anti-science; like God's Will every anomaly can be blamed on "that's how the simulators wanted it."

5. The Mathematical Universe. Same thing as #4.

I have not seen a multiverse hypothesis that is actually useful for predicting outcomes in our universe that do better than a statistical distribution without the multiverse hypothesis. Worse, some of the multiverse hypotheses are dodges so scientists don't have to admit they are wrong or don't understand something, and those can be anti-science by roadblocking further investigation into a question; declare it "settled" and move on.

David Deutsch has proposed an experimental test of the quantum multiverse. Basically, perform a quantum computation with so many coherent states that the computation would be impossible to perform in a single universe within its lifetime to date, even if all matter in the universe were devoted to the computation. For example, factoring a very large integer. His proposed (quite simple) quantum computer could do this, provided that the many-histories interpretation of QM is correct. If one of the non-many-histories interpretations is correct, the task would be impossible.

"4. The Simulation Hypothesis. Is not useful for predicting anything in our universe that cannot be predicted by statistics without the hypothesis. A useless speculation and anti-science; like God's Will every anomaly can be blamed on "that's how the simulators wanted it.""

Agreed, I think one furthermore might well say that everything is then supernaturally caused (i.e., by something outside the simulated world). Even if we have figured out some nifty maths for how things work on the inside, there are really no guarantees for anything, past, present or future.

These hypotheses are intended to demonstrate secular alternatives to assertions of the need for a god to explain the "fine-tuning" of the fundamental constants (although the simulation hypothesis is adjacent to Enlightenment deism). They aren't "proof" of anything but the lack of imagination in those who insist on intelligent design. (I have described elsewhere how many-worlds could be verified, though ... given nearly infinite patience.)

What Deutsch suggests is basically suspending the Born rule at the macroscopic scale. If we could do that, a reversible quantum computation would be one of the many interesting consequences, but by no means the most interesting one.

"1. Eternal Inflation. Doesn't predict anything about our universe. Maybe it "explains" the setting of various constants in our physics, but it doesn't predict them, it just claims they happened by chance in accord with some unknown distribution. That is anti-science, it offers a solution that, if accepted, forestalls any future investigation into reasons why the constants are what they are."

It's the other way around: perhaps people will stop talking about the multiverse if someone comes up with a better explanation for the values of the physical constants, the fact that many are fine-tuned, etc.

Phillip Helbig: My point is that "Explanation" and "Prediction" are different things. Explaining a phenomenon in a way that doesn't let us predict something better about that phenomenon is a worse-than-useless exercise.

The values of the physical constants don't need an explanation for their fine-tuning, unless you can prove there is some mechanism for determining these values. Otherwise they are a given, and you cannot claim a given value is "unusual" if you have no statistical model that makes that particular value "unlikely". If you are just comparing it to other values, you need some reason to believe they should be "close". That would demand a model of how fundamental constants can be arranged, and we've only got one of those. With only one sample, the only reasonable statistical assumption is that the one we see is the most likely model, thus to our knowledge nothing is unusual about it.

The only way, in my view, to explain the values of the physical constants would be by some clever reduction of the number of parameters required to specify them; for example, to show that they (or some of them) are the roots of some higher equation with fewer parameters, or with less ad hoc parameters, like e or pi or sqrt(3). A discovery like that, for example being able to derive six parameters as roots of an equation that uses only two new parameters, would be a curiosity worth trying to translate into a physical model, because it could be more than just numerology.
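As a purely numerical toy (no physics implied; the cubic and its parameter values are invented solely for illustration), a single equation with two parameters can indeed pin down more than two derived values:

```python
import math

# Toy illustration only: two free parameters (a, b) determine
# three derived "constants" as the real roots of
#   x^3 + a*x^2 + b*x + a*b = (x + a) * (x^2 + b) = 0,
# i.e. three numbers specified by just two parameters.
def derived_constants(a: float, b: float) -> list[float]:
    if b >= 0:
        raise ValueError("need b < 0 for three real roots in this toy")
    r = math.sqrt(-b)          # roots of x^2 + b = 0
    return sorted([-a, -r, r])  # plus the root of x + a = 0

print(derived_constants(2.0, -1.0))  # three roots from two parameters
```

Whether any real set of physical constants admits such a compression is exactly the open question; the example only shows what "fewer parameters than constants" would look like.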

But inventing a distribution for a fundamental constant out of whole cloth that we cannot observe or justify and saying the one observable we have (the fundamental constant) is "unusually small" and an outlier in our imaginary distribution is nonsensical. So is inventing an imaginary high-dimensional distribution for X constants, and then saying our one observation of X constants occupy an outlier point in that distribution.

We can't possibly know that. Eternal Inflation doesn't predict anything about OUR universe, and is not a useful explanation of the values of our physical constants or their fine-tuning.

"if someone comes up with a better explanation for the values of the physical constants, the fact that many are fine-tuned, etc."

When will you ever learn some basic logic? Fine-tuning of physical constants is pure speculation. It is because there is currently no explanation of the values of some constants that people speculate about fine-tuning. If, for example, a natural explanation for the value of the Cosm. Const. were found, then people would stop being surprised at its apparently very particular value. Without further information, *we simply don't know*.

Bernard Carr, one of the authors of a well-known paper on fine-tuning and the multiverse, wrote this on the topic in an email to me which I presume he won't mind me quoting:

"... I agree that fine-tuning is not a scientific fact. People (including myself) write about it in scientific journals but - as with string theory - it is not a fact in the sense that I would regard the big bang as a fact. The reason is two-fold: (1) although I don't believe these tunings can be dismissed as mere coincidences, it is very hard to calculate the probabilities involved; (2) there is no universally agreed explanation of the tunings."

There is a line, Phillip Helbig, on one side of which stand people who think you can just believe any old nonsense (the irrational), and on the other stand people who understand you need a reason to think something is the case (the rational). You are on the wrong side of this line.

To the fine-tuning sceptics: First, note that the question whether something is fine-tuned is independent of the question whether it is unlikely. Conflating low probability with fine-tuning is a common mistake. Alas, the topic is too complex to discuss in a comment box, but read the book by Lewis and Barnes to get an idea of the complexity of the topic, which can't be dismissed by a few speech-balloon style jibes.

As I mention in my review linked to above, there are several other possible explanations, but no-one has come up with them yet.

No, the question of fine-tuning is not independent of the question of likelihood. You can define a notion of fine-tuning without referring to likelihood, but then you have no argument for why finetuning is problematic. To argue that it is problematic, you need to refer to likelihood.

"No, the question of fine-tuning is not independent of the question of likelihood. You can define a notion of fine-tuning without referring to likelihood, but then you have no argument for why finetuning is problematic. To argue that it is problematic, you need to refer to likelihood."

Maybe we agree here, I'm not sure. As an example, take a claim such as "changing this quark mass by half of one per cent would make atoms heavier than hydrogen impossible". This is a fine-tuning claim: the claim is that changing a parameter only slightly would have appreciable consequences. Whether the observed value is unlikely is a completely different question. For example, there might be some theory which shows that only the observed quark masses are possible. In that case, the observed values would have a probability of 1, but they would still be fine-tuned.

Perhaps the correct theory says that the masses are essentially random variables, then the observed values would be unlikely, but the fact that we observe them could be explained by the anthropic principle in the multiverse, etc.

Similarly, there are values which are not fine-tuned, in the sense that changing them would not have appreciable consequences. In these cases as well, they could be likely or not.

Whether fine-tuning is problematic is a different question still, but this could be answered only after one knows whether the value is unlikely or not and whether it is relevant for life. (Lots of things might be random variables, but, like someone winning the lottery every week, we have to observe some value, so such a thing could be unlikely yet not problematic.)

"Yes, now you only have to realize that we cannot ever find out whether the value is likely or not and you may finally understand why I say it's not a scientific criterion."

How can you be so sure? Remember Auguste Comte? "Of all objects, the planets are those which appear to us under the least varied aspect. We see how we may determine their forms, their distances, their bulk, and their motions, but we can never know anything of their chemical or mineralogical structure; and, much less, that of organized beings living on their surface ..."

Auguste Comte, The Positive Philosophy, Book II, Chapter 1 (1842)

He believed that we could know even less about stars. But just a few decades after his claim, spectroscopy was telling us the chemical composition of extraterrestrial objects.

At one time, masses of hadrons, even masses of atoms, were essentially "random variables". Now we understand them. I see no reason to categorically rule out that we might understand the values of at least some fundamental constants. If we do, then perhaps the conclusion will be fine-tuned, but probability of 1. If not, then we need to look for other explanations, just like when a magician flips a coin a hundred times and gets a hundred heads I don't just shrug and say "just as likely as any other sequence".

You are using my reply out of context. I was talking about the probability with regard to the fine-tuning of parameters in the multiverse, pointing out that there is no way to obtain this probability in principle. Of course, if you find a way to actually compute a parameter, that would be great, but that's a different story entirely and has nothing to do with the multiverse.

Phillip: This is a fine-tuning claim: the claim is that changing a parameter only slightly would have appreciable consequences.

That seems like the same impossibility as the Free Will discussion; "If I had decided differently 50 years ago, my whole life would have worked out differently."

There is no evidence the mass of any quark could be different by 0.5%. So this too is just a fantasy about a fundamentally different universe that cannot be observed.

Until someone comes up with a model of quarks that reveals a substructure of quarks, or reduces the amount of information (number of parameters) needed to derive all the quarks, "fine tuning" won't mean anything.

Finding a substructure of atoms simplified and put limits on atoms, it clarified relationships and interactions between atoms. Not only that, it ruled out possibilities.

Finding a substructure of particles simplified and put limits on them, and clarified relationships and interactions between particles. Not only that, it ruled out some possibilities.

Finding a substructure of quarks may clarify the fine tuning, and it may rule out anything but the fine tuning.

I'm not sure how one could prove that quarks have no substructure; that there is no model with fewer parameters from which we could derive all the quarks; but it is the only route to explaining fine-tuning.

Because even if we did prove quarks have no substructure, just properties, that would not mean the mass of any quark could be different than it is. That is just a tautology; "If the laws of physics were even a tiny bit different, reality could be very different."

Sure. But that observation doesn't give us license to claim the laws of physics vary or can be any different.

@Phillip Helbig: "...perhaps people will stop talking about the universe if someone comes up with a better explanation for the values of the physical constants, the fact that many are fine-tuned, etc."

You have it exactly backwards. When you stop thinking of the cosmos as a "universe" many of the fine-tuning problems will disappear. The cosmological constant, for instance, will simply disappear since its only role is to reconcile the "universe" model derived from the FLRW metric with observations.

"fine-tuned is independent of the question whether it is unlikely.""Tuned" means set to a value; "fine" implies a very particular value. The clue is in the term used.

"Alas, the topic is too complex to discuss in a comment box,"Your misunderstanding is very easily summarised by example: the Cosm. Const. appears to be a very particular value, but no-one knows why it is that value - there could be a perfectly natural explanation. You don't understand this sentence, apparently.

" the book by Lewis and Barnes "A book written by cranks one of whom is blatantly pushing the whacko theory that Jesus' papa made the universe, and has received over $300,000 to push this nonsense by the corrupt Templeton Foundation. Why don't they publish something in a reputable journal? Because they have nothing to say.

"which can't be dismissed by a few speech-balloon style jibes."You have continuously failed to answer the simple point being made. Previously, you claimed that suggesting the Cosm. Const. may have to be the value it is is ridiculous as the distribution of the possible values of the Cosm. Const. would be represented by a delta function! Ho, ho, ho.

There is only one known possible value of the Cosm. Const., and that is the one that has been observed. Similarly for the speed of light, etc., etc. Do you really not understand this?

"there are several other possible explanations, but no-one has come up with them yet."

So why are you stating fine-tuning is a fact, then?

If the Cosmological Constant were a "slightly" different value, the nature of the universe might have been very different. *If*. It's not known that it can be a different value, though. That's the point - you don't understand modus ponens, i.e. basic logic.

"When you stop thinking of the cosmos as a "universe" many of the fine-tuning problems will disappear. The cosmological constant, for instance, will simply disappear since its only role is to reconcile the "universe" model derived from the FLRW metric with observations."

Where did you ever get that idea? The cosmological constant was in the first paper on relativistic cosmology, in 1917, more than 100 years ago, and even before that there were ideas about a Newtonian version. Claiming that the cosmological constant is some sort of fudge factor displays a huge ignorance both of physics and of the history of physics.

"You are using my reply out of context. I was talking about the probability with regard to the finetuning of paramters in the multiverse, pointing out that there is no way to obtain this probability out of principle. Of course if you find a way to actually compute a parameter, that would be great, but that's a different story entirely and has nothing to do with the multiverse."

I agree that there is no way to know this now, but that might change. Other solar systems are a type of multiverse (and explain why the Earth is at exactly the right distance from the Sun to enable life). I'm no expert on solar-system formation, but it seems reasonable that some sort of probability distribution for various properties of planets could be produced, if not now then in the future, and we could then calculate how likely our Earth is.

But this misses the point, namely that the probability distribution is irrelevant to the multiverse explanation. Suppose we observe some examples of fine-tuning, and realize that if the parameters were slightly different life could not exist (whether they could be slightly different is another question). If the parameters could take on different values in the many universes of the multiverse, then there is no surprise that life observes parameters compatible with life. How likely these values are is completely irrelevant, just as it is irrelevant what fraction of planets in the universe have life.

On the other hand, if we want to use the multiverse to explain the value of a parameter which is not essential to life, then we would need a probability distribution, which we don't have (but, again, I see no reason to believe that we will never have it).

This is perhaps my last reply to Steven Evans, who doesn't seem to be interested in genuine discussion. However, some readers here might be interested in sensible replies to his ejaculations. Let me ask, though, if you have anything important to say and, if so, whether it has been published in a reputable journal. The last time I looked, PASA was a reputable journal; Barnes has published on this topic there.

Yes, he is a theist. I am an atheist. That doesn't make his ideas on fine-tuning wrong, any more than if he says "cigarette smoking causes cancer" I shouldn't believe him because he is a theist and so everything he says must be wrong. You seem to lack a basic understanding of logic. The biggest fool can say that the sun is shining, but that doesn't make it dark outside.

You don't even appreciate the irony of this ad-hominem criticism of Barnes in the context of the fine-tuning debate. Historically, the fine-tuning of organisms to their environment was seen as evidence for a Creator, then Darwin came up with a naturalistic explanation. In other words, the fact of fine-tuning is independent of what one believes is the explanation for it. No evolutionist doubts the examples of fine-tuning of organisms touted by intelligent-design advocates; they doubt the explanation touted by said advocates.

Templeton money? I dislike the Templeton Foundation because of their agenda, and I think that, above a certain pay grade (Barnes isn't there, but Martin Rees definitely is), one should not take money from them as a matter of principle, at least if one is not religious. However, while they do support some anti-science stuff, they also support a lot of stuff which has nothing to say about their agenda, or is even opposed to it.

The problem with taking money from the Templeton Foundation is that it makes it look like a credible institution; at least in most cases, the source of the money doesn't seem to influence the research. (In Barnes's case, I'm sure that he was a theist before he had even heard of the Templeton Foundation.)

Claiming that the cosmological constant is some sort of fudge factor displays a huge ignorance both of physics and of the history of physics.

While I didn't claim the cc is a fudge factor, I do appreciate your taking the time to bring up the historical record demonstrating that it is indeed, just that.

In 1917, Einstein deployed the cc to produce a stable (neither expanding nor contracting) model of the "universe". Ten years later, when the observed cosmological redshift-distance relation was misinterpreted as consequent upon a "universal expansion", the cc was retired and often referred to as Einstein's "greatest mistake".

Fast forward some 70 years to 1998, and suddenly the cc makes a reappearance as the simplest way to provide a hypothetical "dark energy" to reconcile the "expanding universe" model with observations of a discrepancy between luminosity distance and redshift distance in observations of SnIa supernovae in distant galaxies.

So, the cc can be deployed or ignored as need be, to align the model with observations or assumptions. That means it is serving as a fudge factor - unless you want to redefine the meaning of that term to suit your purposes.

But that wasn't my point, which was that the cc is an artifact of the "universe" model of the cosmos. If the cosmos is not, in fact, a unitary entity engaged in a "universal expansion", then the need for the cc simply vanishes. It is, in other words, a model-dependent constant; it only exists in the model, where it is occasionally deployed (or not) as the model's discrepancies with observation arise. Get rid of that miserable failure of a model and the cc has no reason to exist at all.

In 1917, Einstein deployed the cc to produce a stable (neither expanding or contracting) model of the "universe". Ten years later when the observed cosmological redshift-distance was misinterpreted as consequent upon a "universal expansion", the cc was retired and often referred to as Einstein's greatest mistake."

Fast forward @70 years to 1998

I agree that Einstein used it as a fudge factor, to some extent. However, he couldn't have used just anything; he used something which made sense mathematically and had been discussed before in a Newtonian context. The main point, though, is that the universe is independent of how we learn about it. If someone, even Einstein, introduced it as a fudge factor, that cannot somehow magically cause it to go out of existence.

Your fast-forwarding 70 years is precisely my point: during that time, many cosmological models involving the cosmological constant were investigated. The whole point of observational cosmology is that one fits the parameters to observations. Only in 1998 were the observations good enough that a cosmological constant could no longer be denied. Where is the fudge factor? As to using it or not depending on whether it is needed to fit the data, that applies to everything.

As to your non-universe model of the cosmos, where has it been published?

"This is perhaps my last reply to Steven Evans, who doesn't seem to be interested in genuine discussion."

It is you who has failed to answer the simple points repeatedly put to you. All that is currently known about the CC is that it is the value it has been measured to be in this universe. As soon as people start talking about the distribution of its possible values or saying that if the CC had been “slightly” different then galaxies wouldn’t have formed, they have entered the realm of pure speculation.

Why don’t you answer this point? You haven’t done so far. But apparently it is I who is not interested in genuine discussion.

Also, I am not aware of any physical fact having been discovered via any form of fine-tuning. So it is currently an abstract idea that has no connection with physical reality.

“The last time I looked, PASA was a reputable journal; Barnes has published on this topic there.”

In PASA, Barnes published “The Fine-Tuning of the Universe for Intelligent Life”. So PASA is not a reputable journal, it is a crank journal for cranks to publish nonsense in.

http://sydney.edu.au/news/physics_sifa/3134.html?newsstoryid=16876

Luke Barnes is telling the newspapers that the universe is special in that life has appeared, but not so special that a multiverse explanation is required. How transparently agenda-driven can this guy be? All the data he uses for this attempt to shore up the Genesis fairy tale is *on his PC*, *not observed*, and this fantasy is all paid for by the Templeton Foundation. He’s delusionally insane. How he has kept his job, his God only knows.

‘"cigarette smoking causes cancer" then I shouldn't believe him because he is a theist and so everything he says must be wrong.’

There is evidence that cigarette smoking causes cancer, but no evidence that the Cosm. Const. can be any other value than the one observed. Also, as shown in Barnes’ book and the newspaper article linked to, Barnes is clearly tying his “research” to his lunatic religious beliefs.

“You seem to lack a basic understanding of logic.”

Unlike you, I’m happy to learn. Show me where the faulty logic is, and I will correct it.

“You don't even appreciate the irony of this ad-hominem criticism of Barnes.”

I can only apologise for my deep ignorance, omniscient one. Evolution is adaptation, not fine-tuning. And everything is ultimately happening according to the laws of physics, which seems to be unavoidable in this universe.

“Templeton money? …However, while they do support some anti-science stuff, they also support a lot of stuff which has nothing to say about their agenda, or is even opposed to it.”

Even opposed to it? How even-handed of them! But what we know of reality (the physical record, historical record, etc.) shows religious belief to be 100% nonsense. The Foundation has a clear agenda to try to blur the line between science and philosophy/religion. They are particularly fond of professional scientists who have gone mad, sorry, found God. See multi-million-dollar Templeton grant recipient Andrew Pinsent, “formerly a particle physicist .. at CERN”, whose “publications include work” on “divine action, and the nature of evil.” Templeton are attempting to extend the credibility of science to the ravings of complete and utter lunatics.

“In Barnes's case, I'm sure that he was a theist before he had even heard of the Templeton Foundation.”

What’s that got to do with the price of fish? Their agendas align. Barnes has the training to gather the dodgy data on his PC and TF provide the funds for him to carry out the fraud. According to this kind of “science”, Sonic the Hedgehog is real.

Anyway, the scientific facts are clear: quantum phenomena, general relativistic phenomena, etc., are supported by innumerable data, while string theory, the multiverse, and fine-tuning are all pure speculation. Barnes publishes his nonsense because he is a deeply brainwashed cult member, but what is your excuse for your anti-scientific position?

“The last time I looked, PASA was a reputable journal; Barnes has published on this topic there.”

In PASA, Barnes published “The Fine-Tuning of the Universe for Intelligent Life”. So PASA is not a reputable journal, it is a crank journal for cranks to publish nonsense in.

Just one last example, to illustrate who does not understand elementary logic.

Steven: Barnes's stuff is not good, because not published in a good journal.

Phillip: PASA is a good journal, and Barnes has published there.

Steven: PASA is not a good journal, because Barnes has published there.

Yes, you could claim (wrongly) that PASA is not a good journal because Barnes has published there, but in order to do so, you need independent evidence that his stuff is not good, while your evidence is "he hasn't published in a good journal". Or, you would need independent evidence that PASA isn't a good journal, but you have provided none.

And still you fail to address the main point which is your completely unsupported and ludicrous claim that fine-tuning is a scientific fact. Even one of the researchers who came up with the idea originally admits it's not a scientific fact. Because it plainly isn't.

Do you genuinely not understand the difference between empirical facts and speculation? Do you think "Star Wars" is real?

"you need independent evidence that his stuff is not good,"

The paper is entitled “The Fine-Tuning of the Universe for Intelligent Life”, for God's sake, by Dr. Clearly A. Crank. Anyone with 2 brain cells to rub together has no need to read any further. There is no evidence the universe is fine-tuned and nothing is known about the prevalence of intelligent life in the universe or what forms it can take beyond one data point. PASA should be ashamed to publish such garbage. Anthropic principles and fine-tuning principles have not led to a single scientific fact, so their current scientific status is "completely useless".

I'll rest my case by saying that Martin Rees and Max Tegmark agree with me and not with you, noting that you have still not replied to my questions to you, while I have addressed all your questions to me (but you weren't able or didn't want to acknowledge my answers; perhaps you didn't recognize that they were answers).

Why not concentrate on others, including avowed atheists, who support the idea of the universe being fine-tuned for life, rather than pick the low-hanging fruit which is Barnes?

The whole point of observational cosmology is that one fits the parameters to observations.

Not quite. Those parameters belong to a specific model of the cosmos based on two foundational assumptions made in the 1920s. First there was the unitary assumption inherent in Friedmann's adoption of what has come to be known as the FLRW metric, from which he derived the FLRW equations by inappropriately solving the field equations of GR for that "universal" FLRW metric.

Then came the assumption that the observed cosmological redshift-distance relationship was the consequence of some form of recessional velocity. Coupling that assumption to the first yields the "expanding universe" model that cosmology has been saddled with ever since. There is no empirical evidence supporting either assumption.

Despite the ludicrous depiction of the cosmos provided by the latest version of that model, known as L-CDM, those assumptions are functionally inviolate in modern cosmology - they are not subject to reconsideration. That is because, as you correctly point out, the model can be freely parameterized to meet observations. Unfortunately, that is not science - it is only mathematical model-fitting.

The ability to freely parameterize the model renders it unfalsifiable and therefore unscientific. L-CDM depicts a universe that we do not observe based on archaic assumptions that have no basis in empirical reality.

However, there are lots of predigested and pre-approved papers on the virtues of L-CDM published by the guild of theoretical cosmologists for your consumption (no critical thinking required), if you find comfort in such shared delusions.

I think they are scientific but they are not science. This is how I arrive at that conclusion: When scientists work scientifically, the knowledge produced is scientific, but the knowledge can only become science through experimentation.

No room for Lee Smolin on your list? If I understand it, Smolin has expressed the view that what looks like the collapse of a star into a black hole from one point of view looks like the birth of a universe, a Big Bang, from another point of view. They are the front and back ends, so to speak, of the same singularity. He even speculates about the operation of the law of natural selection, or something akin to it, among universes within the multiverse this entails. So call this the "Darwinian multiverse" theory if we need a name for it.

There exists a mathematical multiverse as the set of all algorithms. Observers are just algorithms: your brain, when it is processing some information, is in some computational state, so it can be identified as a point in the space of all algorithms. A second later, your brain will be in a slightly different computational state, therefore it will be identified as a slightly different algorithm. An (approximate) local notion of inverse time evolution that maps a "future" state to an "initial" state can then be defined if the algorithms have a (possibly imperfect) memory.

So, some of the observers in this space will have a notion of a universe they think they live in, and some of them will be physicists who know a lot about the mathematical laws that explain the outcomes of experiments and observations in the universe they perceive to exist.

I am aware, therefore something exists. Since something exists, math exists (all possible relationships of "1" with itself). Within the mathspace, all systems of relationships exist, including laws of physics which lead to awareness (#5). But I can only be aware of the sliver that leads to MY consciousness (#1, #2, #3). And a simulation of a simpler set of laws (#4) is just another slice of the same mathspace ... like how the set of all even numbers is functionally identical with regards to the number "2" as the set of all whole numbers is with regards to the number "1".

The many-worlds hypothesis (#3) implies immortality from the interior perspective (cheating death -- no matter how improbably -- is always an option) and can be tested statistically, akin to how if you flip a coin repeatedly and get 30 heads in a row, this would strongly imply that your assumptions about the coin are in error -- NOT that you encountered an unlikely outcome for an unbiased coin.

I was born in 1971. If I am conscious past the year 2091, this would be an unexpected outcome from my current perspective, even if the explanation for my continued awareness in 2092 is unremarkable at that time. As long as entropy exists, no matter what our level of technological sophistication, there will be a point in time beyond which I should have no reasonable expectation of being alive. If I reach such a point, however, I can make a prediction about a new limit beyond which I shouldn't survive, and if I survive 30 such cycles of prediction and upset, this implies death is not a phenomenon which applies to the internal perspective.

This might take a while, though. If each threshold is achieved using technology that allows survival for 10 times longer than previous expectations, then 30 cycles would take 10^32 years (admittedly still an eye-blink for an immortal consciousness).
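For what it's worth, the two numbers in the comment above check out with a few lines of arithmetic. This is only a back-of-envelope sketch; the roughly 100-year starting horizon is my assumption, chosen because it reproduces the commenter's 10^32 figure.

```python
# Chance of 30 heads in a row from an unbiased coin: about one in a
# billion, which is why 30 "impossible" survivals would suggest a wrong
# assumption rather than a lucky streak.
p_thirty_heads = 0.5 ** 30
print(f"P(30 heads in a row) = {p_thirty_heads:.2e}")  # about 9.31e-10

# If each new technology stretches the survival horizon tenfold, then
# starting from an assumed horizon of ~100 years, 30 such cycles reach
# 100 * 10**30 = 1e32 years, matching the comment's estimate.
horizon_years = 100 * 10 ** 30
print(f"Horizon after 30 cycles: {horizon_years:.1e} years")
```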

The bare bones idea of inflation has some support. It does solve the flatness problem and indicates one way that distant regions of the universe may have been causally connected in the earliest times. The B-mode issue is unfortunately unresolved, but if it can be shown that some set of what appear to be B-modes are due to gravitational waves, this would be a big boost for inflation.

Whether there is eternal inflation and the multiverse is far less clear. The idea does give some generalization to inflation. The problem is there are now many possible inflaton potential functions that can give rise to the universe that appears inflationary, but we have no criteria for determining which are false. If there is some de Sitter-like manifold with a huge cosmological constant, and which by vacuum instability gives rise to pocket worlds, the only prospect for determining much is if there are evidences of interactions between these pockets. The hypothesis this is a manifestation of the string landscape, which for various reasons is maybe more swampland, is in this uncertain set of hypotheses. The problem is that supersymmetry is broken for positive cosmological constant and so if there is this inflating dS spacetime SUSY is likely very highly broken. So the role of string theory is very difficult to address.

The mathematical universe and the many-worlds idea I do not take as seriously. The many-worlds interpretation is an auxiliary assumption on QM meant to give some interpretation of measurements. There is no reason to think this interpretation is correct and the others false. The simulation hypothesis I regard as pure science fiction.

Lawrence Crowell: I was under the impression the flatness problem is equally solved by the Big Bounce, without the need for the instantaneous creation of all matter and inflation, and thus also the need for eternal inflation. It seems a more parsimonious theory to me.

I think the big bounce is ruled out by data. The accelerated expansion of the universe pretty clearly excludes the possibility of a recollapse. I say "pretty clearly" because surprises can happen. Also, there is a curious incompatibility in the measurement of the cosmological constant, or Hubble parameter, which so far indicates it is increasing.

Penrose has another take on this, but I think it is problematic. He says space and time cease to exist when there is no thermal energy to run a clock. Interesting idea, and something I thought of as an undergraduate. The problem is there is no thermodynamic minimum or equilibrium in spacetime physics.

I find it hilarious that the Big Bang ate itself by hacking up a theory of eternity, like the Steady State model it once competed against. Nothing could be more representative of the tenuous nature of cosmology as science :)

BTW that Steady State model had a thing called the Hoyle-Narlikar C-field, which was matter created from nothing, or from negative energy, if you prefer. So it could be called the Little Bang theory.

Yes, I am making an effort to streamline the video production. The reason is not hard to guess: if I want to be able to continue this blog, I need to make more money. It's not paying off. And the bulk of people, it seems, are on YouTube, not on Blogger.

I have now found a mode of operation in which I basically use the same text (with some minor tweaks) for the blog and for the video. This is good recycling. But I constantly make mistakes with the video recording and editing and have to redo it. This eats up a lot of time. I hope, however, that with some more practice it will work a little faster. As someone (I think on this blog) pointed out to me years ago, it's really all about having a good workflow.

But honestly, even with my most optimistic projections, I cannot see ad revenues ever coming into a range where it would be sustainable economically. This would mean I'd have to increase my audience by a factor of 1000, which just isn't going to happen (at least not in this universe). This means that sooner or later most of my writing will either vanish entirely or vanish behind paywalls.

I do have a Patreon account, but I don't use it. The reason is that they expect a commitment on the amount of content produced regularly. I cannot make any such commitment. I sometimes have deadlines popping up that pretty much stall any outreach activity for months. That's why I use the donation button that people can use for content that is available, rather than paying forward for what they expect. But of course the vast majority of people are free riders. And that's simply not sustainable.

Sabine Hossenfelder: ...even with my most optimistic projections, I cannot see ad revenues ever coming into a range where it would be sustainable economically. This would mean I'd have to increase my audience by a factor of 1000,

Everyone knows that unsolicited advice is the best advice, especially from strangers!

But I do know something about marketing. I began and ran a niche company from scratch (a software product in a corner of the US legal system). Something like the product was already out there, a friend of mine pointed it out, and we partnered for it. I began with 12 competitors, and put 10 of them out of business within two years, and captured 70% of the available market. I wrote every word of marketing and advertising (with feedback from my partner). We sold out after three years.

I think your problem is incoherency in your marketing approach. You are throwing free content, and nothing but free content, into the world with hardly a plan for making it do any work for you.

Your free content should still come at a price: Valid contact information; a real email address or some way for you to notify casual readers of your other free content.

You don't have to run any paid advertisements with your free content, but you should advertise your own work, paid and unpaid, with all your free content.

The answer is not to hide everything behind a paywall, but to put about half of what you do behind a subscription service.

The free parts are your "red meat" posts (Does God Exist?), short YouTube videos, FCC articles, some educational stuff. Perhaps occasionally more serious stuff, but most of that is for subscribers. You don't have to do anything you are not doing now; you don't have to promise to keep a schedule, just make it an annual subscription fee. It is clear you are a one-woman show, with academic responsibilities, writing books and attending conferences. So advertise an approximation, historically you have averaged X articles per year.

What you are doing now is a "loss leader", a free trial, but you have nothing to sell after that (well, your book). You are right, free riders will abound if altruism is the only reason to pay. You need a product!

I know many academics feel like sales is a grubby business. I did, when I was young. But (30 years ago) I thought the opportunity to execute this product “the right way” was too great to pass up, so I approached the whole thing academically. Or like an engineer. I am not a salesman by nature, I am not at all socially skilled (or even competent). But I can do research and learn new things, so I did, and it worked great.

Your market is people that like your writing. Ad agencies are happy with a 1% response rate; likely 1% would pay a small annual subscription fee. You find the 100% with what you can give away free (which you can also post to your subscription version), and then it is 1% of those that can be convinced to pay for the other half of what you produce: people that like real science. Maybe that's where you engage in 3/4 of your subscriber interaction.

Always remind the free riders, on Twitter and the blog and YouTube, what they are missing that is only on your website, using unobtrusive side-bar ads, perhaps. Put some of your comments there, so they will pay to engage and be heard.

You have 26K twitter followers, and 6.6 million page visits. You are building significant popularity but you need to start making it do some work, in the form of gathering a contact list. Not to spam them, but to keep them aware of you. What would you have to charge 1% of them to make your living as a part-time writer?
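To make the 1% arithmetic concrete, here is a quick back-of-envelope sketch. The follower count comes from the comment above; the annual income target is a made-up figure purely for illustration.

```python
# Illustrative calculation for the "1% of followers subscribe" pitch.
followers = 26_000
conversion_rate = 0.01          # the 1% response rate ad agencies accept
subscribers = int(followers * conversion_rate)   # 260 paying readers

target_income = 50_000          # assumed annual income target (illustrative)
fee_per_year = target_income / subscribers

print(f"{subscribers} subscribers -> ${fee_per_year:.0f}/year each")
# -> 260 subscribers -> $192/year each
```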

You need a sales machine that practically runs itself, and although I have not built one online, I don't think one should be insurmountably difficult to build.

Sorry for the know-it-all Uncle approach, especially if you have already considered it all. Just sharing my experience to try and help.

I hear what you are saying and I appreciate the advice, but it's not workable. I have no medium to do that. I can't simply put half of this blog behind a paywall, how is this supposed to work? I'd have to move this blog to a different platform, and that would mean losing 95% of readers. And even if I could, how would I cash in on it?

(The actual number of page visits is about twice as high. It's just that I only added this counter half into the game.)

Yeah, basically I'd need money to make money. Not that that's a new insight.

Fair enough; then logically the product to sell is not this blog. You say you are trying to streamline your video production, perhaps that becomes a paid channel in some way.

You cash in via subscriptions, like a magazine does for its writing, or via selling books, or via pay-per-view like a movie studio does.

You find the time by carving it out of the current 'free content' production time. I don't think you (in particular) need money to make money; you have invested ten plus years in creating a public persona that has leverage. People have confidence in you and your thinking, they trust you and want to hear what you have to say. Your value to them is not just that you are free, you have competed successfully with a billion other free blogs. Your additional asset is quality through clarity, logic, and humor unique to you. Which I do not say to flatter you: Those assets are rare enough that people will pay to experience them.

I'd consider the fame-building investment phase a success; and though it demands some maintenance to sustain it, without adding effort or money, you can divert some of your effort into the creation of a product, video channel, publications or books or some product you can use your fame to promote, to all the people you already reach that know they want to hear what you have to say.

In FQXi there was a proposal of an experiment to test the virtual reality hypothesis involving quantum computing performed at high speeds since presumably there are computational limitations locally. https://fqxi.org/community/forum/topic/846

All five of the theories seem far-out but possible. But, as you have pointed out previously, the over-popularized (by Brian Greene, etc.) string theory is as unsubstantiated experimentally as the other four; and so does not deserve the special attention it has received, both in the press and among some theoretical physicists.

Here we are in "The Universe that discovered itself". We could be a part of a competition where the contestants (Gods) have to create the simplest universe that can create intelligent life which can work out its laws of nature and the initial conditions. Of course if we succeed that might end the competition and bring the universe to an abrupt halt.

John Wheeler proposed that if you were to use a remote detection of particles after they passed a double slit that this would reduce the wave function in a way equivalent to having a detector at the slit. In other words such ex-post facto detection reduces the wave function and determines which slit the electron or photon passed through after the wave has interacted with the slits. This is Wheeler's Delayed Choice Experiment, and it has been tested by actual experiments.

Now suppose intelligent life in a cosmos makes ever more exacting measurements and is able to probe further to the earliest moments of the universe. We might do this with neutrino and gravitational wave astronomy or cosmology. The intended measurement of B-modes in the CMB comes close to this. Now suppose enterprising IGUS (Information Gathering and Utilizing Systems) start to measure the coupling parameters of the early universe. Now ponder: could the measurement of the earliest state of the universe and various observables be a cosmological case of Wheeler's Delayed Choice Experiment? Is the solution to the so-called fine-tuning problem that observable parameters are such because they were selected for by observations by IGUS in subsequent cosmic evolution?

This is interesting to ponder. One may think hard on why this can't be the case or why it must be wrong. If this is correct then how can we test this as a scientific principle? That is a toughy. Also if this is the case it is an argument for there being intelligent life in the rest of the universe. We humans are not a good bet to survive long enough to accomplish this.

Depends on whether or not you take the measurement axiom for fundamental. In the Copenhagen interpretation, the answer is "it's unlikely". In many worlds, the answer is yes, if you speak about all possible universes ("worlds"), but then it's unlikely you'll end up in this particular branch.

In 1986 I read Robert Heinlein's "The Cat Who Walks Through Walls", a fictional story involving Everett-Wheeler universes. The E-W universe craze was getting started and showing up everywhere in science fiction. I really didn't expect professional physicists to take it or anything like it very seriously, but as with all things unexpected, I can't say I'm surprised. That was shortly after the 1984 discoveries in string theory that put it on its path to being the dominant model. Then M-theory to Multiverse to what I see as science fiction again. Isn't this career suicide for physicists, even if it's the only acceptable game in town? The long-term prognosis for the Multiverse doesn't look good...

It was career suicide for Hugh Everett. He proposed the idea of splitting waves or states in 1957 and approached Bohr about it. Bohr was not impressed, and what we now call MWI languished. Everett went on to do Pentagon work, and he was an important thinker in laying down how nuclear war was not winnable or possible, which influenced Nixon in the SALT talks with the USSR. Everett also fell into extreme alcoholism and was a chain smoker. In the 1970s the MWI started to attract a bit of attention, and when I was an undergraduate it was percolating into mainstream physics, largely because of Bryce DeWitt's writing on the subject. Everett had a bit of celebrity giving talks during the late 1970s and died of a heart attack in his early 50s.

David Deutsch is probably the biggest influence in bringing MWI to the mainstream. String theory and its exponents Greene and Carroll have pushed the MWI bandwagon. I am not that committed to it, for I question whether MWI or any quantum interpretation is really quantum physics and not just an adjunct add-on. There is, however, this interesting bit making the rounds

https://phys.org/news/2019-06-physicists-schrodinger-cat.html

which has me scratching my head a bit. There was a PBS episode of NOVA that featured Hugh Everett through the eyes of his son. The whole family seems to have been sadly dysfunctional, and Hugh was negligent of family emotional issues.

If these ideas are included as a part of the "physical sciences", then the physical sciences will lose their differentiation from other forms of human knowledge, such as philosophy and religion. This will be a net loss for humanity, because the "rules" that have made the physical sciences unique have brought us great value.

Science is not the only domain of knowledge, but it is an important one. And science is at its best when its rules are defined narrowly, leaving ambiguous topic areas on the outside.

As promised, here is Olaf Stapledon anticipating the mathematical-universe hypothesis:

Many of these early universes were non-spatial, though none the less physical. And of these non-spatial universes not a few were of the "musical" type, in which space was strangely represented by a dimension corresponding to musical pitch, and capacious with myriads of tonal differences. The creatures appeared to one another as complex patterns and rhythms of tonal characters. They could move their tonal bodies in the dimension of pitch, and sometimes in other dimensions, humanly inconceivable. A creature's body was a more or less constant tonal pattern, with much the same degree of flexibility and minor changefulness as a human body.

...

There followed creations with spatial characters of several dimensions, creations Euclidean and non-Euclidean, creations exemplifying a great diversity of geometrical and physical principles. Sometimes time, or space-time, was the fundamental reality of the cosmos, and the entities were but fleeting modifications of it; but more often, qualitative events were fundamental, and these were related in spatio-temporal manners. In some cases the system of spatial relations was infinite, in others finite though boundless. In some the finite extent of space was of constant magnitude in relation to the atomic material constituents of the cosmos; in some, as in our own cosmos, it was manifested as in many respects "expanding." In others again space "contracted"; so that the end of such a cosmos, rich perhaps in intelligent communities, was the collision and congestion of all its parts, and their final coincidence and vanishing into a dimensionless point.

In some creations expansion and ultimate quiescence were followed by contraction and entirely new kinds of physical activity. Sometimes, for example, gravity was replaced by anti-gravity. All large lumps of matter tended to burst asunder, and all small ones to fly apart from each other. In one such cosmos the law of entropy also was reversed. Energy, instead of gradually spreading itself evenly throughout the cosmos, gradually piled itself upon the ultimate material units. I came in time to suspect that my own cosmos was followed by a reversed cosmos of this kind, in which, of course, the nature of living things was profoundly different from anything conceivable to man. But this is a digression, for I am at present describing much earlier and simpler universes. Many a universe was physically a continuous fluid in which the solid creatures swam. Others were constructed as series of concentric spheres, peopled by diverse orders of creatures. Some quite early universes were quasi-astronomical, consisting of a void sprinkled with rare and minute centers of power. Sometimes the Star Maker fashioned a cosmos which was without any single, objective, physical nature. Its creatures were wholly without influence on one another; but under the direct stimulation of the Star Maker each creature conceived an illusory but reliable and useful physical world of its own, and peopled it with figments of its imagination. These subjective worlds the mathematical genius of the Star Maker correlated in a manner that was perfectly systematic.

Very nice posting Sabine! Imo, we live in one of many (8 or 12) Cyclic SuSy Multiverse bubbles. However, with a bubble configuration (raspberry multiverse) with a central big bang and big crunch, due to different (fermion-repelling) black hole physics. Super Symmetric by multiverse entanglement between all anti-material and material copy bubbles, based on chiral or anti-chiral vacua.

It may turn out there is a multiverse. And many who pursue the question do so with scientific integrity, even if there may never be a way to verify the theories.

One problem is that many arguments for a multiverse are based primarily on an attempt to deny Intelligent Design. The odds of complex life as we know it evolving on earth in a mere few billion years are generally considered so astronomically slim that many infer Intelligent Design or a Creator had to have been involved. The Multiverse is often used as no more than an escape hatch from any implications of ID. It is argued that if there are a near-infinity of universes, the odds become more robust for life as we know it spontaneously evolving.

That is metaphysics, not physics: the multiverse as a crutch for Naturalism. It becomes as much a philosophical faith stance as the Intelligent Design theories it purports to refute.

The Universe we inhabit is either singular, or there are an infinite number of universes. It is unreasonable to think that there are a limited (but greater than 1) number of universes. That being the case, Occam dictates there is only one Universe.

Sabine wrote: I cannot see ad revenues ever coming into a range where it would be sustainable economically.

I listen to a number of podcasts. A lot of them seem to be not only sustainable (covering costs) but profitable. They achieve profitability in two ways: ad revenue from sponsors, or Patreon subscriptions.

Most of the podcasts I listen to are science and economics related, and I'm pleased that such things can be popular enough to be sustainable. One doesn't have to be wildly popular to be profitable. A typical Patreon subscription is $5 a month. If you get a thousand subscribers that's $5,000. Some podcasts have both ad revenue and Patreon subscribers (e.g. the science podcast Startalk hosted by Neil deGrasse Tyson).

The thing is, to maintain subscriber interest, there has to be a steady, reliable output. I get the impression that it's more or less a fulltime job. So sure, it's all about good workflow.

As far as I'm concerned, there aren't enough good science podcasts. Science is woefully under-communicated in the US; the average American would rather talk about Meghan Markle than science topics. When Americans talk about science, more often than not it's a partisan argument over climate change, evolution, or when life begins (abortion). Science is becoming synonymous with politics. Thank goodness there are still many good science books being published each year and enough people buying them. That gives me some hope. It goes without saying that your book, Lost in Math, is one of the good ones.

When I first came across the "Many Worlds" interpretation of QM (many years ago), I recoiled! As far as I can see, MW requires that every single quantum event anywhere in the universe bifurcates (at least) the entire universe. To me, that is simply a mathematical abstraction.

It also seems to have other problems - for example, given a degenerate system, you could have an infinity of possibilities, giving rise to an infinite set of universes.

I find the various versions of the multiverse ugly, but that's a personal problem, I guess. More to the point, it is not clear to me that any of them have observable consequences. If they don't, what do they have to do with physics?

Very interesting. And very similar to older theories on how many angels can dance on the head of a pin. The Universe being so large, somewhere there must be angels dancing, so this must be science. I may not be a physicist, but I am a taxpayer. Although I have no objection at all if people (and scientists) want to pursue these things, I should prefer that they pay for it from their day-job at the Patent Office.

So, mathematicism trips wildly through the verdant fields of science fantasy, falling to its knees to admire its reflection in a puddle of metaphysical grotesqueries. Interesting if you like that sort of thing I suppose, but science it ain't.

My issue is that all of these theories fail to address the absurdly young age of our Universe compared to its potential age. The odds are that 'now' wouldn't be in the first one-trillionth of this potential age. It's more likely that the age of the Universe, 13.7 billion years, is at least a fifth or more of the potential age. So... our Universe will end in less than 70 billion years... maybe 25 billion. None of these theories ever addresses this very obvious mathematical probability.

They do have observational consequences, because in a generic multiverse scenario any particular observer only has access to a finite amount of information about her local environment (including herself). This causes the observer, defined by all the information she has access to, to have multiple copies across a range of very similar universes. The effective laws of physics that apply to the observer will then have an additional probabilistic aspect: the probability distribution over universes will enter into it. So, the laws of physics we observe, described by the Standard Model, should in part have a multiverse contribution.

Read some of Your posts (very quickly) - Love them all, but very little time to comment. Perhaps I can comment here on more than one. It was said a long time ago.

If God did not exist, Man would have invented him. - not exclusive, Both of these things are possible. It can also be said (with a humorous twist) - If 'The Multiverse' did not exist, Man would invent it.

If there are theories YOU cannot form - because some equations lead to 'infinity' or a 'singularity'. If there are phenomena YOU cannot scientifically explain. If there are questions YOU cannot answer... then... invent something that can.

Even as a cynic, I hope that this is not some 'basic default mode' of human nature.

"Werner Heisenberg already wrote in Die physikalischen Prinzipien der Quantentheorie: "One must remember here that human language generally permits the formation of sentences from which no consequences can be drawn, i.e. which are actually completely empty of content, although they convey a kind of vivid idea. Thus, for example, the assertion that there is a second world besides our world, with which, however, in principle, no connection is possible, leads to no conclusion at all; nevertheless, in our imagination, this assertion produces a kind of picture." Anton Zeilinger comments on this sentence in his foreword to the quoted edition of Heisenberg's book: "A reflection on statements of this kind would considerably shorten many of today's discussions of interpretation"."

from Wikipedia https://de.wikipedia.org/wiki/Viele-Welten-Interpretation

Number 5 – the Mathematical universe – is correct. However, the universe is also a fractal. Therefore it is like a simulation from the human point of view (number 4). Our universe is a volume so all the changes are transformations (math). But transformations within a volume are topological deformations that show average behavior that looks like probability (partly number 3). The transfer of a quantum in space can be compared roughly with a string (number 2). Number 1 shows a lack of knowledge about the amplitudes of the average electromagnetic field in the early universe. The average amplitudes decrease by the creation of mass and rest mass. Our universe is non-local thus the wave length of “light” will increase during the evolution of the universe.

Yes, but not in the minds of mainstream physicists (yet). It's complicated but in overview, a model of time as a field of energy with two opposing waves (with a net zero energy) does explain many conundrums while aligning perfectly with current theory. CPT Symmetry demonstrates this but to date, no one has taken it further. (Read "The Binary Universe")

You, and others dispute the reality of the cc, proclaiming it to be nothing more than some artifact of our current model of the universe.

The cc is an observed entity, a process that we can quantify from the red shift of light from distant galaxies, but I'm sure you know that. Then how do you explain or interpret this red shift if not by physical expansion?

Mind you, the physical "expansion" of space itself is a taxing notion, seemingly nonsensical and without an identified cause, an interpretation of an observation, an opinion!

There is another way to interpret this observed effect which is not nonsensical and which does present a cause for the effect. That is, if we consider that the rate of passage of time has not been constant over the eons, but has been increasing since the big bang. This being the case, we would observe a red shift in all directions, and which increases in value the further back in time we look. This effect would be observed from any position and in any direction and we would not have to invent some physical expansion of nothing.

Let's not forget that SR tells us the space becomes "smaller" (in the direction of travel) as we speed up. Space actually loses its meaning for the traveller as he/she approaches "c" and as time stops. GR also tells us the same thing, that space loses its meaning for an occupant of the event horizon and as time again has slowed to zero rate. One might conclude that the passage of time "creates" space.

Clearly, the process of time defines the "size" of space. With no time there IS no space. This view would demonstrate HOW space could "expand" over time, with time increasing in rate over time. We do not have to search for a cause for the impossible "expansion" of "nothing", and we can see that this view aligns perfectly with relativity theory.

Hubble should have considered this option as an alternative to some "physical expansion", but he didn't. Today we are stuck with this single minded interpretation which has no causality and simply does not make sense. I have suggested here an alternative which is worthy of consideration. As far as I can determine, it works much better and indeed explains many other conundrums. (Read "The Binary Universe").

The cc is an observed entity, a process that we can quantify from the red shift of light from distant galaxies, but I'm sure you know that.

That sentence is simply wrong on every level. The cc is not an entity (a physical thing); it is a parameter of the standard model of cosmology, and its value has varied significantly over the lifetime of the model. Its current value has been chosen as a way to account for a supposed acceleration in the expansion of the "universe".

This supposed acceleration is inferred from a discrepancy in luminosity distance and redshift distance measurements of distant SnIa supernovae. The purpose of the cc, then, is to provide the energy for that inferred acceleration and so reconcile the standard model with observations.

Then how do you explain or interpret this red shift if not by physical expansion?

Using the GR equation for gravitational redshift calculate the redshift for an expanding spherical wavefront of light emitted by a galaxy at increasing radii, recalculating the entire mass content of the sphere, using a reasonable mass density estimate, at each iteration. You can do it on a spreadsheet. It yields a crude redshift-distance relationship over significant cosmological distances.
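For concreteness, the iteration described above can be sketched in a few lines of Python rather than a spreadsheet. The density value is an assumption (roughly the critical density; the comment leaves the "reasonable mass density estimate" unspecified), and the formula is the commenter's Schwarzschild-style recipe, not standard FLRW cosmology:

```python
import math

# Constants in SI units; rho is an ASSUMED mean mass density,
# chosen near the critical density for illustration only.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
rho = 9.5e-27      # assumed mean mass density, kg/m^3
Mpc = 3.086e22     # metres per megaparsec

def redshift_at(r):
    """Gravitational redshift, per the recipe above, for a light
    wavefront that has expanded to radius r: treat the enclosed mass
    M(r) of a uniform-density sphere as a Schwarzschild mass and apply
    z = 1/sqrt(1 - 2GM/(r c^2)) - 1."""
    M = (4.0 / 3.0) * math.pi * r**3 * rho  # recalculated enclosed mass
    return 1.0 / math.sqrt(1.0 - 2.0 * G * M / (r * c**2)) - 1.0

# Crude redshift-distance relationship at a few sample distances
for d in (100, 1000, 3000):
    print(f"{d} Mpc -> z = {redshift_at(d * Mpc):.4f}")
```

Because the enclosed mass grows as r^3, the term 2GM/(r c^2) grows as r^2, so this recipe yields a redshift that increases with distance, as claimed.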

As to space and time, there is no evidence whatsoever that they are substantival entities, rather than simply relational concepts like temperature. There is no empirical evidence that space, time, and/or spacetime have a causal relationship with empirical reality. The relational concepts of space and time do not cause the movement and change of physical objects anymore than temperature causes things to get hot or cold.

"The cc is not an entity (a physical thing) it is parameter of the standard model of cosmology."

Well, entity is perhaps the wrong word. You are right, it is not a physical thing. But, it IS a physical PROCESS, either an expansion (a physical change in separation) or a change in time rate (a physical change in rate at which events occur). Certainly, it is more than just a "parameter".

GR does not infer just a change in the frequency of light over distance travelled. That's the old "tired light theory". It actually predicts an expanding universe and that is different.

"... there is no evidence whatsoever that they (time and space), are substantive entities, rather than simply relational concepts like temperature."

The absence of evidence is not evidence of absence, and we are at liberty to speculate. After all, what is more speculative than a multiverse?

"The relational concepts of space and time do not cause the movement and change of physical objects anymore than temperature causes things to get hot or cold."

This statement is indeed what you would call, "wrong on every level". Clearly, the process of time, of change, is the driver of all events and processes, the complete opposite to your conjecture.

Oh yes it is, in a scientific context absence of evidence is indeed evidence of absence. That nonsensical argument is beloved of certain philosophical types who like to think their fevered metaphysical imaginings carry more weight than any contravening empirical evidence.

If you point to a box and say there is money in it and I open the box and show you that there is no money in it, are you going to argue that the money is really there despite the absence of evidence for your claim?

...we are at liberty to speculate. After all, what is more speculative than a multiverse?

Certainly, you are at liberty to speculate to your heart's content but if your various philosophical musings don't resolve to empirically verifiable objects or events then you are only doing metaphysics. Science is not, (or at least is not supposed to be) a compendium of idle speculations by reality challenged philosophers.

...the process of time, of change, is the driver of all events and processes...

Changes are real physical processes, time is a human devised relational concept for measuring rates of change. Neither you nor anyone else has any empirical evidence for the existence of a substantival time or space that affects the matter-energy content of the cosmos.

Time and space are only relational concepts - that is the only view that the absence of evidence supports. If you find such a view does not accord with your fervently held beliefs, I'm sorry to inform you that science doesn't care what you believe.

I'm not sure you are in a position to patronise given the palpable nonsense you have written:

" In that case, the observed values would have a probability of 1, but they would still be fine-tuned."If a constant can only take one value then it isn't "tuned", is it? It can't be any other value. This is a misnomer.

"Perhaps the correct theory says that the masses are essentially random variables, then the observed values would be unlikely, but the fact that we observe them could be explained by the anthropic principle in the multiverse, etc."

"Perhaps" is the key word here. Pure speculation.

"Whether fine-tuning is problematic is a different question still, but this could be answered only after one knows whether the value is unlikely or not and whether it is relevant for life."

And you do not have the first clue about the likelihood of values or their relevance to "life". More pure speculation.

" (Lots of things might be random variables, but, like someone winning the lottery every week, we have to observe some value, so such a thing could be unlikely yet not problematic.)"

You mean like there being a winning number chosen by the lottery machine every week. Someone winning the lottery every week is due to the fact that so many people play it.

You started out by saying that fine-tuning was a scientific fact, but your examples are either misnomers or pure speculation.

The issue here, as with other arguments I've seen in these comments with Luke Barnes, Phillip Goff, Tam (forgot his surname), is not some technical point, it is your psychology: you are arguing without actually possessing an argument like a religious loony.

Not one single scientific fact has been discovered by the application of anthropic principles or fine-tuning ideas. Do you agree or not? If not, then provide such a fact. Simples.

Finetuning arguments are not always unscientific. They are scientific if you have a way to determine the probability distribution from observation. Take the well-known example of a pencil balanced on its tip. That's finetuned. You can say that because if you threw a billion pencils in the air none of them would land on its tip. (Of course you don't need to actually do this because we have a lot of other observations from which we have inferred the behavior of pencils already, but this is an example for what you could do.)
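The thrown-pencil argument can be sketched as a toy Monte Carlo. The uniform distribution over resting angles and the tolerance `eps` are assumptions made purely for illustration; the point is only that once a distribution is specified, the probability of the balanced configuration is well defined and tiny:

```python
import random

def toss_pencils(n, eps=1e-9, seed=0):
    """Toy Monte Carlo for the pencil example: each thrown pencil comes
    to rest at a tilt drawn uniformly from [0, 1] (in units of a right
    angle), and counts as 'balanced on its tip' only if the tilt is
    within eps of exactly upright. Returns the estimated probability."""
    rng = random.Random(seed)
    balanced = sum(1 for _ in range(n) if rng.random() < eps)
    return balanced / n

# With a realistically tiny tolerance, essentially no pencil balances.
print(toss_pencils(1_000_000))
```

Going the other way - inferring the distribution itself from observed outcomes - is exactly what is impossible for the parameters in the laws of nature, since we only ever observe one draw.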

When it comes to the finetuning of parameters in the laws of nature, however, there is no way to ever obtain any information about the probability distribution. That's why the supposed problem is ill-defined.

Let me iterate what I said earlier, it is possible to define a notion of fine-tuning without drawing on a probability, but then it's unclear why a fine-tuned parameter is problematic and needs explanation. If you want to claim it needs explanation, you need the probability distribution.

This is not an impossibility of the type that hinges on not being able to make a good enough measurement, it's an impossibility in principle. Talking about finetuning of the laws of nature is unscientific because you cannot observe the laws of nature in other universes, not ever.

I have explained all this hundreds of times and really if Phillip doesn't understand by now in extrapolation I deduce he will never understand it. I am just repeating this in the hope to clarify really what the debate is about.

"I have explained all this hundreds of times and really if Phillip doesn't understand by now in extrapolation I deduce he will never understand it. I am just repeating this in the hope to clarify really what the debate is about."

As I've said, if your "Screams for explanation" ever appears in a respectable journal, I'll try to reply there. :-)

I've always had a problem with logical positivism. On the one hand we have theories which tell about observables. These have truth-value because observations will either confirm or disconfirm such theories. Then there are formal constructions - math, etc. These have truth value because the reasoning behind them can be tested, true or false.

Where I have trouble is that there seem to be theories which fit in both categories. Logical constructions which predict observables but also have parts which don't seem to correspond to any observables. What are we to make of such things?

Will they perhaps be modified so as to enable us to confirm, disconfirm in their entirety? Can we accept them provisionally? What if there is no prospect of modification?

I've never understood the need for the concept of fine-tuning. An example I've heard given is inflationary theory to explain universal flatness. But that's just a proposed explanation for a physical phenomenon. Why would one call it fine-tuning? One wouldn't say gravity is fine-tuned to give the Earth a pretty elliptic orbit, but that Earth's orbit is elliptic because gravity obeys an inverse square law.

"I am just repeating this in the hope to clarify really what the debate is about. "The issue seems to be more psychological (Phillip Helbig) and political (Luke Barnes) than technical, like PASA publishing a paper in which the title has no meaning in physics and the crank author is motivated by his love of baby Jesus. "The Fine-Tuning of the Universe for Intelligent Life". Seriously?

In Maths you can't publish it unless you can prove it; in Physics you shouldn't be able to publish it unless you can measure it. That would keep the cranks like Luke Barnes at bay.

Ah, I see. Phillip Helbig wrote the Fortunate Universe review for The Observatory, so he's sticking with his answer there. Doesn't want to rock the boat with his only remaining gig in semi-academia. The Fortunate Universe has a foreword by a Nobel Prize winner, who lives in Australia, so presumably it's some great Australian mates thing. I am staggered that a Nobel Prize winner would put his name to a book which uses classic conspiracy-theory tactics - listing several examples of very large numbers and very small numbers that "can't just be", when actually each example carries zero weight in support of the hypothesis. And the book finishes with the greatest conspiracy theory of all - baby Jesus' daddy made the universe. While reviews of the book in publications like Physics Today don't even mention that it's all speculation and the ending is voodoo.

So a Nobel Prize-winning physicist has put his name to a book that ends by claiming some mythical character from a Bronze Age fairy tale made the universe.

This is completely f*cked. A Nobel Prize winner is supporting the deliberate corruption of physics by a religious lunatic.

Did he get it for providing evidence of universal fine-tuning? Nope. Mmm... Did he get it for providing evidence of a multiverse? Nope. Mmm... Did he get it for providing evidence that baby Jesus' daddy "created" the universe? Nope.

He got it for taking *physical measurements*.

He measured properties that *have actual meaning* in physics, unlike all the drivel in Fortunate Universe which he clearly then should not have been putting his name to.

Anyway, I now know you did a review of Fortunate Universe, and you're here. So why don't you reveal what other values the Cosm. Const. could have taken and how you know this?

"Finetuning arguments are not always unscientific. They are scientific if you have a way to determine the probability distribution from observation. Take the well-known example of a pencil balanced on its tip. That's finetuned. You can say that because if you threw a billion pencils in the air none of them would land on its tip. (Of course you don't need to actually do this because we have a lot of other observations from which we have inferred the behavior of pencils already, but this is an example for what you could do.)

When it comes to the finetuning of parameters in the laws of nature, however, there is no way to ever obtain any information about the probability distribution. That's why the supposed problem is ill-defined.

Let me iterate what I said earlier, it is possible to define a notion of fine-tuning without drawing on a probability, but then it's unclear why a fine-tuned parameter is problematic and needs explanation. If you want to claim it needs explanation, you need the probability distribution.

This is not an impossibility of the type that hinges on not being able to make a good enough measurement, it's an impossibility in principle. Talking about finetuning of the laws of nature is unscientific because you cannot observe the laws of nature in other universes, not ever."

I think that there might be a semantic problem. Some people use "fine-tuned" as a synonym for "unlikely". Some use it to mean something which, if slightly changed, would have large consequences. In the latter sense, whether a fine-tuning is unlikely is not a tautology; it's a separate question.

I also don't see the reason why you think that, even in principle, we can never know the probability distribution for the laws of nature. 5000 years ago, one could have said the same thing about the Earth, now we know how planets form etc (other solar systems are a type of multiverse).

But even with no knowledge of probability distributions, the multiverse is a sufficient explanation for quantities which are fine-tuned for life; how likely these are, as long as there is a finite probability, doesn't matter, just like the fact that the Earth is at the right distance from the Sun for life is a fact which does not depend on how likely that is.

I'm not saying that it is the only explanation, but it is a possible explanation of why the universe is fine-tuned for life. Of course, if you can show that the various constants of nature must have the values they have, that would also be an explanation, but I haven't seen that demonstrated anywhere.

The fine-tuned distance of the Earth from the Sun---just the right distance to allow our sort of life---is a good analogy.

Aaaand here we go again with me repeating once again why your statement is logically wrong. "The universe is fine-tuned for life" is not an empirical fact. It is a statement about an unobservable probability distribution. I have already explained several times why it is unobservable in principle. There is nothing in need of explanation here. We have a set of parameters. We use these parameters to make predictions. Saying "all other parameters exist" adds no explanatory power. It's literally the same as saying "god exists" - that too adds no explanatory power.

And you state this as *a fact*! So not only do you *know* that the physical constants can take other values (otherwise why describe them as "tuned"?), you also *know* all possible forms of "life" and their likelihood in all these possible universes with different values of the constants!

Are you flying around the multiverse in the Starship Enterprise noting encounters with life forms in your captain's log?

"the Earth is at the right distance from the Sun for life"

There may be or have been life on other planets or moons in our solar system, and then there are the other 10 sextillion star systems to investigate. Life can exist at 93 million miles from the Sun, but it might be able to exist at plenty of other distances from the Sun, too, so to describe this as the "right distance from the Sun for life" is highly illogical, captain.

Continuing what discussion? You haven't provided a single argument for your position, and now you are running away just like little Luke did. You don't have an argument, and neither do little Luke, the Astronomer Royal, Nobel Prize winner Brian Schmidt, and Max Tegmark.

I think I get Sabine's point, but there is another way to look at this: the Standard Model includes a couple dozen constants which are taken as axiomatic. With multiverses you can leave those values out of the theory. Isn't that arguably simpler from Occam's point of view?

It would be a simpler theory if you could leave out the constants, but of course you cannot, because then you would not have a theory that describes our universe. You *still* need to assume these constants one way or the other. In that case, assuming that any combination of constants other than our own exists is superfluous. It's an assumption that should not be made. It's not scientific.

Then you apply a form of the anthropic principle: we (intelligent life) will only find ourselves in universes in which the constants allow intelligent life to exist. I think the question of whether this is preferable by Occam's criterion is at least debatable - see Phillip Helbig's post near the top.

I agree that multiple universes _feel_ extravagant, but we shouldn't let our feelings rule us - isn't that the point of your book?

"The universe is fine-tuned for life" is not an empirical fact. It is a statement about an unobservable probability distribution.

Maybe we have to agree to disagree. In any case, we're not making any progress here. See my definition above of fine-tuning. Of course, things are different if you define fine-tuning as improbability.

Maybe my rhetorical powers are not sufficient. Why not debate, say, Martin Rees or Max Tegmark? You could probably sell tickets for the match of the century, which would help your financial situation after your funding runs out in three years. :-)

I have already explained repeatedly that if you define fine-tuning without the probability, then that's just a property of nature and there is nothing to explain. Your making fun of my job troubles is disgusting.

Note the smiley. :-) Job troubles? At least you are where you want to be and have funding for another three years, which is better than what most people on the planet have. Of course we would all like to be able to do what we want with infinite funding, but if that's not possible, one has to choose.

With regard to the debate, I just want to point out (though one could call it an argument from authority) that many people much smarter than I am also share my views on the multiverse and its relevance for fine-tuning.

There is no reason that I should trust my own feelings on these matters. If physics cannot answer this question, no one can.

Personally I assume they are wrong; I treat them no differently from gods, unicorns, or UFOs. I would change my mind if physics said they are right and real and pointed to some convincing evidence (convincing for other respectable physicists).

The multiverse is just a way for materialist/naturalist scientists and their unquestioning fans to explain away evidence (much of it uncovered in genomics and cosmology) that clearly points to an intelligent source for life and the arena it is played out in. Instead of seriously looking at the data, and considering common everyday experience of processes in physics and chemistry, the best explanation for the origin of the universe and life is that some kind of intelligence is involved.

I find it humorous and sometimes very annoying that simulation theory, which we used to laugh at when it was proposed in the 70's, is given a fair shake, but it of course always leaves the first cause unexplained - i.e. who were the first simulation makers, how intelligent were they, AND WHAT OR WHO made them?

Even Richard Dawkins knows that it is laughable to think random chemical and physical processes just stumbled on the most complex and intricate machine ever discovered, the human cell - packed full of highly specified, purposeful information, all with an encoding and decoding system. So ingenious that segments of DNA can be read in an overlapping manner to increase efficiency and to make two proteins. Even more incredible, dual sets of genetic information can be read to produce thousands of different proteins.

Try to write a letter to your friend that has one meaning in the text and an underlying meaning in the encrypted part of the message - easy, you say? But here is the rub: DNA is not "hiding" the information - both the original letter and the encoded part are highly specific and useful for the cell! We know that this kind of information processing system, complete with incredibly efficient molecular machines, could only come about with foresight and intelligence - it is literally insane to believe it is chance, or chance and chemical necessity, when you look at the absolute genius beyond any human mind that goes into building a single cell.

I strongly believe, as did Max Planck and many other experts and developers of arguably the most successful theory of all time, in the observer-dependent principle of QM. It makes sense and does not go down the rabbit hole of infinite solutions to our biggest questions ever. (The Standard Model is also right up there, but physicists not so secretly hate it, as it is fine-tuned to a mind-blowing level.)

All of the predictions for more exotic physics that could explain away the "illusion" of fine-tuning clearly said it would show up at much lower TeV values even before the upgrade to the LHC, let alone after it. It's all there in black and white, and even after the huge upgrade to the LHC, no new hints of this fine-tuning breaker could be found.

One would think that those interested in such matters would wonder why another divide-by-zero explanation of quantum mechanics is popularized, while the interpretation favored by the developers of quantum mechanics was that the observer's consciousness, or a collective consciousness, or a God consciousness, is primary and space/time are derivative.

The same thing is happening in biology: Neo-Darwinism has been shown to be a joke, and slowly but steadily biologists are openly dumping random chance and selection, as it can do little to nothing.

When developing a mathematical solution to a problem, if one stumbles on a divide-by-zero solution, it is immediately rejected; a theory that explains everything truly explains nothing. AND if you CAN'T explain the first life, let alone the progression of life in THIS universe, then a multiverse is MOOT and has nothing to say about origins.

When string theorists became aware that other groups were "solving" the basic equations of the theory, they were mortified and knew something was wrong, and yet to this day the hype still outweighs the evidence - similar to the hard problem of consciousness: now materialists are claiming it is an illusion, so problem solved, it actually does not exist!
