Saturday, February 21, 2015

Sean Carroll has written a new text where he tries to defend some misconceptions about the "many worlds interpretation" of quantum mechanics while showing that he is totally unable and unwilling to think rationally and honestly.

After some vacuous replies to vacuous complaints by vacuous critics who love to say that physics isn't testable, he claims that the following two assumptions,

The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.

The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.

imply that the worlds where other outcomes of quantum measurements materialized must be as "real" as our branch of this network of parallel worlds. This claim is self-evidently untrue. Quantum mechanics – as understood for 90 years – says that no such worlds "really" exist (they can't even be well-defined in any way) even though the theory respects the postulates.

So Carroll's claim is equivalent to saying that \(12\times 2 = 1917\) because \(1+1=2\) and \(2+3=5\). Sorry but this would-be "derivation" is completely wrong.

There are lots of wrong things about the many-worlds muddy thinking and many ways to prove that the world cannot work like that. But I will focus on the question whether these many worlds are "implied" by the postulates.

Let me comment on a conversation between Carroll and Moshe Rozali.

Moshe: My discomfort (“objection” is too confrontational) is somewhat related, so maybe this is an opportunity for me to learn.

After decoherence, the state in the Hilbert space transforms effectively into a classical probability distribution. We have certain probabilities, given by the Born rule, assigned for every possible outcome. Those possible outcomes no longer interfere. In all other cases where this situation occurs in science, we understand those different possibilities as potentialities, but we shy away from attributing to them independent “existence”.

Now, I am not too worried about ontological baggage. I suspect that in the present context ontology as we understand it cannot be made well defined. Rather, I am worried about the epistemological baggage: what does it buy you, declaring that all those potentialities actually “exist”? Is it more than a rhetorical move? And, why make that move here and not, for example, in the context of statistical mechanics?

Note that Moshe's first goal is not to be confrontational and he softens his language to achieve this goal. Is that really necessary? Well, Moshe's comment is totally right but I think that he also makes the content weaker than it should be.

Moshe's point is that the probabilities predicted in quantum mechanics are analogous to probabilities that people had known before quantum mechanics, when they thought that the world was described by classical physics. But in that old era, they weren't obsessed by saying that the other potentialities were "real". They were just potential outcomes. Analogously, there is no justification for claiming that these potentialities are "real" in quantum mechanics.

One reason why Moshe's language is softer than it should be is that this is not just some vague analogy or an incomplete argument. The probabilities in classical physics are a limiting case of those in quantum mechanics. They are fundamentally the very same thing! For example, when you roll dice, the event is affected by random variations of the neuron impulses that control your muscles, and these random variations depend on quantum phenomena that may only be predicted probabilistically. These variations are amplified when the dice move in a complicated way.

For this reason, the uncertainty about the number shown by the dice is largely driven by the usual uncertainty – probabilistic character – of quantum mechanics. So if the different histories are "real" in quantum mechanics, they must be viewed as equally "real" if you talk about dice in the classical language, too. The probabilities everywhere in the classical world would require all the potentialities to be "real" as well – all these classical probabilities arise in the \(\hbar\to 0\) limit of their quantum counterparts, so they must obviously have the same interpretation.

An unusually good 16-minute critique of MWI for a YouTube channel that has "philosophy" in its name. Adrian Kent is most frequently cited as an anti-MWI authority. I have discussed these problems with MWI (impossibility to assign non-equal and time-dependent relative odds to the branches; impossibility to derive the Born rule from any intrinsically non-probabilistic formalism; inability of MWI to pick any realistic preferred bases) elsewhere; but the video is largely orthogonal to this blog post.

Another reason why Moshe's comment is "diluted" is that one may actually show that there can't exist any consistent way of defining "how many times the world is split into several worlds" and how it happens. If the world were "splitting" too rarely, one would still face the superpositions in situations when someone may see that the outcomes are "sharp". If the world were "splitting" too often, it would effectively mean that a measurement is being made – and the interference is being erased – too often which would contradict experiments. The truth is "in between" – but where it exactly is depends on the observer. If he's convinced he's able to perceive an observable, it must be on the "classical side" of the Heisenberg cut. Everything else may be on the "quantum side".

But I don't want to go in this direction. Let's continue with the discussion whether the "real character of the potentialities" is inevitable.

Sean: Moshe – I might be misunderstanding the question, but I’ll try.

Sean is way too modest. He's not misunderstanding just the question. He is misunderstanding all of modern physics and the concept of rational thinking, too. Why?

Sean: I think this is a case where the ontology does matter.

To a physicist's ear, the sentence "ontology does matter" sounds weird. Why? Because it is weird. The word "ontology" doesn't mean anything in legitimate physics because it cannot be defined in any operational way; and any mathematical definition that may be given to the word may be inadequate as a description of Nature.

What does "ontology" mean? If you analyze what all these would-be thinkers say, you will see that "ontology" and "classical physics" are exactly the same thing.

"Ontology" is the idea that something objectively exists and has properties that are objective. All the information about the things that objectively exist may be written in an objective way. Mathematically, the space of possible states is known as the "phase space" and the dynamical laws determining the evolution of the point in a phase space is known as "classical physics".

But for 90 years, physicists have known that Nature simply doesn't obey classical physics – it doesn't obey this framework of (well-defined) ontology. It works differently, according to the laws of quantum mechanics, and Planck's constant \(\hbar\) directly quantifies how wrong the idea of "ontology" is! Because Planck's constant is nonzero, it can never be quite right to think about Nature in terms of "ontology" or "objective state of physical systems".

Sean: In statistical physics, the theory says that there is some actual situation given by a microstate, but we don’t know what it is.

No, that's wrong, too. Classical statistical physics in no way "demands" that the precise microstate is in principle knowable. The very point of classical statistical physics is that the precise point in the phase space is unknown to the observer but all of classical statistical physics works perfectly if it also assumes that it is unknown and unknowable to Nature (or God), too.

What classical physics allows us to do is to make predictions that assume that some objective reality exists at each point, even before the measurement, and this assumption leads to certain Ansätze for the probabilities. For example, we may always predict the evolution \(A\to C\) by inserting an intermediate moment \(B\) and write\[

P(A_i\to C_f) = \sum_j P(A_i\to B_j) P(B_j\to C_f).

\] One may get from the initial state \(A_i\) in the past to the final state \(C_f\) in the future in many ways but one of the classical states \(B_j\) must be realized at the intermediate moment. To get the probability of getting from \(A_i\) to \(C_f\), we simply sum the probabilities of getting from \(A_i\) to \(C_f\) through any intermediate state \(B_j\).
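The classical composition rule above is easy to evaluate numerically. Here is a minimal sketch with two intermediate states; the transition probabilities are made-up illustrative numbers, not data from any real system:

```python
# P(A -> C) as a sum over intermediate classical states B_j:
# P(A->C) = sum_j P(A->B_j) * P(B_j->C).
# The probabilities below are hypothetical illustrative numbers.

P_A_to_B = [0.3, 0.7]   # P(A -> B_1), P(A -> B_2)
P_B_to_C = [0.5, 0.2]   # P(B_1 -> C), P(B_2 -> C)

# sum the products of probabilities over both intermediate states
P_A_to_C = sum(p1 * p2 for p1, p2 in zip(P_A_to_B, P_B_to_C))
print(P_A_to_C)   # 0.3*0.5 + 0.7*0.2 = 0.29
```

The rule is just matrix multiplication of transition probabilities, which is why classical stochastic evolution composes so simply.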

The formula above is usable in the classical world – the assumption that the probabilities may be written as these sums is the probabilistic reincarnation of the classical notion that an intermediate state exists even if it is not measured. But note that I said that this assumption was classical. It doesn't mean that it's correct. And indeed, physicists have known for 90 years that it is not correct.

In quantum mechanics, the analogous formula works with complex probability amplitudes rather than probabilities,\[
\begin{aligned}
c(A_i\to C_f) &= \sum_j c(A_i\to B_j)\, c(B_j\to C_f),\\
P(A_i\to C_f) &= \left| c(A_i\to C_f) \right|^2.
\end{aligned}
\] The sum over the intermediate states \(B_j\) in the first line is perhaps analogous to the classical sum but it is totally inequivalent, too. Why? One first sums the amplitudes and then squares their sum's absolute value, instead of summing the squares. That's why the interference and other quantum phenomena occur. One may derive the classical formula from the quantum formula in a certain limit – using both the usual \(\hbar\to 0 \) limiting procedures as well as decoherence – but one simply cannot derive the second, quantum equation from the classical one – regardless of the identification of the phase space!
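The inequivalence of the two rules – sum the amplitudes first and then square, versus square each path and then sum – can be exhibited with a toy two-path example. The amplitudes below are hypothetical, chosen so that the two paths interfere destructively:

```python
# Two paths from A to C through intermediate states B_1 and B_2.
# Hypothetical amplitudes with a relative minus sign between the paths.
s = 1 / 2**0.5
A_ab = [s, s]     # amplitudes A -> B_1, A -> B_2
A_bc = [s, -s]    # amplitudes B_1 -> C, B_2 -> C

# quantum rule: sum the amplitudes first, then square the absolute value
p_quantum = abs(sum(a * b for a, b in zip(A_ab, A_bc)))**2   # -> 0.0

# classical rule: square each path's amplitude, then sum the probabilities
p_classical = sum(abs(a)**2 * abs(b)**2
                  for a, b in zip(A_ab, A_bc))               # -> 0.5

print(p_quantum, p_classical)
```

The quantum rule gives zero because the two paths cancel exactly; the classical rule, which has no access to the relative phase, predicts 50%. This interference term is precisely what experiments confirm and what any "sum over probabilities" picture misses.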

Quantum mechanics strongly and unambiguously refutes the idea that the world at the intermediate moment \(B\) (before the initial and final measurement) has some objective features. If you are making this assumption, you are assuming that the world is described by classical physics and you are guaranteed to produce wrong predictions.

Sean: So instead we work with probability distributions; they can evolve, and we can update them appropriately in response to new information. None of this changes the fact that there is a microstate, and it evolves (typically) deterministically once you know the whole state.

No. Again, there doesn't exist any need for the microstate in classical statistical physics to be knowable in principle. In classical physics, one may imagine – and people often find it useful – that a precise microstate actually exists and is known to Nature (or God) even if it is unknown to us. So the \(P_{AC}=\sum P_{AB}P_{BC}\) formula assuming a clear intermediate state may be used, even though we often don't use it.

But in quantum mechanics, one can't even imagine that. The laws of classical statistical physics are compatible with the idea that the precise information about the intermediate microstate (or the state at any moment, for that matter) is known to some perfect agent. But the laws of quantum mechanics are not compatible with that assumption!

So the idea that the "precise intermediate microstate was known to Nature" was an axiom you could add (but you didn't have to add!) into your axiomatic systems for classical statistical physics. But it's an axiom that you simply have to abandon if you want to understand the correct generalization or deformation of classical statistical physics, namely quantum (statistical or otherwise) mechanics. The axiom is simply no longer valid just like the axioms that the spatial geometry is perfectly flat etc. are no longer true in general relativity.

The fact that this axiom "didn't hurt" in classical physics doesn't mean that it doesn't hurt in quantum mechanics. It surely does.

Sean: In QM the situation is just completely different. You don’t have a probability distribution over microstates, you have a quantum state.

No. That's a completely wrong description of the differences between classical physics and quantum mechanics. The fact that some things about the state of the physical system may be known and others may be unknown (or known just in terms of probabilities) is something that holds both for classical physics and quantum mechanics. There is no difference between the two frameworks when it comes to the point that probability distributions may be needed or exploited. And when they're needed or exploited, their interpretation is exactly the same. Both in classical and quantum physics, probabilities we are interested in are probabilities that "a certain statement about observables holds". It's the observables, not microstates, that are found at the root of physics.

But a density matrix may be considered the quantum counterpart of a probability distribution over microstates – exactly the description that Carr*ll claims to be impossible.

The aspect by which classical physics and quantum mechanics differ are the laws that allow us to calculate these probability distributions. The difference between the two equations with sums above (the sum of products of probabilities in classical physics; and the sum of products of complex probability amplitudes in quantum mechanics) may be viewed as a good symbol of the qualitative difference between the laws of classical physics and laws of quantum mechanics. But again, it's the formulae by which the probability distributions are calculated, that are different in classical and quantum physics. The fact that both frameworks may use or do use probabilities is shared by both. And the probabilities mean the same thing in both frameworks. They always refer to numbers that tell us which outcomes may be reasonably expected in a situation when the outcome is unknown before the measurement, and known afterwards.

Sean: You use that quantum state to calculate the probability of experimental outcomes, but we aren’t allowed to think that the outcome we observe represents some truth that was there all along, but we just didn’t know. That’s what interference experiments (and Bell’s theorem etc) tell us.

Right. It's surprising that these two correct sentences appear in the middle of all the junk. As far as I can say, they directly contradict everything else that Carr*ll wrote.

Sean: The quantum state isn’t just a probability distribution. See also the PBR Theorem.

No, that's a wrong proposition again. The quantum state – pure state or the general density matrix – may be fully described in terms of probability distributions for all observables that it predicts. These probability distributions for various observables are not quite independent from each other but they do exist and if you know all of them, you may reconstruct the full density matrix. What's impossible is to use the classical formulae such as \(P_{AC}=\sum P_{AB}P_{BC}\) to calculate these probabilities. But that doesn't mean that the probability distributions that the quantum state or density matrix encodes are not probability distributions. They are probability distributions. Their interpretation (consequences for our predictions of experiments) is exactly the same as it was in classical physics. It's the laws to calculate them that have been upgraded and qualitatively changed!
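As a toy illustration of the claim that the probability distributions for the observables fully determine the state, a single qubit's density matrix may be reconstructed from measured probabilities alone. The frequencies below are made-up, and the formula used is the standard Bloch-vector decomposition \(\rho = (1 + \vec r\cdot\vec\sigma)/2\):

```python
# Reconstructing a qubit density matrix from measurement probabilities.
# p_x, p_y, p_z: hypothetical frequencies of the outcome +1 when measuring
# the Pauli operators sigma_x, sigma_y, sigma_z on many identical copies.
p_x, p_y, p_z = 0.5, 0.5, 0.95

# Bloch-vector components follow directly from the probabilities
rx, ry, rz = 2*p_x - 1, 2*p_y - 1, 2*p_z - 1

# rho = (1 + r . sigma) / 2, written out as an explicit 2x2 complex matrix
rho = [[(1 + rz) / 2,      (rx - 1j*ry) / 2],
       [(rx + 1j*ry) / 2,  (1 - rz) / 2]]

# the trace is 1 and the diagonal reproduces the sigma_z probabilities
print(rho)
```

Nothing beyond the probability distributions for the (mutually incompatible) observables entered the reconstruction, which is the point: the state is a package of probabilistic predictions, not an extra "real" object.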

Sean: Now, of course you are welcome to invent a theory (a “psi-epistemic” model) in which the wave function isn’t the reality, but just a black box we use to calculate probabilities. Good luck — it turns out to be hard, and as far as we know there isn’t currently a satisfactory model.

No. The correct and complete theory (or framework, waiting for the Hamiltonian etc. to be specified) was found 90 years ago and it is called quantum mechanics. It was a groundbreaking discovery, probably the most important discovery of the 20th century physics, but it wasn't that hard because its founders were very smart.

The philosophical phrases "psi-ontic model" and "psi-epistemic model" are being used by the self-styled philosophers to describe laws of physics governing a point in the phase space or the probability distribution on the phase space (i.e. classical deterministic physics and classical statistical physics), respectively. And to describe Nature using either of these two templates isn't just hard. It is impossible because Nature doesn't obey any laws of classical physics and this fact has been known for 90 years and should be known to everyone who gets at least an undergraduate degree in physics.

Sean: The Everettian says, Why work that hard when the theory we already have is extremely streamlined and provides a perfect fit to the data?

It's not just an Everettian – it's a sleazy, dishonest, stupid, šitty aßhole who loves to impress others with misleading, superficial, demagogic commercials trying to sell šit as gold. If I were obscene, I would say that it is not the Everettian who said it. It was Carr*ll.

Sean: (Answer: because people are made uncomfortable by the existence of all those universes, which is not a good reason at all.)

Some people are made uncomfortable but other people may also easily show that no such things exist. The true reason why people have problems with the intrinsically probabilistic laws of quantum mechanics is that their brains, like Carr*ll's brain, are just too tiny and incapable of thinking beyond classical physics.

Moshe: Thanks, Sean. I suspect this is my own personal misunderstanding, so I don’t want to take too much of your time. Let me try just once again to state my confusion.

"Sean, I am ready to lick your ass as deeply as you want, if you have time." Please, Moshe, is that really necessary or desirable? You have studied physics at quite some level for decades and done some serious research, unlike Carroll, so why are you always starting with this aßlickery dedicated to a bully who doesn't have the slightest clue what he is talking about?

Moshe: I have no problem believing in Everettian Quantum Mechanics, and I certainly see the appeal of getting everything from unitary evolution with no additional assumptions. So we don’t have any real disagreement. But, I am confused about the natural language description of the situation. So maybe it is about ontology, a concept I clearly have some trouble with.

Holy cow. The idea that the unitary evolution is positively correlated with the fairy-tales about many worlds is just a cheap demagogy, and Moshe must have been drunk if he bought it. The actual relationship is the opposite. The idea that there is an objective branching into the parallel worlds erases the mixed terms, makes the interference impossible, and it contradicts the unitary evolution.

Moshe: I take it that the crucial part in taking different possibilities as actualities is not in the post-decoherence description. If the world was fundamentally stochastic, simply described by an evolution of a density matrix, not too many people would claim that the different possibilities are more than just potentialities, and most will agree that only one of them is realized. And, this is what I feel uncomfortable about — pre-decoherence it is certainly murky to discuss the world in classical terms and argue on what exists and what not. And post-decoherence we have a probability distribution, for which normally we only believe one situation is realized. At which point are we forced into believing that all branches co-exist?

(Independently, as I complained before, almost everything in physics has a continuous spectrum, so "branches" and worlds "splitting" must be only a metaphor.)

Exactly. Decoherence in no way implies that the other potentialities become "real". Decoherence just means that the equation involving the sum of products of amplitudes may be effectively rewritten as the equation involving the sum of products of probabilities – as long as we trace over some environmental degrees of freedom. So after decoherence, the probabilities approximately (very accurately) follow the laws of classical physics. But they're the same classical probabilistic distributions we always had in mind when we thought about the world classically. In particular, the other potential outcomes are not "real" anywhere.
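A minimal sketch of how tracing over the environment kills the interference terms: for a branch state \(a|0\rangle|E_0\rangle + b|1\rangle|E_1\rangle\), the off-diagonal element of the system's reduced density matrix is proportional to the environment overlap \(\langle E_1|E_0\rangle\). The branch amplitudes and overlaps below are made-up numbers:

```python
# Toy decoherence: a qubit entangled with an environment state.
# The reduced density matrix's off-diagonal element is a * b * <E1|E0>
# (real amplitudes here, so no complex conjugation is needed).
# As the environment states become orthogonal, the interference term dies
# and only the classical-looking probabilities |a|^2, |b|^2 survive.

a, b = 0.8**0.5, 0.2**0.5   # hypothetical branch amplitudes

def reduced_offdiagonal(overlap):
    """Off-diagonal element of the system's reduced density matrix."""
    return a * b * overlap

print(reduced_offdiagonal(1.0))   # no decoherence yet: full interference term
print(reduced_offdiagonal(0.0))   # orthogonal environment: the term is gone
```

Note that nothing becomes "real" in this calculation; the diagonal entries \(|a|^2, |b|^2\) were probabilities before decoherence and remain probabilities after it.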

The "continuous splitting" is just one particular problem that shows that no functional version of MWI can actually exist at the mathematical level. There are many other ways to see that it can't work.

Sean: Moshe: Of course the world can perfectly well be said to be described by a density matrix, since any pure state determines a density matrix. The real question is, how does the density matrix evolve? We sometimes think of decoherence happening, off-diagonal elements disappearing, and the state “branching.” But that’s only for the reduced density matrix for some subsystem; the full density matrix obeys the unitary von Neumann equation, from which the above description can be derived.

Right, decoherence may only be derived – and it is only true – if there are environmental degrees of freedom that the observer doesn't have access to. But when he doesn't have access to those, he may trace over them, and the decoherence-like calculation is exactly correct to predict everything he has access to.

At any rate, whether one is tracing over something or not, it is totally obvious that the eigenvalues (or diagonal entries) of the density matrix have the same probabilistic interpretation. That also means that if the other potentialities aren't real in one case, they can't be real in any other case, either.

Sean: So you have a choice: you can believe all that, and take the “probability distribution” to be a measure on which branch you find yourself on, like a good Everettian.

This is just a childish visualization for someone who needs to draw pictures but it doesn't make things any more meaningful, quite on the contrary. If two different worlds exist, there is no reason to say that we're in one of them with the probability 64% and in the other one with 36%. The most sensible distribution would be 50%–50%. So the very assignment of general probabilities to the "branches" means that we are not really talking about "several worlds that are equally real" but about some asymmetric generalization of this concept (a diagram which is claimed to be the "real thing") which doesn't really make any mathematical sense.

At the end, it's the probabilities that may be calculated in quantum mechanics and verified by measurements and the misleading picture with the potentialities as "actual worlds" doesn't help to make anything meaningful – it really contradicts quantum mechanics as long as one has at least somewhat high standards.

Sean: Or you can — do something else!

No, the right thing to do is to have the courage and do nothing (or to shut up and calculate) and simply accept Nature as it is – and as the founding fathers of quantum mechanics found Nature to be 90 years ago. Everything "else" that people have done – and continue to produce – is worthless, fundamentally wrong stinky šit.

Sean: Change the formalism in some way so that you get to say “but those other parts of the density matrix aren’t real things.” You could invent a hidden variable that points to one particular branch (as in Bohm), or you could explicitly change the dynamics so that the other branches actually aren’t there, or you could invent a completely new (and as-yet unspecified) ontology such that the density matrix simply provides a probability distribution over some different set of variables.

There are about 5 major classes and hundreds of subtypes of this stinky šit that various peabrains have been doing for years.

Sean: But you have to do something — otherwise you’re just stamping your feet and insisting that some parts of the formalism are “real” and some are not, for no obvious reason.

No, we don't have to do anything and a good physicist doesn't do any of these stinking šits because he is not a stinking šithead. Instead, he accepts quantum mechanics as the right description, a theory that unambiguously implies that classical physics is incorrect and everything that the likes of Carr*ll have written about the character of the physical law were piles of crap.

For an observer, his observations or perceptions are the only "truly real" (yet subjective) things, and the laws of physics may be used to (probabilistically) relate them with each other. Saying that anything "else" is "real" is either downright wrong or physically meaningless.

Moshe: Sorry, in "described by an evolution of a density matrix” I meant “described by an evolution of a probability distribution”, which unfortunately changed the meaning quite a bit.

Anyhow, I am not confident enough to have a real opinion on the reality of the wavefunction in the fully quantum regime, or whether this question makes sense. But, I thought that the real force of the “many-worlds” interpretation of Everettian Quantum Mechanics is that you don’t have a choice, and this I don’t see. I see a plausible scenario of how you get classical probabilistic description out of QM, which is quite a bit! But I don’t see why you need to declare the alternatives as “real”, any more than you do for other classical probability theories.

For example, in the fully quantum regime you can take the view so nicely expressed here by Tom. If the question of “what is real” only makes a meaningful appearance post-decoherence, I think you never have a situation with co-existing worlds. And, if the many-worlds part of the interpretation depends on how you interpret the wave-function pre-decoherence, I think the inevitability claim is not that strong.

We've already discussed that. A diluted solution of the truth.

Moshe: But anyhow, thanks for the discussion. I am probably missing some of your points, but it’s been useful for me nonetheless.

Another good moment to vomit.

I can't imagine that this junk will ever go away. The society is being bullied by tons of extraordinarily stupid and arrogant bullies similar to Carr*ll, and people who have some clue, like Moshe, are increasingly manipulated into this role of inconsequential aßlickers who are fading away – even at places whose purpose is to concentrate the minds that should know better.

For a much more sensible (although not flawless, I would say) recent extra-TRF article about MWI, see Too Many Wor(l)ds, via Peter F.

snail feedback (55)

RAF III said...

Lubos - I can't tell you just how much I appreciate posts like this. Not only do you take on the misuse of philosophical concepts with respect to quantum mechanics, you also explain why they are inapplicable. Superb! By the way, I recently came across these comments by Gene (http://motls.blogspot.co.uk/2012/10/in-awe-about-entanglement.html#comment-689277645) and yourself (http://motls.blogspot.co.uk/2012/10/in-awe-about-entanglement.html#comment-689296044) which concisely illustrate your (both of you) understanding of these problems. So what on earth were we arguing about on the QBism thread? (http://motls.blogspot.co.uk/2014/04/david-mermin-on-quantum-bayesianism.html) Beats the hell out of me. Cheers!!!

Lol, Lubos."Sean, I am ready to lick your ass as deeply as you want, if you have time."

Good article, Lubos. And I like the argumentation of Moshe as well. One thing about Many Worlds: it does not make the thinking better but one should know how these guys think. They think that the multiple worlds can interact (interference). The David Deutsch "proof" that multiple worlds exist goes something like this: the world can only be classical. The quantum computer is more powerful than the classical one. Therefore multiple classical worlds need to interact to give the quantum computer its power. Of course the error is in the first assumption.

Dear Lubos, you are so right – the two "assumptions" are mathematical statements that make absolutely no physical predictions whatsoever. Physics is about physicists relating math to data. The words "the world is described" say nothing about which part of the world is related to which part of the formalism. OK, the second assumption contains the word "time", they could argue about that. I'm sure people have tried this analogy before with him, but it's like saying "the world is described by a Riemannian manifold, with the metric satisfying Einstein's equations". This says in fact absolutely nothing about the world without some version of the equivalence principle. My slogan these days: "Physics without observers is as post-empirical as it gets." What oxymorons! Cheers!

Thanks, Mikael! I agree that there is a mistake in the first assumption but this argumentation has several other critical errors, too.

First of all, a basic assumption made by the usual MWI advocates - one needed to get agreement with quantum mechanics - is that the splitting is irreversible. The worlds are splitting but they are never re-merging. That means that the other worlds, once split away from ours, just can't affect us anymore! So they can't speed up our calculations, either.

This whole linking of "many worlds" with "speed of quantum computers" is irrational for another, albeit related, reason: the splitting of the worlds – which generates many worlds – only occurs at the moment of measurements, or after decoherence (although one cannot really define what is "enough" and what is "not").

But the main necessary feature of a quantum computation is that no measurement and no decoherence takes place during the calculation at all! So the worlds are not splitting during the calculation, so there aren't "many" of these worlds when the calculation ends at all! So even if the large number of worlds helped – which it doesn't, as argued previously – Deutsch's argument would still be totally wrong because the "number of the worlds" just cannot go up during a quantum computation (which depends on staying coherent)!
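The role of coherence can be sketched with the simplest possible "computation": two Hadamard gates in a row. Kept coherent, \(H\cdot H = 1\) and the qubit returns to \(|0\rangle\) with certainty; if a "world-splitting" measurement happened in between, only probabilities would be propagated and the prediction changes to 50–50:

```python
# Why coherence matters: apply a Hadamard gate twice to |0>.
# H maps (a, b) to ((a+b)/sqrt(2), (a-b)/sqrt(2)).
s = 1 / 2**0.5

def H(psi):
    a, b = psi
    return (s * (a + b), s * (a - b))

# kept coherent: H.H = identity, the qubit returns to |0> with certainty
psi = H(H((1.0, 0.0)))
p0_coherent = abs(psi[0])**2                      # -> 1.0

# with an intermediate measurement ("splitting"), only the probabilities
# survive between the two steps; the amplitudes' relative phase is lost
probs_mid = [abs(c)**2 for c in H((1.0, 0.0))]    # 0.5, 0.5
basis = [(1.0, 0.0), (0.0, 1.0)]
p0_decohered = sum(p * abs(H(b)[0])**2
                   for p, b in zip(probs_mid, basis))   # -> 0.5

print(p0_coherent, p0_decohered)
```

A quantum computation is a scaled-up version of the first calculation; if worlds "split" (i.e. measurements happened) during the run, one would get the second, classical answer, and the quantum speedup would be gone.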

Dear Lubos, I have a (probably) silly question about rolling dice. It is claimed that some dice throwers can deliberately roll certain numbers, which they accomplish by throwing them very slowly. If they throw the dice a little too hard they can no longer control the results. Assuming this is actually true, my question is, is there a transition from classical to quantum physics between these two cases?

Dear Luke, I can roll 6 when I drop the die from a very small height almost vertically. Some other people may learn to throw predictable numbers in ways that are less "obviously fraudulent".

As long as one has this degree of control, classical physics is sufficient and the contributions proportional to Planck's constant are negligible.

The quantum randomness may obviously only matter when the result of the rolling isn't clear.

When the die rolls many times etc., it's following a complicated trajectory - classically - on the phase space, and the final number is a very sensitive function of the initial conditions. Classical chaos theory shows that the sensitivity increases exponentially with the number of rotations of the die, more or less.

So if the die rotates e.g. 20 times, the resulting number it shows depends on each femtometer (or shorter) in the initial position, orientation, and velocity of the die. And the uncertainty principle guarantees that all these quantities describing the initial state can't be determined this accurately simultaneously.

So the initial state just can't be known accurately enough to allow us a deterministic prediction of what the die rolls. That's why the uncertainty principle inevitably matters. The uncertainty made unavoidable by quantum mechanics (the uncertainty principle) is sufficient to make the result completely unclear. It doesn't mean that it's necessary. There may be other sources of uncertainty that make the resulting number on the die unpredictable.
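The exponential sensitivity may be sketched with the doubling map \(x\to 2x \bmod 1\), a standard toy model of chaos standing in for the rolling die; the initial condition and the size of the perturbation below are arbitrary illustrative choices:

```python
# Exponential sensitivity to initial conditions, via the doubling map.
# Each iteration doubles any small error, so a perturbation of 10^-15
# grows to order one after roughly 50 iterations.
# Exact rational arithmetic avoids floating-point round-off artifacts.
from fractions import Fraction

def doubling(x, steps):
    for _ in range(steps):
        x = (2 * x) % 1
    return x

x0 = Fraction(123456789, 10**9)   # arbitrary initial condition
eps = Fraction(1, 10**15)         # tiny initial perturbation

# after 50 steps the two trajectories are macroscopically different
diff = abs(doubling(x0 + eps, 50) - doubling(x0, 50))
print(float(diff))   # of order one, ~10^14 times larger than eps
```

A real die is not the doubling map, of course, but the mechanism is the same: each tumble roughly multiplies the error, so the uncertainty-principle-sized fuzziness in the initial data is amplified to a completely unpredictable final face.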

Lubos - Thanks. Sorry I was so careless. I have a great deal of evidence, admittedly anecdotal, to show that there is a genetic component to argumentativeness; so perhaps we didn't have as much choice in the matter as we would like to believe.

Dear Luboš,although the phrase "good Everettian" sounds quite scary, I actually do find it useful to occasionally imagine that the universe as a whole has some state that undergoes unitary evolution.You can put your coworker into a well isolated box and let him observe some decay process. In the right moment, your description of the state of the box is |decayed>|experience of knowing the thing has decayed> + |non-decayed>|experience of knowing the thing has not decayed>.I would be tempted to guess that (at that particular point in time) there are two components of the Hilbert space of the box that are "special" with respect to the specific dynamics of the box, one of which harbors state vectors of the "decayed" type, the other one of the "non-decayed" type. I would then find it tempting to say that both variants of my coworker "exist". (You will probably strongly object to such a claim).So I am sort of describing the system that contains an observer (the coworker) and in this external description the act of observing the decaying particle is just an entanglement of the particle with the coworker.I myself could agree to be put in such another, bigger box, described in this very way by another observer. So I can understand why one can be tempted to make it work that way globally, i.e. work with a global picture which contains me, the observer, and trying to explain my experiences as a consequence of this global dynamics.One could try to define such natural Hilbert space components (natural with respect to the actual "whole universe dynamics") and postulate that each experience is supported by a projection of the state of the universe to a particular HS component.The projections which harbor observers who see violations in long term QM predicted statistics have low amplitude and would somehow "happen less", so what "happens more" are projections where the statistics are OK. All this would be emergent. 
The aim of such a view is to make the set of postulates more minimalistic. I actually do think such thoughts are interesting, provided they do not claim there is something wrong with the Copenhagen interpretation. It should be just a different view of the same thing, not something you "believe in".

Regarding the "existence" of the other branches: there are spacetime regions which are too far away to have influenced us, yet we include them in our description of "reality", too.

Just to be sure, I do not think the established way to view QM has "problems"; I am fine with it. But I do think some flavours of Everettian thoughts may actually lead to interesting !equivalent! QM formulations. Perhaps the QM of the young universe could benefit from whole-universe wavefunctions?

You wanted to say exactly the opposite, didn't you? It's the philosophers who are the hotbed of Bohmian mechanics, MWI, "transactional", and all this stuff, while decoherent histories are done purely by physicists - and good enough physicists.

Dear Pavel, an arbitrarily large physical system - and there is no physical reason not to include the whole Universe - is fundamentally evolving by unitary evolution operators.

But this fact hasn't been discovered by Everett. It was discovered by the founders of QM - Heisenberg, Jordan, Born, Bohr, Dirac, Pauli, and a few others 90 years ago. This is not just a matter of principle.

Within years - already in the 1920s - they developed theories of how quantum mechanics describes metals, paramagnets, Fermi liquids, then superconductors, and lots of other macroscopic materials. It is in no way true that quantum mechanics "only" applies to microscopic objects. And it is also untrue that the founding fathers of quantum mechanics ever believed that quantum mechanics - and unitary evolution - fails for large enough systems.

It never fails and competent physicists have known that since the 1920s.

The founders of decoherent/consistent histories are physicists, and all meaningful developments of the formalism were done by physicists. That is for sure.

But I was disheartened to see a recent(ish) survey of opinions within the scientific community ( http://arxiv.org/pdf/1301.1069v1.pdf ) showing 0% interest in consistent histories, while it plays a significant role in any typical undergraduate philosophy of physics course, due to its relevance regarding how we construct and evaluate propositions about reality.

This is not a criticism of physicists. Most don't need to care, and can just use the original Copenhagen formulation 'as is' to do their job. It is just a personal desire on my part. I think it has great potential when it comes to outreach and the public understanding of QM.

1) many physicists would say that consistent histories are just an extension of Copenhagen, and they would vote for Copenhagen (and I would probably cast this vote as well, instead of CH or QBayes, believing that the latter two would probably put a perfectly orthodox physicist in a minority)

2) the conference where the poll was organized was a gathering sponsored by the Templeton Foundation, so instead of saying that the participants represent the physics community, it is more accurate to say that they represent the part of the scientific community willing to be bribed for cheap slogans spreading a certain religiously rooted agenda.

To me the quoted piece was brilliantly stated and very helpful. The reason is that I was often confused by the arguments against the Copenhagen interpretation of QM which started from classical statistical mechanics.

Now I understand those arguments as a simple sophism along the following lines: probabilities in classical statistical mechanics arise from the lack of a detailed knowledge (whatever the reason for lacking it) of the underlying classical mechanics. Thus probabilities in QM have the same cause and reflect our lack of knowledge of some deeper classical-mechanics-like theory.

"Again, there doesn't exist any need for the microstate in classical statistical physics to be knowable in principle. In classical physics, one may imagine – and people often find it useful – that a precise microstate actually exists and is known to Nature (or God) even if it is unknown to us. The laws of classical statistical physics are compatible with the idea that the precise information about the intermediate microstate (or the state at any moment, for that matter) is known to some perfect agent."

I was shocked. I am a bit busy and couldn't really go through the article; it seems that by using negative probabilities they get the Born rule for sufficiently coarse-grained histories. However, I couldn't see how they derive the Born rule for observables that don't commute with the "real" one.

In addition to the excellent theoretical counterarguments given by Lubos, my main problem with Sean Carroll's blog on MWI is that he never explains the meaning of the word "world", most likely deliberately. If it means Hilbert space, it does not make any sense and does not solve any problem. Everybody knows that it is just a mathematical technique used to formulate quantum mechanics. When was the last time you walked in Hilbert space? If the splitting takes place when the observer decides to make a measurement, then you are giving too much arbitrary power to human beings! If it means different worlds like the ones we are familiar with, the concept is highly metaphysical and does not solve any physics issue. If the branching has already taken place in the heavens and the human observer merely chooses the branch, it is even worse. In any case there is not the slightest advance in our understanding of the world from MWI. At the end of the experiment, all the observers, regardless of race or religion, agree on the result!!

I posted such comments on his blog. As expected, he did not reply. I am personally sympathetic to metaphysics in connection with religious issues, but would not bring it into physics discussions. Obviously there are questions which physics cannot answer. The amusing thing about Carroll's blog is that he engages in tirades against religion and metaphysics and at the same time advocates MWI, which is at this point nothing but metaphysics!! Why not frankly admit, following Bohr, that the quantum world is intrinsically probabilistic and every time we make a measurement we gain some additional knowledge of the system. I should add that the multiverse predicted by string theory is OK with me because humans have no control over it!!

The image I get when I read about MWI is that MWs are almost like vectors in Hilbert space.

A world is some kind of super-pure vector which is never a superposition of two other vectors; or rather, when a measurement happens, having probability P(a) for result a and P(b) for b, it splits into two vectors where the respective observers measure a or b values for the observable, because they found themselves living in one of the two new super-pure vectors (worlds).

So every observer lives in a classical (no-superposition-vector) world, like the probability of a die showing 3 being a delta function concentrated at the value 3.

The term eigenvector would probably be better than "super-pure, no-superposition vector". So the MWI world is a tensor product of an enormous number of eigenvectors, for example one for the spacetime position of each particle in that world.

What do you mean by a super-pure vector? If you can describe a system as an eigenvector of some observable(s), it won't be an eigenvector of observables that don't commute with this observable. It's just like a change of basis, and you can't have a vector that is never a superposition if you change to some other arbitrary basis. Just think of normal spatial vectors and rotate the coordinate system.

Or did I misunderstand completely and you mean pure states vs mixed states? Pure states describe the situation where you have the maximum information you can physically have about the system, mixed states when you have less.
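The basis-change point is easy to check numerically. As a sketch (my own illustration, not from the thread, using the standard Pauli matrices): the "up" state is an eigenvector of Z, yet expanding the very same vector in the eigenbasis of X shows it is an equal-weight superposition there.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X

up_z = np.array([1, 0], dtype=complex)          # eigenvector of Z (eigenvalue +1)

# Expand the same vector in the eigenbasis of X ("rotate the coordinate system")
evals, evecs = np.linalg.eigh(X)
coeffs = evecs.conj().T @ up_z

print(np.abs(coeffs) ** 2)  # equal weights 0.5 and 0.5: a superposition in this basis
```

So "never a superposition" cannot be a basis-independent property of any vector; it only makes sense relative to a chosen observable.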

I think the many-worlds folk want the worlds to split when so many degrees of freedom and interactions with the environment are involved that classical reasoning becomes approximately valid, but the problem is of course that there never is any "objective time or place" to make the cut. What one observer would consider completely decohered may not be so for some other observer. Their degree of interaction or correlation/entanglement with something isn't the same. Of course, the entanglement/correlation of systems is what ultimately ensures agreement between the observers should they happen to measure the same observables, but some other observer may choose to measure something that doesn't commute with their choices, so his description will include superpositions (assuming this other observer actually can measure this, which becomes increasingly difficult or practically impossible as the classical limit is approached).

I do not understand what you mean by "when measurement happens, having probability P(a) for result a and P(b) for b, then it splits into two vectors where respective observers measure a or b values for the observable because they found themselves living in one of the two new super-pure vectors (worlds)."

Who are these respective observers? Do they have eyes and hands like me, and do they live in the same classical world we live in? I do not have any problem with classical observers and classical equipment. I see that observers are like me, and they handle equipment they see with their eyes and arrange on a table with their hands! One other problem I am nervous about is that a Nobel laureate like Wilczek believes in MWI. If I could correspond with him, I would ask him these questions.

Indulge me a bit, for I had a few glasses of very smooth, yet dry Pinot Noir.

How come Feynman path integrals never attracted nutty many-worlds-like interpretations?

In the path integral we have a ginormous number of classical worlds interfering with each other, strange worlds where the stone dropped from the tower of Pisa may make weird loops, may even go to the Andromeda galaxy before it hits the ground.

What if some nasty aliens, from the distant reaches of path integral measure space, where everything oscillates wildly, decided to sign a political agreement to oscillate in phase and create a new extremum in the path integral, thereby extinguishing our classical world and establishing theirs?

Those respective observers, according to my understanding of MWI, are two copies of yourself. You have a ginormous number of clones, created whenever you performed some sort of measurement. Even the actions while you were asleep and dreaming count, as moving a pillow is a sort of measurement. Your clones are all out there, living their variations of your life, and you are just one branch of them.

A super-pure state would have to be something like a Laplace's demon state, and as soon as you accept non-commuting observables, eigenvectors don't help.

Of course, you can always be saved by the hypothesis that in reality nobody manages to measure the position and the momentum of some particle at exactly the same time. Thus your MWI world is an eigenvector of x since somebody just measured it, and then the next femtosecond it is an eigenvector of p.

Crazy talk. Let's say that I just wanted to explore the consequences and all I end up with is crazy talk.

Well... you know, if Copenhagen interpretation makes you uncomfortable with the objective reality, there is always this nice MW interpretation where you may be a drunken fool in one Universe, but at least you can be sure you are a King in another ;)

Well, that's still a pretty nice conclusion. You tried to entertain something crazy - the many worlds thing - and got crazy out. I think it would almost be more frustrating if we put crazy in and got sensible out. Then we wouldn't know whether it was best to start with crazy or sensible in the future, hehe

Dear John, one needs to re-transform all the data in new ways to discuss other, generally non-commuting observables. But of course, as long as the underlying theory is equivalent to QM, it may always be done.

The use of negative probabilities isn't new. It goes back to Wigner's quasiprobability distribution in normal QM (really an equivalent way to rewrite a density matrix as an ordinary function of x,p - a clever thing that people working on noncommutative field theories use all the time)

https://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution

which may be locally negative, and this allowed negativity is enough to explain things like violations of Bell's inequalities etc.

The probabilities of allowed statements are always non-negative, however, and this may be said to "explain" the uncertainty principle as well - you are forced to discuss only "large enough" regions of the phase space, and the averaged or integrated Wigner probability over those is always non-negative.

Needless to say, things like the Wigner quasiprobability distribution are virtually unknown to the would-be thinkers working on interpretations of QM these days - even though Wigner quasiprobability distribution and similar insights actually *might be* a way to reformulate QM in a seemingly very different, yet correct, way. But it's already too complex for them.

Dear John, I think that you are not constructing the negations of logical propositions correctly.

The way to negate or disprove "every large animal in water must be a fish because it can swim" is to point out that "a whale is not a fish even though it can swim", right? That's enough to prove that the original statement (an implication) was wrong.

But I could also go further and claim that the two axioms do imply that the many worlds do *not* exist.

Dear Kashyap, right, I did touch this problem although my wording was different so that it may have been hard to see we were saying the same thing.

One either means a "world" which only carries some classical information - in that case, it is a metaphysical addition describing *classical* information about the outcomes of measurements, and it may only be used when the information actually behaves classically, i.e. after the decoherence, and these many worlds say nothing about the quantum regime at all.

Or, alternatively, one may describe the many worlds with a Hilbert space, as a quantum object. But then the splitting of the worlds that increases "the number of many worlds" is mathematically impossible because it violates the no-cloning theorem:

https://en.wikipedia.org/wiki/No-cloning_theorem

The quantum information simply cannot be "xeroxed" because a xeroxing of a quantum state (into N copies in total) is a degree N (quadratic, cubic...) operation while the evolution of quantum states has to be linear (i.e. N=1).

The linear i.e. N=1 character of quantum mechanics - the very postulate of linearity - really means that there must be N=1 world and not many worlds. This is the most straightforward proof that many worlds are prohibited as long as the dynamics is quantum.
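The linearity argument can be made explicit in a few lines. The sketch below (my own illustration) defines a hypothetical "xerox" map ψ → ψ⊗ψ and shows that it fails linearity on a superposition, which is the content of the no-cloning theorem: no unitary (hence linear) evolution can act this way on all states.

```python
import numpy as np

psi0 = np.array([1, 0], dtype=complex)  # |0>
psi1 = np.array([0, 1], dtype=complex)  # |1>

def clone(psi):
    """Hypothetical 'xeroxing' map psi -> psi (x) psi: quadratic in psi."""
    return np.kron(psi, psi)

s = (psi0 + psi1) / np.sqrt(2)

# If cloning were a linear evolution, these two vectors would have to agree:
cloned_superposition = clone(s)                                     # (|00>+|01>+|10>+|11>)/2
superposition_of_clones = (clone(psi0) + clone(psi1)) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)

print(np.allclose(cloned_superposition, superposition_of_clones))   # False
```

The mismatch is exactly the "degree N=2 vs degree N=1" clash described above: cloning is quadratic in the state, Schrödinger evolution is linear.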

Dear Tony, I agree that one can write down many-worlds-like essays about the path integral that resemble the many-worlds-like nonsense about complicated wave functions.

The only difference is that the path integral is composed (as a sum) of many configurations of the spacetime (histories) while the complicated wave function is a superposition of configurations of space and things in it (one dimension fewer).

But in the path integral, it's particularly obvious that the different histories are just "terms" in an expression that has to be calculated (summed) before one gets any physical prediction.

Incidentally, the Feynman path integral is an integral - and has to be an integral - over *all* histories, even those that one would consider "impossible", like histories with a ball flying to Jupiter and back superluminally. And all these histories actually contribute - and have to contribute - by the same amount (the same absolute value), up to a phase.

The fact that some classical histories look like better (or worse) descriptions of reality is purely due to constructive (or destructive) interference!

I think that the path integrals don't attract that many nutty interpretations because the path integrals are simply too complicated for the "philosophers" (including bad physicists) who are doing "interpretations" of quantum mechanics, so they simply ignore it.

Thanks for your answer. I learned about Wigner's quasiprobability distribution after finding this paper; it is mentioned in a paper on negative probabilities by Feynman. I have understood it; it is really simple. Gell-Mann once said that Feynman's path integral approach might be used to generalize quantum theory, so maybe something like this was already on his mind. I am interested in this kind of paper because they may direct us to a correct (if needed) generalization of quantum theory in the future. Gell-Mann says the decoherent histories approach solves the problems that arise in the quantum mechanical treatment of cosmology. However, I haven't seen any active prominent researcher talk about this. Is it because they don't know about DH or because they don't consider it a solution?

The phrase "good Everettian" appears in the original article, and it smells of politics/ideology, so I just said I consider this phrase scary. I do not consider myself X-ian for any value of X, much less a "good X-ian".

BTW I never claimed that unitary evolution fails, on the contrary. Everyone knows that QI cannot be xeroxed, you don't have to say that.

Do you insist that wave-functions only describe the statistics of measurements performed by some observer external to the system? If so (1), how come you do not object to the idea of a whole-universe wave function? (Given the fact that there can be no observer external to the universe.)

If not (2), consider the following: if the whole universe evolves unitarily and I (myself a part of the universe) perform a measurement of some spin in the state |0>+|1> onto |0>,|1>, then the state of the universe after the measurement must be |0>|I measured zero> + |1>|I measured one>. So both alternatives are there in the wave-function, which I thought was Everett's motivation for saying that both alternatives happened.

Would you claim (1), or (2), or neither?

The goal of similar thoughts is to describe our experience of QM statistics as a consequence of a purely unitary whole-universe evolution, the "worlds" being just a possibly misleading name for wave-functions which, when summed, form the universe wave-function, and which correspond to the possible measurement outcomes as experienced by us.

It aims at providing a global picture from which the statistics of QM measurements can be derived, yet does not involve the non-unitary wave-function collapse, thus REMOVING one of the postulates (the one about wave-function collapse) and making the theory more minimalistic.

I do not claim that it can be actually done this way -- all I say is that I understand why one might have the goal to reduce the number of postulates and still have an equivalent theory that correctly predicts observed statistics of measurements, and I think deriving such things from a whole-universe wave function that contains all the measurement alternatives is a plausible way to try.

Hi Tony. I found the following article yesterday: http://arxiv.org/pdf/1106.0767.pdf

It is written by Gell-Mann and Hartle. There they consider quantum mechanics as a classical stochastic system with extended probabilities. Extended probabilities mean that some of the probabilities may lie outside the range [0,1]. They consider all paths from q_i to q_f, just like in the path integral, but this time they consider one of them as real. This real one may have negative probability. If you consider sufficiently large groups of histories, each group will have a positive probability (although it may include histories with negative probabilities). So when you calculate probabilities for these groups of histories, you get the Born rule (at least they claim so).

I am not sure whether it is correct, because in the article there is no discussion of observables other than position. Since they consider position as the 'real' degree of freedom, they must give other quantities in terms of position, and they haven't done this in the article, but they claim that it is possible.
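Just to illustrate the coarse-graining idea with numbers (a toy example of mine, not Gell-Mann and Hartle's actual construction): fine-grained histories may carry extended "probabilities" outside [0,1], while every sufficiently coarse group of them still receives an ordinary, non-negative probability.

```python
import numpy as np

# Extended "probabilities" for four fine-grained histories (made-up toy numbers);
# one is negative, but the whole set still sums to 1.
p_fine = np.array([0.7, -0.2, 0.4, 0.1])
print(p_fine.sum())  # 1.0 (up to float rounding)

# Coarse-grain into two groups of histories
groups = [[0, 1], [2, 3]]
p_coarse = np.array([p_fine[g].sum() for g in groups])

print(p_coarse)            # both groups get ordinary probabilities
print((p_coarse >= 0).all())
```

Only the coarse-grained probabilities are ever asserted to describe measurable alternatives; the negative fine-grained numbers are bookkeeping, much like the locally negative Wigner function.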

Carroll: In statistical physics, the theory says that there is some actual situation given by a microstate, but we don't actually know what it is.

Lubos' reply is spot on. Stat mech does not demand that a microstate exists. Carroll's reasoning relies on pre-industrial-age thermo. He seems to have missed Shannon's, Jaynes's and others' contributions to the basics of stat mech in the last 60+ years.

You added the comment about the recent decades and I agree that these developments in statistical physics, including classical statistical physics, have placed the probabilities closer to the focus and suppressed the dependence on the idea that the microstate has to be knowable in principle. In this way, the modern language of classical statistical physics is more directly compatible with quantum (statistical or otherwise) mechanics.

Dear John, although I like decoherent histories, it's some "extra level of formalism" which isn't really necessary for figuring out the truly physical - mathematically nontrivial - aspects of the dynamics.

So even though it's a "correct" twist on foundations of QM, unlike many other things discussed in the foundations of quantum mechanics, most high-energy physicists and similar physicists are just not interested in it. Even DH contradict the "shut up and calculate" dictum.

So I think that if you did a poll among true working physicists in HEP or condensed matter physics etc. etc., decoherent histories or quantum Bayesianism would get just a few votes because most people don't even know what those things mean! It's much like scientists' desire to be agnostics - as Weinberg said, their interest in religion is so low that they don't even become atheists! ;-)

In the same way, active researchers' interest in discussions about "interpretation of quantum mechanics" is so low that they don't even get familiar with the question what DH mean.

BTW I am skeptical about your/Gell-Mann's comment that the path integral "generalizes" quantum mechanics. It's great for studying theories with gauge symmetries etc., but in the end, I think that the theories described by path-integral QM are a "special class", i.e. a "subset", of quantum mechanical theories. For example, they need to have a well-defined classical limit because the path integral is a construction revolving around the classical limit (an integral over classical histories). More general quantum theories don't have to have a classical limit at all!

Otherwise the integral over some histories is an integral over histories. The integrand should better be a pure phase, exp(iS/hbar), which is probably needed for unitarity or probability conservation. So I don't think that there is much room to generalize anything here.

I had no idea that Carroll also had opinions on the second law. But it does seem apparent that when someone has outdated views on one topic, it generally extends beyond it. In stat mech, it is still (painfully) common for people to fundamentally misunderstand subjective probabilities and argue that these probabilities are "just math obfuscating what everyone knows is really there". Of course, when someone labels something as "just math" you know that something ugly is lurking underneath ;)

A great generalization, Steve. I completely agree. In many and many different contexts, people dismiss something as "just maths" which is just another way of saying that they are not even willing to consider it or learn what is inside and prefer their - non-mathematical - misconceptions instead of seeing what the physics tells us in the clear, robust, mathematical way.

Carroll believes that Loschmidt's paradox still shows an inconsistency of physics and that the only solution is that cosmology (some really technical deformations of the model building in cosmology) must be profoundly changed to produce the past-future asymmetry.

I hope you agree it is wrong - the difference between the past and future works for any macroscopic system, not necessarily just the whole Universe, and may be derived "totally locally" - the irreversibility has no specific relationship to cosmology or the Universe.

The laws for probabilities in the presence of incomplete information are simply past-future-asymmetric: the probability of evolution from ensemble A to ensemble B is a sum over the (final) microstates in B, but an *average* over the (initial) microstates in A, and because the "sum" and "average" are not the same and differ by the factor of 1/N, the evolution prefers the evolution from low-N (low-entropy) initial state to high-N (high-entropy) final states, and nothing else (surely no special dynamic information about the cosmological history) is needed to understand this asymmetry. Do you agree?

He also believes that physics as we know it predicts that we must be Boltzmann brains - random thermal fluctuations that just happen to look like us (locally) - because there will be many such Boltzmann brains in the future and each of them is as likely as we are (who evolved "nicely" from the Big Bang via evolution). It's wrong to assume that a Boltzmann brain somewhere should be "equally likely" to be us - there is no reason to believe such an assumption, so there is no justifiable basis for the claim that physics predicts that we should almost certainly be Boltzmann brains. Do you agree?
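The sum-versus-average asymmetry above can be seen in a toy model (my own sketch; the sizes are arbitrary). Take a reversible microscopic dynamics - a random permutation of M microstates - and two macrostates, a tiny A and a huge B. The probability A→B averages over the few initial microstates and sums over the many final ones, so it dwarfs the reverse probability, even though the microscopic transition counts are exactly time-symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100_000                      # number of microstates
sigma = rng.permutation(M)       # reversible (one-to-one) microscopic dynamics

A = np.arange(100)               # low-entropy macrostate: 100 microstates
in_B = np.ones(M, dtype=bool)    # high-entropy macrostate: the other 99,900
in_B[A] = False
B = np.flatnonzero(in_B)

# P(A -> B): average over initial microstates in A, sum over final ones in B
p_AB = in_B[sigma[A]].mean()
# P(B -> A): average over initial microstates in B, sum over final ones in A
p_BA = (~in_B)[sigma[B]].mean()
print(p_AB, p_BA)                # close to 1 vs close to 0.001

# For a bijection, the raw transition counts are exactly equal both ways;
# the probability asymmetry comes purely from the 1/N averaging factor.
n_AB = int(in_B[sigma[A]].sum())
n_BA = int((~in_B)[sigma[B]].sum())
print(n_AB == n_BA)              # True
```

No cosmological input enters anywhere: the asymmetry is generated locally by the logic of probabilities with incomplete information.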

So a few things:

1) I had never heard of Boltzmann brains. Perhaps this speaks to my proper education. However, having briefly looked them up, on first impression it sounds like creationism masquerading as physics to me.

2) I completely agree with your (Bayesian) time-arrow argument. What we know about a system now is necessarily larger than or equal to what we knew in the past. Thus our posterior is sharper, and, thus, the entropy of our posterior is lower. Said differently, knowing nothing but macroscopic constraints known from physics principles (average energy) and logic (normalized probabilities), we start with a broad posterior distribution that can only sharpen as we gather more information.

Let me add something to the time-arrow argument because it is so misunderstood (in part because the name, "time-arrow" is misleading). Entropy (as conventionally defined in thermo) does not increase in time! It does not depend on time!

Dear Lubos, that comment of Gell-Mann's was included in an interview with Gell-Mann you posted some months ago - maybe not in that video, but in the same interview. He says something like, "Feynman was upset that he didn't invent his own theory, but maybe his path integral approach may be used to generalize quantum mechanics." Of course the usual path integral formulation isn't different from quantum mechanics.

I am aware of the fact that there are quantum theories which don't have a classical limit. I really want to learn about them, but I guess I need to learn much more first.

I am aware of the fact that DH gives the same results for all experiments in the usual sense, so there is nothing really new here. It does give a treatment of closed systems, but of course that doesn't yield any new predictions.

The reason I am interested in generalizing quantum mechanics is this talk given by Arkani-Hamed at Perimeter:

In his words, there is no obvious exact observable in cosmology. You can't make exact measurements (there is radiation from the edge of the universe; you can't make an infinitely large measurement apparatus), and you can't do measurements infinitely many times, even in principle. You probably know these arguments. So I think we should be open-minded. I know a lot of people have problems with quantum mechanics for completely wrong reasons, but that shouldn't make us overconservative.

Gell-Mann says that DH solves problems in cosmology. In the paper I quoted above, he and Hartle say:

"Decoherent histories quantum mechanics (DH) is logically consistent, in agreement with experiment as far as is known, applicable to cosmology, consistent with the rest of modern physics including special relativity and quantum field theory, and generalizable to include quantum gravity."

They have a few papers about this. Now, I think Gell-Mann is one of the smartest people on the planet and people should take what he says seriously. Of course he can be wrong, but it seems like nobody cares.

I've looked through the paper just now. From what I can tell, the EPE-DH formalism cannot uniquely determine a "real fine-grained history", even in principle. The more modest claim that is being made is that the formalism shows QM is consistent with the existence of a unique fine-grained history.

Even this modest claim seems quite provocative. It's not clear to me how this formalism avoids the charge of being a "local hidden variables" interpretation.

Psi-epistemic advocates are fond of invoking what they call a "true state of affairs" λ(t), which results in all sorts of inconsistencies. I'll have to do more reading to see how Gell-Mann's real history q(t) is different from λ(t).

Landau also said similar things about Loschmidt's paradox. It must be at the end of chapter 1 or 2 of volume 5. I think the fact that the Big Bang hadn't been discovered/accepted at that time was a reason for this. In an eternal universe the entropy should already be maximal.

Let me add something else which I think is relevant. When someone speaks about "entropy increasing in time" in the context of Loschmidt's paradox, they are conceiving of stat mech as a dynamical (mechanistic) explanation for how degrees of freedom rearrange themselves in time to create a maximum-entropy state. Unfortunately, stat mech has nothing to say about any of that. Perhaps it is best to call the "arrow of time" the "inverse arrow of information". Steve Presse

Based on my reading of Landau's Statistical Physics Vol. 5 I am pretty sure you are incorrect about two small things.

First, I want to show that your claim "Classical statistical physics in no way "demands" that the precise microstate is in principle knowable" makes no sense.

How?

Well, the very assumption that there even exists a phase space in which you can set up your probability distribution necessarily implies that an exact solution for the EOM of all particles must exist (in principle). Remember, 'classical statistical physics' is constructed in the phase space of a mechanical system by assuming every possible state is accessed many times (Landau p. 3); you are just exploiting the fact that many states are accessible over a long time period.

You seem to be skipping a logical step when you talk about a classical phase space and the probability distributions on those spaces: if we could not theoretically find the precise accessible microstates, then we could never talk about the set of accessible microstates over which our probability distributions are set up. It doesn't make any sense to even pose the question if you are not exploiting the fact that exact solutions to the EOM exist, right? This is the foundation Landau builds classical statistical physics on; why are you even calling it physics if you are not exploiting this fundamental link???

All 'classical statistical physics' does is say that the degrees of freedom do everything that is possible instead of doing specific things in a specific way (by solving the EOM exactly). If the exact microstate were not in principle knowable, then the set of all possible microstates would not be knowable either. By leaving the degrees of freedom unspecified we can set up these statistical laws, but they must "cease to have meaning when applied to mechanical systems with a small number of degrees of freedom" (Landau p. 1). You couldn't even set up the phase space if those exact EOM solutions didn't exist.

Second, I want to challenge you on your claim "It's pretty much guaranteed that people don't understand a topic if they dismiss it as 'just mathematics'." :) I want to try to make the argument that "classical statistical mechanics" is actually "just mathematics" in light of the existence of quantum statistical mechanics :)

Reading Landau, section 7 (p. 25): he explains that if we are strict, the entropy ln(dp dq) should logically have the dimensions of the logarithm of action, so a fundamental physical quantity like entropy would be unit-dependent classically. This implies that only differences of entropy matter. In QM, however, entropy is a well-defined, unit-independent quantity. If classical statistical mechanics were actually physics, you would be saying that entropy suddenly becomes unit-dependent in the quasi-classical approximation, and that suddenly entropy differences are all that matter, while in the quantum regime entropy is well-defined. I think this shows that 'classical statistical mechanics' is actually just mathematics, because the quantum notion of entropy basically breaks down.
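The unit-dependence being described can be checked numerically. The sketch below (my own toy, not from Landau's text) uses the classical 1D oscillator, whose partition function is \(Z = 2\pi kT/(\omega\,\sigma)\) when the phase volume is measured in an arbitrary unit of action \(\sigma\), giving \(S/k = 1 + \ln Z\). The absolute entropy shifts with the choice of \(\sigma\), but entropy differences do not.

```python
import numpy as np

# Toy illustration: classical entropy (in units of k) of a 1D harmonic
# oscillator, with the phase volume divided by an arbitrary unit of action.
# Choosing a different unit shifts S by a constant but leaves S2 - S1 alone.

def classical_entropy(kT, omega, action_unit):
    # S/k = 1 + ln Z, with Z = 2*pi*kT / (omega * action_unit)
    return 1.0 + np.log(2 * np.pi * kT / (omega * action_unit))

for unit in (1.0, 6.626e-34):   # a made-up unit vs. Planck's constant
    S1 = classical_entropy(kT=2.0, omega=1.0, action_unit=unit)
    S2 = classical_entropy(kT=4.0, omega=1.0, action_unit=unit)
    print(unit, S1, S2, S2 - S1)   # S1, S2 shift with the unit; S2 - S1 = ln 2 either way
```

This is the quantitative content of "only differences of entropy matter" in the classical theory; quantum mechanics fixes the additive constant by supplying h as the natural cell size of the phase space.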

It made sense because it was built in a phase space and reproduced quantum Boltzmann statistics basically by accident, thanks to the discreteness of the phase space. But the shift in the meaning of entropy seems to me to show that 'classical statistical physics' is literally just math.

No, your comment is just repeating the misconception I criticized. Classical statistical physics in no way demands or implies that the precise point in the phase space is known (and Landau never contradicts what I say).

After all, the phase space exists in quantum mechanics as well, and one may express the density matrix as a Wigner quasidistribution on it, but particles/objects demonstrably cannot be located at specific points of the phase space (thanks to the uncertainty principle), not even in principle.
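The claim that the Wigner quasidistribution cannot be read as a probability of sitting at a phase-space point can be checked directly. The sketch below (my own toy example, with \(\hbar = m = \omega = 1\)) evaluates the Wigner transform of the first excited harmonic-oscillator state at the origin, where it is famously negative, equal to \(-1/\pi\).

```python
import numpy as np

# W(x, p) = (1/2pi) * Integral psi*(x + y/2) psi(x - y/2) e^{i p y} dy
# For the n = 1 oscillator eigenstate, W(0, 0) = -1/pi < 0, so W cannot
# be a probability density over points of the phase space.

def psi1(x):
    # first excited oscillator eigenstate, hbar = m = omega = 1
    return np.sqrt(2.0) * x * np.pi ** -0.25 * np.exp(-x ** 2 / 2)

def wigner(x, p):
    y = np.linspace(-20.0, 20.0, 20001)     # integrand decays like e^{-y^2/4}
    dy = y[1] - y[0]
    integrand = np.conj(psi1(x + y / 2)) * psi1(x - y / 2) * np.exp(1j * p * y)
    return (integrand.sum() * dy).real / (2 * np.pi)

print(wigner(0.0, 0.0))   # about -0.318, i.e. -1/pi: negative, not a probability
```

The negativity is the phase-space face of the uncertainty principle mentioned above: there simply is no classical joint distribution over (x, p) underlying the quantum state.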

In statistical physics, one cannot *prove* that it is impossible but one cannot prove it is possible, either.

"Remember that a classical statistical physics is constructed in the classical phase space of a mechanical system by assuming every possible state is accessed many times."

Few known Hamiltonians are actually ergodic. In fact, it is not essential for a physical system to visit the whole phase space many times. Actually, I can even write down partition functions with sums over states that I will never observe but that must be incorporated in the sum as a matter of logical consistency.
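The point about sums over unobserved states can be illustrated with a trivial example (my own, not from the text): in a three-level toy system, a state lying 50 kT above the ground state carries a Boltzmann weight so small that it will never be seen in practice, yet the partition function must include it.

```python
import math

# Toy three-level system, energies measured in units of kT.  The 50 kT
# state is effectively unobservable, but logical consistency requires
# including it in the partition sum Z = sum_i exp(-E_i / kT).

energies = [0.0, 1.0, 50.0]
Z = sum(math.exp(-E) for E in energies)
p_rare = math.exp(-50.0) / Z
print(Z, p_rare)   # p_rare ~ 1e-22: negligible weight, yet part of the sum
```

Nothing about this construction requires the system to have actually visited the rare state, which is the reply's point against the ergodicity-based reading of Landau.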

This is exactly the reason why good theoretical physics (and physicists) is endangered these days:

There are tons of arrogant, self-centered, aggressive loudmouths who, despite having no clue what they are talking about, feel entitled to publicly attack and patronize good and correct physics, just because this style of "science popularization" sells with the dumb masses and brings the trolling loudmouths fame among the laymen, and money...

At the same time, too many people in the know, who know exactly what they are talking about, avoid "being confrontational" and behave submissively towards people who are much less knowledgeable but are just loud and seeking popularity with the laymen.

I don't understand why it has to be like this.

Lumo seems to be among the rare exceptions, who can stand and vigorously defend their ground ...