Cosmology at MaxEnt in Ghent

It seems I am an invited speaker at MaxEnt 2016, the 36th Workshop on Bayesian Inference and Maximum Entropy Methods:

As well as talking about some of my own research I’ve been asked to include a bit of a review in my presentation. I haven’t quite decided what background stuff to include so thought a bit of crowdsourcing was called for. Anyone got any suggestions for important Bayesian/Maximum Entropy developments in cosmology?

59 Responses to “Cosmology at MaxEnt in Ghent”

There are some Ockham examples, ie mass of graviton or neutrino or 10th planet is zero versus mass is nonzero and to be estimated from the data. Or replace “mass” by “strength of 5th force” or by “cosmological constant in Einstein’s field equations”.

There is the chance of clearing up some of the misunderstandings and multiple meanings regarding the multiverse. I need a Bayesian primer on that myself.

“There is the chance of clearing up some of the misunderstandings and multiple meanings regarding the multiverse. I need a Bayesian primer on that myself.”

I recommend Max Tegmark’s book Our Mathematical Universe. Even if you don’t agree with Max’s ideas, he states them clearly and also distinguishes clearly between mainstream and really far-out. Also, even if you don’t like his terminology, he explains it well and uses it consistently.

About the multiverse I want to learn, but if Tegmark doesn’t mention the many worlds interpretation of quantum mechanics only to dismiss it, and instead runs with it for a while, then I’d want to be told where to restart reading. Also I need a primer by someone who understands that the arguments of probability are propositions and that the laws of probability are a consequence of the laws of Boolean algebra of those propositions; while probability itself is the extent to which one proposition implies another. I am committed to that view of the subject (and firmly believe it is the only sensible one, although I’d rather not argue it on this thread).

In the 1980s I recall a certain review article cataloguing the many differing definitions of entropy. Steve Gull commented tellingly that the trouble was that the author believed them all (despite their incompatibility, and the fact that a razor already existed for distinguishing sense from nonsense about entropy). This is not to accuse Tegmark of doing the same, but I am looking for a path through the darkness, as it were.

If Tegmark is a many-worlder in the interpretation of QM, before any thought of cosmology, then I’ve no time for his book.

There is more than one thing wrong with many-worlds-QM, but here is the strongest argument against it. A measurement involves an interaction between the measuring apparatus and the system, so the apparatus and system could be considered quantum-mechanically as a joint system. There would be splitting into many worlds if you treat the system as quantum and treat the apparatus as classical, but no splitting if you treat them jointly as quantum. Physicists are free to analyse the situation in these differing ways, but they would then disagree about whether splitting has taken place. That’s inconsistent and therefore unacceptable, even in a gedanken experiment.

I’ve much to learn about the meaning of “multiverse” but I’ve set my views about the Many Worlds interpretation of QM and anybody who wants me to rethink it is going to have to knock down the above argument against it. You’ve reviewed Tegmark’s book, so does he do that? If so then how, please?

OK, I’ll bite. “Many worlds interpretation” covers a multitude of sins, but if you follow Everett closely you would say

1) Physicists are *not* free to analyse measurement with the apparatus treated as classical, because there is no such thing as a classical apparatus in a quantum world.

2) A measurement apparatus worthy of the name gives macroscopically different final states when presented with different eigenstates of the operator it is supposed to measure. Classic example is measuring magnetic moments with a Stern-Gerlach magnet and a detector screen. Therefore, when presented with a superposition of eigenstates, the quantum solution for system+apparatus is a superposition of different macrostates. Specifically, for N eigenvalues the final state consists of N wavepackets in both configuration and momentum space which are sufficiently separated that no subsequent experiment could detect interference effects between them (as discussed in the extensive literature on decoherence). “Splitting” is simply a description of this tendency for the wave function to fragment.
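To put a rough number on “sufficiently separated”: here is a toy numerical sketch (my own illustration, not anything from the thread) of how quickly the overlap between two normalised Gaussian wavepackets dies off with separation, which is why interference between macroscopically distinct outcomes becomes unobservable.

```python
import numpy as np

# Two normalised Gaussian wavepackets of width sigma, centred a distance
# d apart. Analytically their overlap falls off as exp(-d^2 / (8 sigma^2)),
# so packets separated by many widths have negligible interference terms.
sigma = 1.0
x = np.linspace(-100.0, 100.0, 200_001)
dx = x[1] - x[0]

def packet(centre):
    psi = np.exp(-(x - centre) ** 2 / (4 * sigma ** 2))
    return psi / np.sqrt(np.sum(psi ** 2) * dx)  # normalise on the grid

for d in (1.0, 5.0, 20.0):
    overlap = np.sum(packet(-d / 2) * packet(d / 2)) * dx
    print(f"separation {d:4.0f} sigma -> overlap {overlap:.3e}")
```

At twenty widths apart the overlap is already down at the 1e-22 level; a real apparatus with many degrees of freedom separates far further than this toy one-dimensional case.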

Now, you probably know all that, so I’m guessing that your objection is that the splitting is illusory because there are an infinite number of bases in Hilbert space in which this so-called split state is one of the basis vectors, and you could define “observables” to match. Alternatively, you could be worried about the case when the observable has a continuous spectrum. I can counter both, but I’d rather not second-guess your position.

That’s unusually sloppy language from you Anton: obviously the label you stick on the apparatus has no effect on how it works, in particular whether or not the Schroedinger equation describes its operation. If you think there is a theorem that requires a classical apparatus to allow a measurement then I submit that you are using unwarranted premises, because there are at least two interpretations of QM that don’t require it (the other being Bohm’s).

I haven’t parodied the many-worlds interpretation, and the absurdity of which you (Paddy) rightly speak is ingrained in it. Such absurdities are why I reject it.

Bohm’s pilot-wave interpretation is exactly that, an interpretation, because it doesn’t predict differently. Fair enough, but does it aid understanding? It is nonlocal but it is causal. However, the Lorentz non-invariance of the order of measurement on the two particles, in some Bell-type experiments, implies acausality.

The many-worlds interpretation also claims to make no different predictions about any observable phenomena than standard QM, so it is an interpretation in the same sense as Bohm’s. I mentioned Bohm only as another counter-example to your claim that measurements cannot be understood consistently from within QM. This blog margin is certainly too small to get into a general discussion of causality in QM, but see John Bell’s “Speakable & Unspeakable…” collected papers… a delight to read if you haven’t already (& you will find a fellow many-worlds skeptic there, though I suspect for completely different reasons).

Nothing I said above strikes me as absurd, or at least, not as absurd as pinning your understanding of QM to non-existent classical apparatus!

OK, I’m going to have to ask you to define a measurement. It’s an interaction between a system and a measuring device, but if you treat the two jointly as a quantum system then how do you extract a single value of a measurement from that?

I’m not very willing to take lessons on absurdity from an advocate of a zillion unobservable universes being created every time a measurement is made!

For causality in Bell experiments, please see my paper expounding Bell’s theorem as a chain of Bayesian reasoning, from the observation on one particle, to a variable internal to that particle, to the internal variable of the second particle via their correlation, to the outcome of an observation on the second particle. The point is that this reasoning is incapable of reproducing the data, and since the only physical assumption is that the result of an observation on a particle is governed by a variable internal to it (ie, local, causal), this assumption is false. See: “Bell’s theorem and Bayes’ theorem”, Foundations of Physics 20, 1475-1512 (1990).
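For readers without the paper to hand, the bound itself is easy to see numerically. Below is a toy local hidden-variable model (an illustrative construction of my own, not the one in the 1990 paper): each particle pair carries a shared hidden angle, and each detector outputs a deterministic sign. Whatever such a model does, the CHSH combination cannot exceed Bell’s bound of 2, whereas QM predicts 2√2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local hidden-variable model: each pair carries a shared hidden angle
# lam; each detector deterministically outputs sign(cos(setting - lam)).
def correlation(a, b, n=200_000):
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(a - lam))
    B = -np.sign(np.cos(b - lam))  # perfectly anticorrelated at equal settings
    return float(np.mean(A * B))

# CHSH combination with the standard settings (radians)
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(correlation(a1, b1) - correlation(a1, b2)
        + correlation(a2, b1) + correlation(a2, b2))

print(f"local model:        S = {S:.3f}  (Bell bound: S <= 2)")
print(f"quantum prediction: S = {2 * np.sqrt(2):.3f}")
```

This particular model happens to saturate the bound (S comes out at 2 up to sampling noise), which makes the gap to the quantum value of about 2.83 vivid: no assignment of internal variables closes it.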

I think nowadays we would say that Everett’s definition could be improved a bit, for instance most real-life measurements don’t preserve the original eigenstate of the “system”, and he should have emphasised that the measurement interaction should be irreversible in the sense that it leads to decoherence between the different “branches”.

I didn’t point you at Bell’s writing to learn about his inequality theorem: he has a lot more interesting things to say than just that.

That John Bell managed to derive and understand two key results relating to quantum mechanics while adopting a confused and mistaken view of probability only increases my admiration for his insight, but I’ll take his analyses and put my own words around them (re hidden variables) – or David Mermin’s, in Mermin’s 1993 Rev Mod Phys article “Hidden variables and the two theorems of John Bell”.

Please would you answer my question in your own words? A measurement is an interaction between a system and a measuring device, but if you treat the two jointly as a quantum system then how do you extract a single value of a measurement from that?

I have it on my bookshelf and I read it some time ago; what I have written to you is a summary of something I wrote privately which was based explicitly on my dissatisfaction with it. So it won’t refute what I’m saying, in my understanding at least, and if you think it does then I need a different phrasing of why. I did my homework before commenting and I therefore assert, as argued above, that the many-worlds view is internally inconsistent.

Although I hadn’t lost any sleep over QM, it is still puzzling to some extent. I didn’t favour the many-worlds interpretation until I read David Deutsch’s The Fabric of Reality, which I think makes a good case for the MWI.

What I haven’t yet done here (Paddy) is provide a view of the classical limit when interaction is termed measurement. For that I go with Nico van Kampen, as in eg “Ten theorems about quantum mechanical measurements”, Physica A153, 97-113 (1988).

OK, here’s Everett’s criterion, from p53: An interaction H is a measurement of A in S_1 by B in S_2 if H does not destroy the marginal information of A and if furthermore the correlation {A,B} increases toward its maximum with time.

Now, why should such a special type of interaction cause splitting into as many worlds as there are eigenstates of A whereas other interactions don’t?

And, where the spectrum of eigenvalues of the operator being measured includes a continuum, what of the many worlds view?

Per an earlier answer, “splitting” refers to the case where the joint wavefunction is irreversibly separated into (FAPP) non-overlapping wavepackets. Irreversibility requires that at least one of S_1 or S_2 is sufficiently complicated, let’s call it macroscopic, although we are talking about information loss through decoherence, which is much faster than conventional entropy production. If a measurement has taken place, then the different wavepackets are correlated with the different eigenstates of the measured property (A) of S_1, and there is some property (B) of S_2 (e.g. the position of a meter needle) which is correlated with A. Everett’s formulation encapsulates this in the requirement that the correlation stays put as t tends to infinity. (Compare the counter-example of a Stern-Gerlach experiment which separates different spin states but then brings them together again, wiping out the correlation between position and spin: not a measurement, because there was no interaction with a macroscopic system which sealed the deal.)

This could have been formulated better, I guess. You might ask what happens if you turn off the meter or burn the chart recording paper, so that property B no longer has the required value. The answer is that the information about the measurement still exists due to correlations with innumerable particles, e.g. the photons reflected from the needle when the measurement was made, and it is not possible (FAPP) to unpick them.

Everett is not claiming that measurements in the conventional sense are the only way of splitting psi; every time macrostates depend even marginally on a quantum superposition this happens, even if there is no humanly-accessible record of the result.

Continuous eigenvalue spectra: First response is to query whether any such actually exist: you need an infinitely large box for a free particle, but our universe has an event horizon. And even if the particle is free, a meter will not have a genuinely continuous infinity of possible needle positions; the constraints will give a quantized spectrum.

Second answer is that the point of the many worlds interpretation is to understand how the wavefunction could describe the world as perceived by observers, in which macroscopic superpositions never appear. If there was a genuinely continuous eigenvalue spectrum of B, then the resulting S_1 + S_2 wavefunction is spread along a 1-D thread in N-D configuration space (N macroscopic). This is much more like a collection of separate wavepackets than the naive might think. Almost every direction in ND space is orthogonal to the thread, so in practical terms whichever way you go, the wavepacket is confined to a microscopic region. The linking direction after macroscopic dynamical evolution is FAPP impossible to find. Only points on the thread representing microscopically-close states can ever influence each other (Huygens principle applied to psi). So the thread represents a continuous distribution of observers each of whom seem to have made a fairly sharp measurement and who have FAPP no way of re-merging their macrostates with parts of the line where the measurement came out macroscopically different.
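The claim that almost every direction in high-dimensional space is nearly orthogonal to a given one is easy to check numerically. A quick sketch (my own illustration; the dimensions are arbitrary choices): draw pairs of independent random unit vectors and watch the dot product shrink like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)

# In N dimensions, two independent random unit vectors have dot product
# of order 1/sqrt(N): as N grows, almost every direction is (nearly)
# orthogonal to any given one.
for N in (10, 1_000, 100_000):
    u = rng.standard_normal(N)
    u /= np.linalg.norm(u)
    v = rng.standard_normal(N)
    v /= np.linalg.norm(v)
    print(f"N = {N:6d}: |u.v| = {abs(u @ v):.4f}   1/sqrt(N) = {N ** -0.5:.4f}")
```

For a macroscopic configuration space N is astronomically larger than anything simulable, so “whichever way you go, the wavepacket is confined to a microscopic region” is an understatement.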

PS I think this conversation has gone on too long for this forum. Do you have a blog we could continue on?

The Coulomb potential is an idealisation. What if the universe is finite but unbounded? What if it has an event horizon around each observer, as in the current LCDM models? Frankly, that’s outside my area of expertise; I don’t even know if it can be answered absent a full theory of quantum gravity.

I regard van Kampen as having a consistent and compelling resolution of the “regress problem” of observers/measurers observing observers… observing a system. No splitting is needed. What Everett can’t do is say WHEN this splitting supposedly occurs, given the greyness about the tailing off of the interaction between the system and the apparatus. His criterion for what sort of interaction constitutes a measurement imports all sorts of axioms involved in information theory, thereby mingling epistemology and ontology.

The notion of a free particle is an idealisation; the Coulomb potential is an idealisation; yes of course it is. Physics makes progress by looking at simple cases, and an analysis that makes no sense in a simple case, such as Everett’s interpretation, isn’t going to make sense in a complex one.

What you see as weaknesses, I see as strengths. In MWI, measurements are not primary parts of the ontology, they are just physical processes; very complicated ones because they require irreversibility. Defining measurements has to involve epistemology because a measurement is not a measurement unless we can use it to learn about the world. “Splitting” is not primary, it is just a description of the normal evolution of psi. Residual quantum interference effects fade out as the system gets more complicated, and can be recovered slightly better as our technology improves, the boundary depends on signal-to-noise. But recovery becomes exponentially more difficult as the dimensionality of configuration space increases. There is a very close relation, more than just an analogy, with the way statistical mechanics leads to classical thermodynamics.

On idealisation, my point is that any argument which depends on an idealisation being exactly true is going to be unconvincing.

I’ll dig out van Kampen’s paper if I find the time. Personally although I find MWI attractive I wouldn’t claim that it is the only possible way to interpret QM, and there are aspects of it that bother me, which is why I’m interested in debating it. Proliferation of universes is not one of those aspects: the physical universe is already ridiculously larger than it needs to be so why worry?

That splitting into many worlds makes it easier for *some* people to understand (it makes it harder for others!) is not an ontological reason for it to happen. It’s an epistemological one, making it a category error. And it doesn’t address the question: Why does splitting supposedly happen in some interactions but not others? Criteria about the classical limit are arbitrary; you called me on the fact that ultimately there is no such thing, so it’s not proper to use them here.

Just what was Everett smoking? I find Father Christmas and the tooth fairy more plausible. They aren’t observable either.

As I keep saying, splitting is not something imposed on the theory, it is a property of solutions of the Schroedinger equation. Are you disputing that? You complain that it involves arbitrary judgements, but a microscopic description of any macroscopic phenomenon is inevitably a bit arbitrary: what is the atomic description of a bowl of cornflakes? Those specks of cornflake dust: are they part of it or not? What about the water vapour from the milk? Even the ceramic bowl is continually losing and gaining atoms.

An interpretation of QM (or any theory) has to explain our subjective experience of a macroscopic world more-or-less described by classical physics. In MWI that emerges as a large-N limit, with no violation of the TDSE, exactly as one would hope. What I can’t accept is requiring an irreducibly classical “apparatus” to which TDSE does not apply. I also find it redundant to modify the theory just so that it cuts off other branches shortly after they disappear over the current horizon of observability, although that does have the virtue that it might be testable if we can push the horizon a bit further out.

Whatever Everett was smoking, a fairly large number of physicists are on these days, including at least a plurality of cosmologists – maybe because they are immune to the subconscious belief that physics is something that only happens in the laboratory.

“Proliferation of universes is not one of those aspects: the physical universe is already ridiculously larger than it needs to be so why worry?”

Indeed. It might even be infinite.

On the other hand, in an APS interview, Maarten Schmidt was asked what he would have done differently had he created the universe. After trying to get out of answering the question, he said “make it bigger”, pointing out that with a piece of glass which fits in a (large) room, one can see objects almost at the particle horizon.

“splitting is not something imposed on the theory, it is a property of solutions of the Schroedinger equation. Are you disputing that?”

Please define splitting. I take it to mean the splitting of our universe into a zillion mutually unobservable universes, one corresponding to each eigenvalue of the observable undergoing measurement. I dispute that as a useful way to look at the evolution of the overall system and how to segregate it into system and apparatus. I obviously do not dispute the mathematics of the evolution and of projections onto the space of the eigenfunctions. I dispute the former on many grounds, but above all on the fact that some interactions supposedly induce splitting but others don’t. (Never mind whether these are measurements or not.) Other objections are the impossibility of stating when this happens and how you specify the splitting when the eigenvalues of the relevant operator include a continuum. (Invoking the rest of the universe is as much a red herring as it is when doing the maths.)

If this drivel has lulled a lot of physicists, more fool them. So did Copenhagen for a generation and no doubt people who objected then were told the same.

I googled van Kampen’s paper. If you believe what he is ostensibly saying, then I can see why you are frustrated with the many luminaries who have adopted apparently absurd interpretations. But there is something worse than absurdity, namely believing two contradictory statements to both be true. van Kampen’s easy way out is logically incoherent in exactly this sense, and he was called out on it by John Bell in his 1990 article “Against `Measurement'” (online here: http://www.tau.ac.il/~vaidman/IQM/BellAM.pdf)…which came up as the second item in my search.

I was grateful to find that van Kampen’s “theorems” are mostly not mathematical theorems but just slogans, most of which are perfectly sensible. They include

“Theorem IX: The total system is described throughout by the wave vector Psi and has therefore zero entropy at all times.”

From his discussion the “total system” is macroscopic and includes the measuring apparatus and such entities as cats, and presumably humans. Everett, Bohm, and Bell (but not Bohr) could all sign up to this.

also

“Theorem VII: The collapse of the wave function of the object system is a consequence of the Schroedinger equation for the total system (i.e., object system and measuring apparatus together).”

The analysis of measurement that leads to this is completely consistent with Everett’s. At this point, van Kampen means by “collapse” something quite different from the collapse of von Neumann and Dirac: it is not the sudden replacement of one wavefunction for the total system by another (which would violate the Schroedinger equation), but the correlation of the object eigenstates with apparatus eigenstates. The total system Psi is still a superposition at this point, albeit one in which the cross-terms in |Psi|^2 vanish. So van Kampen’s collapse is *relative* to the apparatus…he could practically be quoting Everett.

But then:

“Theorem VI: The wave function of a system of a macroscopic number of particles gives, on measuring macroscopic quantities, results that can be described in terms of classical probabilities.”

van Kampen emphasises the difference between classical probabilities, which quantify our subjective uncertainty, and quantum probabilities, “equal to |Psi|^2 by definition”. But Theorem VI is crucially ambiguous. In what sense does the wave function “give…results”? Conventionally, we calculate |Psi|^2, i.e. quantum probabilities. But, per Theorems IX & VII, these do not quantify our ignorance about the true state; they (along with the phases) characterise the state itself. Everett would be quite happy with Theorem VI too, but in his picture it is consistent with the other theorems because, after the experiment, all the non-interfering components of the superposition still exist, and the subjective uncertainty is only that we don’t know which branch we are in until we look at the apparatus. Van Kampen describes the MWI as a “mind-boggling fantasy”, so this can’t be his solution. John Bell charitably assumes that van Kampen’s get-out is that the total quantum state Psi is not in fact a complete description of reality, and that we need supplementary variables a la Bohm, which pick out one branch of Psi as being the “real” one. But van Kampen gives no hint of this, and I can’t help reading his comment about “complicated and contrived” theories “carefully constructed to reproduce the known results” as a dig at Bohm.

So van Kampen is trying to have it both ways: as John Bell puts it, “and” becomes “or”; “as if it were so” becomes “it is so”. Of course, nothing in the mathematics reflects these shifts of meaning. If you are not prepared to put up with this verbal sleight-of-hand, then you are left with options that seem absurd, contrived, or both.

I agree that “theorems” is not a good word for van Kampen’s 10 claims. In fact I disagree with him about probability and I disagree with him that Copenhagen is adequate (I corresponded with him a couple of decades ago.) But neither of those disagreements invalidates his argument against Many-Worlds in my opinion. He published other papers advocating his view but I regard the one I quoted as his best summary, and I’ll read Bell’s counter-argument tomorrow (thank you for it; I’ve just got back from a long weekend).

In the meanwhile, though, I have the following question for many-worlders: You said on March 6th at 4.35pm that

when presented with a superposition of eigenstates, the quantum solution for system+apparatus is a superposition of different macrostates. Specifically, for N eigenvalues the final state consists of N wavepackets in both configuration and momentum space which are sufficiently separated that no subsequent experiment could detect interference effects between them (as discussed in the extensive literature on decoherence). “Splitting” is simply a description of this tendency for the wave function to fragment.

Now, many-worlders assert that the splitting is ontologically real, even if unobservable. It is not merely a mental crutch, ie epistemological. How do you decide whether the “N wavepackets in both configuration and momentum space… are sufficiently separated” for the splitting phenomenon to take place? Today’s apparatus might not be up to the detection of interference effects, but what about tomorrow’s? Does not the absence of any such objective physical criterion make the view untenable?

I’m not quite sure why you are banging on about the fuzziness of branching. Everett’s answer would be that only the wave function is ultimately real; everything else from atoms to cats to “worlds” is just a description of the structure of the wave function. The branching of psi in configuration space, like the branching of a path in the woods or the branching of a tree, is intrinsically fuzzy. Just because there is no mathematical point where the branching happens doesn’t mean that branches don’t exist. As you say, in QM branching can always in principle be reversed, and people are getting better at doing it. But it gets exponentially harder as more degrees of freedom are brought into play. No-one is ever going to get fringes out of a live/dead cat, or even a bacterium, and in particular not out of a human brain perceiving two different results at once. Everyday appearances are pretty safe. But if we *could* realise a Mr Tompkins fantasy and actually show interference between macroscopic states, that wouldn’t be a disaster for MWI, it would pretty much prove that it was correct (well, that or de Broglie-Bohm).

I recall Bell’s “Against measurement” appearing in Physics World unhappily soon before his death. He is correct, of course, that “it would be possible to find a sum of [cross-terms of] very many terms, with different amplitudes and phases, which is not zero”. But he admits that “interference between macroscopically different states is very, very elusive.” That is all van Kampen is saying. Bell doesn’t particularly like FAPP (for all practical purposes) arguments but he can see their rationale; and who now takes seriously the Ghirardi-Rimini-Weber scheme he advocates instead?

I do disagree with Bell’s suggestion that van Kampen has smuggled in a change of the theory, ie of the physics, by speaking in advance of Schrödinger’s cat being “alive” or “dead”. At the microscopic level you can’t even speak of a “cat” – and this criticism applies to all interpretations, so it is unreasonable to single out van Kampen’s. In fact van Kampen turns this problem into a strength of his view. You can’t do without FAPP! Here’s an analogy: you might as well say that there is no such thing as pressure, because it is not impossible according to Hamiltonian dynamics that all of the air molecules in a 1-litre container will fly to one end of the container and stay there. But FAPP it is impossible, and so we admit the notion of pressure.

As for many-worlds, according to its followers the splitting is a real physical phenomenon, even if unobservable. It is not merely a mental crutch. How then do you decide whether, in your own words (Paddy) the “N wavepackets in both configuration and momentum space… are sufficiently separated” for the splitting phenomenon to take place? Today’s apparatus might not be up to the detection of interference effects, but what about tomorrow’s? Does not the absence of any objective physical criterion make the view untenable?

In your interpretation, at what point does the quantum superposition turn into a single definite experimental result? Correlation with a macroscopic apparatus is not instantaneous. Early on, we can (and sometimes do) interrupt the measurement and instead demonstrate interference between the terms in the superposition. The exact point at which this becomes impossible in practice is just a matter of technology. Van Kampen’s “collapse”, as initially described, still has the system+apparatus wavefunction as a superposition. But at some mysterious point the other terms in the superposition disappear. Evidently, this cannot be described by a linear Schroedinger equation, so he is just wrong in claiming the opposite. This is the Copenhagen “shifty split” and it is a problem for you and van Kampen, not for me and Everett. For Everett, branching is a real macroscopic phenomenon, like pressure, but, like pressure, meaningless on the microscale.

As for Bell, he is not, of course, against taking limits, if they are seen as limits, but against arguments which are equivalent to “FAPP water is a continuous fluid, *therefore* there is no such thing as a water molecule”. Please don’t quibble about whether FAPP is correct in this case (it certainly was at the time of Democritus); the point is that the premise does not justify the conclusion.

GRW still has supporters, including Ghirardi. As Tony Leggett says, honest physicists should be rooting for real-collapse models because they are testably different from QM, and advances towards quantum computing are really constraining such theories.

In your interpretation, at what point does the quantum superposition turn into a single definite experimental result?

There is no instant, of course; the sum over the cross-terms is simply too small to be observed, compared to what is observed.

I’d say that physicists should be rooting – in fact seeking – for hidden variables, because they are testably different from QM. And if they find them then many-worlds, like Copenhagen, dies overnight. The difference would be that Copenhagenites, who are essentially agnostics in the face of competing interpretations of QM, would not have red faces. Everettians would have.

Allow me to “bang[…] on about the fuzziness of branching” some more. I pointed out that many-worlders assert a real, ontological (if unobservable) splitting, and asked exactly when during the interaction between apparatus and system this occurred. Your answer is that this is “intrinsically fuzzy”. But you can’t prove that. It might occur at an instant, say when the interaction is at its strongest; or when it is reduced to 10% of its maximum value. Or 15%. It might occur, not at an instant, but as a process, which is what you are suggesting. But you know nothing – and can know nothing – about the dynamics of the process. To say it is fuzzy is as presumptuous as to say it is instantaneous and occurs at the moment of strongest interaction. Saying it is fuzzy is projecting your own uncertainty about what is going on onto the system. That is a category error. It is the thinking that is fuzzy here!

We are going round in circles now. I’ve already answered your question several times: the only dynamics in MWI is Schroedinger, about which we do know something. You have failed to articulate why you have no problem with having a certain arbitrariness in connecting micro with macro concepts in some fields of physics, like thermodynamics, but you can’t accept it in quantum mechanics.

But thanks for answering my question. If the cross-terms are still there, even if unobservably small, then obviously all the terms in the superposition are there as well. They (eventually) describe different macrostates of the apparatus, each correlated to different macrostates of any recording system, each correlated to different macrostates of any human observer, each of whom perceives a particular result from the measurement; and the chain of correlation extends to the entire observable universe. This narrows the options down to pure Schroedinger or Schroedinger supplemented by hidden variables.

The “hidden” in hidden variables implies that they exactly duplicate standard QM, so are not amenable to testing. But we agree that we should be looking for the theory to break down. Mostly that means looking for non-linearities (as in GRW) that destroy superposition…if you can find a way to have a superposition-preserving theory that is testably different from QM I’d be interested to hear it, because I would have thought that you would really only be testing the specific form of the Hamiltonian, which of course is exactly what they do all the time at CERN.

You have failed to articulate why you have no problem with having a certain arbitrariness in connecting micro with macro concepts in some fields of physics, like thermodynamics, but you can’t accept it in quantum mechanics.

If I thought that, I’d be dissatisfied. The arbitrariness is in the insensitivity of the apparatus.

But thanks for answering my question. If the cross-terms are still there, even if unobservably small, then obviously all the terms in the superposition are there as well. They (eventually) describe different macrostates of the apparatus

They’ll be around for a while, but eventually washed out by interaction with the environment, ie decoherence.

The “hidden” in hidden variables implies that they exactly duplicate standard QM, so are not amenable to testing. But we agree that we should be looking for the theory to break down

By ‘hidden variables’ I mean variables that allow me to ask whether the NEXT electron in my series of Stern-Gerlach apparatuses will show spin-up rather than spin-down, not merely the statistics of the observations. That would render many-worlds an obviously untenable way to look at the situation. But if we find these variables and learn how to manipulate them then we can indeed violate QM.

The arbitrariness is in the insensitivity of the apparatus.
I really have no idea what you mean by that.

They’ll be around for a while, but eventually washed out by interaction with the environment, ie decoherence.
I was talking about the terms in the psi superposition, which are not washed out: you are confusing them with the cross-terms in |psi|^2. People don’t invoke wave-function collapse for fun, you know. Linearity ensures that the TDSE alone can’t get rid of terms in psi.

Paddy: For any measured eigenvalue of the system there are generally many degrees of freedom in the Hamiltonian of the apparatus, so that the density of states of the apparatus is high. (Herein the arbitrariness.) Consider the apparatus variable that flags the result of the measurement. In the sum over states giving the expectation value of this variable, there are very many cross terms between quantum states of the apparatus corresponding to different eigenvalues of the system. These cross terms are not generally correlated in amplitude or phase, so they average out when the expectation value is taken, in accordance with the law of large numbers. Even when this is not the case they are usually washed out by interactions with the environment, because you cannot in practice isolate a system perfectly. That is called decoherence, and quantum-computer effects and nonlocality become visible only when you prevent it.
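The law-of-large-numbers step can be made concrete with a toy calculation (an illustration only, not a model of any particular apparatus): average M unit-modulus terms with uncorrelated random phases and the mean shrinks like 1/sqrt(M), whereas phase-aligned terms of the same size would average to 1.

```python
import numpy as np

rng = np.random.default_rng(2)

# M cross terms of unit modulus but uncorrelated random phases average
# out like 1/sqrt(M); coherent (phase-aligned) terms would average to 1.
for M in (100, 10_000, 1_000_000):
    cross = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M))
    print(f"M = {M:7d}: |mean of random-phase terms| = {abs(cross.mean()):.5f}")
```

With macroscopic numbers of degrees of freedom the effective M is astronomically large, which is why the cross terms are unobservable in practice even though each one is individually nonzero.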

The wavefunction when the system is not interacting simply clicks forward each infinitesimal timestep according to the Schroedinger equation. The notion of cross-terms is relevant only when we choose a basis.

The idea of an unobservable infinity of universes is perhaps the most spectacular violation of Ockham’s (qualitative) razor principle ever invented. If you find hidden variables capable of beating QM then the idea goes straight into the bin.