
Thursday, June 28, 2018

How nature became unnatural

Naturalness is an old idea; it dates back at least to the 16th century and captures the intuition that a useful explanation shouldn’t rely on improbable coincidences. Typical examples for such coincidences, often referred to as “conspiracies,” are two seemingly independent parameters that almost cancel each other, or an extremely small yet nonzero number. Physicists believe that theories which do not have such coincidences, and are natural in this particular sense, are more promising than theories that are unnatural.

Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up in neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.

As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.

But this isn’t how arguments from naturalness are used in theory-development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.

Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was made clear already in a 1994 paper by Anderson and Castano.
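To see concretely how much the verdict depends on the assumed prior, here is a small illustrative Python sketch (my construction, not from Anderson and Castano's paper or the post): it estimates how often two independently drawn parameters nearly cancel under two different assumed distributions. The 1% cancellation threshold and both priors are arbitrary choices for illustration.

```python
import random

def cancellation_rate(sampler, eps=0.01, trials=100_000):
    """Fraction of trials in which two independently drawn
    parameters nearly cancel: |a - b| < eps * max(|a|, |b|)."""
    hits = 0
    for _ in range(trials):
        a, b = sampler(), sampler()
        if abs(a - b) < eps * max(abs(a), abs(b)):
            hits += 1
    return hits / trials

random.seed(0)

# Prior 1: uniform over an interval of size one (the implicit assumption).
uniform_rate = cancellation_rate(lambda: random.uniform(0.0, 1.0))

# Prior 2: log-uniform over ten orders of magnitude.
log_rate = cancellation_rate(lambda: 10.0 ** random.uniform(-5.0, 5.0))

# The "probability of a fine-tuned cancellation" differs by roughly
# an order of magnitude depending on which prior one assumed.
print(uniform_rate, log_rate)
```

Whether a near-cancellation counts as an improbable conspiracy is thus not a property of the theory alone; it is a property of the distribution one has assumed, which is exactly the unjustified step.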

The Standard Model of particle physics, or the mass of the Higgs boson more specifically, is unnatural in the way described above, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.

The Large Hadron Collider (LHC), like several previous experiments, has not found any evidence for supersymmetric particles. This means that according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently do not have reason to think that a larger particle collider would produce so-far unknown particles.

In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example of the unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, the LHC having now ruled it out, but I could have used other examples.

A lot of physicists, for example, believe that experiments have ruled out hidden variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden variable models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain both with dark matter and modified gravity). Yes, the devil’s in the details.

What’s remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you have explained to them why they’re wrong. Examples like these leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.

I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.

Naturalness is an interesting case to keep an eye on. That’s because the LHC now has delivered data that shows the idea was wrong – none of the predictions for supersymmetric particles, or extra dimensions, or tiny black holes, and so on, came true. One possible way for particle physicists to deal with the situation is to amend criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.

That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for 10+ years without much resonance.

I was therefore excited to see that James Wells has a new paper on the arXiv.

In his paper, Wells lays out the problems caused by the missing probability distribution with several simple examples. And in contrast to me, Wells isn’t a nobody; he’s a well-known American particle physicist and professor at the University of Michigan.

So, now that a man has said it, I hope physicists will listen.

Aside: I continue to have technical trouble with the comments on this blog. Notification has not been working properly for several weeks, which is why I am approving comments with much delay and replying erratically. In the current arrangement, I can neither read the full comment before approving it, nor can I keep comments unread to remind myself to reply, as I did previously. Google says they’ll be fixing it, but I’m not sure what, if anything, they’re doing to make that happen.

Also, my institute wants me to move my publicly available files elsewhere because they are discontinuing the links that I have used so far. For this reason most images in the older blogposts have disappeared. I have to manually replace all these links, which will take a while. I am very sorry for the resulting ugliness.

I just want to register my support. I've been finding some of the arguments about naturalness a bit off and your observations offer great insight. I find it odd that in all the work being touted as possibly leading to a unified theory, I don't find anything that might explain the second law of thermodynamics. Randomness is as natural as anything else. What you are illustrating is not exactly the same thing, but I see the concern over close mindedness and it resonates with me. I really do wonder where the second law of thermodynamics is hiding when I read about whatever the latest claim or boast or promise comes from the field of theoretical physics.

"Now that a man has said it..." Smolin has been saying this for more than 10 years and no one listens to him either. I know this experience is frustrating, but compared to me, you are beloved and respected even by your peer community :)

Hi Sabine, Anderson & Castano talked extensively about the probability distributions required for rigorous naturalness assessments in their 1994 paper (hep-ph/9409419), and they have 200 citations (i.e., very well known), including from me, where I highlighted their approach. It was of course discussed extensively by experts before that, and may have been written up elsewhere prior to 1994. As you say, that part of the discussion is neither deep, nor was it unrecognized.

I suggest someone credible asks the author to fix the value of the Z boson mass (in section 1 of the paper). In essence, Mz = 91.1176 ± 0.0021 GeV as mentioned is ugly. The reference (PDG) states Mz = 91.1876 ± 0.0021 GeV, which I find beautiful and natural...

"The Standard Model of particle physics, or the mass of the Higgs boson more specifically, is unnatural in the way described above, and this is currently considered ugly."

Could you expand on or explain this a bit (to a non-physicist)? What about the recently measured Higgs is unnatural? Are there other elements of the Standard Model that are unnatural?

I often hear that it's ugly and messy, and that when you incorporate gravity it has arbitrary limits (I'm thinking of Carroll & Wilczek's Core Theory)... But I haven't heard these issues raised in the context of naturalness.

"...scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly..." Not much different from other human endeavors. For example, the advocates for various forms of socialism, communism, or other collectivist fantasies. In fact, I expect some commenters below will reply that "communism is great, but it just hasn't been done properly".

Hi Dr. Hossenfelder, let me say initially that I read your book and thought it was very good. I'm quite happy to see that a few more people are acknowledging reality; I fear it won't be enough. I set out to be a theoretical physicist more than 30 years ago; perhaps unfortunately, I began studying mathematics (in the formal sense of functional analysis) in graduate school. A lot of the math seemed sort of dodgy to me (Feynman path integrals specifically), and as I learned more math I learned that I was right: the math in common use was either unproven, or provably incorrect in all but a few very special cases - which might or might not be consistent with the actual universe. At that point, when I utterly failed to interest anyone in this problem, I switched to engineering, where I have been reasonably happy since. I was very disappointed when I gave a talk on this in a graduate seminar and was reprimanded by the head of the physics department for wasting everyone's time with "mathematical technicalities of no importance". Unfortunately this isn't a very new problem, and being a man doesn't help much with convincing people it is a problem. Perhaps as the older generation dies off a new generation will be more open to examining these issues. Also, I am sad to report that a friend of mine who switched from physics to genetics and evolutionary biology reports that biologists are just as deluded.

Sabine, Lee Smolin's attitude towards string theory reflects your attitude about naturalness and academia. In his book "The Trouble with Physics" he devotes a couple of hundred pages to describing how string theory fails us and how academia promotes it.

Would the Superconducting Super Collider - the one in Texas that was cancelled 25 years ago - and its planned 20 TeV energy have been enough to reach beyond what the LHC is able to achieve and reveal more information? Obviously, yes, the energy would be higher, but from what I've been reading the LHC isn't even showing any hints that there's something beyond the Higgs boson. To coin a bad analogy, the LHC is like a small BMW that got as far as it could go and here we are, while the SSC would have been more like a large BMW that could have taken us further down the road. Would going down the SSC road have been worth it, or have the LHC results left us in a place where it appears there's nothing reasonably down the road, so it wouldn't have mattered anyway?

In other words, are we missing the capabilities of the SSC right now or does it appear that it wouldn't have made much difference? I'm looking at this in the context of physicists now setting out their arguments for a larger collider.

Thanks, I just found this blog last week. I'm an engineer, not a scientist, but I have followed advances in this field for quite a long time even though I'm not a physicist.

The problem you raised with academic research is not confined to your particular branch of research. The attitude seems to be, if the maths say that, it must be right, without any questioning of the premises behind the maths. I am always amused at the assertion that hidden variables are impossible. If you assume a hidden variable is one you cannot measure directly, but only infer from its effects, then it seems to me that potential energy qualifies. You cannot directly sense it in many cases, but it is a very useful concept.

I would say that it is definitely un-natural, ugly even that Germany has been eliminated from the World Cup. So your theory of an ugly universe may be accurate. Absolutely at this point in time in the world of sport.

The question on what we mean by naturalness or beauty takes us into a domain of aesthetics. In the history of science, physics and astronomy the idea has appeared in a number of cases. Ptolemy put together a lengthy book the Arabs called the Almagest, “The Majestic One,” that was thought for centuries to capture the supreme beauty of the universe and the very mind of God. The geocentric model it contained turned out to be false. Adler published a collection of books on great thinkers, and one of them is Ptolemy. It is a very difficult book to read. There have been other ideas, and Kepler came up with a couple. He modeled the solar system as a system of inscribed Platonic solids, and another idea he had involved music. Kepler's three laws came later and were almost an afterthought on his part.

The idea that existence is beautiful is very old. It also intersects with what we learn here and there. However, just as man proposes and Allah disposes, a lot of beautiful conjectures about the nature of things turned out to be false. It is then not a reliable guide in and of itself. It is similar to Pirsig's issue in “Zen and the Art of Motorcycle Maintenance,” where quality is something we may recognize, but we can't define it in a precise way. What is beauty in nature is something nature decides; what we from a purely hypothetical perspective call beauty in nature is our proposal of such. Why does nature choose to appear in one beautiful form and not another? We have no way of answering that question.

There has been a lot of disparagement of philosophy on the part of physicists of late. I see this as somewhat unfortunate. Not that I think philosophy informs us about physics that much, but it may open our senses about matters of aesthetics and what we mean by the world having certain nice properties we appreciate. We can also think about whether a great beauty of nature is at all accessible to us in our generation, where in 50 years things may come together in some spectacular way. Along these lines is the religious idea of beauty, and the architecture, art and music of the more religious past clearly reflects a passionate notion of God as having been the great creator of beauty. However, God seems to be a conjecture that is unneeded, and further God is also a sort of supernatural form of Orwell's Big Brother. So philosophy is a weak reed to lean on and religion even weaker.

So we really have no logical idea why we appeal to beauty or so called naturalness. There is no formal protocol with it, but we tend to do it anyway.

I'm wondering if this type of "probability without a probability distribution" argument appears in other areas and even sounds convincing to those who are caught unaware. You wrote several times about the argument that we live in a computer simulation. Do you think those arguments also use this trick? After all, if we are living in a computer simulation then all of our experiences could be controlled by a Cartesian demon. We can't rely on observation to conclude whether or not we are living inside of a simulation.

“In all my years of making wheels I have learned that if I go too easy, they fall apart. If I go too rough they will not fit together. Yet if I am neither too easy nor too rough they come out right. The work is what I want it to be. People will say that this is a good wheel made by Phien the wheelwright of Duke Huan of Khi.” -- Merton, T., The Way of Chuang Tzu (pp. 82-83)

Perhaps the same thing is true of universes. I guess by definition we have a natural universe and it certainly has some oddities of construction. If it goes together too easily does it fall apart? If it has some unaesthetic edges do they serve a global purpose? Is there a fitness test for what endures, and how would that happen if the beginning did not know the end?

Sabine, I find your English to be well-developed, and the flow of your writing to be excellent. As you seem to sense, both are strong suits for you.

Your post moves naturally from concept to example, but always returns to concept before making the case. And it is all done with a self-disciplined flow of process logic, which is where so many fail. Much about a person's thinking process seems to be revealed in communication, especially writing.

More importantly, the scientific case is cogently made. You understand perhaps, that there are at least some of us who enjoy your writings because we too, are already persuaded about much of what you say. For myself, I'm just satisfied to hear an 'establishment scientist' acknowledge that which is scientifically self-deprecating, as long as it is at least scientifically self-evident.

Hearing 'Hey I think YOU are bullshitting', is quite different from hearing 'Hey, I think WE are bullshitting'. And even the public understands this, or at least it can be brought to understand this, eventually.

"This paper has two objectives. The first is to describe more graphically than has yet been done the elegance of the ancient technique of epicycle-on-deferent. The second is to expose as erroneous an implicit contention of several historians - namely, that the inadequacy of Ptolemaic astronomy is somehow connected with its formally weak computational equipment. When compared with the powerful calculational devices of our twentieth century, epicycle-on-deferent astronomy apparently comes out as a laughably primitive attempt to predict planetary perturbations. But to reason thus is fallacious. Actually, these objectives will be achieved simultaneously - if they are achieved at all. To see the comprehensive theoretical power of this ancient geometrical device just is to see its elegance."

One problem -- I haven't yet seen a decent definition of naturalness. Other than the requirement that physics be expressed in quantities roughly equivalent to 1, there are no metrics that adequately express the notion. For example, Planck's constant is incomprehensibly small in units we're familiar with, yet it must be "natural" since it's ubiquitous in physics.

Is string theory a viable candidate for QG if string theory is not compatible with de Sitter space? (In addition to all the other problems.)

In light of all the recent results with SUSY, and now de Sitter space and KKLT being mathematically disproven, is it time for QG theorists to give up on string/M-theory and work on loop quantum gravity or AS?

I am stuck somewhere between lament and cynicism and hope to eventually advance to not giving a fuck ;)

Yes, part of the problem is that science isn't currently well-organized. But there is also, I am afraid, a culture in physics where men go around and speak about their supposed insights into nature's beauty. I made fun of this in this song, but in hindsight I think most people who watched the video didn't get it. According to YouTube stats, 97% of those who watch my videos are male.

Yes, I have seen it. It's not a new problem of course. A pretty good recent summary is here. Well, for all we currently know the universe is approximately de Sitter. But you don't need an eternal de Sitter space-time to get a universe that's de Sitter for sufficiently long to look like ours. So that's a way out. I have no doubt that string theorists can make this work. Because theoreticians will find a way to make anything work if you give them enough time. If we discover tomorrow that the universe is Bianchi Type VII, I am sure string theorists will find that that's exactly what you get in string theory. Best,

Naturalness looks connected to the old Humean idea of the mistaken reliance on the uniformity of nature. My guess is scientists of today aren't much interested in this because it veers too close to philosophy and, even worse, taboo philosophical scepticism! Why scientists of today have drifted so far from philosophy compared to their successful forebears has, I think, much to do with A: the narrowing of university education into smaller subject niches and B: the issue of careers and research grants.

Remember, Einstein credited Hume's treatise as one of his single biggest inspirations for relativity theory. I believe most university scientists today however, have never even read any philosophy. Not even Popper. I can't be sure, but that's my guess.

What about using the naturalness of a thousand other natural-looking equations to obtain the probability distribution, instead of lumping the whole theory of every single thing in the whole universe into one bucket?

It is not hard to come to an estimate of the stability of the de Sitter spacetime that approximates the observable universe. The curvature from the cosmological constant is Λ = 10^{-52} m^{-2}. The stress energy is T = 8πGΛ/3c^4, which for our ballpark purposes is about 10^{-98} J/m^3. The FLRW equation of motion for the scale factor a is

(a'/a)^2 = 8πGρ/3c^2 = Λ/3,

for ρ the density of the vacuum energy and a' = da/dt. The scale factor defines a momentum p ~ a', and the wave function can be approximated as Ψ ~ exp(-pa)exp(ika). Assume the tunneling is across a Planck length with a huge vacuum (false vacuum) energy of a Planck unit of energy, E = E_p. The transition function is then T ~ exp(-2E_p a) = exp(-2e^{sqrt{Λ/3}E_p}). Now input some numbers to get the approximate

T ~ exp(-2*exp(10^{26})).

Logarithmic conversions don't change much, so we get the probability per Planck time of the de Sitter spacetime tunneling into Minkowski spacetime as P ~ 10^{-10^{10^{26}}}, or, for that matter, within about 10^{10^{10^{26}}} years the dS spacetime vacuum tunnels into another configuration.

Now look towards the bottom of the first chart on https://en.wikipedia.org/wiki/Timeline_of_the_far_future to see the estimated time for the end of the dS vacuum is 10^{10^{10^{56}}} years. This and my number are both absurdly large, and there is not much point in quibbling over the difference. The low value of the cosmological constant is what makes for stability. This argues for why the cosmological constant for a real cosmology must be very small, and why the vast number of these cosmologies in the multiverse are unstable and may only exist as off-shell fluctuations.
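[Editor's note: as a rough numerical cross-check of the Friedmann relation used in the estimate above, here is a short Python sketch. It is not part of the comment; it plugs in the standard reference values Λ ≈ 1.1×10^{-52} m^{-2} and H0 ≈ 70 km/s/Mpc (assumed, not taken from the comment) and confirms that the Λ term alone, H = c·sqrt(Λ/3), already reproduces most of the observed expansion rate.]

```python
import math

# Assumed reference values (standard numbers, not from the comment):
LAMBDA = 1.1e-52          # cosmological constant, in m^-2
C = 2.998e8               # speed of light, in m/s
H0_KM_S_MPC = 70.0        # observed Hubble constant, km/s/Mpc
MPC_IN_M = 3.086e22       # meters per megaparsec

# Friedmann equation for a Lambda-dominated universe:
# (a'/a)^2 = Lambda * c^2 / 3, so H_Lambda = c * sqrt(Lambda / 3).
h_lambda = C * math.sqrt(LAMBDA / 3.0)    # expansion rate, in s^-1
h0 = H0_KM_S_MPC * 1000.0 / MPC_IN_M      # observed rate, in s^-1

# h_lambda comes out at roughly 80% of h0, as expected since Lambda
# contributes only about 70% of the present energy density.
print(h_lambda, h0)
```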

The string has a one-space-plus-time worldsheet, and the bosonic string has a negative zero-point energy. This type I string is essentially an AdS_2 spacetime. AdS_n spacetimes have negative energy, and these are not bounded below in their spectrum. As such, quantum states or radiation emerge from them endlessly. F. Dyson wrote about how this could happen with QED if we made charge an imaginary number, and, ignoring a lot of technical details, that is in a way what this is. Quantum gravity lives in AdS spacetime, and the dS spacetime we observe was maybe produced this way from AdS_5.

All of this is really good news. The dS vacuum is unstable, but for our concerns it is certainly stable enough. I would worry a whole lot more about dying by asteroid impact than about the dS vacuum tunneling such that the universe we know ends. I don't lose sleep over asteroids.

@Uncle Al. Pondering what you mean by "mathematically codified physics is empirically irrelevant." Primarily because if it's not empirical, it is irrelevant, although perhaps interesting to consider.

My favorite video game is Alpha Centauri (https://en.wikiquote.org/wiki/Sid_Meier%27s_Alpha_Centauri) in which one of the characters says: "A brave little theory, and actually quite coherent for a system of five or seven dimensions -- if only we lived in one."

Which mirrors my thoughts exactly for theories that are not empirical.

I have not ruled out that we live in a system of multiple dimensions. I just won't believe it until the facts support it.

If I did, I'd be religious, not scientific. Which I am about some things, but not physics.

Even if there is no guiding principle to intuit future physics we should still invest in a higher energy accelerator, as it's in the nature of our species to explore the unknown. It's a shame that the tunnel dug in Texas for the Superconducting Supercollider was backfilled, after all the expense of its excavation. Surely, it's a golden opportunity for our Washington politicians to stop fussing with each other, unite behind a common, apolitical cause, and put that SSC back in the budget. Who knows, probing a much higher energy regime, than the current record, might lead to another "Who ordered that" episode when Isidor Rabi expressed his puzzlement to his fellow diner at a Manhattan restaurant about the newly discovered muon in 1936.

I just finished your book and loved it. My opinion of physics has been colored by the recent theoretical promises (SUSY, dark matter, large extra dimensions, neutrinoless double ...) versus the lack of experimental results. The book seems very clean and well proofed.

Will you have significant impact? I don't know. You are ruffling too many feathers to be easily accepted, I am betting. Kuhn's book had some impact, but your critique is more immediate and operational in terms of current practices.

I wish you good luck and I hope to be around long enough to see some positive effects.

I think Cern LHC is an albatross now, unless it can be repurposed. It’s not too bright to look for LARGER particles any more. Look for the fine structure. We know so much about the black box. Throw away the stuff that doesn’t seem logical or “natural” and brainstorm what lies below. I think it is not all that hard for 10,000 physicists to figure out. Heck, it’s probably already been tweeted by an amateur.

@Uncle Al. I didn’t understand what you said. I hope to some day, just not there yet!

However, baryogenesis is not that hard to imagine if there were only two fundamental particles coupled kinetically and electromagnetically, and there was an asymmetry that inflated the orbit of one particle while shrinking the orbit of the other. I mean, there you have it. A spacetime matrix. A source of higher-order particles. Just imagining, but that is one heckuva lot easier to understand than QFT and paradoxes. Or all the ungrounded ideas from WITHIN the walled garden of physics.

@Peloton: The LHC has ALICE, A Large Ion Collider Experiment, which is potentially where the future lies. A colorless triplet-state entanglement of two gluons is analogous to, in fact has the same quantum numbers as, a graviton. The study of QCD glueballs offers all sorts of interesting physics, from IR confinement to analogues of quantum gravity.

I wasn't going to get into this, and Sabine may very well moderate my comment, but what I would like to see is a whole lot less science fiction and a whole lot more science (and that's directed at Crowell)! Specifically, I would love to see the experimental community perform the modified Young experiments outlined in William Tiller's books and papers. And I mean complete with controls and Tiller's Intention Host Device. There is a great deal more empirical data supporting the rationale for these experiments than there has been for the SUSY stuff, and they're a hell of a lot cheaper! Plus, Tiller's Psycho-Energetics IS a model - unlike String "Theory," whatever that turns out to be - and experimentally nailing down the alpha variable would effectively make Tiller's model a theory!

So here, I believe, is the problem. Back in March, Heinz-Dieter Zeh left an informative comment on Peter Woit's blog. The part that I found stimulating was:

"[O]f course, everybody is free to propose new concepts and theories, but I have decided to wait until such author presents some empirical success (what else can I do?).

[S]o my conclusion from Tim’s three choices are that the first two ones are well possible (though not more), but we have to wait for empirical support."

So I sent Dr. Zeh an email through his website, linking to all of those pre-stimulus response experiments (yeah, Crowell, the ones I linked to on your FQXi essay a few years ago) and Tiller's paper outlining the Young experiments and I asked him, "Why haven't these seemingly simple experiments been conducted already?" Less than a month later, Dr. Zeh was dead. What, do you suppose, is the probability that he killed himself over this silly shit? I would wager that it is pretty damn high and that makes me sad.

"The more enlightened way would be to find out just what went wrong". I've done it, see the physics detective. I'm afraid to say there's rather more wrong than you might think. Quantum field theory went wrong from the off. See Oppenheimer’s 1930 note on the theory of the interaction of field and matter. He said “the theory, is, however, wrong, since it gives a displacement of the spectral lines… which is in general infinite”. In 1931 Lev Landau and Rudolf Peierls wrote their extension of the uncertainty principle to relativistic quantum theory. They talked of absurd results and the complete failure of the theory, and said “it would be surprising if the formalism bore any resemblance to reality”. On page 690 of the Oxford companion to the history of modern science, Silvan Schweber said the problem caused most of the workers in the field to doubt the correctness of QFT, and the many proposals advanced in the 1930s all ended in failure. Then came subtraction physics, which turned into renormalization, but it didn't address the issue. Instead it made it worse. John Duffield. Apologies, I can't sign out of this google account called "The Universe".

You write that "A lot of physicists, for example, believe that experiments have ruled out hidden variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden variable models)."

I'm curious which type of local hidden-variable models you think are still tenable. One cannot, of course, rule out nonlocal hidden-variable models, as we have an explicit one that can reproduce quantum mechanics, namely Bohmian mechanics. But local hidden-variable models are as dead as anything can be in science.

Superdeterminism and retrocausality, but that wasn't the point of my statement. The point was that a lot of physicists believe hidden variables models have been ruled out period, without the qualifier "local" (with which it would still be a wrong statement, but forgivably wrong for we can debate just exactly what is meant by local).

Sure, that is just a mistake, but I'm taking issue with your statement that Bell experiments only rule out "certain" local hidden-variable models. It sounds as if there is some tenable local hidden-variable model left. As far as I know, retrocausality is just a vague proposal that hasn't been turned into an actual model by anybody, and superdeterminism is plain ridiculous. It goes against the basic notion of science, as it can reproduce not only quantum correlations, but any correlations at all. How can one hope to learn anything via experiment, if the system you are probing can change arbitrarily depending on which test you are going to make?

I have heard these "objections" many times. But saying it's "ridiculous" isn't a scientific argument. Your claim that superdeterminism "goes against the basic notion of science" has no basis. Go and try to write it down. Go, do it. Give a proof. Retrocausality is a vague proposal because few people work on it. That doesn't mean it's ruled out, it means you want it to be ruled out. You are jumping to conclusions there, highlighting the very problem I was just pointing out.

I'm not jumping to conclusions. I have read dozens of papers about Bell's theorem, and wrote some myself. I spoke with 't Hooft about his superdeterministic model, and spoke with Leifer and Spekkens about their hope for a retrocausal model.

One argument against superdeterminism is that it can reproduce any correlation at all. A theory that can explain anything is not a scientific theory.

Another problem is that it postulates that the physical state of the system, \lambda, depends on the setting choices x and y (or, equivalently, that the setting choices depend on the physical state of the system). But the setting choices can be human choices, or light from far-away quasars, or pseudo-random numbers, all generated (as far as we know) independently from the physical system at hand. But superdeterminism postulates anyway, by fiat, that they are correlated. Without proposing any mechanism to show how or why they should be so correlated. This is just ridiculous.

If one would take seriously the superdeterministic explanation, it would stop you from drawing conclusions from any experiment. Are you trying to test whether smoking causes cancer? Well, maybe your decision to make the experiment is what made these smokers get cancer. Are you trying to measure the anisotropy of the CMB? Well, maybe the photons just change their frequency based on where you point your telescope. Are you trying to read what I wrote? Well, maybe the letters change based on which word you are trying to read.

Any theory with a Hamiltonian evolution can produce any correlation. All you have to do is take the present state and evolve it backwards. This will give you an initial state from which you get whatever you observe. Your criticism is hence unscientific itself. The relevant question you should ask is whether the theory has explanatory power. I am not a fan of 't Hooft's model, for he fails to document that, but your argument doesn't hold water. You could raise the same criticism against any deterministic theory, super or not.

Also, let me note you didn't have any argument against retrocausality. Best,

No, you cannot have any correlation you want. You'll never beat Tsirelson's bound, for instance. And "superdeterminism" is a misnomer; it has nothing to do with determinism. It is about the dependence of the system that you are measuring on the choice of your measurement. It can be true for theories with randomness, and false for deterministic theories.
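To make the bound concrete, here is a minimal Python sketch (the standard CHSH setup with the textbook optimal angles; purely illustrative, not tied to any particular model under discussion):

```python
import math
from itertools import product

# Singlet-state correlation for measurement angles a, b: E(a, b) = -cos(a - b)
E = lambda a, b: -math.cos(a - b)

# CHSH combination at the standard optimal angles: |S| reaches 2*sqrt(2),
# which is Tsirelson's bound for quantum mechanics.
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4
S_quantum = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))

# Local deterministic strategies: each side pre-assigns +/-1 to each setting.
# Exhaustive check that none of the 16 strategies exceeds the classical bound of 2.
S_classical = max(
    abs(A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1)
    for A0, A1, B0, B1 in product([-1, 1], repeat=4)
)

print(S_quantum)    # 2.828... = 2*sqrt(2)
print(S_classical)  # 2
```

So the quantum value sits strictly between the classical maximum of 2 and the algebraic maximum of 4, which is the point of invoking the bound here.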

You have simply not understood what superdeterminism is. In the case of the Bell test done in the US, it is claiming that the state of each photon emitted by the source depended on which bit of "Back to the Future" was going to be used in the measuring station.

I haven't raised a specific problem with retrocausality because there is no concrete model to examine. But I'm afraid that any retrocausal model will suffer from the problem that it can explain anything at all.

I think a definition of "hidden variable" is also required, as is one of "local". First, why isn't potential energy a hidden variable, as you can only infer it? You cannot, in general, measure it. Also, locality cannot be precise for anything moving, due to the Uncertainty Principle.

One way to look at this that I mentioned previously is to consider that probability distributions on parameters can, in theory, be derived from Occam's razor. Unfortunately I cannot formalize this idea in a real case, and do not think the argument can be demonstrated precisely without a large amount of work. Perhaps it can be done for toy scenarios. So, rather than insert new extra-empirical assumptions into theories, one can only formulate them as consequences of ones that are already there. Unfortunately, most physicists are not well aware of the foundations of induction, so it is difficult to get this point across without covering ground that is closer to philosophy. Then again, it seems to me that when using these kinds of criteria one is inevitably doing philosophy anyway, just not explicitly.

What you say is just wrong. As I explained above, take any state Psi_e with any correlation and evolve it back to Psi_i. That state Psi_i evolved forward will give you the correlation you say isn't possible. This works with any Hamiltonian evolution. (There are of course states that are not possible because of consistency conditions, e.g. normalization.)
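The backward-evolution argument can be illustrated with a toy model. Here an arbitrary invertible map on a finite state space stands in for deterministic, reversible (Hamiltonian-like) evolution; the specific map is of course only illustrative:

```python
import random

# Toy stand-in for deterministic, reversible evolution:
# a fixed but arbitrary invertible map on a finite state space.
N = 64
random.seed(0)
perm = list(range(N))
random.shuffle(perm)                 # forward evolution: one time step
inv = [0] * N
for i, j in enumerate(perm):
    inv[j] = i                       # backward evolution: the inverse step

def evolve(state, steps):
    for _ in range(steps):
        state = perm[state]
    return state

def evolve_back(state, steps):
    for _ in range(steps):
        state = inv[state]
    return state

# Pick ANY desired final state (encoding whatever outcome you like),
# run it backwards to get an initial state, then forwards again:
psi_e = 42                           # arbitrary target final state
psi_i = evolve_back(psi_e, 10)       # the initial state that produces it
assert evolve(psi_i, 10) == psi_e    # forward evolution reproduces the target
print(psi_i, evolve(psi_i, 10))      # second number is 42 by construction
```

The point of the sketch is only that for any reversible dynamics, every final state has some initial state that leads to it; whether such an initial state is a reasonable assumption is the actual question under dispute.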

Thanks, I know what the claim is of superdeterminism. You were trying to prove it's incompatible with science. You failed. Hope that settles it.

I enjoyed your book, Lost in Math, very much. Reading it on a Kindle app makes it easy to go back and search.

Most unnatural to me is the many worlds interpretation. Is it correct to assume that every chemical reaction includes several realizations of quantum probabilities? In all the copies of our universe? In that case the number of copies in a short time would have multiplied to a number exceeding the number of elementary things in our universe. That is unnatural to me.

In my opinion, as a professional chemist, the many worlds interpretation offers nothing to chemistry other than real difficulties. Chemistry certainly works on quantized action in stationary states, and there are some interesting issues, such as whether, in an SN2 reaction, the electron pairs act as bosons, but the realization of quantum probabilities should not be one of the issues, unless you really are getting lost in more than maths. Some reactions do have the potential to make a number of different products, but that is because there are a number of potential activated states, and the random nature of molecular collisions means different parts of a more complicated molecule can receive the activation energy. It has nothing to do with quantum probabilities, but definitely the more classical collisional probabilities.

Maybe you should then contact Tsirelson to tell him why his theorem is wrong? I'm sure he would be very interested. Or better, why don't you post an experimental proposal on the arXiv on how to produce any correlation at all?

Or maybe you should consider the possibility that you haven't proven the whole quantum information community wrong with a single blog comment, and are simply missing something. The problem is that if you start with a non-quantum correlation Psi_e and evolve it backwards, at least one of two things must happen: you'll either need unitaries acting on space-like separated regions to not commute, which contradicts basic QFT, or you'll end up with a state Psi_i where the quantum state used for each round of the experiment is correlated with the detector settings, which is impossible to achieve in reality.

I didn't say the theorem is wrong, I said that your claim is wrong. Your claim being that superdeterministic theories are not scientific because they "can explain anything." They "can explain anything" in the same way that any other deterministic theory can (or can't). The argument is hence absurd. I do not know why you repeat it. (We both know it's not originally yours.)

Theorems are only as good as their assumptions.

"you'll either need unitaries acting on space-like separated regions to not commute, which contradicts basic QFT, or you'll end up with a state Psi_i where the quantum state used for each round of the experiment is correlated with the detector settings, which is impossible to achieve in reality."

Ah, see, you can make precise statements if you only try. That's much better. As I said above the problem with superdeterminism isn't that correlations come from the initial state - that's a perfectly fine assumption. The problem is to show that the theory has explanatory power. Of course the statement "it's impossible to achieve in reality" is nonsense. It's difficult to achieve that the two are not correlated. That's why folks bend over backwards to come up with silly ideas to close the "free will loophole". And in any case, you don't "achieve" initial states, initial states are what they are. All you do is calculate what happens to them.

In any case, I hope that in the future you'll no longer repeat the false statement that superdeterminism isn't scientific.

Again, "superdeterminism" has nothing to do with determinism, the name is misleading you. I prefer it to simply call it "conspiracy" in order not to confuse people. Any deterministic theory that doesn't postulate a necessary correlation between what you are measuring and the choice of your measurement will not be "superdeterministic". And pretty much all deterministic theories ever proposed are not superdeterministic (except of course for the explicitly superdeterministic ones), and they cannot explain all correlations. In fact, the classical ones cannot even violate a Bell inequality!

"Of course the statement "it's impossible to achieve in reality" is nonsense. It's difficult to achieve that the two are not correlated."

I'm still anxiously waiting for your experimental proposal on how to violate Tsirelson's bound. You are confusing the logical impossibility of closing the "free will loophole" (which leads to all these contortions from the experimentalists trying to make it less and less plausible), with actually using the "free will loophole" to fake a Bell violation. It can't be done. Just consider the most straightforward Bell setup, where the settings are generated by local QRNGs after the photon is emitted by the source. How on earth are you going to make the state of the photon depend on the output of the QRNGs?

In any case, I hope that in the future you'll no longer repeat the false statement that superdeterminism is scientific.

While Sabine may decry beauty as a criterion for advancing our understanding of nature's inner workings, I must say her book "Lost in Math" is beautifully written. Because of our week-long oppressive heat wave, reading the book has been slow. But yesterday I found out something from the book I was not previously aware of - the Koide Formula. I immediately googled it and saw that quite a few papers followed on from Yoshio Koide's original discovery.

This empirical relationship, which yields 2/3 from a relatively simple math expression involving the masses of the three charged leptons, is curious in that quark charge comes in increments of 1/3 e, and there are three generations of quarks and leptons. Indeed, this discovery is reminiscent of the era when empirical formulas were devised for black body radiation, prior to Max Planck's correct formulation and revolutionary introduction of the quantum of action. About 40 years passed between when Gustav Kirchhoff first defined black body radiation and Planck's solving of the black body radiation puzzle. Now 37 years have passed since Yoshio Koide's discovery. Perhaps a 21st century Max Planck will come along to illuminate this curious relationship from deeper principles, and break the current deadlock in the Standard Model.

@David Schroeder, I would think a heat wave would make progress on Bee's book faster, rather than slower. It did in my 111F world today. I am up to 1/3 way through "Lost in Math". I am only starting to understand the complex set of factors leading to Bee's passion for understanding and challenging of how physics got to this rather bizarre state it is in now.

Re Koide. I too am fascinated. I have been working on a very physical TOE that goes back 100 years, with first principles only, and symmetry with other nature we see at different scales (orbits, spin, charge, spacetime, etc). I am drilling in top down from a vision unjaded by the last 40 years of physics (I am not in the field). I am not adopting Copenhagen. I am requiring a kinetic or electromagnetic implementation for each characteristic, two fundamental particles, etc. Anyway, it is working out quite well at this point, and it seems to be heading to a physical explanation for Koide.

Superdeterministic theories are deterministic. I am pointing out that your claim that they are not scientific has nothing to do with the additional ingredient (that being the correlations between prepared state and detector) but with the determinism per se. Hence, your argument that superdeterminism is not scientific also applies to all deterministic theories. It is hence absurd. You should no longer make it.

I have no idea why you think I want to violate some bound. You brought up the example of the bound and proclaimed that it can't be violated. When prompted, you indeed managed to recall that this depends on assumptions about the initial state. That's correct. Your own conclusion should therefore demonstrate to you that your argument is simply wrong.

"How on earth are you going to make the state of the photon depend on the output of the QRNGs?"

The whole point of superdeterminism is that they do depend on each other. You don't have to make them depend on each other, they do. They do, by assumption. Please note that I am in no way advocating superdeterminism as a useful interpretation of quantum mechanics, it does have its problems, but the problems are not the ones you (and many others) think they are.

Just because 't Hooft defined his model like he did. There is nothing fundamental about it, I can easily define a non-deterministic "superdeterministic" theory.

"I am pointing out that your claim that they are not scientific has nothing to do with the additional ingredient (that being the correlations between prepared state and detector) but with the determinism per se."

And I am pointing out that you're wrong. I claim that they are not scientific because of the postulated correlations between state and measurement, and that has nothing to do with determinism.

"Hence, your argument that superdeterminism is not scientific also applies to all deterministic theories. It is hence absurd. You should no longer make it."

No it doesn't. Do I need to repeat that classical mechanics is not superdeterministic, and hence cannot violate Bell inequalities?

"I have no idea why you think I want to violate some bound. You brought up the example of the bound and proclaimed that it can't be violated."

You claimed that you can generate any correlation at all. I pointed out that Tsirelson's theorem implies a bound on the correlations that can be generated via quantum mechanics. Therefore your claim is nonsense, and if you are serious about it, you should actually post a paper on the arXiv about how to violate Tsirelson's bound, and revolutionise the field of quantum information.

"When prompted, you indeed managed to recall that this depends on assumptions about the initial state. That's correct. Your own conclusion should therefore demonstrate to you that your argument is simply wrong."

Yes, it depends on assumptions about the initial state which are correct. You cannot violate them. If you seriously believe that you can, please post it on the arXiv. But look at it from the sociological side: do you really think that Tsirelson would waste his time proving a bound if he thought it could be easily violated? And not only Tsirelson, several other people (including me) have worked hard to extend Tsirelson's theorem and characterize precisely the set of correlations that can be generated by quantum mechanics. Do you think we are just crazy people talking about how many angels can dance on a pinhead?

""How on earth are you going to make the state of the photon depend on the output of the QRNGs?"

The whole point of superdeterminism is that they do depend on each other. You don't have to make them depend on each other, they do. They do, by assumption. Please note that I am in no way advocating superdeterminism as a useful interpretation of quantum mechanics, it does have its problems, but the problems are not the ones you (and many others) think they are."

I'm not talking about superdeterminism, I'm talking about reality. You claimed that you could generate any correlation at all by correlating the state of the photon with the detector settings. So you are no longer defending this claim?

"You ivory tower intellectuals must not lose touch with the world of industrial growth and hard currency. It is all very well and good to pursue these high-minded scientific theories, but research grants are expensive. You must justify your existence by providing not only knowledge but concrete and profitable applications as well." CEO Nwabudike Morgan "The Ethics of Greed"

primarily because at the time, I was working desperately to block Enron from my state (successfully, although not difficult to convince coal barons that Enron was selling things we didn't need, albeit for widely disparate reasons) and the game quote reminded me of the things that Enron said to me then. Although I wasn't an academic, but a government regulator.

Ok, so you seem to think that superdeterministic theories are not necessarily deterministic. That's bizarre to say the least.

As to what you said or didn't say, here's the quote

"superdeterminism is plain ridiculous. It goes against the basic notion of science, as it can reproduce not only quantum correlations, but any correlations at all"

I told you (several times now) that any deterministic theory can reproduce any correlation (provided the state is in the configuration space to begin with, of course). This is not specific to superdeterminism. For any deterministic theory you can put the correlation in the initial state, simply by evolving the correlated state back in time.

What you say above is not the problem with superdeterminism.

You now write

"I claim that they are not scientific because of the postulated correlations between state and measurement, and that has nothing to do with determinism."

You always postulate the initial state. There is nothing unscientific about this. Again, you are missing the point.

"Do I need to repeat that classical mechanics is not superdeterministic, and hence cannot violate Bell inequalities?"

You just made the same mistake as you did before. It's easy enough to get an initial state that results in any correlation you want to measure.

"I'm not talking about superdeterminism, I'm talking about reality. You claimed that you could generate any correlation at all by correlating the state of the photon with the detector settings. So you are no longer defending this claim?"

Huh? I never said anything about "generating" correlations. I merely told you that for any correlation that you measure at a final time, you can evolve the state back in time and you will have an initial state that generates the correlation. Hence, there's an initial state that will produce the outcome. You seem to think that you somehow know which initial states are and aren't possible, but that's a postulate. Your whole "argument" comes down to postulating that the correlations which you don't want to be there aren't there. You're putting in what you want to show.

"Ok, so you seem to think that superdeterministic theories are not necessarily deterministic. That's bizarre to say the least."

I don't seem to think this. I know this is the case. That you don't know it only shows that your knowledge of superdeterminism goes little beyond the name. Superdeterminism is about the correlations between state and measurement, and they can exist independently of whether the theory is deterministic. What I have been talking about in the previous posts, the initial state necessary to violate Tsirelson's bound, is precisely a superdeterministic version of quantum mechanics, which is obviously not deterministic.

"I told you (several times now) that any deterministic theory can reproduce any correlation (provided the state is in the configuration space to begin with of course). This is not specific to superdeterminism. For any deterministic theory you can put the correlation in the initial state, simply be evolving the correlated state back in time."

And I told you, several times now, that this is nonsense. These correlations in the initial state would be precisely the superdeterministic correlations between state and measurement. You can't actually prepare such initial states. Are you aware that what you are saying would imply that classical theories can violate Bell inequalities?

"You just made the same mistake as you did before. It's easy enough to get an initial state that results in any correlation you want to measure."

Indeed, you seem to be aware! Do you realise the enormity of this claim? Do you realise that it would contradict decades of research in quantum information theory? Forget Tsirelson's bound, I'm much more interested in your experimental proposal on how to violate Bell's inequalities with classical theories.

"You seem to think that you somehow know which initial states are and aren't possible, but that's a postulate. Your whole "argument" comes down by postulating that the correlations which you don't want to be there aren't there. You're putting in what you want to show."

Indeed, I do know that these states are impossible. If you claim otherwise, please tell me how to prepare them. I'd just like to point out that I'm not the one postulating this. Newtonian mechanics, electrodynamics, relativity, quantum mechanics, all of them were already developed before I was born, and none of them have these ridiculous correlations between state and measurement.

I find Sabine's writing style superb. Her book is not only highly informative, but quite entertaining too; it's a fun read for a layperson like myself. As I make my way through the book I'm also appreciating much more why she (and others) find fault in the path the physics community has chosen to further the Standard Model.

Managed to google your theory-model, and it's quite intriguing. It sounds far more ambitious than my own rather modest efforts. I've only read the abstract, not having opened the link yet. Interestingly, your two fundamental particles share the same names as the particles in a model I conceived back in the 90's, but there the comparison ends.

Without air-conditioning the full brunt of the heat wave was inescapable, and doing any physical or mental activity just made me hotter. It sounds like you live in the desert southwest, where the temp in Phoenix will be 111 today. But down there everybody has AC, so people are forced to stay indoors. Dozens of people in Quebec, hundreds of miles north of me, died from this latest heat wave. Unlike the desert areas our humidity is very high (high 90's %), and combined with our 102 Fahrenheit peak temp during the event made it not just unpleasant, but dangerous healthwise.

"Superdeterminism requires a nonlocal correlation between the prepared state and the detector..."

That quote is from a paper I wrote in 2014. That's just to show how ridiculous your ad-hominem attacks are. But these correlations are a corollary. It's what is required if you want to replace quantum mechanics with a deterministic theory. Of course the theory should then indeed be deterministic. That you think it may not be shows you don't even understand the whole point of the discussion.

As to the rest of your comment. I already told you why what you state is simply wrong. Scroll up to find it.

I am still relatively new in following Sabine, but I think I am starting to gain an appreciation for her approach and writing style: facts, honesty, no deference to ANY ivory tower. I may be projecting to a degree. I am/was like that in my life/career - it can be highly successful, if a bit off-putting to others whose oxen you may gore. The established people being interacted with need to feel psychologically safe, otherwise the turbulence of facts and honesty can make them fear risk to their position, funding, knowledge, etc. It is all much easier if we speak with finesse and diplomacy, isn't it? I could never do that well, nor did I want to, although I am not sure which came first.

I am at the 50% point in "Lost in Math", shortly after Weinberg walks out. I loved how Bee just powered through his attempts to curtail the interview. I see the book, so far, as a wake-up call about the siren call of QFT. I am hoping that the remainder of the book offers a recommendation for the agenda to sail towards more promising land. Interestingly, as an amateur, I have found that physics is a walled garden, where amateurs can peek over, but are not to be interacted with. Yet, if Bee is correct, then amateurs may have a few advantages. We still know what a mirage is and will walk straight toward one in the desert, if that is the most promising path forward. And perhaps because we are not so cowed by the math and history, we are more willing to look for physical mechanisms that could fit the math and data.

I'm located near San Diego and we just got a new A/C. We are going on our third day over 100 F. Looks like it will be in the high 90's all week. Stay cool and hydrated!

I don't see the point of quoting yourself here. Are you trying to convince me that you know what you're talking about? There is nothing nonlocal about superdeterministic correlations. They are produced in the causal past of the experiment, and as such are perfectly local.

"It's what is required if you want to replace quantum mechanics with a deterministic theory."

Indeed. Or nonlocality, but that is another subject.

"Of course the theory should then indeed be deterministic."

That's a non-sequitur. People that assume superdeterminism indeed do that because they want a deterministic theory, but that doesn't imply that superdeterminism itself implies determinism. It is about the correlations between state and measurement, and they can also be postulated in a non-deterministic theory.

"As to the rest of your comment. I already told you why what you state is simply wrong. Scroll up to find it."

No you haven't. You are repeatedly claiming that quantum mechanics (and even deterministic theories!) can produce any correlation you want, simply by evolving the state backwards. I'm claiming that this is nonsense, because this initial state is precisely one that has the superdeterministic correlations, and this cannot be done in reality. I'm then asking you to say how you would actually prepare such a state, and to that you have never responded, presumably because you know it is impossible.

This practice of assuming a probability distribution without any data started long ago, in the 18th century with Bayes' theorem. Sometimes the probabilities are based on statistical data, but more often there's no data. The probabilities are just educated guesses. Mathematicians knew about this problem long ago. Lancelot Hogben FRS wrote a popular math book in the 1930s (I have a reprint on my bookshelf) where he discussed this problem. He said this practice of assuming a probability distribution without data is sheer nonsense. But everybody likes pretending they know something when they really don't. So 80 years later we have the multiverse, naturalness and other wild guesses that are not even wrong.

Yes, I see the need to quote my own papers because you keep making remarks like "You have simply not understood what superdeterminism is" and "your knowledge of superdeterminism goes little beyond the name".

You do this in an attempt to reassure yourself of your greatness, while at the same time you go around declaring that superdeterministic theories may not be deterministic, demonstrating that you don't understand why people are even trying to close the free-will loophole, and now that you don't know in what sense superdeterministic correlations are non-local. They may come about from local interactions (possibly in the future), but they are nonlocal in the sense that Bell used the word.

"You are repeatedly claiming that quantum mechanics (and even deterministic theories!) can produce any correlation you want, simply by evolving the state backwards. I'm claiming that this is nonsense, because this initial state is precisely one that has the superdeterministic correlations, and this cannot be done in reality. I'm then asking you to say how you would actually prepare such a state, and to that you have never responded, presumably because you know it is impossible."

The reason we are having this discussion is that *you* are the one claiming that superdeterminism is "silly," repeating an often-made statement that has no substance. You are the only one who is claiming that something is not possible about superdeterminism. You are the one who should be making an argument. You don't. You merely say it's silly. That's not an argument, that's pathetic.

You have first proclaimed that quantum mechanics, or classical mechanics, and so on cannot produce these or that correlations. I told you why that's wrong. You even agreed on it. But you still didn't get the point. You do not, and no one ever does, "generate" any initial states. The universe is in one state and the initial state of any experiment is whatever it is. Choosing an initial state for the mathematical model is part of the prediction.

Look, that you cannot show these states are impossible is the reason people have made these CMB-triggered tests and so on. (Not that it actually rules out superdeterminism.)

You are shifting the burden of proof. You are the one making this absurd claim that any correlation at all can be produced. It's your job to support your claim by showing how, not my job to prove you wrong. And let me note that you again have failed to do so, presumably because you know it can't be done. Have you even written down the initial state that would be required? Do you understand what it would take to produce such a state?

"Look, that you cannot show these states are impossible is the reason people have made these CMB-triggered tests and so on. (Not that it actually rules out superdeterminism.)"

The purpose of this test was to push back the time when these correlations must have been produced close to the Big Bang. It is about restricting superdeterministic models, not testing quantum mechanics. That quantum mechanics does not postulate superdeterministic correlations is blindingly obvious.

"Yes, I see the need to quote my own papers because you keep making remarks like "You have simply not understood what superdeterminism is" and "your knowledge of superdeterminism goes little beyond the name"."

I see, so the point was not the precise sentence you quoted, but the fact that you wrote a paper about superdeterminism. In this case a link to the actual paper would have been more helpful. I presume you mean arXiv:1401.0286, or maybe arXiv:1105.4326? I'm not impressed. You are claiming to test superdeterminism, which is fundamentally impossible, as you admit yourself, and instead propose a test of a contrived model you concocted yourself. It will come as no surprise to me if it turns out that nobody bothered to do your experiment.

"You do this in the attempt of reassuring your greatness to yourself while at the same time you go around declaring that superdeterministic theories may not be deterministic, demonstrating that you don't understand why people are even trying to close the free-will loophole, and now that you don't know in what sense superdeterministic correlations are non-local. They may be come about from local interactions (possibly in the future), but hey are nonlocal in the sense that Bell used the word."

I'll let the insult pass. Of course I understand why people are interested in the "free-will loophole" (which is fundamentally impossible to close, by the way), and that the motivation for assuming superdeterminism is a desperate attempt to retain determinism and locality. This doesn't change the fact that the assumption of superdeterminism itself doesn't imply determinism, which you have already understood but obstinately ignore. And the correlations are not nonlocal in the sense that Bell used the word. That applies to correlations of the form p(ab|xy). The correlations p(\lambda|xy) are perfectly local, even though they can be used to produce Bell-nonlocal correlations. You are just being sloppy with language.

"Do I need to repeat that classical mechanics is not superdeterministic, and hence cannot violate Bell inequalities?"

I think that is not true. Some classical field theories, like classical electromagnetism (CM), are superdeterministic, as they do not allow the states of distant systems to be independent.

In order to justify this let us describe a Bell test from the point of view of CM.

So, we have three systems:

1. S (the source of entangled particles) - for CM just a system of charged particles (electrons and quarks)
2. A (Alice and her detector and whatever she uses to set that detector) - another system of charged particles (electrons and quarks)
3. B (Bob and his detector and whatever he uses to set that detector) - another system of charged particles (electrons and quarks)

Now, in order to determine the state of each system we need to calculate the electric and magnetic fields acting at the location of each particle that is contained in that system. It will be a very complicated function taking the positions and momenta of all particles in the combined system (S+A+B) as parameters.

So:

- the state of S will be a function of the positions/momenta of S+A+B
- the state of A will be a function of the positions/momenta of S+A+B
- the state of B will be a function of the positions/momenta of S+A+B

So, a change in the state of A implies a change in the states of B and S in order to maintain consistency with the laws of CM. A change of B implies a change of A and S and a change of S implies a change of A and B. Or, to put it differently, a change of A without a corresponding change of B and S leads to states that are incompatible according to CM, therefore the states are not independent (statistical independence requires the absence of incompatible states).
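To make this dependence concrete, here is a minimal electrostatic sketch (a toy stand-in for the full Maxwell-Lorentz dynamics, with one point charge standing in for each of S, A, and B; all names and numbers are illustrative): the field at S is a function of the positions of all charges, so displacing a single particle in A changes the state of S.

```python
import numpy as np

def coulomb_field(positions, charges, at):
    """Electrostatic field at point `at` from all given charges (units with k = 1)."""
    E = np.zeros(3)
    for p, q in zip(positions, charges):
        r = at - p
        E += q * r / np.linalg.norm(r) ** 3
    return E

# Toy configuration: S at the origin, one charge each for A and B.
pos = np.array([[0.0, 0.0, 0.0],    # S
                [5.0, 0.0, 0.0],    # A
                [-5.0, 0.0, 0.0]])  # B
q = np.array([1.0, 1.0, 1.0])

E_at_S = coulomb_field(pos[1:], q[1:], pos[0])

# Move a single particle in A: the field at S changes too.
pos_moved = pos.copy()
pos_moved[1] += np.array([0.1, 0.0, 0.0])
E_at_S_moved = coulomb_field(pos_moved[1:], q[1:], pos_moved[0])

print(not np.allclose(E_at_S, E_at_S_moved))  # True: S "feels" A's configuration
```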

In other words, CM, if taken seriously and not subjected to the normal approximations (macroscopic objects are electrically neutral so there is no EM interaction) is a superdeterministic theory.

It is true that in general changing some arrangement of charged particles will create some stray electromagnetic fields that will influence the position of other charged particles, but this does not make classical electrodynamics a superdeterministic theory.

First of all, in the loophole-free Bell tests, the generation of the detector settings was done with a space-like separation to the generation of the photon, so by relativity there couldn't possibly be any stray electromagnetic fields perturbing each other.

This need not bother us, though, as since CM is a deterministic theory, we could just locate the change in the states of S, A, and B in the intersection of their past light cones, so that relativity doesn't pose any obstacle there.

To generate the settings, we cannot use a quantum random number generator, as we are in CM, so let's use a pseudorandom number generator instead. So in this causal past we have some seeds for the pseudorandom number generators, that will determine the settings of Alice and Bob. But here the weirdness start. S must somehow have access to these seeds, even though they are in Alice and Bob's computers, and the electromagnetic fields produced by these seeds outside the computers are extremely weak. Even more, CM allows for arbitrarily good shielding of electromagnetic fields, so we could put S inside a Farady cage, so that the fields that come from Alice and Bob's computers are effectively zero.
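The point about seeds can be made concrete with a short sketch (Python; the function name is just for illustration): a pseudorandom number generator is a deterministic function of its seed, so the entire settings sequence is fixed once the seed is fixed.

```python
import random

def settings_from_seed(seed, n):
    """Detector settings from a pseudorandom generator: fully determined by the seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

# Anyone (or anything) holding the seed can reproduce the settings exactly.
print(settings_from_seed(42, 8) == settings_from_seed(42, 8))  # True
```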

But let's say that somehow S does have access to the seeds. Now it must somehow calculate how the seeds will produce the sequences of 0s and 1s that Alice and Bob are going to use, even though as far as we know S is just an antenna, not a classical computer.

But let's say that somehow S has access to the sequences of 0s and 1s. Now it must produce four different electromagnetic waves, depending on which combination of settings Alice and Bob are going to measure in each round: 00, 01, 10, or 11. Which is rather weird, as in a Bell test the source is configured to always emit the same state, and in the CM case as far as we can tell the antenna is in a fixed configuration, always emitting the same electromagnetic wave. But this is what it takes to make CM a superdeterministic theory.

It gets even worse if we want this superdeterministic CM to reproduce a violation of a Bell inequality compatible with quantum mechanics. Now it needs to emit four different electromagnetic waves that would precisely reproduce the quantum statistics, even though in principle it could produce any statistics at all. And as far as we know this is just an antenna, that neither knows quantum mechanics nor that we are doing a Bell test.

"You are shifting the burden of proof. You are the one making this absurd claim that any correlation at all can be produced."

You continue to assign statements to me I have never made. And that's despite me having told you several times I never said anything like that.

We are having this exchange because YOU came here and made a claim: that superdeterminism is "silly". You then attempted to give a reason for it. I told you why this is wrong. Good, that should settle the case. Now why do you claim I have to "prove" something? I wasn't the one making the claim.

As to your bizarre statement that I do not know how to "produce the initial state" - I don't produce initial states, neither do you, unless you think you are God. You merely choose an initial state as part of your mathematical model to make a prediction. I have told you how to choose the initial state so you can predict whatever the correlation is. You evidently still haven't understood that you can always do this with any deterministic theory (as long as the state is in the configuration space to begin with), and hence this is not a criterion by which you can decide whether the theory is scientific (or correct, for that matter).
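The backwards-evolution argument works for any invertible evolution; here is a minimal sketch (a random orthogonal matrix stands in for a Hamiltonian evolution, and the chosen state is just an example): pick any final state you want to "explain", evolve it backwards, and the resulting initial state reproduces it exactly when evolved forwards.

```python
import numpy as np

# Any invertible (deterministic, reversible) one-step evolution will do;
# an orthogonal matrix plays the role of a Hamiltonian (unitary) evolution.
rng = np.random.default_rng(0)
M = np.linalg.qr(rng.normal(size=(4, 4)))[0]  # orthogonal, hence invertible

# Pick ANY final state you want to "explain", e.g. a perfectly correlated one.
psi_final = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Evolve it backwards to get the required initial state...
psi_initial = np.linalg.inv(M) @ psi_final

# ...then forwards: the chosen correlation is reproduced exactly.
print(np.allclose(M @ psi_initial, psi_final))  # True
```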

"I see, so the point was not the precise sentence you quoted, but the fact that you wrote a paper about superdeterminism..."

You made false statements about what I know and do not know. I used a reference to a published record to show your statement is false. Stop making false statements.

" You are claiming to test superdeterminism, which is fundamentally impossible, as you admit yourself, and instead propose a test of a contrived model you concocted yourself."

You cannot test concepts. You can only test models. I note in passing that you attempt to publicly degrade my work without even a hint of a scientific argument.

So now you complain that on the one hand superdeterminism is "a desperate attempt to retain determinism and locality" but on the other hand it may not be deterministic. Do you even notice that you constantly contradict yourself? Best,

I just want to quote this from your reply to Andrei because it shows where your argument goes wrong:

"It gets even worse if we want this superdeterministic CM to reproduce a violation of a Bell inequality compatible with quantum mechanics. Now it needs to emit four different electromagnetic waves that would precisely reproduce the quantum statistics, even though in principle it could produce any statistics at all. And as far as we know this is just an antenna, that neither knows quantum mechanics nor that we are doing a Bell test.

This is why I say that superdeterminism is just plain ridiculous."

You should try to quantify your statement about statistics. You will notice quickly that you can't do this without making assumptions about the probabilities of initial states which end up being assumptions about the initial state of the universe. It's the same mistake you already made above. You believe you know something that you cannot know. Best,

"You continue to assign statements to me I have never made. And that's despite me having told you several times I never said anything like that."

You have never said anything like that? Then what did you mean with these sentences:

"Any theory with a Hamiltonian evolution can produce any correlation. All you have to do is take the present state and evolve it backwards. This will give you an initial state from which you get whatever you observe.", "As I explained above, take any state Psi_e with any correlation and evolve it back to Psi_i. That state Psi_i evolved forward will give you the correlation you say isn't possible. Works with any Hamiltonian evolution.", "I told you (several times now) that any deterministic theory can reproduce any correlation (provided the state is in the configuration space to begin with of course). This is not specific to superdeterminism. For any deterministic theory you can put the correlation in the initial state, simply be evolving the correlated state back in time.", "It's easy enough to get an initial state that results in any correlation you want to measure.", "I merely told you that for any correlation that you measure at a final time, you can evolve the state back in time and you will have an initial state that generates the correlation. Hence, there's an initial state that will produce the outcome.", "You have first proclaimed that quantum mechanics, or classical mechanics, and so on cannot produce these or that correlations. I told you why that's wrong."

Are you claiming that these correlations can be produced or not? What I am interested in is in a real laboratory experiment, are you claiming that they can be produced or not?

If you are claiming that they can be produced, then you are wrong. Show how to do it or stop making these false statements. If you are not claiming this, then I don't know what you are talking about, but my argument in any case holds: a superdeterministic theory can produce any correlation at all, but in reality you cannot.

"As to your bizarre statement that I do not know how to "produce the initial state" - I don't produce initial states, neither do you, unless you think you are God."

I definitely can produce initial states. With my divine powers I can put a BBO in front of a laser to make an entangled state, and put waveplates in the path of the photons to change their polarisation.

I meant with these sentences exactly what they say. You want them to mean something else. I don't know why.

I find it outright bizarre that you quote a whole bunch of sentences that supposedly show I said something I didn't, but none of the sentences (needless to say) contains what I didn't say. What do you hope to achieve with that? That a reader who can't read accidentally thinks you make sense?

"I definitely can produce initial states. With my divine powers I can put a BBO in front of a laser to make an entangled state, and put waveplates in the path of the photons to change their polarisation."

All of which was encoded in the initial state of the universe, some billion years before you came around to proclaim you had something to do with it.

Look, I don't want to argue with you about the meaning of the word "produce" - that seems a waste of time. Possibly I use it in a different way than you do. You seem to use it to mean that you (or the experimenter) are part of the configuration. Or something like that; your statements don't make much sense to me, sorry.

So let me assume that's what you mean. If you "produce" initial states in the sense of being part of the experimental arrangement, you don't have to "produce" correlations. According to superdeterminism, the correlations are just there - have been there since the beginning of time. I understand you don't like that (as do most other people, except possibly 't Hooft). But it takes more than saying that's "silly" to rule out the option.

If you want to rule out superdeterminism, you have to show these correlations are *not* there. As I have told you above, it's trivial to see that these initial states do exist (contrary to what you claimed earlier). You aren't delivering an argument for why these are not the right explanation for our observations.

And to repeat this once again, I am not advocating that superdeterminism is correct, I am merely saying that the argument you have brought up does not work against it. Best,

"You should try to quantify your statement about statistics. You will notice quickly that you can't do this without making assumptions about the probabilities of initial states which end up being assumptions about the initial state of the universe. It's the same mistake you already made above. You believe you know something that you cannot know."

I don't think you are actually interested in the numbers, but I'll provide them anyway as a matter of courtesy: let's say that the antenna is trying to mimic the statistics of the standard Bell scenario, where Alice and Bob share (|00>+|11>)/sqrt(2), and are measuring the observables A0 = X, A1 = Z, B0 = (X+Z)/sqrt(2), and B1 = (X-Z)/sqrt(2). The probabilities for the setting 00 are p(00) = (1+1/sqrt(2))/4, p(01) = (1-1/sqrt(2))/4, p(10) = (1-1/sqrt(2))/4, and p(11) = (1+1/sqrt(2))/4, so the source must be able to generate four different states that produce these outcomes.

The source then generates a number between 0 and 1 using MATLAB's rand(), and if the result is between 0 and (1+1/sqrt(2))/4 it generates the electromagnetic wave that will produce the result 00. If the result is between (1+1/sqrt(2))/4 and 1/2 it generates the electromagnetic wave that will produce the result 01. If the result is between 1/2 and (3-1/sqrt(2))/4 it generates the electromagnetic wave that will produce the result 10. If the result is between (3-1/sqrt(2))/4 and 1 it generates the electromagnetic wave that will produce the result 11.

I'm not going to describe what to do for the other settings, as it should be rather obvious. But where did I need to make assumptions about the probabilities of the initial state of the universe? We're not generating universes here, we are just describing how a deterministic theory could mimic Bell correlations. Or are you worried about how to seed MATLAB's rand()? By default it just uses the seed "0", no probabilities here. If you want slightly more interesting statistics you could seed it with the local time, but you can't get much better than that. We are talking about a deterministic theory after all.
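For reference, the numbers above can be checked directly; here is a short sketch (assuming the state and observables quoted in the comment): it reproduces E(A0,B0) = 1/sqrt(2), the probabilities (1 ± 1/sqrt(2))/4, and the CHSH value 2*sqrt(2).

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

A = [X, Z]                                           # Alice: A0, A1
B = [(X + Z) / np.sqrt(2), (X - Z) / np.sqrt(2)]     # Bob: B0, B1

def correlator(Ax, By):
    """E(Ax, By) = <psi| Ax (x) By |psi> for the shared state above."""
    return psi @ np.kron(Ax, By) @ psi

# CHSH combination: quantum mechanics saturates the Tsirelson bound 2*sqrt(2).
S = (correlator(A[0], B[0]) + correlator(A[0], B[1])
     + correlator(A[1], B[0]) - correlator(A[1], B[1]))
print(round(S, 6))  # 2.828427

# Outcome probabilities for setting pair (A0, B0): p(ab) = (1 + a*b*E)/4,
# so equal outcomes have probability (1 + 1/sqrt(2))/4 each.
E00 = correlator(A[0], B[0])
p = [(1 + E00) / 4, (1 - E00) / 4, (1 - E00) / 4, (1 + E00) / 4]  # ++, +-, -+, --
print(round(p[0], 6))  # 0.426777

# A deterministic source could pick an outcome bin with a fixed-seed PRNG
# (standing in here for MATLAB's rand() seeded with 0, as described above):
rng = np.random.default_rng(0)
bin_index = int(np.searchsorted(np.cumsum(p), rng.random()))
```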

Your picture of superdeterminism is indeed ridiculous, but it is nothing like I am arguing for.

When I am claiming that classical electromagnetism is superdeterministic I am not speaking about a new theory, but about the same theory of Maxwell and Lorentz. So, let us agree on the following points:

1. The experimental setup has to be translated into the "language" of the theory. So, there are no computers, random number generators (quantum or classical), cats, free-willed humans, etc. because there is no way we could plug such entities into the theory's equations. So, you can have everything you want as a device to set up those detectors, but as long as it is based on atoms I will describe it as a system of electrons and quarks, regardless of its macroscopic appearance. So, a human brain, a radioactive material, a computer are nothing but different configurations of electrons and quarks.

2. There is no such thing as a "stray field". The only things that exist are:

- position and momenta of charged particles
- electric and magnetic fields produced by those charged particles
- effects (forces) exerted by the electric and magnetic fields upon the charged particles

The only information that is transferred between A, B and S is in the magnitude of those fields.

So, the evolution of your random number generator is fundamentally described as the motion of its constituent particles, and all the relevant information is available at S in the form of the magnitude of electric and magnetic fields generated by them. This is implied by the theory, not postulated by me.

" CM allows for arbitrarily good shielding of electromagnetic fields, so we could put S inside a Farady cage, so that the fields that come from Alice and Bob's computers are effectively zero."

This only works for a uniform (continuous) charge distribution. It so happens that in our universe charge is quantized, so there is no way you can get an instantiation of that otherwise correct mathematical result.

"It gets even worse if we want this superdeterministic CM to reproduce a violation of a Bell inequality compatible with quantum mechanics. Now it needs to emit four different electromagnetic waves that would precisely reproduce the quantum statistics, even though in principle it could produce any statistics at all. And as far as we know this is just an antenna, that neither knows quantum mechanics nor that we are doing a Bell test."

My claim was that CM is superdeterministic in the sense that the assumption of statistical independence required by Bell's theorem contradicts the formalism of the theory. If we agree on this point we may discuss further if the formalism of QM can be recovered.

I felt the need to quote your own words because there is confusion about what it is that you are claiming. Saying that "I meant with these sentences exactly what they say" is not helpful. It would be helpful if you gave me a straight answer to my question: "Are you claiming that these correlations can be produced or not? What I am interested in is in a real laboratory experiment, are you claiming that they can be produced or not?"

I'm not asking about whether they can be produced by a superdeterministic theory, it is obvious that they can, I'm asking about reality. Or quantum mechanics, or classical electrodynamics, as you wish.

"Look, I don't want to argue with you about the meaning of the word "produce" - that seems a waste of time. Possibly I use it in a different way than you do. You seem to use it to mean that you (or the experimenter) is part of the configuration. Or something like that, your statements don't make much sense to me, sorry."

I mean produce in the sense that they are routinely produced in Bell tests. Hopefully there is no confusion about that.

"All of which was encoded in the initial state of the universe, some billion years before you came around to proclaim you had something to do with it."

So there is no free will, and no quantum randomness? Is that what you are claiming? Again, I'm talking about reality, not superdeterminism.

"If you want to rule out superdeterminism, you have to show these correlations are *not* there. As I have told you above, it's trivial to see that these initial states do exist (contrary to what you claimed earlier). You aren't delivering an argument for why these are not the right explanation for our observations."

Please do write down the quantum state that you think can violate Tsirelson's bound, hopefully it will then be clear what is wrong with it.

And for the umpteenth time: my argument is that superdeterminism can explain any observation at all, and is therefore not a scientific explanation. You may disagree with the argument, but you don't get to claim I'm not delivering an argument.
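To illustrate what "can explain any observation at all" means here, consider this toy sketch (the particular correlation is my own choice for illustration): a deterministic record in which the hidden outcomes are conspiratorially correlated with the settings reproduces even PR-box statistics, CHSH = 4, beyond anything quantum mechanics allows.

```python
import random

# A "superdeterministic" run: the initial data already contain both the
# settings (x, y) and outcomes (a, b), chosen to satisfy a XOR b = x AND y,
# i.e. perfect PR-box correlations, which give CHSH = 4 > 2*sqrt(2).
rng = random.Random(0)
record = []
for _ in range(10_000):
    x, y = rng.randint(0, 1), rng.randint(0, 1)
    a = rng.randint(0, 1)
    b = a ^ (x & y)          # the conspiracy: b is fixed given x, y, and a
    record.append((x, y, a, b))

def E(x, y):
    """Correlator E(x, y), with bit outcomes mapped to +/-1."""
    vals = [(1 - 2 * a) * (1 - 2 * b)
            for (xx, yy, a, b) in record if (xx, yy) == (x, y)]
    return sum(vals) / len(vals)

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S)  # 4.0 for this record
```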

Sabine, to exclude dependence on «collecting data for the likelihood of getting a specific set of parameters» and to form a natural science, it is not necessary to build the desired "picture" of the universe as a "mosaic" of individual known facts and generally accepted generalizations. Instead, one should build the picture of the universe from the pieces of a "puzzle", each of which consists of a set of interrelated facts and generalizations based on analogies and natural principles. The shape of each "puzzle" piece is the set of related facts in which a fragment of the universe is reflected, whether or not it is understandable at first sight. Then naturalness, elegance, and beauty become visible in the overall "picture" of the universe, and we can search purposefully for the missing pieces. In this way the dependence of research findings on the probability of obtaining a certain set of parameters can be minimized. To use only natural principles, a strict selection against non-natural properties must be introduced. For example, we all know that matter and fields in the universe cannot have ideal properties. Nevertheless, we use ideal properties for the elegance of our laws, often without even realizing it, or simply because we do not know the real properties and laws. Reliance on ideal properties is the main criterion of the unnaturalness of a theory, and it is the "sieve" through which all theories and generalizations must be sifted. Many insufficiently substantiated generalizations that confuse our thinking and create an unnatural science would then be eliminated. But, unfortunately, it is precisely the statements left after this sifting that get labeled a "conspiracy".

Supporters of superdeterminism understand that the need for this sieving has long since matured, because natural science has become unnatural. One person knows how to do something, another knows how not to do it, and a third knows how to express it in the abstractions of mathematics. The theory created by each such person alone is erroneous in its own way, but the probability of creating a superdeterministic theory increases significantly when people with different ways of thinking are involved. Therefore, our collective work is needed to create a realistic and natural theory: a theory without alternatives (in Einstein's sense) and a superdeterministic one (in Bell's sense).

As much as I enjoy the schadenfreude of watching two bare-knuckle fighters, I have to admit that I haven't invested the time or the mental energy to understand either of your arguments, because I think it is a moot point. Here is why:

Bell said, “In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote.” - Wikipedia

Let’s unpack the meaning of the phrase, “without changing the statistical predictions“.

I understand this to mean ‘without changing Heisenberg’s uncertainty and the Copenhagen interpretation.’

Let’s agree that Heisenberg’s uncertainty is correct.

So there is another option to super-determinism. Copenhagen could be wrong.

If Copenhagen is false, then there is another mechanism for uncertainty.

And this reduces to the fact that transfers of momentum and energy are lossless.

The losslessness of these transfers leads directly to the emergent laws of conservation.

To make a measurement, some energy has to be transferred. Energy from a photon for example.

i.e., the act of measuring position changes momentum and vice versa.

I think there is a mathematical proof here based on perhaps Maxwell’s equations, but I am not yet “Lost in Math” on my journey, and I don’t plan to be either.

"It is true that in general changing some arrangement of charged particles will create some stray electromagnetic fields that will influence the position of other charged particles, but this does not make classical electrodynamics a superdeterministic theory."

I do not know what "stray" fields are supposed to be. The beables of CM are the charges and the electric and magnetic field vectors. And we have the folowing rules:

- a charge generates an electric field of unlimited range
- a moving charge generates a magnetic field of unlimited range
- a changing magnetic field generates an electric field
- a changing electric field generates a magnetic field
- an electric field exerts a force upon a charged particle in the direction of the field
- a magnetic field exerts a force upon a moving charged particle perpendicular to the direction of the field

So, not only "changing some arrangements of charged particles" will create some "stray" fields. Each electron and each quark that is part of the system A, B or S creates electric and magnetic fields and those fields determine the motion of each electron and quark in the systems A, B and S.

"First of all, in the loophole-free Bell tests, the generation of the detector settings was done with a space-like separation to the generation of the photon, so by relativity there couldn't possible be any stray electromagnetic fields perturbing each other.

This does not need bother us, though, as since CM is a deterministic theory, we could just locate the change in the states of S, A, and B in the intersection of their past light cones, so that relativity doesn't pose any obstacle there."

Sure, Maxwell's theory is fully relativistic, so there is no issue here. The distances or relative speeds between A, B, and S are irrelevant for my argument.

"To generate the settings, we cannot use a quantum random number generator, as we are in CM"

Here you beg the question. My point is that CM could explain QM so, be my guest, use whatever random number generator (RNG) you want.

"So in this causal past we have some seeds for the pseudorandom number generators, that will determine the settings of Alice and Bob. But here the weirdness start. S must somehow have access to these seeds, even though they are in Alice and Bob's computers, and the electromagnetic fields produced by these seeds outside the computers are extremely weak."

CM does not care about the macroscopic appearance of your RNG. It only deals with the fundamental entities, the electrons and quarks that your computer is made of. S does not "know" about the seeds per se, but it does know about the position/momenta of the particles that your computer, including its memory is made of. That information is contained in the electric and magnetic fields at S that are generated by A and B.

"Even more, CM allows for arbitrarily good shielding of electromagnetic fields, so we could put S inside a Farady cage, so that the fields that come from Alice and Bob's computers are effectively zero."

In principle, yes, shielding should be possible if you can uniformly spread a charge over a surface. In our universe, however, charge is quantized so it cannot be done.

"But let's say that somehow S does have access to the seeds. Now it must somehow calculate how the seeds will produce the sequences of 0s and 1s that Alice and Bob are going to use, even though as far as we know S is just an antenna, not a classical computer.

But let's say that somehow S has access to the sequences of 0s and 1s. Now it must produce four different electromagnetic waves, depending on which combination of settings Alice and Bob are going to measure in each round: 00, 01, 10, or 11. Which is rather weird, as in a Bell test the source is configured to always emit the same state, and in the CM case as far as we can tell the antenna is in a fixed configuration, always emitting the same electromagnetic wave. But this is what it takes to make CM a superdeterministic theory."

See above.

"It gets even worse if we want this superdeterministic CM to reproduce a violation of a Bell inequality compatible with quantum mechanics. Now it needs to emit four different electromagnetic waves that would precisely reproduce the quantum statistics, even though in principle it could produce any statistics at all. And as far as we know this is just an antenna, that neither knows quantum mechanics nor that we are doing a Bell test."

Let's for the moment stick to my original claim that CM is superdeterministic. After we agree on that, meaning that CM cannot be ruled out by Bell, we can shift to the more radical claim that it actually can provide an explanation of the full QM formalism.

Andrei

PS I have posted another answer but it has not been uploaded. I am not sure if it was a problem with my browser, or our host did not have the time, but, anyway, I wrote another one.

I have some issues with the comment feature. In addition I am traveling, so it just takes me a while to get around to have a look at the comments. It doesn't help that this is a phase in which I get a lot of spam comments. This happens periodically, then Google updates their filter and it gets better for a while, then they come back, etc. This is just to say, sorry for the delay, it has nothing to do with you. Best,

"It would be helpful if you give me a straight answer to my question "Are you claiming that these correlations can be produced or not? What I am interested in is in a real laboratory experiment, are you claiming that they can be produced or not?"

I am saying that you do not produce correlations, and neither do I, nor anyone else. Correlations either exist or they don't. You cannot "produce" correlations any more than you can change the initial conditions of the universe. The answer to your question is hence: no, you cannot produce them. But that's not the relevant question. The relevant question is whether you can measure them. To which the answer is obviously yes. Indeed, you do measure them.

Regarding your statistics: first, superdeterministic theories have hidden variables, and these variables seem to be missing from your elaboration. Having said this, you write

"The source then generates a number between 0 and 1 using MATLAB's rand()..."

No it doesn't. Superdeterminism is deterministic. It doesn't produce random numbers. The numbers have some value, that generically depends on mentioned hidden variables.

"where did I need to make assumptions about the probabilities of the initial state of the universe?"

I find it a bit offensive that you ask me to quantify my statistics and do not even read what I wrote. MATLAB's rand() is a pseudorandom number generator, and as such perfectly deterministic. This is why I was discussing how to seed it.

"First, superdeterministic theories have hidden variables, the variables seem to be missing in your elaboration."

This is just nonsense.

"I am saying that you do not produce correlations, neither do I, or anyone else. Correlations either exist or they don't. You can not "produce" correlations any more than you can change the initial conditions of the universe. The answer to your question is hence, no you cannot produce them. But that's not the relevant question. The relevant question is whether you can measure them. To which the answer is obviously yes. Indeed, you do measure them."

That's far from a straight answer. So you are claiming that the correlations are either already there or they're not, and the only thing we can do is passively measure them? So no free will, no quantum randomness, no hope to set up our own experiments?

But anyway, you keep avoiding the most important point. You and I agree that there is actually a quantum state that can generate correlations violating the Tsirelson bound. I claim that this state doesn't exist in Nature, whereas you claim that it does. Please just write down the quantum state you have in mind so that we can finish this discussion.

"I don't think you are actually interested in the numbers, but I'll provide them anyway as a matter of courtesy"

well, actually, providing something to someone that you believe is not interested in it is not courtesy but arrogance, patronizing behavior. (Sorry, just for the sake of precision, or because I'm a dick.)

"What you are claiming is that the correlations are already there or not, and the only thing we can do is passively measure them? So no free will, no quantum randomness,"

Exactly.

" no hope to setup our own experiments?"

Depends on just exactly how you define "setting up", but I don't think that's an insightful discussion.

"But anyway, you keep avoiding the most important point. You and I agree that there is actually a quantum state that can generate correlations violating Tsirelson bound. I claim that this state doesn't exist in Nature, whereas you claim that it does."

I never claimed any such thing. I merely said that you haven't provided a reason for why it doesn't exist. I did this (as I explained a few times now) to show that your argument is logically wrong.

As to the RAND, I was assuming you want this to stand in for a fundamentally random process. In any case, this just moves the question to why you think this is the right pseudo-random sequence. Best,

"I never claimed any such thing. I merely said that you haven't provided a reason for why it doesn't exist. I did this (as I explained a few times now) to show that your argument is logically wrong."

You definitely claimed several times that from any correlation at all, we can just evolve back the state displaying this correlation to get an initial state that will produce it. This is true. What I'm asking you is to write down a quantum state that will generate a correlation violating Tsirelson's bound. It is not hard to do that; I've just written one on a piece of paper in front of me, took me all of five minutes. As soon as you actually propose a state, I can tell you why it doesn't exist in Nature.

"Depends on just exactly how you define "setting up", but I don't think that's a insightful discussion."

I don't think there is any difficulty with that. Just like we set up our Bell tests. You know, using our illusion of free will, preparing a laser, an SPDC, the QRNGs that we deliriously believe to be producing random numbers to orient our detectors. Is there really any trouble here? My question is whether you are claiming that in such a misguided experiment we will be able to observe any correlation at all.

You seem to have a hard time parsing what I am saying. The states "exist" in a mathematical sense. That does not imply that I claim it is likely to exist in reality, which is an entirely different thing. The answer to this depends on the theory you are considering. Let me repeat this once again that *you* were the one who brought up this whatever bound and the claim that it can't be violated.

Also what the state looks like depends on the time-evolution operator which depends on the theory that you are considering. There just is no general answer to your question. Also, why the heck do you want me to prove a point that you were trying to make?

But, yeah, sure, go ahead, tell us why you think you know the backward-evolved state doesn't "exist" in quantum mechanics. I'm not sure why you think it's relevant, as we were discussing superdeterminism, or so I thought we were. Also, I hope you will manage to do that without making assumptions about the initial state of the universe.

"I don't think there is any difficulty with that."

I did not say there is a "difficulty" with that. Look, you seem to be unable to correctly read a single sentence that I write. It has never before happened to me that I had to endure an argument with someone who so constantly misquoted everything I wrote. Stop it. It's not helping your case. Best,

You were the one claiming that any correlation at all can be produced. I'm trying to make you back up your claims with some actual calculations, or at least make them precise.

"The states "exist" in a mathematical sense. That does not imply that I claim it is likely to exist in reality, which is an entirely different thing. The answer to this depends on the theory you are considering."

So you are not claiming that they are likely to exist in reality, and there is still no answer about whether any correlation at all can be produced in reality? The theory I'm considering here is quantum mechanics.

"Let me repeat this once again that *you* were the one who brought up this whatever bound and the claim that it can't be violated."

Well, forget the bound then; just give me a general recipe for calculating the initial state that will produce any given correlation. I'm just trying to make your life easier here, as a single state and correlation that violates Tsirelson's bound is enough to make the point. But I nevertheless find it shocking that you don't seem to know what Tsirelson's bound is, yet believe that you are qualified to write about superdeterminism and Bell inequalities.

"Also what the state looks like depends on the time-evolution operator which depends on the theory that you are considering. There just is no general answer to your question. Also, why the heck do you want me to prove a point that you were trying to make?"

The time-evolution operator doesn't really matter, as long as it is local. Just choose the simplest one that does the job. I'm not asking you for a general answer, just a single example. And again, no shifting the burden of proof. You were the one with the argument of reversing the time evolution, and you were the one claiming that any correlation at all can be produced.

Sabine, this is your blog. Mateus came here with counter arguments. You made your point several posts ago. I am not a physicist, but I comprehend your arguments. Just leave off responding to Mateus. It's not worth your effort.

This is slightly off topic, but I'd like to point out that you cannot always evolve back in time from an arbitrary final state. It depends on the equations. For some, the backward-in-time initial value problem is ill posed and may not have any solutions. This is true for parabolic-type equations. So saying "pick any state, evolve it back in time, and use that as your starting point; this way you can get to whatever final state you want" sometimes doesn't make sense.

"You were the one claiming that any correlation at all can be produced."

At this point I can only conclude you want to deliberately misquote me in a desperate attempt to win an argument that's unwinnable. What I have said several times in various forms, to quote myself, is "any deterministic theory can reproduce any correlation (provided the state is in the configuration space to begin with of course)". You continue to want to turn this into a statement about what some experimenter can do in their lab. I never said anything about this. You did.

"So you are not claiming that they are likely to exist in reality, and there is still no answer about whether any correlation at all can be produced in reality?"

No, I never said anything about whether any state exists in reality. You did. I have asked you several times to justify your claim. You didn't. How often do I have to reiterate this so that you understand it?

"just give me a general recipe for calculating the initial state that will produce any given correlation"

I already did this. For any theory with a reversible time-evolution take whatever endstate you want and evolve it back in time. How hard can it be to understand?

(As Space Time says, there are some pathological cases where this doesn't work, but I don't see how that's relevant here.)

"As soon as you write down the damn state I'll do it."

|\Psi>

Hope that makes you happy.

Btw, I am about to take tomr's advice. You are not being sincere in your argument and you are wasting my time. I have better things to do. It's clear that you cannot back up your (unoriginal) claim that superdeterminism is "silly". (Unsurprisingly so, because it's wrong.)

"At this point I can only conclude you want to deliberately misquote me in a desperate attempt to win an argument that's unwinnable. What I have said several times in various forms, to quote myself, is "any deterministic theory can reproduce any correlation (provided the state is in the configuration space to begin with of course)". You continue to want to turn this into a statement about what some experimenter can do in their lab. I never said anything about this. You did."

I'm desperately trying to get any precise claim out of you. In my last 10 comments or so I have been asking what it is that you are claiming. So please, can you finally tell me whether you are claiming that some experimenter in their lab can produce any correlation they want? This is the only thing that matters. It is obvious that mathematically you can get a state that will produce any correlation you want. The problem is that these states are never going to show up in a real Bell test, or any similar experiment designed to test correlations.

If you are not claiming that these correlations can be produced in a real experiment, then my argument against superdeterminism stands.

"I have asked you several times to justify your claim. You didn't. How often do I have to reiterate this so that you understand it?"

And I have said, several times, that I will do so as soon as you write down the relevant quantum state. It is not meaningful for me to propose a state only to shoot it down afterwards; you can always claim that you had something else in mind, that some other thing might do the job. I don't want to put words in your mouth, as you already accuse me of doing. Just to be clear: I know exactly which kind of quantum state is necessary to produce these correlations, and I know exactly what is the problem with it.

""As soon as you write down the damn state I'll do it."

|\Psi>

Hope that makes you happy."

No, it doesn't. Are you taking this at all seriously? Do you actually know what kind of state is necessary? I find it rather irresponsible to base your whole argument on the existence of some state that you don't seem to understand.

"You are not being sincere in your argument and you are wasting my time."

Now what do you base this insult on? I am dead serious. This is my research field.

PS: Indeed, my claim is not original: almost everyone that has studied the subject agrees with me. Is this somehow a point against me? And by the way, if you want to quote me, my words are "plain ridiculous" and "unscientific". You are the one saying "silly".

" just give me a general recipe for calculating the initial state that will produce any given correlation. I'm just trying to make your life easier here, as a single state and correlation that violates Tsirelson's bound is enough to make the point."

Let's say that the results of a Bell test are:

++-+++--+--- for Alice
+++---+++--- for Bob

Half the rounds give the same result, and each party gets + and - with 50% frequency.

A superdeterministic theory would state that the particle source generated pairs of particles with the following spins (on the measurement axis): ++, ++, -+, +-, +-, +-, -+, -+, ++, --, --, --.

So, the initial state of the particle source is one that necessarily determines the emission sequence above. The exact description depends on the formalism of the superdeterministic theory. The initial state of the particles is the one above and is the same as the final one.
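The arithmetic in this example can be checked mechanically; here is a quick sketch in plain Python, using the outcome strings quoted above:

```python
# Outcome strings from the example above: 12 rounds for each party.
alice = "++-+++--+---"
bob   = "+++---+++---"

# Fraction of rounds in which Alice and Bob got the same result.
same = sum(a == b for a, b in zip(alice, bob)) / len(alice)

# Marginal frequency of "+" for each party.
plus_a = alice.count("+") / len(alice)
plus_b = bob.count("+") / len(bob)

print(same, plus_a, plus_b)  # 0.5 0.5 0.5
```

Each number comes out to exactly 50%, as stated.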

You would both do well to carefully read Bell's "Free Variables and Local Causality", since it discusses exactly how treating certain variables in an experimental situation as "sufficiently free for the purposes at hand" (NB: it is the variables that are treated as free, not people, so this has nothing to do with free will) is an essential aspect to determining whether or not a theory *explains* or *predicts* a certain phenomenon rather than merely *allowing* it. Trying to understand Bell's point would improve the tenor of this debate, which has indeed degenerated to a rather unenlightening dynamic.

For the record, Mateus is right that admitting superdeterminism (AKA conspiracy theory or hyperfine tuning) violates basic premises of the scientific method and would undermine good scientific practice. As Mateus has pointed out, by hyperfine tuning one can undermine even what is considered the gold standard of evidence for a causal connection, viz. randomized double-blind experiments. Statistically significant experimental outcomes in such a setting can be "explained" without any causal efficaciousness via hyperfine tuning. Accepting hyperfine tuning as a legitimate form of scientific theorizing would therefore undercut essentially all scientifically accepted claims about causation and causal effectiveness. It would be suicidal for scientific researchers to recognize such hyperfine tuning claims as scientifically acceptable.

As I see it, you are still speaking about "hyperfine tuning" without explaining what you mean by that. I told you earlier what the problem with your claim is, but apparently not with much success, so here we go again:

It's perfectly legitimate to choose initial conditions that contain the correlations that you observe at the final time. This is *not* the problem with superdeterminism, or with any other deterministic theory that tries to do the same. (The examples I gave earlier were creationism and cosmology.) The problem is that such a theory will not also, in general, have a time-evolution that simplifies the final state. (You might say it "explains" it, but we already noticed earlier that you use the word in a rather fuzzy way that I have my problems with.)

Just in case you missed the relevant point: The issue is *not* the initial state, the issue is that the time-evolution acting on this initial state may not also match the intermediate times. To put this differently once again. Consider you have an endstate and you roll it back with any deterministic time evolution to an initial state. This will trivially result in the final state when evolved forward (as I tried to explain to Mateus). But it will not, in general, work for all intermediate times.

Creationism example: Put all dinosaur bones into the initial state of the Earth 10,000 years ago, deny that evolution happens. Consequence: you'll not be able to explain natural selection among bacterial strains.

If you want to argue that we really only need the final state, you'll have to speak of the simplification that you get from the time evolution. Example: cosmology. Let us forget for a moment that particle interactions come from QFT, and only take GR. That's a deterministic theory (leaving aside black holes). We observe the CMB and galactic structures today. You can evolve these observations back with *any* time evolution to an initial state which then will produce what we see. The reason we pick a specific evolution (as with those particular parameters for LCDM) is that it greatly simplifies the initial state. This simplification is quantifiable. (Certainly not claiming I am saying something new here.)

So there's what you have to do if you want to have an argument against superdeterminism: You'll have to show that time-evolution doesn't simplify ("explain") anything. Best,

I apologize for jumping into your discussion, but I just cannot let go of this chance of providing you with some explanations to your points against superdeterminism. I will start with the generic argument that superdeterministic theories are not scientific because they would force us to abandon medical tests or something like that. Two points here:

1. Superdeterminism is just one feature of a theory, just like non-locality is. It does not define the theory completely. A hypothetical superdeterministic interpretation of QM would not simply say that everything is correlated in every way one can imagine, just like a non-local interpretation does not say that about non-local interactions. More to the point, such a theory would impose some constraints on the way a particle is emitted, so that the emission event is correlated with the absorption (detection) event. There is no justification in concluding that this implies that doctors will correlate with some patients, etc.

2. Accepting a superdeterminist interpretation of QM means of course an acceptance of QM as a correct theory (QM is recovered in some limit) so as long as the medical tests are compatible with QM they would also be compatible with a superdeterministic local realistic theory that reproduces the predictions of QM.

My second point relates to the claim that superdeterminism implies fine-tuning. Sure, one can posit that the observed correlations are caused by a "hyperfine tuning" of the initial conditions at the Big Bang. But of course, this is not the only way one can obtain correlations between emission and absorption events. Such a counterexample has been provided by me in our past exchanges and also here in this thread. One can take the theory of classical electromagnetism seriously and observe that the emission event should depend on the electric and magnetic fields acting at that location, and that those fields are uniquely determined (I can justify this if you want me to) by the positions/momenta of all charges (regardless of the distance and relative motion). And because the states of the detectors are also described by the positions/momenta of the charged particles inside them, one can see that the states of the detectors and the state of the source cannot possibly be free parameters.

In our past discussion you pointed out that the above argument applies to microstates, and one should see if the correlations survive when one averages over all possible microstates. This is true, but I think that the burden of proof should be on you to show that statistical independence is restored at the macroscopic level. Assuming it without a justification based on the theory's formalism would be an example of hyperfine tuning on your part.

I don't understand why you keep repeating that I have no argument. I have written it, several times already. Having an argument that you don't agree with is different than having no argument. But I'll repeat it once again:

Superdeterminism is not scientific because it allows for any correlation at all to be observed, whereas in reality this is not the case.

Do you dispute the latter statement? If you are not claiming that any correlation at all can be produced in a real experiment, then my argument stands, and the discussion about the initial quantum state that you refuse to think about is irrelevant.

If you do think that any correlation at all can be produced in a real experiment, then it is a problem for my argument, and I have to show you that you're wrong.

(There is a general principle against "theories" that allow anything to happen, this is not particular to superdeterminism.

The theory that the planets move around the sun because angels are pushing them around is unscientific because the angels could push them in any arbitrary orbits, so there is no explanation for the orbits we actually see.

The theory that we have the animal species that we have because the gods chose to make them this way is unscientific because the gods could have made any species at all, there is no explanation for the species we actually see.

(You might object that angels and the gods are the problem here. Just replace them with any other entity that is currently fashionable, e.g. we live in a simulation and the simulators just programmed it this way).)

My comment was mainly aimed at you, but I suppose I need to expand on it. You have said a few times, including in your last post, that in a deterministic theory you can explain any end state by simply choosing the appropriate start state. How? Evolve the end state back in time, and take whatever state you get as your start state. Now, that is not as trivial as it seems. And it is not a matter of pathological examples of rare equations or states. This is the norm for many equations (take parabolic equations).

Here is an example. The initial conditions for the heat equation can be very non-regular, say functions with discontinuities or distributions, yet you get a unique solution that is infinitely smooth at any time in the future, no matter how non-smooth the initial state. That means that in this deterministic theory you can never get a final state which is not an infinitely smooth function. And you cannot take such a function and evolve it back in time.

I don't know what states with what correlations you two have in mind, and which equations (presumably Schrodinger), but you cannot say that any of those final states can be explained by choosing the initial conditions. The argument that you can evolve them back in time to see what the initial conditions need to be needs justification. Simply stating it is as good as wishful thinking.
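This point can be made quantitative in Fourier language. A minimal numerical sketch (assuming a periodic domain, chosen only for illustration): each Fourier mode of the heat equation decays forward in time, so running it backward amplifies the same modes beyond any numerical or physical precision.

```python
import numpy as np

# Heat equation u_t = u_xx on a periodic domain: the Fourier mode with
# wavenumber k evolves as u_k(t) = u_k(0) * exp(-k**2 * t).
t = 0.1
for k in (1, 10, 50):
    decay  = np.exp(-k**2 * t)  # forward evolution: high modes die off fast
    blowup = np.exp(+k**2 * t)  # backward evolution: the same modes explode
    print(k, decay, blowup)

# For k = 50: a perturbation of size 1e-16 (double-precision round-off)
# in this mode grows by exp(250) ~ 1e108 when the equation is run backward
# for t = 0.1, which is why the backward problem is ill posed.
```

Any final state that is not infinitely smooth has Fourier coefficients that do not decay fast enough for the backward evolution to exist at all.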

You are the one arguing all the time about this initial state, and "I'll not do the work for you.", as if it were somehow my job to back up your argument with actual calculations. But never mind, I'll do that, in the hope that you'll learn something from this discussion, if you at least try to calculate the state in a simpler scenario (not as dramatic as a Bell scenario, but the general problem shows up here as well):

Consider that you are producing a photon in a laboratory. It doesn't really matter how, as long as your setup is fixed, so that you believe that in each round the photon is in the same state. Now you do measurements on the polarisation of this photon, on either the Horizontal/Vertical or the Diagonal/Anti-Diagonal basis (corresponding to the Z and X Pauli matrices). The choice of basis can be made by a quantum random number generator, a pseudorandom number generator, or a human.

The straightforward quantum prediction is that if you always get result H in the H/V basis then the result of the measurement in the D/AD basis will be completely random.

Now I claim that in such an experiment you will never observe polarisation H whenever you're measuring in the H/V basis and polarisation D whenever you're measuring in the D/AD basis. Do you claim otherwise? Please give me a straightforward yes or no.

Nevertheless, your argument about starting with final correlation and playing it back in time to find a quantum state that will generate this correlation applies here. So what is the initial quantum state that can give p(H|H/V)=1 and p(D|D/AD)=1? Please sit down with pen and paper for five minutes and actually think about it.
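As an aside, the quantum-mechanical side of this challenge can be checked in a few lines of numpy. This is only the standard textbook calculation (no superdeterministic dynamics assumed): no quantum state, pure or mixed, gives p(H|H/V) = 1 and p(D|D/AD) = 1 simultaneously.

```python
import numpy as np

# |H> = (1, 0); |D> = (|H> + |V>)/sqrt(2).
H = np.array([1.0, 0.0])
D = np.array([1.0, 1.0]) / np.sqrt(2)

# For any density matrix rho, p(H) + p(D) = Tr[rho (|H><H| + |D><D|)],
# which is bounded by the largest eigenvalue of the summed projectors.
A = np.outer(H, H) + np.outer(D, D)
best = np.linalg.eigvalsh(A).max()

print(best)  # 1 + 1/sqrt(2) ~ 1.707, strictly below the 2 needed
```

The sum of the two probabilities is capped at 1 + |&lt;H|D&gt;| = 1 + 1/sqrt(2), so the claimed correlation is impossible within quantum mechanics, whatever the initial state.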

Thanks for trying, it makes me happy to see that somebody here cares about actual physics and detailed arguments.

The example does not answer my challenge, though, as the interesting thing about Sabine's argument is that you can reproduce the statistics with an actual quantum state, without having to go for deterministic hidden variables.

Also, you did not mention the settings for which round, which are crucial for the argument, and these statistics you have shown are not forbidden by Tsirelson's bound, so you can get them from a regular quantum experiment, without having to go through the funky business I want to illustrate.

Maybe you want to try your hand on the simpler scenario I proposed above?

Hyperfine tuning is just what it sounds like: an extreme version of fine-tuning the initial conditions of a theory in order to get a particular phenomenon. Please explain if you have any question about exactly what that means.

To be more precise in this case (I am assuming that you did not read Bell): suppose we have a particular run of the GHZ experiment in which all of the apparatuses were set to measure X-spin and the outcome was in accord with the GHZ predictions. You offer some local dynamics that you claim explains why the GHZ correlations obtain. Let's further suppose that the dynamics is deterministic. (If it isn't, we are done: the indeterministic local dynamics implies that any one of the three outcomes could have been different without any of the others being different, and hence does not assign probability 1 to the GHZ correlations.)

OK, so we use your deterministic local dynamics to evolve the correct outcome state back in time to an initial state. Good. Now since we are treating the settings of the experimental apparatuses as free variables, we ought to be able to change those settings to any of the eight possible global experimental situations, leaving everything else the same, and use your dynamics to evolve the state forward. And if the dynamics is local, we know that at least one of those settings will then yield a result forbidden by the GHZ predictions. So you are reduced to saying that it was just luck that the bad set of settings was not chosen, or else that the state with the bad settings is just forbidden by the theory. The first option concedes that there is no explanation given by the theory for the fact that the GHZ correlations always obtain: it is just a matter of lucky chance. The second option removes the luck by physically prohibiting the bad initial states. That is hyperfine tuning, and as Mateus has correctly argued allowing yourself the luxury of appealing to such a method of explanation undercuts all of scientific method.

I never said a word about the intermediate states, and explained both what hyperfine tuning is and why it is scientifically unacceptable. If you dispute this claim, state precisely why.

"A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations."

I don't understand how this example supports your point. If we hadn't observed any other solar systems, and we didn't understand the process of planetary formation and the conservation of angular momentum, but we had observed that all planets lie in the same plane, a scientist might think "hmm... that's a curious fact which seems to demand an explanation". And they would be right! In this case, the lack of statistics is not an obstacle to identifying an insight that hints at unknown theories.

Isn't it a similar situation with the Higgs mass? Some physicists feel that the Higgs mass is a curious fact that seems to demand an explanation. Of course, they could be wrong. But your argument that this line of thinking is wrong BECAUSE of lack of a probability distribution... I don't get it.

In my understanding, your solar system example actually supports the HEP use of naturalness arguments.

To put it another way. A scientist who doesn't know the distribution of solar systems can't say whether a planar solar system is probable or improbable (likewise, we can't say if a low Higgs mass is probable or improbable). But their ignorance has no bearing on the fact that they can seek theories that make the known facts probable.

So yes, perhaps it's wrong to say that the Higgs mass is "unnatural". But that doesn't mean that we shouldn't seek theories that make it natural.

"Now since we are treating the settings of the experimental apparatuses as free variables..."

We don't, of course. This is superdeterminism, recall?

"Hyperfine tuning is just what it sounds like: an extreme version of fine-tuning the initial conditions of a theory in order to get a particular phenomenon. Please explain if you have any question about exactly what that means."

I was trying to give a generic definition, but let me give the particular instance here. I hope this will be completely clear.

The "phenomenon" we are interested in explaining in this case is just the GHZ behavior: that whenever we do what is called "preparing a triple of particles in the GHZ state" and then subsequently perform either an X-spin or Y-spin "measurement" on each, the results display this regularity: when we happen to measure XXX, we get an odd number of "up" results, and whenever we measure YYX or YXY or XYY we get an even number. I call this a "phenomenon" because it is plainly observable behavior: in principle, it could have been noticed by someone just fooling around in a lab, completely unaware of quantum theory. Of course, it is also a prediction of quantum theory, so one could also notice that without ever doing a single experiment. That is closer to the historical case here. Furthermore, it is the case that quantum theory makes this prediction irrespective of how the settings of the three experimental apparatuses came about. Similarly, in principle our ignorant experimentalist could have verified that the same pattern of outcomes obtains no matter how the lab is set up to select the settings: whether by use of a computer calculating the parity of the digits of pi as expressed in base 37, or on the basis of the current value of the Dow Jones average, or by a flip of a fair coin.
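The GHZ parities described here are easy to verify numerically. A small numpy sketch of the standard quantum-mechanical prediction (no claims about how the settings came about): expectation value +1 means the product of the three outcomes is always +1, i.e. an odd number of "up"; -1 means an even number.

```python
import numpy as np

# Pauli matrices and the GHZ state (|000> + |111>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def expval(op):
    return float(np.real(ghz.conj() @ op @ ghz))

xxx = expval(kron3(X, X, X))  # +1: odd number of "up"
xyy = expval(kron3(X, Y, Y))  # -1: even number of "up"
yxy = expval(kron3(Y, X, Y))  # -1
yyx = expval(kron3(Y, Y, X))  # -1
print(xxx, xyy, yxy, yyx)
```

All four operators have the GHZ state as an exact eigenstate, which is why the correlations hold in every single run rather than only on average.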

In general, I mean by a "phenomenon" any such plainly observable behavior that can be described without reference to any microphysical theory. "Phenomenon" is from the Greek "phaino" meaning "to appear", and means "that which appears". If you like, you could replace that term generally with "macroscopic behavior".

"Fine-tuning" a theory is what happens when it is necessary that either some parameter in the theory or the set of possible initial conditions that are used to determine what is physically possible according to the theory are restricted in some extreme way in order that the theory predict some phenomenon. One can fine-tune either the "constants of Nature" (e.g. the relative masses of the various quarks, or the value of the fine-structure constant, etc.) or the allowable initial conditions of the theory. The more severe the restriction on the mathematically possible values, the more strictly the theory is fine-tuned. Fine-tuning that is extreme in some way is "hyperfine-tuning".

Are you just trying to make me angry? I've just typed out my argument against superdeterminism again; are you pretending you didn't see it?

You do need to answer me whether you are claiming that any correlation at all can be produced in a real experiment. If you're not, then my argument stands. If you are, then I have to show you're wrong.

A theory that is "hyperfine-tuned" for a phenomenon could be so because the range of allowable values that recover the phenomenon is so small (e.g. in order to get the formation of stable molecules at all, rather than just the formation of atoms, the ratio of the masses of the up and down quark has to be close to a given ratio to within one part in one hundred trillion trillion trillion). In that case, we would say that the theory has to be hyperfine tuned with respect to that mass ratio for molecules (hence stars and planets and life) to exist. Or, it could be hyperfine tuned for a phenomenon because the range of values required to recover the phenomenon is oddly gerrymandered, like a fractal set.

When is fine-tuning or hyperfine tuning objectionable? Not always. For example, consider the phenomenon "Croatia makes it to the finals of the World Cup in the year 2018 of the Julian calendar". That is an actual phenomenon, as we all know, and the initial conditions of the universe had to be hyper-hyper-hyper-hyper fine-tuned for it to occur. Move the location of a single hydrogen atom 10 billion years ago, and that phenomenon does not happen according to physics as we understand it. Is that an objection to physics as we understand it? Of course not! But we also regard the fact as one that happened, in an obvious sense, by chance. As Aristotle already observed long ago, saying that something happened by chance is denying that it has a certain sort of scientific explanation. By "certain sort" I have something fairly specific in mind here, but it would take too long to spell it out. So just take this example: If someone wins the lottery, we say they were lucky and won by chance. We would not say that physics either required or even made likely that they would win. In a deterministic physics, it required a very atypical initial condition, in an indeterministic physics it required both that and in addition a lucky outcome to various stochastic processes.

Note: even if the physics is deterministic, there is a clear sense in which the winner was lucky. It is true that one can trace the selection of the winning numbers (both by the player and by the lottery machine) to some earlier state of the universe, and so on back in time. But that just transfers the luckiness of the outcome to the luckiness of the initial conditions. It does not get rid of the luck.

Now (and this is the key point) there is no scientific field of trying to explain phenomena that happen by chance or by luck. It was just luck that that particular person won the lottery. But we would not consider it acceptable to say that it is just by luck that the children of blonde parents are statistically much more likely to be blonde than the children of dark-haired parents. That is clearly a phenomenon, but unlike Croatia making it to the World Cup finals we would never accept a theory that implied that the phenomenon was just by chance or by luck. The phenomenon is so widely observed and regular that we demand a scientific explanation that renders it not due to chance, such as genetic theory. This was the point of my play "Waiting for Gerard", which can be found here:

https://www.facebook.com/tim.maudlin/posts/10155641145263398

The GHZ phenomenon is like the children of blonde parents tending to be blonde, not like Croatia getting to the finals of the World Cup. It is the sort of striking generic phenomenon that requires a scientific explanation, and cannot be fobbed off as "mere chance". I claim that hyperfine tuning of initial conditions (A.K.A. "superdeterminism") simply is not the requisite sort of scientific explanation. Never in the history of science has such a restriction on the allowable initial conditions been accepted as a scientific explanation of such a phenomenon. Ever. So anyone driven—as they must be!—to hyperfine tuning in order to maintain a local physics is acting unscientifically.

Please see my response to Sabine and also the Bell piece "Free Variables and Local Causality". If you think that between those two your comments have not been satisfactorily answered, let me know. The observation that the electro-magnetic situation at a particular region of space is, strictly speaking, dependent on the positions and momenta of all charged particles everywhere (which is true) is neither here nor there. That would only be relevant if the phenomenon we are interested in (e.g. whether some radiation is emitted by an atom in some direction) is sensitive, at the appropriate level of epsilonics, to every one of those positions and momenta. In general, of course, it won't be and can't be. If it were, the emissions we observe would seem to us to be completely random, and not governed by any law at all.

Let me make sure I understand what you are asking. The CHSH bound for classical mechanics (without superdeterminism) is 2, the quantum (Tsirelson) bound is 2√2, and the algebraic maximum is 4.
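These three numbers are easy to check numerically. The sketch below (my illustration, not part of the thread) enumerates all deterministic local strategies to recover the classical bound of 2, and evaluates the singlet-state correlators at the standard optimal angles to recover 2√2; the algebraic maximum of 4 follows simply because each of the four correlators has magnitude at most 1.

```python
import numpy as np

# CHSH combination: S = E(a0,b0) - E(a0,b1) + E(a1,b0) + E(a1,b1)

# Classical (local deterministic) bound: enumerate all 16 strategies
# where each side outputs a fixed ±1 for each setting.
S_classical = max(
    abs(a0 * b0 - a0 * b1 + a1 * b0 + a1 * b1)
    for a0 in (-1, 1) for a1 in (-1, 1)
    for b0 in (-1, 1) for b1 in (-1, 1)
)  # = 2

# Quantum value on the singlet state at the optimal measurement angles.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def obs(theta):
    """Spin observable along angle theta in the X-Z plane (eigenvalues ±1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # singlet (|01> - |10>)/sqrt(2)

def E(a, b):
    """Correlator <psi| A(a) ⊗ B(b) |psi>; equals -cos(a - b) on the singlet."""
    return psi @ np.kron(obs(a), obs(b)) @ psi

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S_qm = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))  # = 2√2 ≈ 2.828
```

The angle choice here is the textbook optimal one; any other choice gives a smaller |S|, and no quantum state or measurements can exceed 2√2.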

Now, my example was meant to show that you can get a quantum-like correlation from a superdeterministic theory. You seem to agree on this, as you say that those results can be obtained "from a regular quantum experiment". It is true that I did not give you the settings, but those can be whatever you want them to be, the only constraint being that measurements on the same axis must always be anticorrelated. So only the pairs 3, 4, 5 and 7 could correspond to the same detector settings.

But it seems to me that you are asking me to provide a quantum state that would violate the bound for QM. That is obviously impossible for logical reasons. You could in principle violate the QM bound with a superdeterministic setup, but that would be pointless because it would not describe the physics in our universe.

Now let's discuss your proposed experiment with the photon:

"Consider that you are producing a photon in a laboratory. It doesn't really matter how, as long as your setup is fixed, so that you believe that in each round the photon is in the same state. Now you do measurements on the polarisation of this photon, on either the Horizontal/Vertical or the Diagonal/Anti-Diagonal basis (corresponding to the Z and X Pauli matrices). The choice of basis can be made by a quantum random number generator, a pseudorandom number generator, or a human."

So let's say that the photon is prepared in the H eigenstate. An H/V measurement will only ever give H. A D/AD measurement will give each result with 50% probability.
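A one-line Born-rule check of those probabilities (a minimal sketch of mine, taking H/V as the Z eigenbasis and D/AD as the X eigenbasis, as stated above):

```python
import numpy as np

# Polarization basis states: H/V are Z eigenstates, D/AD are X eigenstates.
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
D = (H + V) / np.sqrt(2)
AD = (H - V) / np.sqrt(2)

def born(state, outcome):
    """Born-rule probability of getting `outcome` when measuring `state`."""
    return abs(np.dot(outcome, state)) ** 2

# Photon prepared in the H eigenstate:
print(born(H, H), born(H, V))    # H/V measurement: 1.0 and 0.0
print(born(H, D), born(H, AD))   # D/AD measurement: ~0.5 and ~0.5
```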

There is no quantum state that will always give you H for an H/V measurement and D for a D/AD measurement. If Sabine made such a claim then she is wrong.

But, a superdeterministic theory might give you such results. However, such a superdeterministic theory would not reproduce QM and would be wrong.

Space Time, mention of Poisson's Heat equation brings back happy days of engineering heat-transfer calculations. I think that Dr. Hossenfelder's answer will be that the assumptions that heat always flows from hot to cold and entropy from low to high are statistically based, not true physics time-evolutions. Usually, hot electrons hit colder electrons (or come close enough to be repulsed by them) and transfer energy to them. But in the basic physics equations of those collisions, it is possible for a lower-energy electron to transfer energy to a higher-energy electron, and thus for heat to travel from cold to hot. Poisson's equation does not allow for this, so it is not time-reversible, but the actual physics is--theoretically.
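The asymmetry described here is easy to see numerically: the macroscopic diffusion (heat) equation damps structure when run forward in time but is ill-posed when run backward, even though the underlying collision dynamics is reversible. A minimal finite-difference sketch (my own illustration, with arbitrary parameters):

```python
import numpy as np

# 1D heat equation u_t = u_xx on a ring, explicit finite differences
# (grid spacing 1; dt <= 0.5 is the forward stability limit).
n, dt, steps = 100, 0.2, 200
x = np.arange(n)
u0 = np.sin(2 * np.pi * x / n) + 0.01 * np.sin(20 * np.pi * x / n)

def step(u, dt):
    """One explicit Euler step of the discrete Laplacian."""
    return u + dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

u = u0.copy()
for _ in range(steps):
    u = step(u, dt)       # forward: gradients smooth out, amplitude decays

v = u.copy()
for _ in range(steps):
    v = step(v, -dt)      # "backward": short-wavelength noise blows up

print(np.abs(u).max() < np.abs(u0).max())   # True: forward diffusion damps
print(np.abs(v).max() > np.abs(u0).max())   # True: reversed run is unstable
```

The backward run explodes because reversing the heat equation amplifies every short-wavelength perturbation, which is exactly the sense in which the statistical, coarse-grained description singles out a time direction that the microscopic physics does not.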

Let us… take GR. That's a deterministic theory…You can evolve these observations back with *any* time evolution to an initial state which then will produce what we see. The reason we pick a specific evolution… is that it greatly simplifies the initial state. This simplification is quantifiable.

In a deterministic universe, couldn’t we choose any Cauchy surface we like and evolve all the prior and later history of the universe from it (neglecting singularities, etc)? If so, then all the information for the entire history of the universe is encoded in the state of each individual Cauchy surface, so in what sense is one Cauchy surface “simpler” than another?

"You do need to answer me whether you are claiming that any correlation at all can be produced in a real experiment. If you're not, then my argument stands. If you are, then I have to show you're wrong."

Look, I have told you like a dozen times that I never claimed such a thing. Why do you keep repeating this? I don't even know which "argument" you think you have "standing". Why don't you just write down your supposed argument rather than asking me to spend my weekend doing calculations whose outcome is totally irrelevant to the discussion?

"Fine-tuning" a theory is what happens when it is necessary that either some parameter in the theory or the set of possible initial conditions that are used to determine what is physically possible according to the theory are restricted in some extreme way in order that the theory predict some phenomenon... The more severe the restriction on the mathematically possible values, the more strictly the theory is fine-tuned. Fine-tuning that is extreme in some way is "hyperfine-tuning".

In order for a deterministic theory to predict the phenomenon that is the present state of the universe you need to hyperfinetune the initial state (according to your definition - just for the record, I don't think that's how most physicists use the word). Hence all deterministic theories are hyperfinetuned according to your definition.

You can of course define whatever you want, but it doesn't make much sense to me. Also, you didn't explain why this is something I should worry about, you simply claimed it is worrisome, which is a rather strange claim to make given that it's the case for all presently used theories.

In the hope that it accelerates the discussion: the usual definition of finetuning employs a probability distribution over initial states (or parameter values) and asks how likely it is to get a state that's close to the present state. You cannot make any statements about finetuning without such a probability distribution. You are implicitly assuming that the initial state is evenly distributed over all "mathematically possible values" (whatever that means), and that brings up several further questions.

"I claim that hyperfine tuning of initial conditions (A.K.A. "superdeterminism") simply is not the requisite sort of scientific explanation. Never in the history of science has such a restriction on the allowable initial conditions been accepted as a scientific explanation of such a phenomenon."

Let's leave aside for a moment that you haven't explained what's wrong with hyperfinetuning, and assume that you were actually speaking of initial states that are unlikely according to some distribution that you postulated (like most people, you seem to think that uniform distributions are somehow better than others). You have also not provided evidence that superdeterministic theories are hyperfinetuned. How do you think you can even make such a statement without knowing the time-evolution, not to mention the distribution (which you have shoved into the "restriction" on mathematically possible values)? Best,

"Look, I have told you like a dozen times that I never claimed such a thing."

That's the first time, but who cares; now you have no objections against the argument I have been repeating since the beginning. Let me copy and paste the latest post in which I typed it; maybe this time you will see it:

"I don't understand why you keep repeating that I have no argument. I have written it, several times already. Having an argument that you don't agree with is different than having no argument. But I'll repeat it once again:

Superdeterminism is not scientific because it allows for any correlation at all to be observed, whereas in reality this is not the case.

Do you dispute the latter statement? If you are not claiming that any correlation at all can be produced in a real experiment, then my argument stands, and the discussion about the initial quantum state that you refuse to think about is irrelevant.

If you do think that any correlation at all can be produced in a real experiment, then it is a problem for my argument, and I have to show you that you're wrong.

(There is a general principle against "theories" that allow anything to happen, this is not particular to superdeterminism.

The theory that the planets move around the sun because angels are pushing them around is unscientific because the angels could push them in any arbitrary orbits, so there is no explanation for the orbits we actually see.

The theory that we have the animal species that we have because the gods chose to make them this way is unscientific because the gods could have made any species at all, there is no explanation for the species we actually see.

(You might object that angels and the gods are the problem here. Just replace them with any other entity that is currently fashionable, e.g. we live in a simulation and the simulators just programmed it this way).)"

I understand that you are busy, and have limited time, but if you are not willing to carefully read a post directed to you, then you can do us the favor of not replying to it either. I took some time to write a fairly long and detailed explanation, and either you did not read it at all or you did not read it with any attention.

You write:"In order for a deterministic theory to predict the phenomenon that is the present state of the universe you need to hyperfinetune the initial state (according to your definition - just for the record, I don't think that's how most physicists use the word). Hence all deterministic theories are hyperfinetuned according to your definition."

No, you completely missed the point—which might explain why your attempts to defend superdeterminism are so lacking. The issue is not merely predicting some phenomenon, which has the rather trivial character you keep pointing out in a deterministic theory when you want to predict the exact, completely detailed state of the universe at some time. To predict which phase point presently describes the entire universe you would need to invoke some past completely detailed phase point. But this sort of prediction does not constitute the type of *explanation of a generic phenomenon* that we seek from our theories.

No physical theory in the history of the universe ever has or presumably ever will seek to *explain* why Croatia got to the finals of the World Cup this year in anything like the way Newton's theory explains (with the appropriate caveats) Kepler's Laws. These are generic phenomena—behaviors that many, many different systems display—and they require a generic rather than specific explanation. That means that the phenomenon must be robust under many, many changes of specific detail. Merely deriving the present precise state of the universe from a past precise state does not provide such an explanation for anything.

I took some trouble to explain how certain phenomena—like Croatia and the World Cup—are not thought to have any such explanation at all: they are matters, to a great extent, of luck and coincidence. Of course they arose from a previous state of the universe, and the relevant features of that state that gave rise to the result are equally matters of luck and coincidence. "Explaining" what seems to be a lucky coincidence by reference to an earlier lucky coincidence is not explaining anything: it is just passing the problem from one state to another.

In order to understand what is going on, you have to think about the nature of scientific explanation. This is not a topic that was covered in your training, and so either you would have to read widely and think deeply on your own or else—as is almost certain—you have adopted some naive and easily refutable view about the nature of explanation, such as the hypothetico-deductive model.

If you actually read my previous post with care, you will find that I would never, ever say that the entire present state of the universe is the sort of thing that admits of the relevant sort of explanation, and therefore would never conclude that present physics must be hyperfine tuned. What I would say is that the present state of the universe—in all its gory detail—does not admit of the relevant sort of explanation at all, and so no theory is hyperfine tuned to provide such an explanation. If you actually read the post and then have some clear questions about it, I will answer them.

"Now, my example was ment to show that you can get a quantum-like correlation from a superdeterministic theory. You seem to agree on this as you say that those results can be obtained " from a regular quantum experiment"."

Yes, this is true, but nobody is disputing this. The interesting thing is how to get to the forbidden CHSH value of 4 using a quantum state.

"But it seems to me that you are asking to provide a quantum state that would violate the bound for QM. That is obviously impossible for logical reasons."

Actually, it is possible by using a dirty trick. I'm trying to make Sabine understand what this dirty trick is.

"There is no quantum state that will always give you H for a H/V measurement and D for a D/AD measurement. If Sabine made such a claim then she is wrong."

She did make such a claim, and she is right. However, you are also right that there is no quantum state of the photon alone that will give you these results. How to reconcile these claims?

"But, a superdeterministic theory might give you such results."

Yes, and that is the key point. You can model superdeterminism within quantum mechanics, by postulating a quantum state with correlations between the photon and the choice of basis, and that will reproduce the impossible correlations. Now, which state is this?
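One concrete way to build such a state (my toy construction, not necessarily the exact one Mateus has in mind) is to correlate the photon with a second "setting" qubit that records which basis will be used: whenever the register reads "H/V", the photon is H, and whenever it reads "D/AD", the photon is D.

```python
import numpy as np

H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
D = (H + V) / np.sqrt(2); AD = (H - V) / np.sqrt(2)

# "Setting" register: |z> = the H/V basis is chosen, |x> = the D/AD basis.
z = np.array([1.0, 0.0]); xbasis = np.array([0.0, 1.0])

# Conspiratorial joint state: the photon's state is correlated with the
# basis choice, exactly the superdeterministic assumption.
psi = np.kron(H, z) + np.kron(D, xbasis)
psi /= np.linalg.norm(psi)

def p(outcome, setting):
    """Born-rule probability of (photon outcome AND that basis choice)."""
    return abs(np.kron(outcome, setting) @ psi) ** 2

p_H_given_z = p(H, z) / (p(H, z) + p(V, z))                    # = 1.0
p_D_given_x = p(D, xbasis) / (p(D, xbasis) + p(AD, xbasis))    # = 1.0
```

Conditioned on the basis choice, the outcome is certain in both bases — which, as noted above, no state of the photon alone can achieve.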

Perhaps the laws of the universe dictate that Arun has no choice in the matter and must necessarily repeat what Sabine says without thinking?

First of all, I'm not saying superdeterminism is silly (this is Sabine's word), I'm saying it is unscientific and plain ridiculous. Second, I have written several times the reasons why, including in my latest reply to Sabine. It is a mystery to me why you and Sabine pretend these comments do not exist.

I am familiar with Bell's writings and I think I have a good understanding of your arguments. I still think that they are wrong.

I don’t quite understand Sabine’s position with regard to fine-tuning, and I would prefer a much clearer discussion based on known physical theories. As I have said earlier, the claim that superdeterminism requires fine-tuning is wrong. Superdeterminism needs to somehow constrain the possible initial state. Fine-tuning is one possibility, of course, but it’s not the only one. The constraint could be imposed by the equations of a field theory.

So I do not think that my arguments have been answered by you, neither in this thread nor in your past debates about superdeterminism.

For example, one of your points in the presentation of "What Bell Did" was that Newtonian gravity, even if non-local, could not explain the observed correlations because the force decreases with the square of the distance. The error here, in my opinion, is that the magnitude of a force has nothing to do with the correlations that are imposed by the existence of that force. Pluto is much more distant from the Sun than Mercury, but its relative motion is not more uncorrelated, or random.

In my example with classical electromagnetism, the state of source and detectors must be a solution of Maxwell’s equations. The distance between them is irrelevant. This means that the initial state is constrained; it cannot be any state you want. You want to use the digits of pi to set your detector? Fine, but you then need to solve Maxwell’s equations and see what the corresponding states for the source could be. You cannot just ignore that constraint and say it doesn’t matter. And it might be the case that once you calculate those allowed states for the source, you will find that they correspond to QM’s prediction. Or maybe not. But a justification should be provided.

You say:

"The observation that the electro-magnetic situation at a particular region of space is, strictly speaking, dependent on the positions and momenta of all charged particles everywhere (which is true) is neither here nor there. That would only be relevant if the phenomenon we are interested in (e.g. whether some radiation is emitted by an atom in some direction) is sensitive, at the appropriate level of epsilonics, to every one of those positions and momenta."

The emitted EM wave is exactly dependent on the motion of the charges that are involved in the process, and that motion is exactly and uniquely determined by the positions/momenta of all other charges. I guess that your intuition tells you that changing the position of an electron by one nanometer a light-year away couldn’t significantly change the properties of the emitted wave. This is true, but the point is that an identical state with only a change in position of one electron would not be allowed by Maxwell’s equations. You would need to change the positions/momenta of a lot of charges, possibly all, to find a valid state, and those changes could very well impact the emitted wave.

What is in your opinion the difference between a superdeterministic theory and a mere deterministic theory? What does the "super" refer to? It seems that you disagree with Tim what "superdeterminism" means.

"Superdeterminism is not scientific because it allows for any correlation at all to be observed, whereas in reality this is not the case.

Do you dispute the latter statement? If you are not claiming that any correlation at all can be produced in a real experiment, then my argument stands, and the discussion about the initial quantum state that you refuse to think about is irrelevant.

If you do think that any correlation at all can be produced in a real experiment, then it is a problem for my argument, and I have to show you that you're wrong."

I am saying you have failed to show that superdeterminism can "produce" any correlation in "a real experiment". Hence you have no argument. I have tried, evidently unsuccessfully, to demonstrate that your argument is wrong because it is *also* wrong for quantum mechanics. Of course you can make a better argument for quantum mechanics. And if you'd make that, you might maybe also see why your claim about superdeterminism doesn't stand up.

Good, you didn't say superdeterminism is "silly," I apologize. You said it's "ridiculous". Be that as it may, you have not backed up your argument and I do not understand why you go around and make proclamations that you can't back up. Best,

Let's see here. You fail to see the relevance of my response to your comment. You then accuse me of not reading what you wrote and continue with proclamations of your own grandiosity. None of this is helpful.

My point was, using your words (in the hopefully right way, though your terminology is muddled), that you have not provided any evidence that superdeterministic theories do not "explain generic phenomena". (Of course they do. They're quantum mechanics with extra variables sprinkled over it. If you average over the extra variables, you get back quantum mechanics. They hence explain observations as good or as bad as quantum mechanics. That's why it's called an interpretation.)

You continue to speak of luck and coincidence without quantifying it. How do you not notice this, oh Tim Maudlin, the great philosopher? You also confuse the probability of instances in one set (say, the universe) with the probability of getting that set to begin with. You are referring to the former. I am telling you that you cannot make statements about the former without the latter. Since you cannot know the latter, this is not a workable argument to begin with. The useful argument speaks about the simplification in the initial state provided by a time evolution (as I said above).

" "Explaining" what seems to be a lucky coincidence by reference to an earlier lucky coincidence is not explaining anything: it is just passing the problem from one state to another. "

I am quoting this sentence because it shows the problems with your argument that I just explained.

a) reference to coincidences without quantifying them

b) confusing probability of instances within the universe (what you were referring to) with probabilities of the whole state (what I was referring to)

Superdeterministic theories are deterministic theories with correlations between the state that you want to measure and the detector (settings). The outcome of the measurement is determined, but it's determined by hidden variables, so the best you can do if you don't know the hidden variables is a statistical prediction. Since the correlations aren't local at the time of preparation, you can't use violations of Bell's inequality (or its generalizations) to rule out such theories.
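A toy illustration of why correlations between the source and the detector settings void Bell-type bounds (a deliberately conspiratorial sketch of my own, not a proposed physical model): if the source "knows" the settings each round, the outcomes can be chosen per setting pair, and the CHSH value reaches its algebraic maximum of 4.

```python
import numpy as np

rng = np.random.default_rng(1)

def superdeterministic_chsh(n=10_000):
    """Hidden variable perfectly correlated with the settings: the source
    knows (x, y) each round and picks the outcomes a, b accordingly."""
    sums, counts = {}, {}
    for _ in range(n):
        x, y = int(rng.integers(2)), int(rng.integers(2))  # "free" settings
        a = 1
        b = -1 if (x, y) == (1, 1) else 1  # enforce ab = -1 only at (1,1)
        sums[(x, y)] = sums.get((x, y), 0) + a * b
        counts[(x, y)] = counts.get((x, y), 0) + 1
    E = {s: sums[s] / counts[s] for s in sums}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

print(superdeterministic_chsh())   # → 4.0, past Tsirelson's 2√2 ≈ 2.83
```

This is exactly why Bell-type experiments cannot, by construction, rule out such correlations: the statistics of any setting-dependent outcome table can be reproduced.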

They're also sometimes referred to as "conspiracy theories" because they seem to say that the experimenter cannot "freely choose" the detector settings. Hence also the "free will loophole", though the reference to free will here is somewhat misleading. In any case, that's what it's called.

I find the reference to "conspiracies" problematic because it leads people to not take the option seriously (see this comment section as demonstration of the problem). For this reason there have never been any tests that could actually tell superdeterminism apart from quantum mechanics. (By construction, Bell-type experiments can't do it.) Best,

"I am saying you have failed to show that superdeterminism can "produce" any correlation in "a real experiment"."

Now that is new. I've never seen anybody dispute this. It is blindingly obvious that with correlations between the state and the settings you can produce any correlations you want. Your objection probably lies in the "real experiment" part. Well, superdeterminism is just this correlation assumption; there is no restriction on how it applies to "real experiments". If you want to restrict that, you need to build an actual superdeterministic theory and say how it models "real experiments". Do you propose any, or is this just an empty claim like all the others you make?

"I have tried, evidently unsuccessfully, to demonstrate that your argument is wrong because it is *also* wrong for quantum mechanics."

Now what are you talking about here? Are you disputing that quantum correlations are restricted by Tsirelson's bound? Didn't you just say that you're not claiming that quantum mechanics can produce any correlations at all in a real experiment?

Sabine is just talking out of her ass. "Superdeterminism" is just about the correlations between state and measurements; it has nothing to do with determinism. Deterministic theories usually don't postulate these conspiratorial correlations (Newtonian mechanics, classical electrodynamics, general relativity, etc.). On the other hand, one can perfectly well postulate a non-deterministic theory with these conspiratorial correlations (Sabine herself is postulating one in this thread, even though she hasn't realized this yet).

Also, "hidden variables" is a red herring. There is nothing to make them hidden, and in fact one can very well postulate a superdeterministic theory where all the variables are manifest.

Lastly, there have been no tests of superdeterminism because it is logically impossible to do so. The impossibility is not restricted to Bell tests.

"(Of course they do. They're quantum mechanics with extra variables sprinkled over it. If you average over the extra variables, you get back quantum mechanics. They hence explain observations as good or as bad as quantum mechanics. That's why it's called an interpretation.)"

I'm gonna jump in here. I think this sums up what's wrong with your notion of scientific explanation. Your notion is, and correct me if I'm wrong -- but please make explicit what you mean by scientific explanation then: "If an experiment is in agreement with the predictions of the theory, then the theory explains the experiment." Take the following counterexample of a theory; it has only one axiom: "Anything can happen."

Clearly, any experiment is in agreement with this theory. According to your view, this theory does just as good a job at explaining the experiment as any other. However, no one (or maybe the most ardent solipsist in the world would, but besides that, no one) would ever qualify that theory as scientific.

It's also worrisome that you claim seemingly contradictory things for the sake of defending your views, e.g.:

"Any theory with a Hamiltonian evolution can produce any correlation. All you have to do is take the present state and evolve it backwards."

vs

"I am saying you have failed to show that superdeterminism can "produce" any correlation in "a real experiment"."

So you initially argue that superdeterminism reproducing any correlation at all is not a problem since any theory with a Hamiltonian would do the same. But after that you argue in favor of superdeterminism, saying that you have not seen any convincing proof that it can reproduce them. I'm not arguing for or against superdeterminism right now, I just wanted to ask you to state your points clearly and consistently.

""Croatia makes it to the finals of the World Cup in the year 2018 of the Julian calendar". That is an actual phenomenon, as we all know, and the initial conditions of the universe had to be hyper-hyper-hyper-hyper fine-tuned for it to occur. Move the location of a single hydrogen atom 10 billion years ago, and that phenomenon does not happen according to physics as we understand it. Is that an objection to physics as we understand it? Of course not! But we also regard the fact as one that happened, in an obvious sense, by chance."

The grandness of what you all attribute to the laws of physics stupefies me. Actual human concerns are much more limited. That the Earth, the Julian calendar, etc., exist is, for the purposes of football, taken for granted. That the World Cup is held every four years, in years of the form 4k+2, is of a little more interest. There is an enormous amount of effort (and perhaps betting money) put into trying to predict which teams are going to go how far in the tournament, and not using reductionism; e.g., see fivethirtyeight.com for a model.

If there is not enough in the above to sink your philosophical/physics teeth into, consider the possibility that the only compute engine capable of doing what your theories need it to do is the Universe itself.

Or consider that even figuring out how black holes work in a toy model like AdS/CFT is beyond our capabilities. Extrapolating to way beyond our capabilities is bold, even necessary, but also dubious (e.g., as correctly pointed out when the issue of recurrence time for black holes in AdS came up).

Or consider that reductionism is a good strategy to explain what you already know about, but not good at all at figuring out what to explain. You can apply reductionism to an elephant and largely explain it; but given some fundamental theory, you have no effective way of arriving at "elephant". If you try to do so, I think you will find you are smuggling in a lot of non-fundamental concepts in the chain of reasoning. In particular, you won't find yourselves arriving at anything that we do not already know about. Even arriving at relatively simple objects like black holes, neutron stars, etc., is quite hard. It is a bit of an illusion that we have the means to explain everything in the universe.

"I think this sums up what's wrong with your notion of scientific explanation. Your notion is, and correct me if I'm wrong --but please make explicit what you mean by scientific explanation then: "If an experiment is in agreement with the predictions of the theory, then the theory explains the experiment.""

This is wrong. I have said above what I mean by explanation. In essence, it's anything that simplifies just collecting the data. The paragraph you quote said nothing about explanation, it was referring to Tim's claim that superdeterminism will result in experimental outcomes we don't observe. Best,

Sorry, I should have put this together in one reply, but regarding your comment

"So you initially argue that superdeterminism reproducing any correlation at all is not a problem since any theory with a Hamiltonian would do the same. But after that you argue in favor of superdeterminism, saying that you have not seen any convincing proof that it can reproduce them. I'm not arguing for or against superdeterminism right now, I just wanted to ask you to state your points clearly and consistently."

Before going ahead, let me clarify that I never spoke of "any correlation at all"; I said as long as they're in the configuration space to begin with. Though I don't think this is relevant to your question.

What I am saying is:

The claim that superdeterminism is "ridiculous" because it can result in any correlation (as Mateus said) would rule out any deterministic theory and therefore doesn't make sense. It's simply a wrong argument.

Tim is making a different argument (which Mateus probably meant to make but didn't), claiming that superdeterministic theories would generically give rise to correlations we don't observe. I am noting that he can't speak about what is a generic measurement outcome without making assumptions about the probability of the initial state of the universe. The argument is hence also wrong.

The correct argument to make asks whether a superdeterministic model actually explains something, given an initial state and a time evolution. This is a question which can't be answered without actually looking at the model.

I also wish to object to the notion that I am making an argument "in favor of" superdeterminism. I am merely saying that it is an option which has been discarded based on unsound reasoning.

I've been very patient with you, but I draw a line at you commenting on my ass. I will not approve further comments from you. I hope that you will re-read this comment thread a few times and eventually realize that your argument isn't an argument. It's a half-digested repetition of something you probably heard elsewhere but clearly never thought about.

In a reversible deterministic universe, I think each state would be just a single inevitable microstate, and we couldn’t talk meaningfully about measures for macrostates of the universe because we can’t assign a probability distribution for other (counterfactual) microstates. I suppose if we gloss over some hidden variables we could have an apparent entropy, basically by introducing apparent indeterminateness. This would line up with the fact that no state is really simpler than any other, since they each entail exactly the same information, but we don’t have access to all that information. Likewise no state has lower entropy, but it may have lower apparent entropy when glossing over the hidden variables. (Everything starts out hidden, and then gradually becomes unhidden.)

"The grandness of what you all attribute to the laws of physics stupefies me. Actual human concerns are much more limited."

I take the concern of some actual human physicists to be discovering the actual, true laws of nature. If this is accomplished, then from some initial state of the universe all of the physical facts about the 2018 World Cup finals either follow rigorously (if the laws are deterministic) or are among a collection of universal physical states with a non-zero probability of occurring (if the laws are stochastic). I fail to see any stupefying grandness here. Of course we will never know that complete initial state, or be able to calculate with it. So what? I never suggested we would.

It's the same Unknown as before. I'm sorry, but I don't know how to include an identity when leaving a comment. Whatever, I'm Felipe Montealegre.

I'll kind of divide my comment into two parts. First:

"This is wrong. I have said above what I mean by explanation. In essence, it's anything that simplifies just collecting the data. The paragraph you quote said nothing about explanation, it was referring to Tim's claim that superdeterminism will result in experimental outcomes we don't observe."

Sorry, but it does refer to explanation: it says superdeterministic theories will do just as good a job at explaining experiments as quantum mechanics, since they're QM with added degrees of freedom. But your definition of scientific explanation is far from clear yet. First off, there are many techniques that simplify the collection of data that are not regarded as scientifically explaining things --I don't know, Excel or something? Second: it's also not clear how to make this definition agree with your claim about superdeterministic theories explaining experiments, "They hence explain observations as good or as bad as quantum mechanics". They clearly don't simplify the collection of data with respect to QM: you get to predict the same experimental statistics, but you need to keep track of a bunch of other stuff. Is the initial state of the universe dependent on which string of measurement choices an experimentalist has taken throughout her life? Clearly having to keep track of these questions cannot possibly make data collection easier; if it does, I would be surprised and I would like to read about it.

Second part:

"Before going ahead, let me clarify I never spoke of "any correlation at all", I said as long as they're in the configuration space to begin with. Though I don't think this bears relevance to your question."

This is still ill defined, what is configuration space? When I speak of a correlation, I mean a conditional probability distribution: given a preparation x of the experiment, the probability of measurement outcome a is p(a|x). So when I read any correlation at all, I would expect any conditional probability distribution. Which probability distributions do you claim may be reproduced by any theory with a Hamiltonian?

Given the following: "The claim that superdeterminism is "ridiculous" because it can result in any correlation (as Mateus said) would rule out any deterministic theory and therefore doesn't make sense. It's simply a wrong argument." I suppose you actually mean that every deterministic theory can reproduce any correlation --in my sense?-- at all. But that is not clearly true, if the deterministic theory satisfies the assumptions of Bell's theorem, then it cannot violate Bell's bound and hence cannot reproduce any correlation at all.

So please be clearer on your assumptions.

Just to kind of wrap it up: is the main claim from your side that there is no evidence supporting that superdeterministic theories can reproduce any correlation --with some appropriate definition of any correlation?

You physicists are killing me with this free will b.s. It is so symptomatic of your field right now. Seriously, do you think lack of free will would place me typing a comment into Bee's blog right now? No, I really have to WANT to do this, for some unknown reason for which I should probably see a psychologist.

Get over it. Sometimes even the scientific method means you don't give the benefit of the doubt to an idea that is obviously nonsense.

Furthermore, the scientific method also means that you should NOT dismiss ideas from individuals outside your field.

I really wish the physics community would realize it's time to go back to Copenhagen and apply a little intelligence. If it is all electromagnetism then how the heck ARE you going to measure something without disturbing it? I mean isn't that an obvious effect if you are right there, the very next level under quarks is rock bottom! Arghhhhhh. You are killing me

Thanks for identifying. I think you have to sign in with a Google-account to get rid of the "Unknown".

I didn't say that superdeterminism is quantum mechanics with added degrees of freedom (I'm not sure exactly what this would mean). I said that a superdeterministic theory by construction should reproduce the results of quantum mechanics when the (non-local) variables are averaged over. That's the whole motivation of looking at it: You want to re-establish a deterministic evolution by postulating the existence of (non-local) hidden variables.

This should also answer your question why superdeterministic models by construction explain as much or as little as quantum mechanics - because you can derive quantum mechanics from them (or at least you should be able to do that). To that extent they're merely words, or "interpretations" if you want it politely. The more interesting question is if you can get anything *more* out of it (when not taking the average.) For that of course you need a concrete model.

As to the restrictions on the configuration space, I meant conditions like, say, states are normalized to 1 or vectors in a Hilbert-space or some other requirement on the mathematically allowed configurations. If you want a reversible evolution you'll also need to prevent singularities (as was mentioned previously). This was just to say there is some fineprint on the statement that any state can be obtained by a reversible evolution. As I said, I don't really see how it matters.

"Which probability distributions do you claim may be reproduced by any theory with a Hamiltonian?"

I didn't say anything about probability distributions. You can get states. If you can write down a state for your theory that has a certain correlation, you can evolve that state backwards, and trivially the forward-evolution of the backward-evolved state will give you the correlation. It's really not a deep point.
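The point about backward evolution is easy to illustrate; here is a minimal sketch, assuming a hypothetical invertible map on integer states as the "deterministic theory":

```python
# Toy reversible dynamics on integers mod N: a forward step and its inverse.
N = 101

def forward(s):
    return (3 * s + 7) % N   # affine map, invertible since gcd(3, N) = 1

def backward(s):
    # Inverse of forward: 3 * 34 = 102 = 1 (mod 101), so 34 inverts the factor 3.
    return (34 * (s - 7)) % N

# Pick any end-state allowed by the theory - say one exhibiting a "correlation".
target = 42

# Evolve it backwards to obtain an initial state...
initial = target
STEPS = 1000
for _ in range(STEPS):
    initial = backward(initial)

# ...and trivially the forward evolution of that initial state recovers it.
state = initial
for _ in range(STEPS):
    state = forward(state)

print(state == target)  # True: every allowed end-state has some initial state
```

The only fine print, as noted above, is that the end-state must be in the configuration space to begin with; the time evolution itself imposes no further obstacle.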

"if the deterministic theory satisfies the assumptions of Bell's theorem, then it cannot violate Bell's bound and hence cannot reproduce any correlation at all."

Not sure what you mean, quantum mechanics is deterministic and it violates Bell's bound.
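The violation itself is uncontested by either side and is easy to check; here is a minimal numpy sketch of the CHSH value in the singlet state (the angle choices are the standard optimal ones, not anything specific to this thread):

```python
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin measurement along an axis at angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    """Quantum correlation <A x B> in the singlet state."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard CHSH angle choices for maximal violation
a0, a1 = 0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2), above the local-hidden-variable bound of 2
```

The disputed question in the thread is not this number but whether the theory producing it "satisfies the assumptions of Bell's theorem".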

"is the main claim from your side that there is no evidence supporting that superdeterministic theories can reproduce any correlation -- with some appropriate definition of any correlation?"

Depends on what you mean by "reproduce". Sorry to pick on this, please see exchange above for how confused Mateus got about his use of the word "produce".

You can, in superdeterminism as in any deterministic theory, come up with ("mathematically write down") an initial state that gives rise to any end-state that's compatible with your assumptions about the states your theory contains. (Again, there's the fine-print. Depending on how you define the states, some correlations may be excluded simply because you can't construct a state that has them, period. This has nothing to do with the time-evolution.)

Whether you will actually observe that is a different question. And on that account indeed I am saying there is no (mathematical) evidence that superdeterminism makes it likely to observe events we don't observe.

Let me add again that it might well be possible to show that some model for superdeterminism *does* suffer from the problem, I'm just saying it hasn't been shown - and it can't be shown by using the argument Tim and Mateus are making. It's not as simple as saying there's an initial state that's not compatible with observation or claiming there's some finetuning going on. Roughly speaking you need to show that all initial states which have explanatory power (in the sense of simplifying the measured data) give you results we don't observe. And about this you can't make a statement without taking into account the time-evolution (which should serve as the simplifier). Best,

'"if the deterministic theory satisfies the assumptions of Bell's theorem, then it cannot violate Bell's bound and hence cannot reproduce any correlation at all."

Not sure what you mean, quantum mechanics is deterministic and it violates Bell's bound.'

No, quantum mechanics (whatever you choose to mean by that) does not "satisfy the assumptions of Bell's theorem". In particular, it does not satisfy Bell (or Einstein) locality. That's kind of the whole point: Bell has proven that no empirically adequate theory can be local, so we need to just jettison the requirement of Bell-locality and think hard and clearly about how to introduce non-locality into a theory. 't Hooft (and for all I can tell, you) wants desperately to maintain locality, and is willing to stoop to superdeterminism (hyperfine tuning) to achieve it. If you have not followed the dialectic even this much, then you are really in the dark and need to sit and think a while before commenting again.

It's utterly disingenuous of you to complain about a request for clarification about someone else's comment, especially since my very comment shows I know what the theorem says. I was thinking the assumption the commenter referred to was the absence of correlation between measured state and detector, as this is what we've been discussing.

Having said that, you continue to talk about "hyperfine tuning" without having a proper definition. That definition won't come from nothing, so why don't you "sit and think for a while" as you put it so nicely?

I don't know about 't Hooft, but I certainly don't want to "maintain locality" (or causality for that matter). But really this is beside the point. You go around and make claims for which you have no proof. I am asking for proofs. Best,

Tim,

In Laplace's time "initial conditions of the universe" was a simple thing. Now less so, no? When we do cosmology, we're doing "initial conditions within one of the particular models of universe permitted by General Relativity", and it is within this scope that the conceptual issues become tractable, is that not so? Thanks!

-Arun

I fail to understand why you insist on embarrassing yourself in such a public venue. You say that "you continue to talk about "hyperfine tuning" without having a proper definition. That definition won't come from nothing, so why don't you "sit and think for a while" as you put it so nicely?"

Note please the date at the end of the following, which is cut-and-pasted from above:

"Sabine,

Hyperfine tuning is just what it sounds like: an extreme version of fine-tuning the initial conditions of a theory in order to get a particular phenomenon. Please explain if you have any question about exactly what that means.

To be more precise in this case (I am assuming that you did not read Bell): Suppose we have a particular run of the GHZ experiment in which all of the apparatuses were set to measure X-spin and the outcome was in accord with the GHZ predictions. You offer some local dynamics that you claim explains why the GHZ correlations obtain. Let's further suppose that the dynamics is deterministic. (If it isn't, we are done: the indeterministic local dynamics implies that any one of the three outcomes could have been different without any of the others being different, and hence does not assign probability 1 to the GHZ correlations.)

OK, so we use your deterministic local dynamics to evolve the correct outcome state back in time to an initial state. Good. Now since we are treating the settings of the experimental apparatuses as free variables, we ought to be able to change those settings to any of the eight possible global experimental situations, leaving everything else the same, and use your dynamics to evolve the state forward. And if the dynamics is local, we know that at least one of those settings will then yield a result forbidden by the GHZ predictions. So you are reduced to saying that it was just luck that the bad set of settings was not chosen, or else that the state with the bad settings is just forbidden by the theory. The first option concedes that there is no explanation given by the theory for the fact that the GHZ correlations always obtain: it is just a matter of lucky chance. The second option removes the luck by physically prohibiting the bad initial states. That is hyperfine tuning, and as Mateus has correctly argued allowing yourself the luxury of appealing to such a method of explanation undercuts all of scientific method.

I never said a word about the intermediate states, and explained both what hyperfine tuning is and why it is scientifically unacceptable. If you dispute this claim, state precisely why.

12:06 PM, July 13, 2018"

Stop playing games. Either read what has already been written, or admit you are not reading it, or admit that you can't recall over the space of 5 days what has already been answered.
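For readers who want to check the GHZ predictions invoked in the quoted comment, here is a minimal numpy sketch (the state and operators are the standard textbook ones, not anything specific to the models under dispute):

```python
import numpy as np

# GHZ state (|000> + |111>)/sqrt(2) and the Pauli matrices
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def expect(A, B, C):
    """Expectation value of A x B x C in the GHZ state."""
    op = np.kron(np.kron(A, B), C)
    return np.real(ghz.conj() @ op @ ghz)

# Quantum predictions for the four relevant global settings:
xyy = expect(X, Y, Y)   # -1 (up to rounding)
yxy = expect(Y, X, Y)   # -1
yyx = expect(Y, Y, X)   # -1
xxx = expect(X, X, X)   # +1

# For predetermined local values x_i, y_i = +/-1 the algebraic identity
# (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = x1 x2 x3 would force XXX = (-1)^3 = -1,
# contradicting the quantum prediction +1: no assignment fits all settings.
print(xyy * yxy * yyx, xxx)
```

This is the sense in which a local deterministic account must fail for at least one of the eight possible setting combinations, which is where the appeal to restricted initial conditions enters.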

"In Laplace's time "initial conditions of the universe" was a simple thing." Huh? I am not following you in the least. Laplace would have correctly concluded that the initial conditions of the universe were very, very, very, very, complex. That's why Laplace's "demon" is outfitted with unlimited memory and analytical ability.

Felipe here again. I'm sorry but this is too much for me. I said any theory satisfying Bell's assumptions, which QM does not. Did you not read my comment, do you not know Bell's assumptions, or are you just willing to state contradictions in order to have a comeback?

But that's irrelevant; the real problem is that your definition of "any correlation" is circular: by any correlation you mean any correlation achievable within the particular theory you're looking at. In that case, by all means, any correlation may be reproduced by any Hamiltonian theory: that just means that any correlation achievable within this Hamiltonian theory may be achieved within this Hamiltonian theory. And of course this is no argument against the theory you're looking at, we agree on that.

But what Mateus and Tim (at least as far as I understood) have been arguing is that if you open the door to superdeterminism, then you may reproduce any correlation in my sense (not actually mine, just the widely accepted sense). Again: is the chance of getting lung cancer increased by smoking? Maybe the hidden variables determining the future of a person (including getting cancer) are correlated with your choice of doing the experiment.

You may say that within some superdeterministic model this particular correlation is not there. But once you started adding hidden --unobservable-- variables which are correlated with measurement choices, what makes adding more of these less justified? You would be just throwing the scientific method out the window. You can also say, well maybe they are actually observable but haven't been observed yet. But that would be pretty ironic, considering your criticism of string theory (which I agree with).

As I already said above, I misread that statement of yours. Let me just use Tim's formulation, in the hope that's what you meant:

"Sabine: Any deterministic theory can predict and explain any correlation that can appear in a physical state allowed by the theory.

Felipe: Not true: no local theory (i.e. "theory satisfies the assumptions of Bell's theorem") can predict or explain violations of Bell's inequality for experiments done at spacelike separation."

Right. So what's the problem? The backward evolved state won't in general fulfill the assumptions of Bell's theorem.

"that just means that any correlation achievable within this Hamiltonian theory may be achieved within this Hamiltonian theory. And of course this is no argument against the theory you're looking at, we agree on that."

Indeed. Well, I guess I should be happy that at least we can agree on something.

"But what Mateus and Tim (at least as far as I understood) have been arguing is that if you open the door to superdeterminism, then you may reproduce any correlation in my sense (not actually mine, just the widely accepted sense)."

Saying "this is so" is not an argument.

"You may say that within some superdeterministic model this particular correlation is not there. But once you started adding hidden --unobservable-- variables which are correlated with measurement choices, what makes adding more of these less justified? You would be just throwing the scientific method out the window."

Well, if you don't need them to explain anything you shouldn't add them.

Look, let me put it this way. Suppose I tell you here is my superdeterministic model and if you measure observable A,B,C to precision epsilon you can predict the outcome of a quantum measurement, something that according to quantum mechanics is not possible. How does this amount to "throwing out the scientific method"? Best,

B.

PS: Oh yeah, and good bye, I don't expect an answer. I still hope you'll at least think about one.

To say that hyperfine tuning is an extreme version of fine-tuning is not a joke. It does presume that the person it is addressed to is already familiar with the common criticism of a theory that it has to be fine-tuned in some methodologically unacceptable way to predict some phenomenon. This is a very, very common thing that is said in the physics community, and I made the assumption that you were familiar with the common locutions used in that community. It is a pretty natural supposition, and if it was wrong, then I ought to have started at an even more basic level in explaining it to you. I don't know if it is appropriate to apologize for just assuming you had some knowledge that is common in the profession. If it is, I apologize, and in the future I will try to avoid making any such assumption. But given the quite detailed example with GHZ that I provided, am I to conclude that you even now do not understand what the term means?

Please note again the date. I already replied in complete detail to your objections, and explained why they are groundless.

"Sabine,

I understand that you are busy, and have limited time, but if you are not willing to carefully read a post directed to you, then you can do us the favor of not replying to it either. I took some time to write a fairly long and detailed explanation, and either you did not read it at all or you did not read it with any attention.

You write:"In order for a deterministic theory to predict the phenomenon that is the present state of the universe you need to hyperfinetune the initial state (according to your definition - just for the record, I don't think that's how most physicists use the word). Hence all deterministic theories are hyperfinetuned according to your definition."

No, you completely missed the point—which might explain why your attempts to defend superdeterminism are so lacking. The issue is not merely predicting some phenomenon, which has the rather trivial character you keep pointing out in a deterministic theory when you want to predict the exact, completely detailed state of the universe at some time. To predict which phase point presently describes the entire universe you would need to invoke some past completely detailed phase point. But this sort of prediction does not constitute the type of *explanation of a generic phenomenon* that we seek from our theories.

No physical theory in the history of the universe ever has or presumably ever will seek to *explain* why Croatia got to the finals of the World Cup this year in anything like the way Newton's theory explains (with the appropriate caveats) Kepler's Laws. These are generic phenomena—behaviors that many, many different systems display—and they require a generic rather than specific explanation. That means that the phenomenon must be robust under many, many changes of specific detail. Merely deriving the present precise state of the universe from a past precise state does not provide such an explanation for anything.

I took some trouble to explain how certain phenomena—like Croatia and the World Cup—are not thought to have any such explanation at all: they are matters, to a great extent, of luck and coincidence. Of course they arose from a previous state of the universe, and the relevant features of that state that gave rise to the result are equally matters of luck and coincidence. "Explaining" what seems to be a lucky coincidence by reference to an earlier lucky coincidence is not explaining anything: it is just passing the problem from one state to another.

In order to understand what is going on, you have to think about the nature of scientific explanation. This is not a topic that was covered in your training, and so either you would have to read widely and think deeply on your own or else—as is almost certain—you have adopted some naive and easily refutable view about the nature of explanation such as the hypothetico-deductive model.

If you actually read my previous post with care, you will find that I would never, ever say that the entire present state of the universe is the sort of thing that admits of the relevant sort of explanation, and therefore would never conclude that present physics must be hyperfine tuned. What I would say is that the present state of the universe—in all its gory detail—does not admit of the relevant sort of explanation at all, and so no theory is hyperfine tuned to provide such an explanation. If you actually read the post and then have some clear question about it I will answer them.

Well, since you can't seem to find my reply, I'll copy it here for you

"Tim,

Let's see there. You fail to see the relevance of my response to your comment. You then accuse me of not reading what you wrote and continue with proclamations about your grandiosity. None of this is helpful.

My point was, using your words (in the hopefully right way, though your terminology is muddled), that you have not provided any evidence that superdeterministic theories do not "explain generic phenomena". (Of course they do. They're quantum mechanics with extra variables sprinkled over it. If you average over the extra variables, you get back quantum mechanics. They hence explain observations as good or as bad as quantum mechanics. That's why it's called an interpretation.)

You continue to speak of luck and coincidence without quantifying it. How do you not notice this, oh Tim Maudlin, the great philosopher? You also confuse the probability of instances in one set (say, the universe) with the probability of getting that set to begin with. You are referring to the former. I am telling you that you cannot make statements about the former without the latter. Since you cannot know the latter, this is not a workable argument to begin with. The useful argument speaks about the simplification in the initial state provided by a time evolution (as I said above).

" "Explaining" what seems to be a lucky coincidence by reference to an earlier lucky coincidence is not explaining anything: it is just passing the problem from one state to another. "

I am quoting this sentence because it shows the problems with your argument that I just explained.

a) reference to coincidences without quantifying them

b) confusing probability of instances within the universe (what you were referring to) with probabilities of the whole state (what I was referring to)

"This is a very, very common thing that is said in the physics community, and I made the assumption that you were familiar with the common locutions used in that community. It is a pretty natural supposition, and if it was wrong, then I ought to have started at an even more basic level in explaining it to you. I don't know if it is appropriate to apologize for just assuming you had some knowledge that is common in the profession. If it is, I apologize, and in the future I will try to avoid making any such assumption."

I literally just wrote a whole book about stupid things that physicists commonly say, like for example their insistence to refer to finetuning without writing down a probability distribution. Now I have to endure a philosopher who excuses his sloppiness by referral to common practice among physicists.

Btw, you make an appearance in the book because George Ellis names you as a "philosopher one should have a good working relation with", and look how well that is going.

" But given the quite detailed example with GHZ that I provided, am I to conclude that you even now do not understand what the term means?"

I've read about GHZ experiments/heard talks, but it's not something I deal with on a daily basis, if that's the question. But the details are irrelevant anyway. What you claim is wrong for reasons that have nothing to do with Bell's inequality or this or that specific experiment: You cannot quantify the fine-tuning of an initial state without drawing on the time-evolution. The choice of an initial state is *always* part of the model and there is nothing whatsoever wrong with it. You can't quantify the probability of an initial state - it's not possible. The only thing you can quantify is whether the initial state combined with the time-evolution provides a simplification (that you would probably call an explanation). But for this you need to look at the actual model.

Summary is: You can not say anything about the explanatory power of a superdeterministic theory without actually looking at the time-evolution of the model in question. Best,

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

In a theory of initial conditions "at a certain moment" in the framework of absolute space-time is much simpler than having to first prove the existence (rather than just assuming it) of a global Cauchy surface and no Cauchy horizons.

Likewise, Laplace merely needed to know forces, positions and velocities, which in the Newtonian framework are directly observable. The specification of an initial quantum state for the universe is conceptually more challenging, I think.

1600+ comments in the black hole thread are a kind of indicator of the difficulties, I think.

The constant refrain one hears of the (un)likelihood of initial conditions discussed ad nauseam without having a probability measure seems to be another conceptual trap, I think.

Perhaps a theory of initial conditions is an essential part of fundamental physics, more than is discussed today.

Tim Maudlin’s very first comment [*] on this thread intended to “... improve the tenor of this debate, which has indeed degenerated to a rather unenlightening dynamic”. Ironically, it was in that comment that the term “hyperfine tuning” was injected into this thread in an almost off-hand way. But disagreement about that term has caused the tenor of this “debate” to degenerate even further into a very unenlightening dynamic.

That is unfortunate. And from the muck of the commentary, I glean that no one approves of hyperfine tuning. Instead we have a passionate disagreement about its proper definition and why it’s bad.

Concerning your post to which I did not reply: I was being nice. But since you insist, let's take a look at it.

"My point was, using your words (in the hopefully right way, though your terminology is muddled), that you have not provided any evidence that superdeterministic theories do not "explain generic phenomena". (Of course they do. They're quantum mechanics with extra variables sprinkled over it. If you average over the extra variables, you get back quantum mechanics. They hence explain observations as good or as bad as quantum mechanics. That's why it's called an interpretation.)"

The phrase "quantum mechanics with extra variables sprinkled over it" is complete nonsense. If there is one thing characteristic of quantum mechanics, it is the use of a wavefunction which is (at least sometimes) governed by Schrödinger's equation. The main example we have of an attempt to create an actual superdeterministic local theory is that of 't Hooft, which has no wavefunction at all. That is by design: 't Hooft wants a local theory and the wavefunction (if it describes something physically real) characterizes a non-local beable. 't Hooft does want to derive predictions of quantum phenomena like the GHZ phenomenon from his theory by averaging some variables over a phase space, but (as is proven by Bell) in order to get the right statistics he has to very, very, very severely restrict the phase space averaged over in completely bizarre and unnatural ways. *That is hyperfine tuning*.

It is this sort of unnatural severe restriction of the phase space in order to predict some observed phenomenon that is methodologically unacceptable. It is this sort of thing that would allow a theory to predict a strong correlation between smoking and getting lung cancer even though the theory posits no causal connections between smoking and lung cancer (to the delight of the tobacco companies). It is this that would undermine standard scientific method, such as the use of double-blind random experiments to justify causal claims. That is the point Mateus has been making over and over and over, the argument that he has made and that you continue to ignore. It is this that renders "superdeterministic" theories ridiculous and silly. Because they are.

You haven't answered my questions. I assume you either cannot define "hyperfinetuning" or it dawns on you that if you did, it wouldn't support your claim that superdeterminism is "ridiculous and silly".

Now you try to shift the discussion to "unnatural severe restrictions on the phase space". Needless to say, you neither defined "unnatural" nor did you actually draw a conclusion from your definition.

To sum it up you have not been able to formulate a well-defined argument, you merely produce lots of empty words. Not that this is any news.

Let me also note in passing that you haven't even been able to apologize for missing my days-old response and erroneously accusing me of not responding.

Just how many times and in how many ways do I have to define "hyperfine tuning" for you to even take notice of it? Start with a kinematic space of possibilities. Put a dynamics on it. Now restrict the kinematic space in some way by removing initial conditions so that only certain models remain, models with some generic feature you want. That is fine-tuning the available initial conditions of a theory to recover a generic phenomenon. That is what 't Hooft does (as he admits), and if you do it in an extreme way it is hyperfine tuning. The point is to defeat the statistical independence condition used by Bell.

I don't know how many other ways to say this. This is what Bell meant by a "superdeterministic" theory. It was a bad choice of terminology on his part, because you already start with a deterministic theory before you fine tune the available initial conditions. If you mean something else by "superdeterminism" say what it is. "Quantum mechanics with extra variables sprinkled over it" is a completely obscure phrase. Which I note you do not even attempt to explicate.
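Tim's recipe (kinematic space, dynamics, removed initial conditions) can at least be given a toy quantification; here is a minimal sketch with an entirely hypothetical dynamics, counting how many bits of initial-condition freedom the restriction removes:

```python
from itertools import product
from math import log2

N = 10  # toy universe: N binary cells

def evolve(state):
    """A hypothetical local deterministic dynamics: XOR with the left
    neighbour (periodic boundary via Python's negative indexing)."""
    return tuple(state[i] ^ state[i - 1] for i in range(N))

def has_feature(state):
    """The 'generic phenomenon' we demand: all cells agree after evolution."""
    final = evolve(state)
    return all(c == final[0] for c in final)

all_states = list(product([0, 1], repeat=N))
allowed = [s for s in all_states if has_feature(s)]

# Fine-tuning measure: bits of initial-condition freedom given up by
# forbidding every state that fails to produce the phenomenon.
tuning_bits = log2(len(all_states)) - log2(len(allowed))
print(len(allowed), "of", len(all_states), "initial states survive;",
      tuning_bits, "bits of tuning")
```

Note that even this toy number can only be computed by running the dynamics over the candidate initial states, which is the point of contention in the thread: the restriction cannot be assessed independently of the time-evolution.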

I am familiar with Laplace (who is actually plagiarizing Boskovich). But that passage says nothing at all about the simplicity or complexity of the instantaneous state. As far as the demon goes, it is irrelevant for Laplace's point. So I don't see what argument you are trying to make here.

"Start with a kinematic space of possibilities. Put a dynamics on it. Now restrict the kinematic space in some way by removing initial conditions so that only certain models remain, models with some generic feature you want."

It's still sloppy, but slowly moving towards a meaningful definition.

First, as I have said many times already, there's only one initial condition for the universe. We *always* pick only one initial state for a model. Hence, talking about "removing initial conditions to get a model" is a meaningless complaint.

Second, more importantly, you cannot tell whether an initial condition leads to what you call a "generic feature" without knowing what the dynamics of the model looks like.

Please let me know if you can manage to get yourself to agree on these two points and revise your definition accordingly.

"That is what 't Hooft does..."

As I have also said many times, I am not a fan of 't Hooft's model, mostly because it's untestable. What I am trying to get across here is that the criticism you raise, if formulated in a meaningful way, cannot apply to superdeterminism in general. You can only figure out whether a superdeterministic model is explanatory by looking at the dynamics. Best,

I'm glad you think this is progress, but the progress is entirely subjective: it is progress for you because you have not been paying attention from the beginning and you are now being forced to. I'm sure that everyone else already understands everything I am about to say, but let me lay it out in detail.

First, when you say "Second, more importantly, you cannot tell whether an initial condition leads to what you call a "generic feature" without knowing what the dynamics of the model looks like."

If you honestly think that this is news to anyone, then you are very, very misinformed. Of course we are assuming, and have been assuming all along, that the dynamics is given. We have also been assuming (although this is not required) that the dynamics is deterministic. And in the case of Bell's theorem we are assuming that the dynamics is—in a perfectly well-specified sense—local. So when you make this comment as if anyone thought otherwise what it shows is that you have no idea what the debate is even about.

Second, this is also completely confused: "First, as I have said many times already, there's only one initial condition for the universe. We *always* pick only one initial state for a model. Hence, talking about "removing initial conditions to get a model" is a meaningless complaint."

Yes, there is only one initial condition for the actual universe. Congratulations. Are you really under the impression that this is news? Do you "*always* pick one initial state for a model"? Well, if you *ever* pick a precise initial condition for a model, for sure it is an unbelievably oversimplified and idealized initial state that bears only a vague resemblance to the unique actual initial state of the actual universe. The one initial condition of the actual universe is much, much, much too complex and empirically inaccessible for us ever to have it, or to calculate with it if we had it.

So one of two things must happen when making models. Either some unrealistic precise initial condition (that certainly is not the initial condition of the actual universe) is used, or else some technique is used that extracts predictions from a model even though no precise initial condition is postulated. There are clear mathematical techniques for doing the latter, and the output of those techniques does depend on the totality of all initial conditions of the theory, not on some single initial condition. And for the latter techniques, that in some sense average over a set of possible initial conditions, if you restrict the set of possible initial conditions you get different results. That is the tactic of the "superdeterminist".
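That tactic can be made concrete with a toy sketch (purely illustrative; the response functions A and B below are invented for the purpose and are not drawn from 't Hooft's or anyone else's published model). In a local deterministic hidden-variable model, averaging over one and the same ensemble of the hidden variable for every pair of detector settings respects the CHSH bound; discarding initial conditions in a settings-dependent way pushes the CHSH quantity past it:

```python
import math

def A(a, lam):
    # Alice's outcome: deterministic and local (depends only on a and lam)
    return 1 if math.cos(lam - a) >= 0 else -1

def B(b, lam):
    # Bob's outcome: deterministic and local (depends only on b and lam)
    return -1 if math.cos(lam - b) >= 0 else 1

def E(x, y, keep=None, n=20_000):
    """Correlator <A*B>, averaged over the hidden variable on a uniform grid.

    keep=None: average over the full ensemble (statistical independence).
    keep=+1 or -1: discard the lambda values whose product is "wrong" for
    this particular setting pair, i.e. a settings-dependent restriction
    of the ensemble of initial conditions.
    """
    vals = []
    for k in range(n):
        lam = 2 * math.pi * k / n
        v = A(x, lam) * B(y, lam)
        if keep is None or v == keep:
            vals.append(v)
    return sum(vals) / len(vals)

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

# One and the same ensemble for every setting pair: |S| <= 2 is respected.
S_fair = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# A different restriction for each setting pair: S is pushed all the way to 4.
S_tuned = E(a, b, +1) - E(a, b2, -1) + E(a2, b, +1) + E(a2, b2, +1)

print(abs(S_fair) <= 2.01, S_tuned)  # prints: True 4.0
```

The dynamics is untouched in both calculations; only the set of admissible hidden-variable values changes, and with it the predicted statistics.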

I have defined hyperfine tuning, A.K.A. superdeterminism, over and over in different ways, and you still have not paid enough attention to understand it. If what I have clearly defined and criticized (and Mateus has criticized) is not what you have in mind, then you have to say what you have in mind. So far all you have done is projected your own inattention to what I have written onto me, as if you don't understand because I haven't explained, rather than you not understanding because you are not paying attention. You also provided, as your own explication of "superdeterminism", only the completely obscure phrase "quantum mechanics with extra variables sprinkled over it". I have already explicated hyperfine tuning many times in different ways. How about giving us a clue about what in the world your phrase means?

"Now restrict the kinematic space in some way by removing initial conditions so that only certain models remain, models with some generic feature you want. That is fine-tuning the available initial conditions of a theory to recover a generic phenomenon. That is what 't Hooft does (as he admits), and if you do it in a extreme way it is hyperfine tuning. The point is to defeat the statistical independence condition used by Bell."

't Hooft was very clear that in order to avoid the "conspiracy" argument against superdeterminism one must allow for any initial condition to be chosen. In this paper:

https://arxiv.org/abs/quant-ph/0701097

he speaks about replacing the free-choice assumption, which does not make sense in a deterministic theory, with the "unconstrained initial state" assumption. So it is clear that the way 't Hooft tries to deny the statistical independence assumption has nothing to do with fine-tuning, or hyperfine-tuning, of the initial state.

I have already provided an example of a theory that does not allow for statistical independence even if the initial state is freely chosen, that theory being classical electromagnetism.

Say that I grant you the power to choose the initial state (positions/momenta of the charged particles) with infinite precision (better than allowed by the uncertainty principle). So you can arrange every electron and every nucleus in your detectors and source. There is still something beyond your power: the fields. The fields are given by the theory and they are unique for the initial state you have chosen. So the spins of the particles would still be dependent on the detector settings.

The irony of superdeterminism is its parallel to theological fatalism. No experiment can ever confirm or falsify it because by presupposition the parameters of the experiment itself, not to mention its outcome, are already predetermined!

You cannot falsify superdeterminism because it's not a model, it's a property of a class of models. You can only falsify the models themselves, because otherwise you don't have predictions. It's the same reason why you can't falsify, say, supersymmetry or Lorentz-invariance violation. You can only falsify models that use these assumptions.

Reading 't Hooft's papers is a useless exercise. As became clear in my (very, very long) exchange with him, he did not have a clear understanding of what "superdeterminism" even is, or how one could try to defeat the statistical independence assumption used in Bell's theorem. And once that became clear, we walked through the simplest toy model of his approach and—lo and behold—he admitted that he had to exclude some initial conditions. That is what he has been doing, and the only way his model works.

Now if you think you can articulate an actual physical model in which the state of the triple of particles being created in a GHZ experiment is so sensitive to the electromagnetic field produced by a distant pseudo-random number generator programmed to calculate the parity of the millionth digit of pi, as opposed to the electromagnetic field produced by a distant pseudo-random number generator programmed to calculate the parity of the million-and-first digit of pi, that the particles will come out in a completely different state in each case, and in each case in a state which will not violate the quantum-mechanical predictions for that particular run (i.e. the run where the detectors are set differently due to the different calculated parities), please give it a shot. Best of luck to you. Be sure to send us a message when you succeed.

But if that task instead looks manifestly hopeless to you, then you understand how crazy superdeterminism of the sort you are proposing is.

You can falsify locality even though it is a property of a class of models rather than a model, because Bell demonstrates that *every possible* local theory must satisfy a set of constraints on its empirical predictions. That's why it is such a powerful theorem. So your general principle is demonstrably false. As for superdeterminism, if it is understood as hyperfine-tuning you don't have to empirically falsify it. It is crazy. If it is not hyperfine tuning, then we are still awaiting your clear account of what it is.
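For the CHSH version of the argument, the "every possible local theory" claim can even be checked by brute force. The following sketch (illustrative only, not tied to anyone's specific model) enumerates all 16 deterministic local response strategies, in which each side's outcome is fixed by its own setting alone:

```python
from itertools import product

# A deterministic local strategy fixes Alice's outcomes (A0, A1) for her
# two settings and Bob's outcomes (B0, B1) for his, with neither side's
# outcome depending on the other side's setting. Enumerate all 16.
best = 0
for A0, A1, B0, B1 in product([+1, -1], repeat=4):
    S = A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1  # CHSH combination
    best = max(best, abs(S))

print(best)  # prints: 2
```

A stochastic local theory only mixes these strategies, so its CHSH value is a convex average and still obeys |S| ≤ 2, whereas quantum mechanics predicts (and experiment confirms) values up to 2√2.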

"If you honestly think that this is news to anyone, then you are very, very misinformed. Of course we are assuming, and have been assuming all along, that the dynamics is given. We have also been assuming (although this is not required) that the dynamics is deterministic. And in the case of Bell's theorem we are assuming that the dynamics is—in a perfectly well-specified sense—local. So when you make this comment as if anyone thought otherwise what it shows is that you have no idea what the debate is even about."

"Yes, there is only one initial condition for the actual universe. Congratulations. Are you really under the impression that this is news?"

Great, we seem to have moved on to the "I always said so" stage. No, of course I'm not saying anything new. I am merely walking you through what you should know already.

"some technique is used that extracts predictions from a model even though no precise initial condition is postulated. There are clear mathematical techniques for doing the latter, and the output of those techniques does depend on the totality of all initial conditions of the theory, not on some single initial condition."

Those techniques use probability distributions over initial states. You can use them for subsets of the universe (because you have an ensemble), not for the universe itself. I already told you above several times that you keep confusing the two things.

"And for the latter techniques, that in some sense average over a set of possible initial conditions, if you restrict the set of possible initial conditions you get different results. That is the tactic of the "superdeterminist"."

You are unclear here about which initial condition you believe the "superdeterminist" restricts. The initial condition of the universe is a postulate that is contained in the model. In this model you can derive probability distributions for initial conditions of subsets. You can only derive the latter from the former if you have a dynamical law.

Now suppose you can choose an initial state (for the universe) and your dynamical law tells you that the initial states you get in the subsets (e.g. in your lab) are generically the ones with the required correlations to violate Bell's inequality (and so on). What do you think is unscientific about this?

"How about giving us a clue about what in the world your phrase means?"

I already did this, but you didn't listen. Bell-type experiments rule out local hidden variables. They don't rule out non-local hidden variables. Superdeterministic theories postulate the existence of non-local hidden variables to reestablish determinism. They reproduce quantum mechanics when averaged over the additional variables by assumption, that was the point of doing it. That's what my phrase referred to.

Now of course on such a level the hidden variables are useless (also by assumption) and in that sense non-scientific. If that's what you mean to say, we can just agree on that. But once you write down a concrete model for superdeterminism it will result in predictions that quantum mechanics cannot make (because it's not deterministic). I don't see what's unscientific about this, hence my objection to your claim that superdeterminism is silly. Best,

"Bell-type experiments rule out local hidden variables. They don't rule out non-local hidden variables. Superdeterministic theories postulate the existence of non-local hidden variables to reestablish determinism. They reproduce quantum mechanics when averaged over the additional variables by assumption, that was the point of doing it. That's what my phrase referred to."

You don't seem to even understand the phrase "non-local hidden variables theory". The most famous non-local hidden variables theory is Bohmian mechanics, which does (of course) yield exactly the predictions of non-relativistic quantum mechanics. But it is most certainly *not* a superdeterministic theory, and is most certainly *not* a counterexample to Bell's theorem because it is (.....wait for it) non-local! Just as the name says. And Bell's theorem applies to local theories, not non-local theories.

You don't even seem to comprehend the grammar of "non-local hidden variables theory". The phrase is not meant to suggest that the *hidden variables* are non-local. In Bohmian mechanics, the hidden variables are *local* beables, they are particles that always have definite positions. It is not the "hidden variables" that are non-local (and it is not the "hidden variables" that are hidden, by the way!), it is the *quantum state* that is non-local. And since the quantum state guides the particles (via the guidance equation) the *dynamics* of the hidden variables becomes non-local. And because of that, the theory is not constrained by Bell's Theorem. Nor does it rely on any restriction of initial conditions. The violations of Bell's inequality arise spontaneously.

The phrase "hidden variables" is, as Bell says, "a piece of historical silliness". A better term would the "additional variables". Start with a theory that postulates a wavefunction, or more precisely a quantum state that is represented by the wavefunction. All quantum theories to date start with that. If you add more physical variables to the theory, those are additional variables, and only theories that postulate a quantum state deserve the name.

So: what do you mean by "non-local hidden variables"? You can't mean what everyone else does. How about an example.

Why do you say that superdeterminism requires non-local variables? The reason superdeterministic theories are not ruled out by Bell is that the statistical independence/free-choice assumption is denied, not that locality is rejected.

Once some of the particle/detector states are excluded, one can obtain the QM correlations without the requirement of non-local variables.

Consider that the standard model is a magician being watched by an experienced audience. We, the audience, are awed and mystified by the show. The question then becomes: can we read the trick book, or are we condemned to remain ignorant?

"Superdeterministic theories postulate the existence of non-local hidden variables to reestablish determinism. They reproduce quantum mechanics when averaged over the additional variables by assumption, that was the point of doing it. ….Now of course on such a level the hidden variables are useless (also by assumption) and in that sense non-scientific." One must assume that a language and grammar exist to understand the trick book. Discovering the hidden variables is the only course open. Model making with out some indication of how things are hidden progress is not possible on this path.