Cornell solid-state physicist David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".[2] Lawrence Berkeley particle physicist Henry Stapp declared: "Bell's theorem is the most profound discovery of science."[3]

Bell's theorem rules out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables). Bell concluded:

In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.[4]

Bell summarized one of the least popular ways to address the theorem, superdeterminism, in a 1985 BBC Radio interview:

There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the ‘decision’ by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster-than-light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already ‘knows’ what that measurement, and its outcome, will be.[5]

In the early 1930s, the philosophical implications of the current interpretations of quantum theory troubled many prominent physicists of the day, including Albert Einstein. In a well-known 1935 paper, Boris Podolsky and co-authors Einstein and Nathan Rosen (collectively "EPR") sought to demonstrate by the EPR paradox that QM was incomplete. This provided hope that a more-complete (and less-troubling) theory might one day be discovered. But that conclusion rested on the seemingly reasonable assumptions of locality and realism (together called "local realism" or "local hidden variables", often interchangeably). In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed. These assumptions were hotly debated in the physics community, notably between Nobel laureates Einstein and Niels Bohr.

In his groundbreaking 1964 paper, "On the Einstein Podolsky Rosen paradox",[4][6] physicist John Stewart Bell presented an analogy (based on spin measurements on pairs of entangled electrons) to EPR's hypothetical paradox. Using their reasoning, he said, a choice of measurement setting here should not affect the outcome of a measurement there (and vice versa). After providing a mathematical formulation of locality and realism based on this, he showed specific cases where this would be inconsistent with the predictions of QM theory.

In experimental tests following Bell's example, now using quantum entanglement of photons instead of electrons, John Clauser and Stuart Freedman (1972) and Alain Aspect et al. (1981) demonstrated that the predictions of QM are correct in this regard, although relying on additional unverifiable assumptions that open loopholes for local realism.

In October 2015, Hensen and co-workers[7] reported that they performed a loophole-free Bell test which might force one to reject at least one of the principles of locality, realism, or freedom (the last leads to alternative superdeterministic theories).[citation needed] Two of these logical possibilities, non-locality and non-realism, correspond to well-developed interpretations of quantum mechanics, and have many supporters; this is not the case for the third logical possibility, non-freedom. Conclusive experimental evidence of the violation of Bell's inequality would drastically reduce the class of acceptable deterministic theories but would not falsify absolute determinism, which was described by Bell himself as “... not just inanimate nature running on behind-the-scenes clockwork, but with our behaviour, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined.” However, Bell himself considered absolute determinism an implausible solution.

Bell's theorem states that any physical theory that incorporates local realism cannot reproduce all the predictions of quantum mechanical theory. Because numerous experiments agree with the predictions of quantum mechanical theory, and show differences between correlations that could not be explained by local hidden variables, the experimental results have been taken by many as refuting the concept of local realism as an explanation of the physical phenomena under test. For a hidden variable theory, if Bell's conditions are correct, the results that agree with quantum mechanical theory appear to indicate superluminal effects, in contradiction to the principle of locality.

These three key concepts – locality, realism, freedom – are highly technical and much debated. In particular, the concept of realism is now somewhat different from what it was in discussions in the 1930s. It is more precisely called counterfactual definiteness; it means that we may think of outcomes of measurements that were not actually performed as being just as much part of reality as those that were made. Locality is short for local relativistic causality. (Currently accepted quantum field theories are local in the terminology of the Lagrangian formalism and axiomatic approach.) Freedom refers to the physical possibility of determining settings on measurement devices independently of the internal state of the physical system being measured.

Illustration of Bell test for spin-half particles such as electrons. A source produces a singlet pair, one particle is sent to one location, and the other is sent to another location. A measurement of the entangled property is performed at various angles at each location. The scheme for measurements on photons looks very similar: the quantum state is different but has very similar properties.

The theorem is usually proved by consideration of a quantum system of two entangled qubits. The most common examples concern systems of particles that are entangled in spin or polarization. Quantum mechanics allows predictions of correlations that would be observed if these two particles have their spin or polarization measured in different directions. Bell showed that if a local hidden variable theory holds, then these correlations would have to satisfy certain constraints, called Bell inequalities. However, for the quantum correlations arising in the specific example considered, those constraints are not satisfied, hence the phenomenon being studied cannot be explained by a local hidden variables theory.

Following the argument in the Einstein–Podolsky–Rosen (EPR) paradox paper (but using the example of spin, as in David Bohm's version of the EPR argument[4][8]), Bell considered an experiment in which there are "a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions."[4] The two particles travel away from each other to two distant locations, at which measurements of spin are performed, along axes that are independently chosen. Each measurement yields a result of either spin-up (+) or spin-down (−); it means, spin in the positive or negative direction of the chosen axis.

The probability of the same result being obtained at the two locations depends on the relative angles at which the two spin measurements are made, and is strictly between zero and one for all relative angles other than perfectly parallel or antiparallel alignments (0° or 180°). Since total angular momentum is conserved, and since the total spin is zero in the singlet state, the probability of the same result with parallel (antiparallel) alignment is 0 (1). This last prediction is true classically as well as quantum mechanically.

Bell's theorem is concerned with correlations defined in terms of averages taken over very many trials of the experiment. The correlation of two binary variables is usually defined in quantum physics as the average of the products of the pairs of measurements. Note that this is different from the usual definition of correlation in statistics. The quantum physicist's "correlation" is the statistician's "raw (uncentered, unnormalized) product moment". They are similar in that, with either definition, if the pairs of outcomes are always the same, the correlation is +1; if the pairs of outcomes are always opposite, the correlation is -1; and if the pairs of outcomes agree 50% of the time, then the correlation is 0. The correlation is related in a simple way to the probability of equal outcomes, namely it is equal to twice the probability of equal outcomes, minus one.
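The relation stated above can be checked directly: since the product of two ±1 outcomes is +1 exactly when they agree and −1 otherwise, the average of products must equal twice the probability of agreement minus one. The sketch below uses made-up outcome pairs purely for illustration, not data from any real experiment:

```python
import random

random.seed(0)

# Illustrative ±1 outcome pairs (made up, not from a real experiment).
pairs = [(random.choice([+1, -1]), random.choice([+1, -1]))
         for _ in range(100_000)]

# Quantum physicist's "correlation": the average of the products of the pairs.
correlation = sum(a * b for a, b in pairs) / len(pairs)

# Probability that the two outcomes of a pair are equal.
p_equal = sum(1 for a, b in pairs if a == b) / len(pairs)

# The relation stated above: correlation = 2 * P(equal) - 1.
print(correlation, 2 * p_equal - 1)
```

The two printed numbers coincide (up to floating-point rounding) for any set of ±1 pairs, whatever their statistics.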

When the spin of these entangled particles is measured along anti-parallel directions (i.e., along the same axis but in opposite directions), the set of all results is perfectly correlated. On the other hand, if measurements are performed along parallel directions, they always yield opposite results, and the set of measurements shows perfect anti-correlation. This is in accord with the above stated probabilities of measuring the same result in these two cases. Finally, measurement at perpendicular directions has a 50% chance of matching, and the total set of measurements is uncorrelated. These basic cases are illustrated in the table below. Columns should be read as examples of pairs of values that could be recorded by Alice and Bob with time increasing going to the right.

The best possible local realist imitation (red) for the quantum correlation of two spins in the singlet state (blue), insisting on perfect anti-correlation at zero degrees, perfect correlation at 180 degrees. Many other possibilities exist for the classical correlation subject to these side conditions, but all are characterized by sharp peaks (and valleys) at 0, 180, 360 degrees, and none has more extreme values (±0.5) at 45, 135, 225, 315 degrees. These values are marked by stars in the graph, and are the values measured in a standard Bell-CHSH type experiment: QM allows ±1/√2 = ±0.7071..., local realism predicts ±0.5 or less.

With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables could agree with a linear dependence of the correlation in the angle but, according to Bell's inequality (see below), could not agree with the dependence predicted by quantum mechanical theory, namely, that the correlation is the negative cosine of the angle. Experimental results match the curve predicted by quantum mechanics.[1]

Over the years, Bell's theorem has undergone a wide variety of experimental tests. However, various common deficiencies in the testing of the theorem have been identified, including the detection loophole[9] and the communication loophole.[9] Over the years experiments have been gradually improved to better address these loopholes. In 2015, the first experiment to simultaneously address all of the loopholes was performed.[7]

To date, Bell's theorem is generally regarded as supported by a substantial body of evidence and there are few supporters of local hidden variables, though the theorem is continually the subject of study, criticism, and refinement.[10][11]

Bell's theorem, derived in his seminal 1964 paper titled On the Einstein Podolsky Rosen paradox,[4] has been called, on the assumption that the theory is correct, "the most profound in science".[12] Perhaps of equal importance is Bell's deliberate effort to encourage and bring legitimacy to work on the completeness issues, which had fallen into disrepute.[13] Later in his life, Bell expressed his hope that such work would "continue to inspire those who suspect that what is proved by the impossibility proofs is lack of imagination."[13]

The title of Bell's seminal article refers to the 1935 paper by Einstein, Podolsky and Rosen[14] that challenged the completeness of quantum mechanics. In his paper, Bell started from the same two assumptions as did EPR, namely (i) reality (that microscopic objects have real properties determining the outcomes of quantum mechanical measurements), and (ii) locality (that reality in one location is not influenced by measurements performed simultaneously at a distant location). Bell was able to derive from those two assumptions an important result, namely Bell's inequality. The theoretical (and later experimental) violation of this inequality implies that at least one of the two assumptions must be false.

In two respects Bell's 1964 paper was a step forward compared to the EPR paper: firstly, it considered more hidden variables than merely the element of physical reality in the EPR paper; secondly, Bell's inequality was amenable, in part, to experimental test, thus raising the possibility of testing the local realism hypothesis. Limitations on such tests to date are noted below. Whereas Bell's paper deals only with deterministic hidden variable theories, Bell's theorem was later generalized to stochastic theories[15] as well, and it was also realised[16] that the theorem is not so much about hidden variables, as about the outcomes of measurements that could have been taken instead of the one actually taken. Existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.

After the EPR paper, quantum mechanics was in an unsatisfactory position: either it was incomplete, in the sense that it failed to account for some elements of physical reality, or it violated the principle of a finite propagation speed of physical effects. In a modified version of the EPR thought experiment, two hypothetical observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It is the conclusion of EPR that once Alice measures spin in one direction (e.g. on the x axis), Bob's measurement in that direction is determined with certainty, as being the opposite outcome to that of Alice, whereas immediately before Alice's measurement Bob's outcome was only statistically determined (i.e., was only a probability, not a certainty); thus, either the spin in each direction is an element of physical reality, or the effects travel from Alice to Bob instantly.

In QM, predictions are formulated in terms of probabilities — for example, the probability that an electron will be detected in a particular place, or the probability that its spin is up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM's weakness is its inability to predict those values precisely. The possibility existed that some unknown theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilities predicted by QM. If such a hidden variables theory exists, then because the hidden variables are not described by QM the latter would be an incomplete theory.

The concept of local realism is formalized to state and prove Bell's theorem and generalizations. A common approach is the following:

There is a probability space Λ, and the outcomes observed by both Alice and Bob result from random sampling of the (unknown, "hidden") parameter λ ∈ Λ.

The values observed by Alice or Bob are functions of the local detector settings and the hidden parameter only. Thus, there are functions A, B : S² × Λ → {−1, +1}, where a detector setting is modeled as a location on the unit sphere S², such that

The value observed by Alice with detector setting a is A(a, λ)

The value observed by Bob with detector setting b is B(b, λ)

Perfect anti-correlation would require B(c, λ) = −A(c, λ) for all c ∈ S². Implicit in assumption 1) above, the hidden parameter space Λ has a probability measure μ and the expectation of a random variable X on Λ with respect to μ is written

where for accessibility of notation we assume that the probability measure has a probability density p, which therefore is nonnegative and integrates to 1. The hidden parameter is often thought of as being associated with the source, but it can just as well contain components associated with the two measurement devices.
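The expectation integral can be approximated by Monte Carlo sampling of λ. The sketch below uses Bell's own illustrative local model (the hidden parameter λ is a uniformly random unit vector, and each outcome is the sign of the dot product of λ with the detector setting); this is one concrete local-realist model among many, chosen only to make the formalism tangible:

```python
import math
import random

random.seed(1)

def random_unit_vector():
    """A uniformly random point on the unit sphere: the hidden parameter λ."""
    while True:
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        n = math.sqrt(x * x + y * y + z * z)
        if 0 < n <= 1:  # rejection sampling inside the unit ball
            return (x / n, y / n, z / n)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def A(setting, lam):
    # Alice's outcome: a deterministic function of her setting and λ only.
    return 1 if dot(setting, lam) >= 0 else -1

def B(setting, lam):
    # Bob's outcome: chosen so that equal settings are perfectly anti-correlated.
    return -A(setting, lam)

def E(a, b, trials=100_000):
    """Monte Carlo approximation of the expectation ∫ A(a,λ) B(b,λ) p(λ) dλ."""
    total = 0
    for _ in range(trials):
        lam = random_unit_vector()
        total += A(a, lam) * B(b, lam)
    return total / trials

z_axis = (0.0, 0.0, 1.0)
x_axis = (1.0, 0.0, 0.0)

e_equal = E(z_axis, z_axis)  # exactly -1: perfect anti-correlation at equal settings
e_perp = E(z_axis, x_axis)   # ≈ 0 at 90°; at intermediate angles this model is
                             # linear in the angle, while QM predicts -cos θ
print(e_equal, e_perp)
```

The model reproduces the perfect anti-correlation case but gives only a straight-line angular dependence at intermediate settings, which is exactly the kind of discrepancy with the quantum cosine curve that Bell's inequality exploits.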

Bell inequalities concern measurements made by observers on pairs of particles that have interacted and then separated. Assuming local realism, certain constraints must hold on the relationships between the correlations of subsequent measurements of the particles under various possible measurement settings. Let A and B be as above. Define, for the present purposes, three correlation functions:

where N++ is the number of measurements yielding "spin up" in the direction of a measured by Alice (first subscript +) and "spin up" in the direction of b measured by Bob. The other occurrences of N are analogously defined.
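The count-based estimate of the correlation, consistent with the definition of N++ above, assigns +1 to same-sign coincidences and −1 to opposite-sign coincidences; the counts below are made up purely for illustration:

```python
# Hypothetical coincidence counts for one pair of settings (a, b);
# the numbers are invented for illustration only.
N_pp, N_mm, N_pm, N_mp = 4200, 4300, 800, 700  # ++, --, +-, -+

N_total = N_pp + N_mm + N_pm + N_mp

# Standard count-based estimate of the correlation: same-sign pairs
# contribute +1 to the average of products, opposite-sign pairs -1.
C = (N_pp + N_mm - N_pm - N_mp) / N_total
print(C)  # 0.7 for these counts
```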

Let Cq(a, b) denote the correlation as predicted by quantum mechanics. This is given by the expression

The way single-particle operators act on the product space is illustrated by the example at hand: one defines the tensor product of operators, where the factors are single-particle operators. Thus, if Π, Ω are single-particle operators,

etc., where the superscript in parentheses indicates on which Hilbert space in the tensor product space the action is intended and the action is defined by the right hand side. The singlet state has total spin 0 as may be verified by application of the operator of total spin J · J = (J1 + J2) ⋅ (J1 + J2) by a calculation similar to that presented below.
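That the singlet has total spin 0 can be verified numerically. The sketch below builds the total-spin components from tensor products of single-particle spin operators (with ħ = 1, so J_k = σ_k/2) and applies J·J to the singlet state; the matrix helpers are minimal stand-ins for a linear-algebra library:

```python
import math

# Spin-1/2 operators (ħ = 1): J_k = σ_k / 2 for the Pauli matrices.
Jx = [[0, 0.5], [0.5, 0]]
Jy = [[0, -0.5j], [0.5j, 0]]
Jz = [[0.5, 0], [0, -0.5]]
I2 = [[1, 0], [0, 1]]

def kron(A, B):
    """Kronecker (tensor) product of two matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

# Total spin components J_k = J_k^(1) ⊗ I + I ⊗ J_k^(2).
J_total = [madd(kron(J, I2), kron(I2, J)) for J in (Jx, Jy, Jz)]

# Singlet state (|↑↓⟩ - |↓↑⟩)/√2 in the basis |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩.
s = 1 / math.sqrt(2)
singlet = [0, s, -s, 0]

# Apply J·J = Jx² + Jy² + Jz² to the singlet: the result should vanish,
# confirming that the total spin is 0.
result = [0, 0, 0, 0]
for J in J_total:
    Jv = mvec(J, mvec(J, singlet))
    result = [r + c for r, c in zip(result, Jv)]

print(max(abs(c) for c in result))  # ~0
```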

whatever the particular characteristics of the hidden variable theory, as long as it abides by the rules of local realism as defined above. That is to say, no hidden variable theory can make the same predictions as quantum mechanics.

has never been found. That is to say, the predictions of quantum mechanics have never been falsified by experiment. These experiments include some that can rule out local hidden variable theories. But see below on possible loopholes.

where a, b and c refer to three arbitrary settings of the two analysers. This inequality is however restricted in its application to the rather special case in which the outcomes on both sides of the experiment are always exactly anticorrelated whenever the analysers are parallel. The advantage of restricting attention to this special case is the resulting simplicity of the derivation. In experimental work the inequality is not very useful because it is hard, if not impossible, to create perfect anti-correlation.

This simple form does have the virtue of being quite intuitive. It is easily seen to be equivalent to the following elementary result from probability theory. Consider three (highly correlated, and possibly biased) coin-flips X, Y, and Z, with the property that:

X and Y give the same outcome (both heads or both tails) 99% of the time

Y and Z also give the same outcome 99% of the time,

then X and Z must also yield the same outcome at least 98% of the time. The number of mismatches between X and Y (1/100) plus the number of mismatches between Y and Z (1/100) are together the maximum possible number of mismatches between X and Z (a simple Boole–Fréchet inequality).
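The Boole–Fréchet bound holds pointwise for every possible joint outcome of the three coins, which is all one needs: averaging a pointwise inequality over any joint distribution preserves it. An exhaustive check over the eight cases:

```python
from itertools import product

# Exhaustive check over all eight joint outcomes of three coins:
# whenever X and Z disagree, at least one of (X, Y) or (Y, Z) must disagree.
for x, y, z in product(["H", "T"], repeat=3):
    assert (x != z) <= ((x != y) + (y != z))  # pointwise Boole–Fréchet bound

# Averaging over any joint distribution therefore gives
#   P(X != Z) <= P(X != Y) + P(Y != Z),
# so 1% + 1% mismatches allow at most a 2% mismatch between X and Z.
print("bound holds for all 8 outcomes")
```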

Imagine a pair of particles that can be measured at distant locations. Suppose that the measurement devices have settings, which are angles—e.g., the devices measure something called spin in some direction. The experimenter chooses the directions, one for each particle, separately. Suppose the measurement outcome is binary (e.g., spin up, spin down). Suppose the two particles are perfectly anti-correlated—in the sense that whenever both are measured in the same direction, they give identically opposite outcomes, and whenever both are measured in opposite directions, they always give the same outcome. The only way to imagine how this works is that both particles leave their common source carrying, somehow, the outcomes they will deliver when measured in any possible direction. (How else could particle 1 know how to deliver the opposite answer to particle 2 when measured in the same direction? They don't know in advance how they are going to be measured...) The measurement on particle 2 (after switching its sign) can be thought of as telling us what the same measurement on particle 1 would have given.

Start with one setting exactly opposite to the other. All the pairs of particles give the same outcome (each pair is either both spin up or both spin down). Now shift Alice's setting by one degree relative to Bob's. They are now one degree off being exactly opposite to one another. A small fraction of the pairs, say f, now give different outcomes. If instead we had left Alice's setting unchanged but shifted Bob's by one degree (in the opposite direction), then again a fraction f of the pairs of particles turns out to give different outcomes. Finally consider what happens when both shifts are implemented at the same time: the two settings are now exactly two degrees away from being opposite to one another. By the mismatch argument, the chance of a mismatch at two degrees can't be more than twice the chance of a mismatch at one degree: it cannot be more than 2f.

Compare this with the predictions from quantum mechanics for the singlet state. For a small angle θ, measured in radians, the chance of a different outcome is approximately f1 = θ²/2, as given by the small-angle approximation. At twice this small angle, the chance of a mismatch is therefore about 4 times larger, since f2 = (2θ)²/2 = 4(θ²/2) ≈ 4f1. But we just argued that it cannot be more than 2 times as large.
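The clash can be made concrete with the quadratic small-angle law quoted above: doubling the angle quadruples the mismatch probability, while local realism allows at most a doubling. The overall constant in the quadratic law is irrelevant to the argument, since it cancels in the ratio:

```python
import math

def f(theta):
    """Quantum mismatch probability in the small-angle regime, f(θ) = θ²/2."""
    return theta ** 2 / 2

theta = math.radians(1)  # one degree, expressed in radians

f1 = f(theta)      # mismatch chance at one degree
f2 = f(2 * theta)  # mismatch chance at two degrees

print(f2 / f1)       # 4.0: quadrupling, not the doubling local realism allows
print(f2 <= 2 * f1)  # False: the local-realist bound f2 <= 2*f1 is violated
```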

This intuitive formulation is due to David Mermin. The small-angle limit is discussed in Bell's original article, and therefore goes right back to the origin of the Bell inequalities.

Generalizing Bell's original inequality,[4] John Clauser, Michael Horne, Abner Shimony and R. A. Holt introduced the CHSH inequality,[17] which puts classical limits on the set of four correlations in Alice and Bob's experiment, without any assumption of perfect correlations (or anti-correlations) at equal settings.

Making the special choice a′ = a + π, denoting b′ = c, and assuming perfect anti-correlation at equal settings and perfect correlation at opposite settings, so that ρ(a, a + π) = 1 and ρ(b, a + π) = −ρ(b, a), the CHSH inequality reduces to the original Bell inequality. Nowadays, (1) is also often simply called "the Bell inequality", but sometimes more completely "the Bell–CHSH inequality".

The CHSH inequality can be derived as follows. Each of the four quantities is ±1 and each depends on λ. It follows that for any λ ∈ Λ, one of B + B′ and B − B′ is zero, and the other is ±2. From this it follows that
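The key algebraic step can be verified by brute force: for every assignment of ±1 values to the four quantities at a fixed λ, the CHSH combination equals ±2, so its average over λ can never exceed 2 in magnitude:

```python
from itertools import product

# Enumerate every assignment of ±1 to A, A', B, B' (i.e., every possible
# value of the four quantities at a fixed hidden parameter λ).
values = set()
for A, A_, B, B_ in product([+1, -1], repeat=4):
    # One of B + B' and B - B' is 0 and the other is ±2, so the combination
    # A(B + B') + A'(B - B') is always ±2.
    values.add(A * (B + B_) + A_ * (B - B_))

print(values)  # {2, -2}
```

Averaging a quantity that is pointwise ±2 over any distribution of λ gives an expectation between −2 and +2, which is the CHSH bound.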

The CHSH inequality is seen to depend only on the following three key features of a local hidden variables theory: (1) realism: alongside of the outcomes of actually performed measurements, the outcomes of potentially performed measurements also exist at the same time; (2) locality, the outcomes of measurements on Alice's particle don't depend on which measurement Bob chooses to perform on the other particle; (3) freedom: Alice and Bob can indeed choose freely which measurements to perform.

The realism assumption is actually somewhat idealistic, and Bell's theorem only proves non-locality with respect to variables that only exist for metaphysical reasons. However, before the discovery of quantum mechanics, both realism and locality were completely uncontroversial features of physical theories.

The measurements performed by Alice and Bob are spin measurements on electrons. Alice can choose between two detector settings labeled a and a′; these settings correspond to measurement of spin along the z or the x axis. Bob can choose between two detector settings labeled b and b′; these correspond to measurement of spin along the z′ or x′ axis, where the x′ − z′ coordinate system is rotated 135° relative to the x − z coordinate system. The spin observables are represented by the 2 × 2 self-adjoint matrices:

The operators B(b′), B(b) correspond to Bob's spin measurements along x′ and z′. Note that the A operators commute with the B operators, so we can apply our calculation for the correlation. In this case, we can show that the CHSH inequality fails. In fact, a straightforward calculation shows that[citation needed]

Bell's Theorem: If the quantum mechanical formalism is correct, then the system consisting of a pair of entangled electrons cannot satisfy the principle of local realism. Note that 2√2 is indeed the upper bound for quantum mechanics called Tsirelson's bound. The operators giving this maximal value are always isomorphic to the Pauli matrices.
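The 2√2 value can be reproduced numerically. The detector settings below (spin along z and x for Alice, and along directions rotated by 135° for Bob, i.e. B(b) = −(σz + σx)/√2 and B(b′) = (σz − σx)/√2) are the standard choice matching the description above; they are assumed here, since the explicit matrices were not preserved in the text:

```python
import math

# Pauli matrices as plain lists of lists.
sz = [[1, 0], [0, -1]]
sx = [[0, 1], [1, 0]]

def kron(A, B):
    """Kronecker (tensor) product of two matrices."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def expval(M, psi):
    """⟨ψ|M|ψ⟩ for a real state vector ψ."""
    Mpsi = [sum(M[i][j] * psi[j] for j in range(len(psi))) for i in range(len(psi))]
    return sum(p * m for p, m in zip(psi, Mpsi))

r = 1 / math.sqrt(2)

# Alice measures spin along z (setting a) or x (setting a').
Aa, Aa_ = sz, sx
# Bob's settings, rotated 135°: B(b) = -(σz+σx)/√2, B(b') = (σz-σx)/√2.
Bb = [[-r, -r], [-r, r]]
Bb_ = [[r, -r], [-r, -r]]

# Singlet state (|↑↓⟩ - |↓↑⟩)/√2 in the basis |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩.
singlet = [0, r, -r, 0]

def E(A, B):
    return expval(kron(A, B), singlet)

# CHSH combination: each correlation is ±1/√2, and the signs conspire.
S = E(Aa, Bb) + E(Aa_, Bb) + E(Aa_, Bb_) - E(Aa, Bb_)
print(S, 2 * math.sqrt(2))  # both ≈ 2.828..., exceeding the classical bound of 2
```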

Scheme of a "two-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a two-channel polariser whose orientation (a or b) can be set by the experimenter. Emerging signals from each channel are detected and coincidences of four types (++, −−, +− and −+) counted by the coincidence monitor.

Experimental tests can determine whether the Bell inequalities required by local realism hold up to the empirical evidence.

Actually, most experiments have been performed using polarization of photons rather than spin of electrons (or other spin-half particles). The quantum state of the pair of entangled photons is not the singlet state, and the correspondence between angles and outcomes is different from that in the spin-half set-up. The polarization of a photon is measured in a pair of perpendicular directions. Relative to a given orientation, polarization is either vertical (denoted by V or by +) or horizontal (denoted by H or by -). The photon pairs are generated in the quantum state

where |V⟩ and |H⟩ denote the state of a single vertically or horizontally polarized photon, respectively (relative to a fixed and common reference direction for both particles).

When the polarization of both photons is measured in the same direction, both give the same outcome: perfect correlation. When measured at directions making an angle 45 degrees with one another, the outcomes are completely random (uncorrelated). Measuring at directions at 90 degrees to one another, the two are perfectly anti-correlated. In general, when the polarizers are at an angle θ to one another, the correlation is cos(2θ). So relative to the correlation function for the singlet state of spin half particles, we have a positive rather than a negative cosine function, and angles are halved: the correlation is periodic with period π instead of 2π.
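The cos(2θ) correlation can be checked with a short calculation. The entangled state is taken here to be (|HH⟩ + |VV⟩)/√2, the standard choice consistent with the correlations just described (an assumption, since the displayed state above was lost); the ±1 polarization observable at angle θ is 2|θ⟩⟨θ| − I with |θ⟩ = cos θ|H⟩ + sin θ|V⟩:

```python
import math

def kron(A, B):
    """Kronecker (tensor) product of 2x2 matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def expval(M, psi):
    """⟨ψ|M|ψ⟩ for a real state vector ψ."""
    Mpsi = [sum(M[i][j] * psi[j] for j in range(len(psi))) for i in range(len(psi))]
    return sum(p * m for p, m in zip(psi, Mpsi))

def polarizer(angle):
    """±1 observable for linear polarization at `angle`, in the H/V basis:
    2|θ⟩⟨θ| - I with |θ⟩ = cos θ|H⟩ + sin θ|V⟩."""
    c, s = math.cos(2 * angle), math.sin(2 * angle)
    return [[c, s], [s, -c]]

# Entangled photon pair (|HH⟩ + |VV⟩)/√2, assumed as described above,
# in the basis |HH⟩, |HV⟩, |VH⟩, |VV⟩.
r = 1 / math.sqrt(2)
state = [r, 0, 0, r]

def C(alpha, beta):
    """Correlation of polarization measurements at angles alpha and beta."""
    return expval(kron(polarizer(alpha), polarizer(beta)), state)

for deg in (0, 45, 90, 22.5):
    theta = math.radians(deg)
    print(deg, C(0.0, theta), math.cos(2 * theta))  # correlation equals cos(2θ)
```

The three special cases in the text are recovered: +1 at 0° (perfect correlation), 0 at 45° (uncorrelated), and −1 at 90° (perfect anti-correlation).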

Bell's inequalities are tested by "coincidence counts" from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The setting (orientations) of the analysers are selected by the experimenter.

The fair sampling problem was faced openly in the 1970s. In early designs of their 1973 experiment, Freedman and Clauser[18] used fair sampling in the form of the Clauser–Horne–Shimony–Holt (CHSH[17]) hypothesis. However, shortly afterwards Clauser and Horne[15] made the important distinction between inhomogeneous (IBI) and homogeneous (HBI) Bell inequalities. Testing an IBI requires that certain coincidence rates in two separated detectors be compared with the singles rates of the two detectors. No experiment was needed to check this, because the singles rates with all detectors available in the 1970s were at least ten times all the coincidence rates; taking this low detector efficiency into account, the QM prediction actually satisfied the IBI. To arrive at an experimental design in which the QM prediction violates the IBI requires detectors whose efficiency exceeds 82.8% for singlet states,[19] while also having very low dark rates and short dead and resolving times. This is now within reach.

Because, at that time, even the best detectors didn't detect a large fraction of all photons, Clauser and Horne[15] recognized that testing Bell's inequality required some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):

A light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.

Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers.

The experiment was performed by Freedman and Clauser,[18] who found that Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden variables model.

While early experiments used atomic cascades, later experiments have used parametric down-conversion, following a suggestion by Reid and Walls,[20] giving improved generation and detection properties. As a result, the most recent experiments with photons no longer suffer from the detection loophole (see Bell test experiments). This makes the photon the first experimental system for which all main experimental loopholes have been surmounted, albeit presently only in separate experiments (Giustina et al. (2013), Bell violation using entangled photons without the fair-sampling assumption, Nature 497, 227–230; B.G. Christensen et al. (2013), Detection-Loophole-Free Test of Quantum Nonlocality, and Applications, arXiv:1306.5772).

Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories.[21]

If the hidden variables can communicate with each other faster than light, Bell's inequality can easily be violated. Once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process that travels backwards in time along the past light cone. This is the idea behind a transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time.[22]

A possible (but not universally accepted) solution is offered by the many worlds theory of quantum mechanics. According to this, not only is collapse of the wave function illusory, but the apparent random branching of possible futures when quantum systems interact with the macroscopic world is also an illusion. Measurement does not lead to a random choice of possible outcome; rather, the only ingredient of quantum mechanics is the unitary evolution of the wave function. All possibilities co-exist forever and the only reality is the quantum mechanical wave function. According to this view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of observer B will be seen by observer A when they compare notes. If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up.

This point underscores that the argument that realism is incompatible with quantum mechanics and locality depends on a particular formalization of the concept of realism. In its weakest form, the assumption underpinning that formalization is called counterfactual definiteness: the assumption that outcomes of measurements that are not performed are just as real as those of measurements that were performed. Counterfactual definiteness is an uncontroversial property of all classical physical theories prior to quantum theory, due to their determinism. Many-worlds interpretations are not only counterfactually indefinite, but also factually indefinite: the results of all experiments, even ones that have been performed, are not uniquely determined.

If one chooses to reject counterfactual definiteness, reality has been made smaller, and there is no non-locality problem. On the other hand, one is thereby introducing irreducible or intrinsic randomness into our picture of the world: randomness that cannot be "explained" as merely the reflection of our ignorance of underlying, variable, physical quantities. Non-determinism becomes a fundamental property of nature.

Assuming counterfactual definiteness, reality has been enlarged, and there is a non-locality problem. On the other hand, in the many-worlds interpretation of quantum mechanics, reality consists only of a deterministically evolving wave function and non-locality is a non-issue.

There have also been repeated claims that Bell's arguments are irrelevant because they depend on hidden assumptions that, in fact, are questionable. For example, E. T. Jaynes[25] claimed in 1989 that there are two hidden assumptions in Bell's theorem that could limit its generality. According to him:

Bell interpreted the conditional probability P(X|Y) as a causal inference, i.e., that Y exerted a causal influence on X in reality. However, P(X|Y) actually means only logical inference (induction). Causes cannot travel faster than light or backward in time, but logical inference can. In Bell's derivation, locality enters as the factorization P(A,B|a,b,λ) = P(A|a,λ)P(B|b,λ); on Jaynes's reading, this condition expresses logical independence of the two outcomes given λ, which he argued is not the same thing as the absence of causal influence between the distant measurements.

Bell's inequality does not apply to all possible hidden-variable theories; it applies only to a certain class of local hidden-variable theories. In fact, it might have just missed the kind of hidden-variable theory that Einstein was most interested in.

However, Richard D. Gill has argued that Jaynes misunderstood Bell's analysis. Gill points out that, in the same conference volume in which Jaynes argues against Bell, Jaynes confesses to being extremely impressed by a short proof, presented at the same conference by Steve Gull, that the singlet correlations could not be reproduced by a computer simulation of a local hidden-variable theory.[26] According to Jaynes (writing nearly 30 years after Bell's landmark contributions), it would probably take us another 30 years to fully appreciate Gull's stunning result.

A recent flurry of activity about implications for determinism arose with the paper The Free Will Theorem,[27] which states: "the response of a spin-1 particle to a triple experiment is free—that is to say, is not a function of properties of that part of the universe that is earlier than this response with respect to any given inertial frame."[28] This theorem raised awareness of a tension between determinism fully governing an experiment (on the one hand) and Alice and Bob being free to choose any settings they like for their observations (on the other).[29][30] The philosopher David Hodgson supports this theorem as showing that determinism is unscientific, and that quantum mechanics allows observers (at least in some instances) the freedom to make observations of their choosing, thereby leaving the door open for free will.[31]

The violations of Bell's inequalities, due to quantum entanglement, provide near-definitive demonstrations of something that was already strongly suspected: that quantum physics cannot be represented by any version of the classical picture of physics.[32] Some earlier elements that had seemed incompatible with classical pictures included complementarity and wavefunction collapse. The Bell violations show that no resolution of such issues can avoid the ultimate strangeness of quantum behavior.[33]

The EPR paper "pinpointed" the unusual properties of the entangled states, e.g. the above-mentioned singlet state, which is the foundation for present-day applications of quantum physics, such as quantum cryptography; one application involves the measurement of quantum entanglement as a physical source of bits for Rabin's oblivious transfer protocol. This non-locality was originally supposed to be illusory, because the standard interpretation could easily do away with action-at-a-distance by simply assigning to each particle definite spin-states for all possible spin directions. The EPR argument was that such definite states must therefore exist, and that quantum theory is therefore incomplete, since they do not appear in the theory. Bell's theorem showed that the "entangledness" prediction of quantum mechanics has a degree of non-locality that cannot be explained away by any local theory.

What is powerful about Bell's theorem is that it doesn't refer to any particular physical theory. It shows that nature violates the most general assumptions behind classical pictures, not just details of some particular models. No combination of local deterministic and local random variables can reproduce the phenomena predicted by quantum mechanics and repeatedly observed in experiments.[34]
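The quantitative content of this claim can be illustrated with the CHSH form of Bell's inequality. The sketch below (illustrative, not from the original text) compares the quantum singlet-state correlation E(a, b) = −cos(a − b) against a simple local hidden-variable model in which each pair carries a hidden angle λ; every local model obeys |S| ≤ 2, while quantum mechanics reaches 2√2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Quantum prediction for the singlet state.
def E_quantum(a, b):
    return -np.cos(a - b)

# A simple local hidden-variable model: each pair carries a hidden
# angle lam, and each side computes +/-1 from its own setting and lam.
def E_local(a, b, n=200_000):
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.sign(np.cos(a - lam))
    B = -np.sign(np.cos(b - lam))
    return np.mean(A * B)

# CHSH combination S at the standard angle choices; any local
# hidden-variable theory satisfies |S| <= 2.
def chsh(E):
    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(chsh(E_quantum)))  # 2*sqrt(2) ≈ 2.83: violates the bound
print(abs(chsh(E_local)))    # ≈ 2: local models obey |S| <= 2
```

The local model happens to sit exactly at the classical bound for these angles; no choice of hidden-variable distribution or response functions can push a local model past 2, whereas the quantum value 2√2 is what the experiments confirm.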

C.B. Parker (1994). McGraw-Hill Encyclopaedia of Physics (2nd ed.). McGraw-Hill. p. 542. ISBN 0-07-051400-3. Bell himself wrote: "If [a hidden variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local. This is what the theorem says." John Bell, Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 1987, p. 65.