As I was stating at the end of part one, I am grateful to Joy Christian for giving such a detailed answer in “Quantum Music from a Classical Sphere.” I was puzzled by his arguments and was preparing several short replies to various points when I realized this was getting out of hand, and that it would be better to write a single coherent reply in a new post.

Let me start by saying that the topics we are presenting here are hard, and there is no absolute agreement even among experts on the very definitions of what one means by locality, realism, contextuality, or a no-go theorem for hidden variable theories. Some people consider quantum mechanics local and non-realistic, while others insist on non-locality. Contextual hidden variable theories encompass a wide and ill-defined domain that is probably best characterized by their opposition to the much more sharply defined non-contextual hidden variable theories. Some people think that non-contextual hidden variable theories were rejected by the theorems of Gleason and Kochen-Specker, and that Bell’s theorem is important only for rejecting some additional contextual hidden variable theories. Others point out that the Kochen-Specker theorem cannot be put to any experimental test, and that the importance of Bell’s theorem lies in rejecting non-contextual hidden variable theories. Finally, some people think contextual hidden variable theories are unphysical, while others think they are the future of physics.

To cut through this confusing mess, the best approach is to rely on accurate historical narratives. This provides a stable setting for the meanings of the terms used, which allows a common ground where we can compare apples to apples. It is also best to settle disputes with experimental results rather than debates, however interesting those may be. Last, speculation about future results is best left undiscussed. If these tenets are followed, charges of “alarming confusions” and a fruitless dialogue can be avoided.

With this preparation, let me attempt to explain what contextuality for hidden variables is, why Joy’s result is contextual, and what it all means.

First, some preliminaries. Earlier I discussed the EPR paper and then jumped to Bell’s contribution. Historically, however, Bohr also made an early critical observation in his attempt to refute EPR’s argument. Bohr was using the good old-fashioned complementarity principle: in some experiments particles behave like waves, while in others they behave like point particles, but one cannot see both behaviors in the same experiment. This, he argued, invalidates EPR’s reasoning, in which one behavior is measured on one arm of the experiment while the complementary behavior is measured on the other. To justify this experimentally, Bohr noted that observations depend not only on the state of the system but also on the disposition of the apparatus. And indeed, no physicist is ever going to deny this. Joy mentioned general relativity, and I can also point to electrostatics and any other theory requiring boundary value problems. Joy considers this “contextuality,” but I would argue that this is clearly an overreaching usage of the term, which in hidden variable theories has a very different meaning that I will define later in its historical context. For the sake of clarity, let’s call this “Bohr’s contextuality.”

As a supporting argument for his position, Joy mentioned Bell’s classic paper “On the problem of hidden variables in quantum mechanics.” However, Bell’s paper has a slightly different intention than Joy presented. Bell wrote no abstract for this paper, so please allow me to explain how I understand it.

Einstein once said: “God does not play dice” and Bohr replied: “Do not tell God what to do.” In a similar vein, I would say Bell’s main idea of the paper is: “Do not tell me how to construct my hidden variable theories.” Requirements which are obeyed by quantum mechanics are not necessarily obeyed by hidden variable theories, which should only reconstruct quantum mechanics’ predictions.

In the interest of time and space, let me discuss only Bell’s criticism of von Neumann and Gleason in this paper. Von Neumann had enormous influence in quantum mechanics due to his seminal work on axiomatizing it. He produced a “no-go” theorem on hidden variable theories which Bell found unjustified in its assumptions. In particular, von Neumann required that a linear combination of expectation values be the expectation value of the combination. While true in quantum mechanics, this is too strong a demand for any particular value of the hidden variable, as an elementary example shows (from Reflections on Relativity by Kevin Brown):

Suppose we have two variables X and Y and two hidden variable values, 1 and 2. Suppose for hidden variable 1: X=2, Y=5, and X+Y=5, while for hidden variable 2: X=4, Y=3, and X+Y=9. Averaging over hidden variables 1 and 2, avg(X)=3, avg(Y)=4, and avg(X+Y)=7, so the sum of the averages is the average of the sums. For each individual hidden variable, however, this property fails. (If you are worried about the funny math of X+Y=5 when X=2 and Y=5: “X”, “Y”, and “X+Y” are three separate experiments requiring separate experimental setups, and the number on the right-hand side of the equals sign is the experimental outcome. The “+” in “X+Y” is only part of the label of an experimental setup, not a genuine plus operator.)
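
Brown’s toy example is small enough to check mechanically. The sketch below (my own illustration, not something from Bell’s paper) tabulates the three experiments and verifies that additivity of expectation values holds on average while failing for each individual hidden-variable value:

```python
# Outcomes of the three separate experiments "X", "Y", and "X+Y",
# tabulated for each of the two hidden-variable values.
outcomes = {
    1: {"X": 2, "Y": 5, "X+Y": 5},  # hidden variable 1
    2: {"X": 4, "Y": 3, "X+Y": 9},  # hidden variable 2
}

labels = ("X", "Y", "X+Y")
avg = {lab: sum(run[lab] for run in outcomes.values()) / len(outcomes)
       for lab in labels}

# von Neumann's additivity holds for the averages: 3 + 4 == 7 ...
assert avg["X"] + avg["Y"] == avg["X+Y"]

# ... but fails for every individual hidden-variable value.
assert all(run["X"] + run["Y"] != run["X+Y"] for run in outcomes.values())
print(avg)  # {'X': 3.0, 'Y': 4.0, 'X+Y': 7.0}
```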

Gleason’s theorem improves on von Neumann’s result because it makes it a mathematical necessity to have a particular form of the average (the trace formalism), which does obey von Neumann’s condition for compatible (commuting) observables. Bell’s criticism of Gleason’s result is much more subtle, and only here does “Bohr’s contextuality” become a mandatory part of the argument. One can argue that von Neumann’s condition above can be dismissed even without the toy example (which is not part of the paper, but which I use for the clarity of the argument) by pointing out that it is nonsensical for non-commuting variables, which require different experimental setups. For compatible, commuting variables, however, it is a natural requirement (and this is what Gleason uses). Here Bell’s analysis becomes subtle. To rule out hidden variables, Gleason uses this requirement for commuting observables on spaces of dimensionality higher than two, meaning it has to be applied at least twice: on variables X and Y as well as on X and Z. While X and Y are compatible (commute), and X and Z are compatible as well, Y and Z may not be (and the funny math from above can happen). It is only at this point that Bell says: “the result of an observation may reasonably depend not only on the state of the system (including hidden variables) but also on the disposition of the apparatus.”
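
The “applied at least twice” step can be made concrete with a standard two-qubit example of my own choosing (not one taken from Bell’s or Gleason’s papers): a triple of spin operators where X commutes with both Y and Z, yet Y and Z fail to commute:

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# A hypothetical triple on a 4-dimensional (two-qubit) space:
X = np.kron(sz, I2)  # spin-z of particle 1
Y = np.kron(I2, sz)  # spin-z of particle 2
Z = np.kron(I2, sx)  # spin-x of particle 2

def commute(A, B):
    return np.allclose(A @ B, B @ A)

assert commute(X, Y)      # X and Y compatible: additivity applies
assert commute(X, Z)      # X and Z compatible: additivity applies again
assert not commute(Y, Z)  # yet Y and Z are incompatible
```

Applying the additivity requirement to both pairs silently constrains the incompatible pair (Y, Z) as well, which is exactly where Bell objects.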

So, Bell’s main argument was a defense of the freedom to construct a hidden variable theory. Certain requirements were shown to be unjustified because they violate Bohr’s contextuality (but this is only a secondary, supporting idea in the paper).

But why bother splitting hairs over the meaning of contextuality? Because it appears critically in many proposals of hidden variable theories aimed at recovering the predictions of quantum mechanics. Everyone agrees that quantum mechanics accurately describes nature, but not everyone is satisfied with its interpretation. Why is this so? In the traditional quantum world there are no safe harbors for quantum objects (pure states are not immune to “collapse”), and you get epistemology (a statistical description) without ontology (objective reality), which generates a sense that something is fundamentally missing from this scheme. Hidden variables are supposed to fill this ontological gap and restore the “ignorance interpretation” of the statistical nature of quantum mechanical predictions. First and foremost, in my opinion, hidden variables have to have a rock-solid ontological value (ontological definiteness, typically known as non-contextuality). (Ontological definiteness is a stronger property than mere objective definiteness, which can be claimed, for example, by some parts of Bohm’s theory. Still, all hidden variable theories corresponding to Hilbert spaces of dimension higher than two suffer in one form or another from ontological indefiniteness. In the case of Bohm’s theory--a contextual algebraic hidden variable theory--momentum is ill-defined but still has objective definiteness, while spin is treated just as in standard quantum mechanics, with ontological indefiniteness. I would challenge anyone to name a single hidden variable theory for dimensions higher than two that does not have ANY ontological problems.)

For measurement results, a dependency on the experimental context is completely acceptable. However, if the hidden variables or their meaning change with the experimental context, we are back to square one in terms of the strangeness of quantum mechanics. Not only do hidden variable theories suffer from strangeness; all interpretations of quantum mechanics aimed at making it palatable to our classical intuition employ a bag of tricks: the transactional interpretation is wonderful if you can swallow messages from the future; Bohm’s theory restores classicality at the expense of action at a distance; Everett’s approach transforms “OR” into “AND” and deprives even classical physics of counterfactual definiteness; and so on.

So what do we call this rock-solid (ontological) foundation? (Ontological) non-contextuality. If you are unhappy with standard quantum mechanics and its wave/particle complementarity, then you should not be happy with any other ontologically flip-flopping explanation, regardless of the packaging and of other genuinely intuitive but ultimately non-essential features tied to the particular mathematical formalism used (lack of superposition, lack of entanglement, etc.). The big pink elephant of epistemology without ontology remains in the middle of the room.

Ontological non-contextuality represents the gold standard for any physical theory aimed at restoring classical intuition. (As a side remark, general relativity and all other classical physical theories obey ontological non-contextuality.) Local realism obeys ontological non-contextuality too. We can now drop the ontological qualifier and simply say contextual or non-contextual in the remainder of the text. This is the standard meaning of contextuality and non-contextuality when discussing hidden variables (and in this meaning general relativity is non-contextual). All non-contextual hidden variable theories are ruled out by many no-go theorems, and the majority of physicists consider contextual hidden variables unphysical, with no experimental evidence ever backing their existence. There are also many flavors of contextual hidden variable theories, and we have already encountered the algebraic type in Bohm’s theory.

Now we are in a position to put everything together.

What does Bell’s theorem assert (regardless of any fine mathematical print)? Any locally realistic theory obeys certain inequalities which are violated by quantum mechanics, and ultimately by nature in experiments. What is Joy’s challenge to Bell’s theorem? Non-commutative beables allow him to exceed Bell’s inequality precisely as quantum mechanics does. Is Joy’s theory non-contextual, as he claims? No. Non-commutative beables are ruled out by Clifton’s theorem for non-contextual theories, meaning Joy’s theory cannot be non-contextual (and I clarified further what kind of contextual theory it is in my preprint: http://arxiv.org/abs/1107.1007).

But let’s assume that both Clifton and I are mistaken. There is a simple way to put this to the test: if Joy’s claim of non-contextuality is correct, then one can write a computer program modeling his theory and obtain Tsirelson’s bound. (This is not my idea; I got it from researching the web for reactions to Joy’s results.) I claim this cannot be done. If anyone manages to actually write such a program that goes above Bell’s limit, then I will admit I am completely wrong and Joy is right in his non-contextuality claim. And if he is right about his theory’s non-contextuality, then he did manage to disprove Bell’s theorem as usually understood by the physics community.
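
To illustrate what such a test looks like, here is a minimal sketch of a generic local-realistic simulation (my own toy model, emphatically not an implementation of Joy’s theory): each outcome is computed from the local setting and a shared hidden variable alone, and the resulting CHSH quantity stays at or below Bell’s limit of 2, never reaching Tsirelson’s bound of 2√2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic local outcome functions: each side sees only its own
# setting and the shared hidden variable lam (an angle on the circle).
def A(setting, lam):
    return np.sign(np.cos(setting - lam))

def B(setting, lam):
    return -np.sign(np.cos(setting - lam))  # anticorrelated partner

def E(a, b, n=200_000):
    lam = rng.uniform(0.0, 2.0 * np.pi, n)  # shared hidden variable
    return np.mean(A(a, lam) * B(b, lam))

# Standard CHSH settings (the angles that maximize the quantum value)
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # about 2: at Bell's limit, below Tsirelson's 2*sqrt(2)
```

Any program in which Alice’s outcome is a function of her setting and the hidden variable alone is bound by the same limit; that is exactly why a simulation exceeding it would be decisive.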

In the meantime, let me explain why I think Joy’s theory cannot be modeled on a computer and why it is not a strictly locally realistic theory.

Recently Joy published http://arxiv.org/abs/1106.0748. It is relevant to note that the state space (encoding the complete information) for spin 1/2 is SU(2), which is a double cover of SO(3), and where geometric algebra can naturally be used. There is an additional degree of freedom coming from this double-cover property, and this allows for the possibility of a hidden variable which encodes on which of the two sheets of the two-to-one map you are located. It is not hard to see that Joy’s formalism, when done consistently in geometric algebra, is an equivalent representation of standard quantum mechanics, and his agreement with quantum mechanics comes as no surprise. However, Joy’s formalism lacks a “Born rule” translating the formalism into actual experimental outcomes (real numbers). If in http://arxiv.org/abs/1106.0748 one applies at any intermediate step a map from geometric algebra objects to real numbers, it is easy to see that one only recovers Bell’s limit. (In the standard complex quantum mechanics formalism this would correspond to applying Born’s rule too early and adding probabilities instead of amplitudes, resulting in incorrect classical behavior predictions.) But not applying any translation mechanism until the end in Joy’s formulation means that this step is delayed until the interaction taking place during the measurement process (which has to somehow implement/model the geometric algebra operations). Tung Ten Yong (http://arxiv.org/abs/0712.1637) has criticized Joy’s model because the final results are expressed as geometric algebra objects. I do not think this is a decisive criticism of Joy's theory as Yong claims because if nature is quantum mechanical at heart, and if Joy's theory reproduces all quantum mechanics results, this means that geometric algebra should be the proper ontology of nature. And then it is very natural to expect the interaction during measurement to realize geometric algebra operations.

But geometric algebra objects are non-local, and while Joy can call his model local due to factorizability, it is non-local due to the critical usage of a geometric algebra product (during measurement). Locality does not mean only separability between Alice and Bob; it must include a notion of physical distance as well. This additional non-locality would prevent any faithful simulation by a computer program, because one cannot compute the experimental outcome at Alice’s end without accessing all the information encoded in SO(3), including the information at Bob’s end (which is unavailable due to physical separation).

Again, if you disagree with the analysis above, Clifton’s theorems, or my preprint http://arxiv.org/abs/1107.1007, write a computer program modeling Joy’s theory for EPR-Bohm and show you get Tsirelson’s bound. If you succeed, I am wrong, this post is essentially incorrect, Joy’s theory is non-contextual, and Joy did disprove Bell’s theorem by counterexample.

Joy’s theory can be considered both local due to factorizability, and non-local due to geometric algebra. It can also be considered both realistic as it employs classical three dimensional geometrical objects, and non-realistic as it is (ontologically) contextual (the local beables critically depend on the experimental setting).

Finally, the same conclusions from my prior post and preprint stand: Joy does not disprove Bell’s theorem mathematically, because he starts with a different assumption, that of a non-commutative beable algebra. I really did not understand Joy’s remark: “In particular, contrary to what Florin asserts, I do not start with mathematical assumptions different from those of Bell.” Quoting Joy (http://arxiv.org/abs/quant-ph/0703244v12): “In fact, apart from the multiplicative non-commutativity, they satisfy exactly the same algebra as do the local beables of Bell. […] Bell’s algebra, by contrast, is a commutative and associative normed division algebra over the field of real numbers.” Also: “Finally, Grangier expresses his unhappiness with the choice of my running title: “Disproof of Bell’s Theorem.” He rightly points out that as a mathematical theorem Bell’s theorem cannot be disproved, since its conclusions follow from well defined premises in a mathematically impeccable manner.”

Contrary to his claims and paper titles, Joy does not disprove the importance of Bell’s theorem as the key result validated by experiments rejecting local realism. If Clifton’s theorem is right or if my spin one analysis is correct, Joy’s theory is a contextual theory not classifiable as locally realistic. This is a scientific falsifiable statement, as any computer simulation of Joy’s theory exceeding Bell’s limit would render both Clifton’s theorems and my preprint wrong.

Last, Joy’s theory does expose a weakness in Bell’s theorem, and Clifton’s theorems fill it: Bell requires commutativity of the beable algebra, and Clifton shows that the beable algebra must be commutative for all non-contextual theories and for any relativistic quantum field theory with bounded energies; combined, the two results rule out all hidden variable theories obeying local realism. As a counterexample, Joy’s theory does limit Bell’s importance, in Shimony’s interpretation, for some contextual hidden variable theories. However, other physicists, including myself, disagree with Shimony and place Bell’s theorem’s importance solely in rejecting non-contextual hidden variable theories in an experimentally verifiable fashion (in my preprint’s abstract I presented both points of view).

Scientific conclusions aside, I did enjoy learning about Joy’s results, and I do think they represent very valuable contributions to our understanding of quantum mechanics and hidden variable theories. More interesting work remains to be done, and I would be very interested to see any proofs of his “Theorema Egregium” (Eq. 10 in my preprint). I wish Joy success in further developing his ideas.

Thank you very much for your detailed reply. I will need to write down some equations to properly address the points you are making. So instead of posting my reply here I will post it in this collection of my ongoing replies. I hope that is alright with you.

I am looking forward to your reply. I hope by now I have made clear the motivation and background for my preprint.

By the way, I have Bell's Speakable and Unspeakable book, and in there Bell's papers have no abstracts. I was quite surprised when FQXi found an online version with an abstract, and I was very eager/nervous to see how Bell himself summarized his paper.

Also, I want to let you know what I think about the author of the computer modeling idea. I felt a bit awkward putting this idea in the reply because I think the author engaged in what I consider unethical behavior. However, I consider the idea valid, and my way of distancing myself from its author was not to cite his name, so as not to reward/advertise his dishonest approach.

I appreciate your honesty and courtesy. There is a world of difference between you and the nameless author. Just for the record, his behaviour was far worse than unethical. He has in fact violated several criminal laws.

In any case, I do not buy the simulation argument, as you will find out from my reply.

About a computer simulation of Joy's approach: we can always ask S.L. for the source code, to check it. (I don't think it is good practice to hide someone's result, even if we suspect he is dishonest. People can verify his results and decide whether the results are correct or not. [Not whether he is dishonest or not.])

Or we can implement something like this ourselves. Joy could provide the specs and personally verify each function, to remove any suspicion. Anyway, I predict that if we take the scalar part of the Clifford product of Pauli bivectors, we will always obtain the cosine correlation (both for singlet states and for independent states).
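
This prediction is easy to check numerically in the Pauli-matrix representation of the geometric algebra (the representation choice is mine, just for illustration): the unit bivector for a direction n is i(n·σ), and the scalar part of the product of two such bivectors is minus the cosine of the angle between them, which is exactly the singlet correlation:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bivector(n):
    # Unit bivector for direction n, represented as i(n.sigma)
    return 1j * (n[0] * sx + n[1] * sy + n[2] * sz)

def scalar_part(M):
    # Grade-0 (scalar) part: half the trace in this 2x2 representation
    return (np.trace(M) / 2).real

theta = 0.7
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])

# (i a.sigma)(i b.sigma) = -(a.b) - i(a x b).sigma,
# so the scalar part is -cos(theta):
print(scalar_part(bivector(a) @ bivector(b)))  # -0.7648... = -cos(0.7)
```

Note that the same number comes out whether or not the two particles were ever in a singlet state, which is the point of the prediction: the cosine here is pure geometry.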

You have demonstrated yourself to be not only a scholar but, more importantly, a gentleman. FQXi, as well as Joy and yourself, have set the standard for how cutting-edge physics should be argued and discussed. These forums continue to be both fascinating and informative.

As for the author of the computer simulation idea, his comments, along with the comments of another "infamous" physics blogger (who shall remain nameless) posted on that same thread, can only be described as despicable. Their behavior only demonstrates that civility does not follow from education. It is indeed a sad commentary on the society in which we live today that this type of behavior seems to flourish.

It is much easier to resort to name-calling and attacks on character than to take the time to understand and dissect the true essence of the work put forth, as you have done. It seems clear from Joy's original forum that he has had to endure these vicious types of behavior from the very beginning of his work. Those in the physics community who resort to these tactics should be ashamed of themselves.

Your debate with Joy reminds me of the debate between Susskind and Hawking re. the information paradox, but you seem to have understood the key elements of Joy's papers in a more timely fashion than Susskind and, ultimately, Maldacena did re. black holes and information loss. That debate endured for over a decade, and some may still believe it has yet to be completely resolved.

This discourse, along with the debate between Joy and yourself, shows that two parties can indeed be intellectually adversarial while remaining mutually respectful and courteous. You, Joy, and FQXi continue to show the community, as Susskind and Hawking did before you, how to treat ideas that conflict with "acceptable knowledge". You are very much a credit to your profession.

I wish to subscribe to your observation about the constructive debate between Florin and Joy. The visceral reaction of many, when one makes a groundbreaking claim, is to refute him immediately, at any cost.

Allow me to be off topic and also comment on your remark about Susskind vs. Hawking. This is my opinion, based on what Susskind wrote and said about Hawking. It seemed to me that he presented Hawking as a guy who wanted at any cost to "make the world unsafe for quantum mechanics," while Hawking only pointed out a problem that emerged in the connection between QM and GR. Susskind claims to solve the problem with his "black hole complementarity principle," which essentially states that reality can be logically inconsistent so long as the inconsistency cannot be revealed through direct observations. Moreover, when Hawking provided his own solution to the quantum information puzzle in 2004, Susskind presented this as if Hawking had capitulated to him. But reading Hawking's paper reveals that he believes the information is actually lost in each universe, but that when summing over all histories the loss cancels. As for black hole complementarity, Bob may see Alice evaporating while crossing the horizon, but this is only an illusion, and not one of the complementary truths, as Susskind believes. I explain why in the attachment.

Thank you for your kind words, I am pretty sure I don't deserve them. Regardless of our scientific positions, in the end we all work towards the same goal: increase our understanding. Name calling is not only rude, it reveals the lack of arguments of the person engaging in it.

I intended the remark re. the debate between Susskind and Hawking as an analogy of how two parties can be intellectually adversarial, yet remain mutually courteous and respectful toward each other. I realize it is a slim analogy, since Hawking "only pointed out a problem which emerged in the connection between QM and GR" which threatened the foundations of QM in particular, whereas Joy is directly disputing the conclusions of one of QM's most venerated theorems. Susskind admitted knowing very little about BHs before learning of Hawking's paradox, whereas Florin undoubtedly is well versed not only in QM, but in the intricacies of GA.

It seems to me that the difference between understanding and misunderstanding, between vindication and refutation, often comes down to semantics rather than the math. Does not the disagreement between Florin and Joy depend upon the definitions and interpretations of such things as the very meaning of contextual and non-contextual, and local and non-local, in regard to the complex math (no pun intended) that Joy utilizes, and with respect to the experimental apparatus setup and outcome? It seems to me that Florin agrees with Joy's mathematics, yet disagrees with his conclusions. I find it fascinating and an opportunity to advance my understanding, which admittedly is severely lacking. I will also be interested to find out why Joy "does not buy the simulation argument".

BTW, I did enjoy your attachment on BH complementarity and have my own views, but out of respect for the current discussion, I will defer them until another time and place.

You deserve all the praise and more. The difference between you and the unnamed parties is like the difference between night and day. I learned about the computer simulation challenge through another physicist's website, and while he didn't engage in the mudslinging, he did not distance himself from it either. I found that rather strange given his unorthodox, even fringe views.

Nevertheless, without people such as yourself, we would not all benefit from your hard work. There's a big difference between textbook knowledge and self-study, and the knowledge gained from two very educated people debating and supporting their views.

If I am missing something, I would be grateful if you would explain it to me. I will say briefly what I think so far about Joy's theory. To make things clear, I split the theory into two layers.

1. (A simplified version of Joy's theory, as I see it.) We want to obtain the EPR correlation for two particles in a singlet state, which is the cosine of the angle between the orientations of the measurement devices. The obvious way to obtain the cosine is to take the dot product between the two orientations.

2. The second layer (the usage of Pauli's algebra) seems to provide a justification for the first layer, but I think that in fact it is just a means to obtain the dot product, hence the cosine, and it obfuscates the idea.

Be it the dot product or the Clifford product, I fail to see any justification for it other than to get the desired cosine.

But let's assume that Joy's papers present a good reason for this. I don't see why this is considered local, or a disproof of Bell's theorem. The particles are "at a distance," so this operation would be non-local. Also, we can take the dot/Clifford product for any two particles, whether from a singlet or not, and get the same cosine, which this time is not the correct correlation. So we would have to say that if the particles are entangled, we can dot them, and if not, we should do something else. So we couldn't get rid of entanglement.

And at this moment I don't see any reason for this approach other than to get the cosine. The approach itself sounds to me like a "spooky Clifford product at a distance" :-), so I wouldn't consider it a disproof of Bell's theorem.

Are things really as simple as I see them, or am I missing something?

Both your posts, as well as your paper, are very well written, and as usual you did a great job conveying highly abstract ideas so that they can be followed by a wide audience. Joy's response deserves equal praise, and he answered with patience the questions addressed to him.

You said: "Tung Ten Yong (http://arxiv.org/abs/0712.1637) has criticized Joy's model because the final results are expressed as geometric algebra objects. I do not think this is a decisive criticism of Joy's theory as Yong claims because if nature is quantum mechanical at heart, and if Joy's theory reproduces all quantum mechanics results, this means that geometric algebra should be the proper ontology of nature."

I must say that the very moment I read your remark, I was convinced that Yong was in fact criticizing the use of geometric algebra objects AS EXPECTATION VALUES. This was one of the first features that struck me when reading Joy's work.

The expectation value is a real number. The measurement device indicates real numbers. Quantum mechanics is based on complex numbers, Hilbert spaces, and matrices, yet the results are real numbers, ALWAYS, because the expectation value is a probabilistic thing. So on this particular point, I must agree with Yong.

Joy's expectation value is an even element of the Pauli algebra. While it looks structurally similar to the expectation value for real numbers, it has no obvious meaning. What is the meaning of a Clifford-valued expectation value?

But I don't want to reject this novel type of expectation value just for not being a real number. For example, the expectation values of momentum along the three axes are real numbers, but, if you really want, you can do statistics with the vector which has them as components. So one may be able to write "vector-valued expectation values."

So, in my opinion, when introducing a novel kind of expectation value, it should come with some explanation of why it is an expectation value, and of what a Clifford-algebra-valued expectation value actually is. I couldn't find this explanation in Joy's papers.

Of course, if we take the limit of a large number of observations, the bivector part cancels and we are left with a real number: the cosine. But what if we make only one observation? Would the result have a bivector part? If so, what is its meaning?
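
One toy way to see the cancellation (a sketch under my own assumptions, not a formula taken from Joy's papers): suppose a single run yields the scalar -(a·b) plus a bivector term whose sign is set by a hypothetical hidden handedness lam = ±1. Averaging over many runs then kills the bivector part while the scalar -cos(theta) survives exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma(n):
    return n[0] * sx + n[1] * sy + n[2] * sz

theta = 0.7
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])

# Hypothetical single-run result: scalar -(a.b) plus a bivector term
# -lam * i(a x b).sigma, where the handedness lam = +/-1 is the
# assumed hidden variable of this sketch.
def single_run(lam):
    return -(a @ b) * np.eye(2) - lam * 1j * sigma(np.cross(a, b))

lams = rng.choice([-1, 1], size=10_000)
mean = sum(single_run(l) for l in lams) / len(lams)

scalar = (np.trace(mean) / 2).real  # exactly -cos(theta)
bivector_size = np.abs(mean[0, 1])  # shrinks like 1/sqrt(N)
print(scalar, bivector_size)
```

A single observation, by contrast, retains the lam-dependent bivector part, which is exactly the interpretational question raised above.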

You state: "So, in my opinion, when introducing a novel kind of expectation value, it should come with some explanation of why it is an expectation value, and of what a Clifford-algebra-valued expectation value actually is. I couldn't find this explanation in Joy's papers."

In Joy's defense I'll say his theory is a work in progress and he chose to direct his energy to other issues first. I can say Yong is right to point out strange things, but that cannot be used as grounds for discarding a new theory.

To your question: "If so, what's its meaning?" my simple answer is: I don't know. I could speculate and make an educated guess, but I don't want to do that because I could be wrong. However, I am not concerned about this at this stage. All I would say is that at some point some consistent translation mechanism has to be found from Joy's formulation to standard statistics. I disagree with Joy not on his math, but on his interpretation. And if his math is right, I am confident such a mechanism does exist and will eventually be discovered. In other words, if Joy has found an equivalent way of formulating QM, the Born rule's translation into the new formalism will follow. It is just not done at this point.

> "I can say Yong is right to point out strange things, but that cannot be used as grounds for discarding a new theory"

My opinion is that we point out problems when we find them to advance the work in progress, not to discard it. The software tester is not the enemy of the software developer. My remark was intended to clarify that particular observation of Yong's, which seemed to be misunderstood. It happens that this problem occurred to me independently of Yong, but my reaction was to speculate on Joy's page that his formula may be a gateway toward a new kind of probability theory.

I agree we should not rush to discard a theory in progress. No theory is born already mature. We can only try to offer our help.

Of course, some could say that this means we should not discard Bell's theorem as well. It is hard to see it as a work in progress, but nevertheless, I would vote to give it the same immunity.

I am not a fan of no-go theorems. I think they are often (usually?) abused to reject whatever theories are a menace to our own views. Especially if we invest our entire life in a particular direction of research, we would like an assurance that we are on the right track, and the no-go theorems plus some metaphysical principles can provide just this. Sometimes, they are even used to proclaim the alternative research directions pseudoscience, as is too often done with hidden variable theories, by claiming that Bell eliminated them altogether.

This is why I admire Joy, for questioning Bell's well-established theorem. And I admire you, for questioning Joy's work. And both of you, for allowing others to question your views, and for taking their observations seriously.

Although I am not a fan of the abuse of no-go theorems, I see no reason to proclaim Bell's theorem disproved, not even in a tiny part.

I don't understand your remark: "geometric algebra objects are non-local". Could you please explain it more?

For example, we can rewrite Maxwell's equations using geometric algebra. Does this make the electromagnetic field non-local?

What if, instead of the geometric product in Joy's theory, we take just the dot product (which is commutative)? We obtain directly the cos, without worrying about the bivector part of the expectation value, and we avoid using geometric algebra. Does this make Joy's theory more local?

Sure. In GA one puts all geometric objects on equal footing: scalars, vectors, bi-vectors, etc and one does add and multiply apples and oranges. It is legal in GA. The flip side is that a GA object is not always a scalar and can encode information about spatially extended geometric objects. In other words, it is non-local.

"For example, we can rewrite Maxwell's equations using geometric algebra. Does this make the electromagnetic field non-local?"

One can rewrite all Lie groups in GA, and almost anything is expressible in this language. The devil is in the details of locality. That is why I did not make a blanket statement about nonlocality in computer modeling for Joy's theory, and I tried to explain where I think one will encounter a problem.

"What if, instead of the geometric product in Joy's theory, we take just the dot product (which is commutative)"

Then you get nonsensical or trivial results. The GA product combines the dot product and the exterior product into one. One reduces the dimensionality, the other increases it, and together you get a rich algebra. Using only dot products is equivalent to using only projection operators, and applying them several times very quickly results in the uninteresting empty set.

Given that it is often stated on these pages that people don't understand geometric algebra well enough, I have to say something.

Yes, with geometric algebra you can model all finite-dimensional Lie groups. Take the spin group Spin(n,n), and you can find GL(n, R)'s double cover as a subgroup. The rest is easy. Many equations become simpler in this formalism. But they do not add any magical feature which cannot be obtained, often in a far less elegant way, with other structures like tensors.

Using Clifford algebras sometimes concentrates several apparently very different equations into a single one. It takes some work to unwind such equations. So yes, they may be a barrier to understanding someone's theory - both its good points and its mistakes.

You say "The flip side is that a GA object is not always a scalar and can encode information about spatially extended geometric objects. In other words, it is non-local."

A Clifford algebra is associated to a vector space (endowed with a dot product). A Clifford-algebra valued field is a section of a Clifford bundle: each point of space has its own Clifford algebra, associated to the tangent space at that point. This is local.

Yes, if you want you can encode non-local information in it, but this is artificial. It is not because of their intrinsic nature. Take any section of any Clifford bundle. It has nothing non-local in it, except if you put it there. You can encode non-local information in scalars as well, if you want.

I consider Joy's theory non-local, but not just for the reason of using Clifford algebras, and certainly not just because it violates Bell's inequality.

Much has been made out of a possible demonstration of the validity of my arguments by means of a computer simulation. It has been suggested that if my ideas are correct, then one can write a computer program modelling them and obtain the Tsirelson bounds. While such a demonstration would indeed be sociologically important (especially considering the fact that Geometric Algebra remains an unfamiliar language for many), it is irrelevant as far as the logic of the EPR debate on physical reality is concerned. An analytical model such as mine cannot possibly be proved or disproved by its numerical simulation. A simulation of a model is a mere implementation of its analytical details, not an experiment that can either prove or disprove its validity. If reality could always be so simply simulated, there would be no need for the staggeringly expensive actual experiments. Reality is mathematically far richer and more profound than some of us are able to fathom, let alone computer software. The EPR correlations are observed in nature in actual, real-life experiments, not on a dead computer. I have produced impeccable analytical arguments to demonstrate that Bell’s theorem is wrong, both in its input and output. Moreover, I have produced impeccable and explicit analytical models to explain the correlations observed in the actual, real-life optical experiments performed at both Orsay and Innsbruck (cf. arXiv:1106.0748). Furthermore, I have proposed an actual real-life experiment that can indeed falsify my model (cf. arXiv:0806.3078). Therefore the additional demand of producing a computer simulation of my model is like demanding a flight simulation from the Wright Brothers when they have in fact produced an actual, real-life flying machine!

There are also more fundamental reasons why the whole fashion of a computer simulation with regard to EPR correlations is completely wrong-headed. It is based on a serious misconception of the true physical and mathematical reasons for the existence of the EPR correlations in nature. In all real-life demonstrations of the correlations, Alice and Bob always observe truly random outcomes of their measurements---A = +1 or -1, and B = +1 or -1. Therefore, any correlation between A and B themselves cannot possibly reproduce the observed quantum correlation, unless the topological non-trivialities of the physical space itself are also correctly simulated. In the language of my model this means that one has to model the physical space, not in terms of the R^3 points alone, but in terms of all of the subspaces of R^3 (namely, points, lines, areas, and volumes) within the even sub-algebra of the physical space (as conceived by Grassmann some 167 years ago). However, what I have seen so far in attempts to simulate the EPR correlations is a completely wrong-headed approach. What is usually tried is based on an implicit assumption that the numbers A and B are only apparently, but not truly, random, and that if only we can somehow discover the correct functional dependence of A and B on the disposition of the apparatus and the hidden variables, then the right correlation between A and B would emerge. I repeat: this is a completely wrong-headed approach. One can never ever reproduce the EPR correlations this way. For the EPR correlations are what they are because of the topological non-trivialities of our physical space itself, NOT because of some hidden order in the randomness of the variables A and B.

I have worked as a computer programmer for almost 13 years, mostly in computational geometry, and I have used quaternions and Clifford algebras in computations. Below I explain why I agree with what Florin said about computer simulation. But I agree with you that such a simulation would not add anything qualitatively relevant to the mathematical description. Except that it forces us to unwind the mathematical solution step by step to see what it actually does.

A physical theory can be viewed as describing the possible states of the world, and how the world evolves from one state to another. If there is a mathematical description of the states, and a mathematical description of the evolution from one state to another, I don't see any reason that can prevent us from simulating this description on a computer. There are only technical limitations: the simulation may require more memory and computational power than the computer can provide. I don't think this is the case here.

To simulate EPR from the standard QM view is easy. The state is the singlet state, which can be simulated as an element of the tensor product of the Hilbert spaces of the two particles; the orientations of the measurement devices can be simulated as unit vectors, and the correlation will occur because the superposition in the singlet state is destroyed. All the steps can very well be simulated. Of course, projecting the singlet state onto a separable state is a non-local operation.
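A minimal sketch of this standard QM simulation (my own illustration; the singlet state and spin operators are the textbook ones, and the expectation recovers the -cos correlation directly):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(n):
    """Spin operator sigma . n for a unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state (|01> - |10>)/sqrt(2) in the tensor-product basis
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Quantum expectation <psi| (sigma.a) (x) (sigma.b) |psi>."""
    op = np.kron(spin_along(a), spin_along(b))
    return float(np.real(psi.conj() @ op @ psi))

theta = 0.5
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])
print(E(a, b))  # -> -cos(0.5) = -0.8775...
```

Note that the non-locality is hidden inside the single state vector psi shared by the two wings; nothing in this code factorizes into two independent local computations.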

For a local theory, things should be straightforward. To simulate a mathematical description of reality, we can use a "dead" computer, for the same reason why a dead computer can hold a pdf containing that mathematical description.

A hidden variable theory can provide a list of elements of reality, and the prescriptions for moving from one state to another. For example, there can be a function which takes as input the element of reality describing the complete state of a particle, and the orientation of the measurement device, and outputs the observed value of the spin. The question is: can we call this function independently for each particle - without passing the information regarding the other particle (its elements of reality or the orientation of its measurement device) - and still get the correlations?
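To illustrate the question, here is a toy local hidden-variable model in Python. The functions A and B are my own choice, purely for illustration: each outcome is computed independently from the shared hidden unit vector L and the local setting only. The resulting correlation is linear in the angle, not the quantum -cos(θ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden variable: a random unit vector carried by each pair
n_pairs = 200_000
L = rng.normal(size=(n_pairs, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)

def A(a, L):              # Alice's outcome: depends only on a and L
    return np.sign(L @ a)

def B(b, L):              # Bob's outcome: depends only on b and L
    return -np.sign(L @ b)

theta = np.pi / 3
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])

corr = np.mean(A(a, L) * B(b, L))
print(corr)  # about -(1 - 2*theta/pi) = -1/3 here, not the quantum -cos(theta) = -1/2
```

This is exactly the gap Bell's theorem formalizes: the functions are called independently per particle, and the correlation falls short of the quantum one.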

Your objections to my conclusions have never bothered me. However, there is something that has always struck me, and it is this: for someone who identifies as an independent researcher, your breadth of knowledge continues to amaze me. On previous occasions, I have thought about writing this message but didn't. You are deserving of respect. You are someone to listen to even during disagreements. Moreover, whether you intended your last message to be humorous or not, I found it amusing. Thanks for being here.

I am intrigued by this matter of computation. First off, it sounds as if there is some scuttlebutt about this. Yet of greater interest, if one really wanted to address this question, it seems to come down to an issue of algorithmic structure and computability. If you are right, this means the Tsirelson bound is simply not computable by JC’s system. Unfortunately I am not familiar with Clifton’s theorem and other matters, so I do not see clearly what the problem is. This issue of hidden variables and contextuality is pretty tangential to my main focus. Yet this seems to suggest that either this is some problem involving a contradiction, or it is a problem involving NP-completeness, where the space is somehow too large to ever compute this, or maybe (though I doubt it) some sort of Turing machine halting problem or Gödel’s theorem.

This is all much easier; it has nothing to do with Gödel or NP-completeness directly. With quantum resources one can achieve many things impossible to do with local operations and classical communication: teleportation, speed-up of algorithms, etc. Joy claims his theory is local and realistic, and if that is true, we should be able to do all those things using only local resources, because Joy's theory does reproduce the QM results exactly. If it really is a local and realistic model, then it should be possible to simulate it on a classical Turing machine. In other words, a classical computer could have all the computational power of a quantum computer.

But I disagree that Joy's theory is local, and hence the challenge. Alternatively, prove my spin one analysis wrong, or prove Clifton's theorems wrong. Any of those 3 will do the trick. However, geometric algebra software packages are freely available on the web, and this is the easy way to prove me wrong if I am wrong indeed (which I think I am not).

And yes, the person originating the idea acted like a jerk towards Joy, and I had some doubts about putting this in the post or not. I hope Joy is not mad at me about it. But an idea is an idea and I was judging it on its merits alone and not on its history.

One does not need any of the stuff you are demanding to see that my framework is both local and non-contextual. No elaborate theorems or computer programs are needed to see this. Bell has given us rigorous and foolproof criteria for both locality and non-contextuality. My models fully comply with these criteria, as can be easily checked. Let A(a, L) and B(b, L) be the results of Alice and Bob in the usual notations. Then the correlation between A and B can be non-local *if and only if* either (1) the result A depends on the context b, or (2) the result B depends on the context a, or (3) the result A depends on the result B, or (4) the result B depends on the result A. There is absolutely no other way that the correlation can be non-local. This is Bell’s own criterion, and it is universally accepted. Now you can easily check that this criterion of locality is rigorously satisfied by all of my models. Thus my framework is clearly and manifestly local. Next, here is Bell’s criterion of contextuality: The model is contextual *if and only if* the result, such as A(a, L), changes its value (say from +1 to -1) when the context “a” is changed from, say, a to a’. You can easily check that this does not ever happen in any of my models, because the local results are entirely determined by the hidden variable L alone. The local results do not change when the context is changed, because the correlations are purely topological effects, not contextual effects. In other words, all of the locality and non-contextuality demands made by Bell are rigorously met by my models. So, in my opinion, all the other elaborate stuff you are raising is simply “grasping for straws” in order to hold on to orthodoxy, which has been squarely defeated.

I agree with you that the result obtained by Bob is not determined by the orientation chosen by Alice. It is the probability of obtaining the result that depends both on the orientation a chosen by her and on the result A(a, L) obtained by her. The probability of getting B=1 is not equally distributed for all possible choices of b, but also depends on A and a.

Can we obtain the probability for Bob's result from a formula which doesn't depend on the orientation chosen by Alice and on her result?

In your approach, the two bivectors representing the two orientations are used to obtain this correlation. This correlation is postulated to be the scalar part of their Clifford algebra product (or their dot product). Thus, the cos correlation is obtained. But the formula involves both orientations. The probability of the result obtained by Bob depends on Alice's A and a, "at a distance", because the formula involves both directions simultaneously, although they are at different locations.

----------------------------

What I find interesting about how you get this cos is that you simply generalize the hidden variables correlations as they are stated by Bell. On the one hand, that formula (19) from 0703179 is equivalent to (4), the quantum one for the case of a singlet state. On the other hand it looks like Bell's.

While Bell's formula refers to scalars only, it is not because he was "far too sloppy". It is because the correlations and probabilities happen to be scalars, and this is how probabilities work. Your correlation equation (4) cannot be directly given a probabilistic meaning. I don't know how to take it seriously as an equation involving probabilities. I look forward to seeing whether you can provide an interpretation for it. Is this important? I think it is, because it may help us see exactly how your hidden variables act to predict the correlations of the outcomes (this is, IMHO, the ontology). More importantly, this part is necessary if you want to show that there is no non-local correlation between the two particles. Otherwise, to some it may look like just a trick to obtain the cos correlation for something which looks only formally like Bell's correlations for hidden variables, and actually masks non-locality.

I'm compelled to agree with Joy on this point. A model which depends on continuous functions for both input and output cannot be simulated on either a quantum or a classical computing machine.

Selecting topological criteria over real-line (-∞ to +∞) geometric criteria really was a stroke of genius, because it allows us to substitute our normal (intuitive) notion of "distance" with a topological space allowing simultaneity of events, which is what a nonrelativistic quantum measurement result looks like. Instead of measuring "at a time" consistent with the Lebesgue criteria for real events, we can correlate events on a scale-invariant tableau; in other words, causality (causal correlation) is integrated into quantum configuration space so that it is identical to the physical space.

Joy is right -- this is precisely what Einstein (EPR) proposed within the relativistic limit. Bell's theorem, in contradiction, shows that quantum configuration space cannot be mapped onto physical space without a nonlocal model which is assumed nonrelativistic.

The real key, though, is the preservation of classical continuous functions against discrete measurement. Joy finds that correlation at the 8-dimension limit, and so "local realism" assumes a whole new meaning. Ingenious.

I was typing a reply and I lost the whole thing by mistake. Let me summarize instead. I believe you are arguing for context independence and outcome independence. And indeed, in the case of Bell, factorization is equivalent to those two. However, I can assume your point of view and challenge "This is Bell’s own criterion, and it is universally accepted." on the same grounds: because it assumes commutative beables. As I was saying in the post, I have no arguments against factorization, but there is an additional concept of locality which is at odds with the GA formulation: ordinary distance. In GA everything is a blur: points, lines, subspaces; but Alice and Bob break this GA ontological “symmetry” by demanding a physical separation between them. Translating from the GA formalism to actual experimental results (scalars) requires Alice and Bob to access the information encoded in the entire space and its subspaces, and the physical separation makes this impossible. This has nothing to do with Alice being independent from Bob (which she is) but with Alice accessing the information present in Bob’s local neighborhood (which she cannot do).

In Bell’s original formulation, the point above is moot because of the topology of how the hidden variables are encoded, and hence factorization is enough. Factorization is not enough when non-local objects enter the mathematical description in a critical way. So in this debate, the error is in switching the context of the discussion: from commutative beables to non-commutative ones. In the first context, locality is equivalent to factorizability; in the second context it is not. This is hardly unusual in math, as the truth of a sentence depends on the list of axioms. For example, the same statement that parallel lines never meet is true in Euclidean geometry and false in Riemannian geometry.

I hope this makes it clear why using only local operations can never reach Tsirelson’s bound. It was clear in Bell’s original formulation, and it is clear in your formulation as well. In Bell’s formulation it is about the logic of set theory vs. the logic of projective spaces. In your formulation it is about breaking the GA rules vs. correctly applying GA to recover the Tsirelson bound. Any violation of the GA rules by erasing the zero blade values and converting the GA objects to scalars is illegal and yields Bell’s bound (try it in your correlation paper: apply it to the hidden variable, treating it as a scalar at any step in the computation). Doing it correctly in GA results in the Tsirelson bound. Any local operation by Alice or Bob breaks the GA rules.

In your reply paper you state: “It would be extraordinary if the fate of local realism were to hinge on a technical distinction between vanishing scalars and strictly zero vectors.” This is exactly what happens here. The distinction is not technical, but critical. Take http://arxiv.org/PS_cache/arxiv/pdf/1106/1106.0748v3.pdf for example. In Eqs. 16 and 17, consider A and B to be mere plus or minus scalars. Computing the correlations results in Bell’s limit. Now treat 16 and 17 correctly and you get Tsirelson’s bound. Erasing zero components makes a very big difference.

Bell's factorizability is only dictated by probability theory, as a condition for the independence of the hidden variables. Even if Joy's formula based on Clifford variables formally satisfies Bell's requirements, including factorizability, they cannot be interpreted as probabilities. Any translation from Joy's formula to probabilities will reveal that factorizability in Bell's sense is violated.

One way to go from Clifford algebra to real numbers is to express the equations in a basis. By doing this, it becomes visible that the variables are not independent, and that factorizability in Bell's sense doesn't hold.
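As a concrete illustration of "expressing the equations in a basis" (my own sketch, using the standard Pauli-matrix representation of the Clifford algebra of R^3): the geometric product of two vectors visibly decomposes into a scalar part a·b and a nonzero bivector part, so the product as a whole is not a real number.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(a):
    """A Cl(3) vector represented as a Pauli matrix sigma . a."""
    return a[0] * sx + a[1] * sy + a[2] * sz

a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(0.7), np.sin(0.7), 0.0])

prod = vec(a) @ vec(b)                       # geometric product ab
scalar_part = np.real(np.trace(prod)) / 2    # equals a . b
bivector_part = prod - scalar_part * np.eye(2)

print(scalar_part)    # -> cos(0.7)
print(bivector_part)  # nonzero: the product is not a scalar
```

Projecting onto the scalar part in a basis like this is precisely the step where, as argued above, the probabilistic reading has to be justified.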

Now, we can say "well, Bell said it, and it is not true, here is the counterexample". The problem with this counterexample is, in my opinion, that we cannot assign true probabilities to the variables. If this counterexample could come with a probabilistic interpretation of the variables, so that the cos can be naturally interpreted as the correlation, then things would be different.


This almost seems to boil down to asking whether a Rubik’s cube has nonlocal properties. The 4x10^{19} configurations of the cube could be a Hilbert space. If I wanted to teleport a state (Rubik's cube configuration) to Alice I would need to perform a set of Hadamard operations on an ancillary state that she and I have access to. This would demolish some entanglement. The question might amount to asking whether or not the teleportation could be completely accomplished with just the Rubik’s cube operations and nothing else. Hence the contextuality of states in a quantum setting would be defined entirely by the cube operations, without a Hadamard matrix operation for a SLOCC operation.

That is indeed interesting, Lawrence. However, I don't think the discrete states of a Rubik's cube on S^2, which are commutative and associative, correspond to the continuous noncommutative and nonassociative correlations on S^7 in which Joy is working. In fact, I think he obviates "configuration space" and "discrete state" as we normally speak of them in quantum mechanics.

Take a Rubik’s cube that is in the “new state” and rotate the red face and then the yellow face. Record the configuration of the cube. Now restore it and rotate the yellow and the red face. The two operations are not commutative.
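This noncommutativity is easy to exhibit numerically. A minimal sketch with ordinary rotation matrices (my own illustration: quarter turns about two axes, standing in for the two face rotations):

```python
import numpy as np

def rot_x(t):
    """Rotation matrix by angle t about the x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    """Rotation matrix by angle t about the y-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

q = np.pi / 2  # a quarter turn, like one face of the cube
print(rot_x(q) @ rot_y(q))
print(rot_y(q) @ rot_x(q))  # a different matrix: the rotations do not commute
```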

The two operations are not commutative, true, but neither are they continuous. My point was, that because Joy's theory depends on classical continuous functions, he is looking to restore continuity by a kind of Moebius twist operation that he locates on S^7.

My own model (which is also analytical) stays in the Euclidean space, so that the closed algebra on the manifold of S^3 preserves continuity (and commutativity).

I wish to start by saying I have enjoyed the discussion between yourself and Joy and I look forward to the continuation. My own studies of Bell's work have led me to the following position. Bell's theorem can be generalized as follows:

--

IF there exists a single probability distribution p(a,b,c) from which three distributions p(a,b), p(b,c), and p(a,c) can be extracted, then the expectation values from these distributions must necessarily obey the following inequality:

|E(a,b) - E(a,c)| <= E(b,c) + 1

If we find a situation for which the inequality is not obeyed, then a single probability distribution p(a,b,c) does not exist in that situation. This can be extended to the 4 variable case.
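This can be checked numerically. The sketch below is my own; I use the EPR anticorrelation convention E(x,y) = -⟨A_x A_y⟩ (the pair's outcomes at equal settings are perfectly anticorrelated), under which the inequality takes the quoted form. Random joint distributions p(a,b,c) over the 8 outcome triples always satisfy the bound:

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)
# Joint outcomes (A_a, A_b, A_c) for one particle at the three settings
outcomes = list(itertools.product([-1, 1], repeat=3))

def expectations(p):
    """E(x, y) = -<A_x A_y> under the EPR anticorrelation convention."""
    Eab = -sum(pi * a * b for pi, (a, b, c) in zip(p, outcomes))
    Eac = -sum(pi * a * c for pi, (a, b, c) in zip(p, outcomes))
    Ebc = -sum(pi * b * c for pi, (a, b, c) in zip(p, outcomes))
    return Eab, Eac, Ebc

for _ in range(1000):
    p = rng.dirichlet(np.ones(8))  # a random joint distribution p(a,b,c)
    Eab, Eac, Ebc = expectations(p)
    assert abs(Eab - Eac) <= Ebc + 1 + 1e-12

print("inequality holds for every sampled joint distribution")
```

Conversely, no assignment of a single p(a,b,c) can reproduce expectation values that violate the bound, which is the point being made above.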

--

My view is that violation of inequalities by experiments points to the lack of a single p(a,b,c) distribution, owing to the fact that no two particles can be measured simultaneously at 3 detector settings. Therefore the above inequality cannot be tested experimentally in the EPR case. Interestingly, in 1107.1007v1, you quoted a statement from Shimony which states "It has been suggested by Fine (private correspondence) that joint probability distributions of non-commuting observables can be introduced in contextual hidden variables theories by speaking counter-factually". I suspect Fine recognizes the lack of p(a,b,c) in the EPR experiments, hence this suggestion. The suggestion implies that even though a p(a,b,c) distribution may not exist in actuality, it could be used by speaking counter-factually. I disagree with this suggestion since I believe it goes against the very essence of joint probability and expectation.

To see this it is sufficient to assume two variables A and B, only one of which can be measured at a time. It is therefore obvious that there can be no p(A,B) distribution. Even though A and B can be spoken of counter-factually, their joint expectation value is undefined and nonsensical, since they can never be simultaneously actual. Fine's suggestion of a joint probability distribution p(a,b,c), involving outcomes of experiments which can never be simultaneously actual, is just as wrong in my opinion.

However, I would disagree with this view because a counter-factual can never be tested experimentally. So there can be no such thing as an experimental confirmation of a counter-factual inequality.

Therefore, in Bell test experiments, in which p(a,b), p(b,c) and p(a,c) are not results from a single p(a,b,c) distribution, but obviously results from different distributions (owing to the way the experiments are performed), the expectation values are not guaranteed to obey Bell's inequalities.

As for the violation of the inequalities by QM, it begs the question: can the expectation values E(a,b), E(b,c), E(a,c) calculated from QM come from the same distribution? Or an even easier question to answer: can QM provide an expectation value E(a,b,c) for the EPR case?

I would like to end by stating that on the one hand, constructing a demonstrably locally real theory which violates Bell's inequality is one way of disproving Bell's theorem. This is the approach Joy has chosen. On the other hand, I believe there are philosophical problems with the correspondence between expectation values in the inequalities and those obtained from experiments or theories of the experimental setups. This is the approach I am highlighting above.

Sorry for answering late; I was really busy at work and I have only a very limited bandwidth to devote to FQXi besides answering Joy. I would be interested to read some more of your ideas; do you have anything published?

I am not a supporter of counterfactuals, but I am making the same assumptions Joy is making to see where they may lead.

You state: "My view is that violation of inequalities by experiments points to the lack of a single p(a,b,c) distribution, owing to the fact that no two particles can be measured simultaneously at 3 detector settings."

I don't quite agree with this, because the lack of a p(a,b,c) can also mean no inequality at all: you may go either above or below the limit. But going below the limit makes it irrelevant that in some experiment the limit is obeyed. In other words, one cannot reach any conclusion regardless of whether the inequality is violated or not. Maybe I am missing something.
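The point about a single p(a,b,c) distribution can be checked numerically. Below is a small sketch of my own (not from the discussion above): every joint distribution over three ±1 variables satisfies the inequality |E(a,b) - E(a,c)| <= 1 - E(b,c), while the QM singlet correlations E(x,y) = -cos(x - y), at the angles 0°, 135°, 270°, do not.

```python
import itertools
import math
import random

def pairwise_expectations(p):
    # p maps each outcome triple (a, b, c) in {-1, +1}^3 to its probability
    E_ab = sum(prob * a * b for (a, b, c), prob in p.items())
    E_ac = sum(prob * a * c for (a, b, c), prob in p.items())
    E_bc = sum(prob * b * c for (a, b, c), prob in p.items())
    return E_ab, E_ac, E_bc

random.seed(0)
outcomes = list(itertools.product([-1, 1], repeat=3))
for _ in range(10000):
    # a random joint distribution p(a, b, c)
    w = [random.random() for _ in outcomes]
    p = {o: wi / sum(w) for o, wi in zip(outcomes, w)}
    E_ab, E_ac, E_bc = pairwise_expectations(p)
    # holds for ANY joint distribution, since ab - ac = ab(1 - bc)
    assert abs(E_ab - E_ac) <= 1 - E_bc + 1e-12

# QM singlet correlations E(x, y) = -cos(x - y) at a=0°, b=135°, c=270°
E_ab = -math.cos(math.radians(135 - 0))
E_ac = -math.cos(math.radians(270 - 0))
E_bc = -math.cos(math.radians(270 - 135))
print(abs(E_ab - E_ac) <= 1 - E_bc)  # False: no joint p(a,b,c) gives these values
```

Here E(a,b) = E(b,c) ≈ 0.707 and E(a,c) ≈ 0, so the left side is about 0.707 while the right side is only about 0.293.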

I am afraid the main message I am trying to convey is getting lost in the erudition (as Einstein once put it). The correct reading of my results is this: quantum correlations are nothing but correlations among the points of a parallelized 7-sphere; and as such they are purely classical, topological effects. There is no such thing as non-locality, non-reality, or contextuality of any kind (either local or remote). All the talk about factorizability, geometric algebra, non-commutativity, probability, etc. etc., is simply muddying the waters. To me, the issues you have been bringing up are all cosmetic issues.

To be sure, I use geometric algebra to demonstrate my results, but only as a convenient tool. The real facts are purely topological facts. Geometric algebra simply provides a convenient mathematical language to represent these facts. The parallelized 7-sphere, or more specifically the parallelized 3-sphere, exists independently of any of its algebraic representations. There is indeed a classical non-commutativity involved in my calculations, but, again, only as an intermediate, calculational tool, not as an ontological necessity.

You are being misled about the whole issue because you are focusing far too much on my earlier, preliminary papers. I am, on the other hand, focusing entirely on my most recent two papers. In these two papers a crucial statistical distinction is made between the raw scores and the standard scores. I am afraid you have not yet fully appreciated this important distinction. The raw scores are the *non-contextual* measurement results observed by Alice and Bob. They are simply numbers, +1 or -1, albeit representing the points of the 3-sphere. The standard scores, on the other hand, are purely mathematical constructs. They are bivectors that allow me to make correct inferences and calculations about the correlations between the *raw* scores, the +1's and -1's. And as such these bivectors are simply intermediate calculational tools. Nevertheless, they too are entirely non-contextual (their senses do not depend on the experimental directions chosen by Alice and Bob).

The fact that there is no contextuality of any kind involved in my model is as clear to me as daylight. It is evident that the values of the variables A(a, L) and B(b, L) do not change when the respective contexts a and b are changed. In fact A and B are functions *only* of the handedness, L, of the 3-sphere: A = A(L) and B = B(L). Moreover, A(L) and B(L) are purely random numbers, with exactly a 50/50 chance for the values +1 and -1. It is therefore impossible to get the EPR correlations from A(L) and B(L) if they are treated as points of the real line. This was Bell's mistake. He assumed an incorrect topology for the co-domain of the functions A(L) and B(L). As I have discussed in several of my papers, this was both a physical and a mathematical mistake. It is abundantly clear from the physics of the EPR setup that A(L) and B(L) are points of the parallelized 3-sphere, not the real line; and as such the correlations between them cannot possibly be anything but the cosine correlations.

Incidentally, I am not at all concerned about the local operations, resources, communication, or any other engineering issues popularized by quantum information theory. I am only concerned about the local causality of the real physical world. Within my model the numbers A(L) and B(L) are determined strictly locally causally and entirely non-contextually. Thus, to me, the issues you bring up---such as factorizability, non-commutativity, etc.---are all irrelevant. The real facts of the matter are the measurement results, the numbers +1 and -1, and the correlations dictated among them by the topology of the 3-sphere. When these facts are correctly understood in topological terms, the quantum correlations no longer appear mysterious.

Therefore I urge you to forget about contextuality and try to think about the whole issue from my topological perspective.

Here's how I see a software simulation of a local hidden variable theory applied to the EPR experiment:

1. Make a computer program which generates randomly the hidden variables corresponding to each of the two particles corresponding to a singlet state (stored in a data structure).

2. Make a computer program which receives as input a data structure containing the hidden variables corresponding to one particle, and the orientation of the measurement device, and returns either +1, or -1. Give a copy of this program to Alice and another to Bob.

3. Prepare two states with the algorithm #1.

4. Send one of the states, by email, to Alice, and the other to Bob, who are at locations distant from one another.

5. Alice disconnects from the Internet, then runs the algorithm #2, with the first parameter being the hidden variables she got by email, and the second parameter the orientation of the measurement device, randomly chosen. Then she saves the orientation and the result (+1 or -1).

6. At the same time, Bob does the same as Alice, but for his particle.

7. Alice and Bob connect simultaneously to the Internet and send the orientations and the results to a server, to be recorded.

8. Repeat the steps 3 to 7 a large number of times.

9. Calculate the correlations between the data recorded on the server.

10. Compare the results with those predicted by Quantum Mechanics (and verified by experiments).
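The ten steps above can be condensed into a single script. This is a minimal sketch of my own: the hidden variable (a random unit vector shared by the pair) and the detection rule sign(λ·a) are illustrative choices, not anyone's actual model. A local model of this shape yields a correlation that is linear in the angle, not the QM cosine:

```python
import math
import random

random.seed(1)

def make_singlet_pair():
    # Step 1: the shared hidden variable -- here a random unit vector in the
    # plane (an illustrative choice, not a specific published model)
    phi = random.uniform(0.0, 2.0 * math.pi)
    lam = (math.cos(phi), math.sin(phi))
    return lam, (-lam[0], -lam[1])  # anti-correlated copies for the pair

def measure(hidden, angle):
    # Step 2: the outcome depends only on the local hidden variable and the
    # local detector orientation
    s = hidden[0] * math.cos(angle) + hidden[1] * math.sin(angle)
    return 1 if s >= 0 else -1

def correlation(angle_a, angle_b, n=50000):
    # Steps 3-9: many runs, averaging the product of the two outcomes
    total = 0
    for _ in range(n):
        lam_a, lam_b = make_singlet_pair()
        total += measure(lam_a, angle_a) * measure(lam_b, angle_b)
    return total / n

# Step 10: compare with the QM prediction E(a, b) = -cos(a - b)
for deg in (0, 45, 90, 135, 180):
    t = math.radians(deg)
    print(deg, round(correlation(0.0, t), 2), round(-math.cos(t), 2))
```

For this particular model E(a,b) = 2|a-b|/π - 1, so at 45° it gives about -0.5 where QM predicts about -0.71; the two agree only at 0°, 90° and 180°.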

----------------------------

I think it is a good exercise to think about possible ways to make the program output the two results correlated as they are in QM. The ardent practitioner of this sadhana receives insight into the meaning of Bell's theorem :-)

I think this won't work, for the reasons I posted in the "Quantum Music ..." forum.

A time-dependent network of continuous-function inputs and outputs (aka the Real World) is never disconnected from information bounded by time-connectedness, and in it arbitrary outputs are correlated by the time scale of observation. What you suggest is a 2-node network of discontinuous functions, while what Joy proposes is a many-node network of continuous functions. Not the same thing.

Thank you for your support earlier. I agree with your point here. Cristi has not understood the real problem yet. As a simple illustration of the problem, consider S^3, which has a non-trivial twist in its fibration. Now remove just one single point from it. That reduces S^3 to R^3, which has a trivial topology compared to S^3. As I have shown, Bell's theorem goes through for R^3 but not for S^3.

There are the following types of processes involved: the disintegration leading to the two particles, and the spin measurement. Just these. It doesn't matter where the planet Jupiter is, or if it is raining. I only ask the hidden variable theory to pass the simplest test, forget about the rest of the Universe.

If you have a hidden variable theory, it means that you have a realistic description of these two processes, and this realistic description can be used to predict the outcomes of the measurements, from the local elements of reality. You can put in each of the two programs whatever complicated networks or nonlinear differential equations you want. At the end, you will have two programs, more or less complicated.

The point is that any local hidden variables theory (not only Joy's) has to face the separation in location of the two particles.

There you provided some arguments about the impossibility of simulating it. They are very general, and seem to apply just as well to any physical phenomenon, implying that none of them should be simulable. Yet our experience shows that most physical processes whose detailed description we know can be simulated, even some highly non-linear ones, within a good approximation. I don't see from your arguments why this particular theory cannot be simulated at all, at least well enough to distinguish a linear function from a cosine.

Anyway, the purpose of this post was to provide a more pragmatic way to think about Bell's theorem. I can make it even more pragmatic: instead of computer software, make mechanical devices. One contraption generates the pairs of electrons, and there is one contraption for each electron. Represent the two particles by two contraptions, constructed so that after you select a direction on their dials, they show +1 or -1.

Thanks for clarifying that. I always liked your method, and the closer I look the better I like it.

After having located the web source of the "Randi challenge" silliness, I decided to browse some other forums, where I found statements like "R^3 is identical to S^3" to which I could only think to reply, "well, that's just nuts." So I appreciate your difficulty in overcoming misunderstandings of pretty basic topology, let alone analysis, even among trained mathematicians and physicists.

I'm going to continue to question your conclusions about quantum nonlocality; I know now, however, that the investment is worth it.

I didn't say the process is unsimulable. I said it is unsimulable by the 2-node network of discontinuous functions that you suggested.

I have struggled to understand Joy's point that the continuous function physics that is the context of Einstein (EPR) differs from the way that local realism is defined in the context of violations of Bell's inequality, and have concluded that he is correct.

I understand the difficulty. Because I work in this area, it should have been immediately evident to me, and it wasn't:

Joy's is a topological formulation, and because of that, we have to think back to the beginning of topology -- to Euler's network formula and Euler's polyhedron characteristic. You remember the bridges of Königsberg problem? We may be told that there are so many bridges and so many islands, and how big they are and how far apart, what shape, etc., yet all that information is irrelevant to graphing the solution -- the network graph will still simply contain a specific number of nodes and connections.

The graphing solution is a networked algorithm of continuous functions, such that changing information at one node can predict what will change at another node. It is critical that the nodes are connected -- which is why Joy's domain is restricted to parallelizable spheres (S^0, S^1, S^3, S^7) -- and it is critical that the measurements are correlated simultaneously. It doesn't help to remove the information from a node and hide it, because that discontinuous action changes the network configuration. Even if A and B remove their nodes simultaneously, they won't correlate with the network at a different time, because the network will have been changed.

Topology used to be called analysis situs. Joy's measurement scheme only works "in situ." A continuous function.

I don't see the connection between the networking algorithm you describe and Joy's theory. Also, I don't see how this explains the EPR correlations in a local way. Is this a metaphor, or is it the precise explanation of what you mean?

If the Hopf fibration structure of S^3 plays an important role, I would like to know the precise mechanism by which this ensures the correlations, and why this wouldn't work if we remove a point from S^3.

Actually, the nicely entangled circles from the Hopf fibration will not completely go away by removing a point.

The Hopf fibration can be viewed as a way to express the hypersphere S^3 as a twisted product between the circle S^1 and the sphere S^2. By removing one point, one of the fibers will become S^1 minus a point, which is topologically equivalent to R^1, while all the others will remain S^1. Therefore, R^3 will still be a fibration, with variable fiber, all fibers except one being S^1. And most of the circles will remain entangled. Another way to think about this is to project S^3 stereographically to R^3. The North pole will go to infinity, and the circles will be projected onto circles, except those passing through the projection pole, which will be projected onto lines. Most of the circles will remain entangled.
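The stereographic picture can be checked numerically. Here is a sketch of my own: take one Hopf fiber that avoids the projection pole, project it stereographically to R^3, and verify that its image is still a round circle (the points are coplanar and equidistant from a common center).

```python
import numpy as np

def fiber_point(z1, z2, theta):
    # the Hopf fiber through (z1, z2): multiply both components by e^{i theta}
    w1, w2 = np.exp(1j * theta) * z1, np.exp(1j * theta) * z2
    return np.array([w1.real, w1.imag, w2.real, w2.imag])

def stereographic(x):
    # projection of S^3 minus the north pole N = (0, 0, 0, 1) onto R^3
    return x[:3] / (1.0 - x[3])

# a fiber that never reaches the pole (|z2| < 1, so the 4th coordinate < 1)
z1, z2 = complex(1.0, 0.0) / np.sqrt(2), complex(0.6, 0.8) / np.sqrt(2)
thetas = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = np.array([stereographic(fiber_point(z1, z2, t)) for t in thetas])

# coplanarity: the centered point cloud has rank 2
singular = np.linalg.svd(pts - pts.mean(axis=0), compute_uv=False)
print("coplanar:", singular[2] < 1e-9)

# concentricity: least-squares center c from |p|^2 = 2 p.c + k
A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
sol, *_ = np.linalg.lstsq(A, (pts ** 2).sum(axis=1), rcond=None)
radii = np.linalg.norm(pts - sol[:3], axis=1)
print("constant radius:", np.ptp(radii) < 1e-9)
```

A fiber through the pole itself would instead project to a straight line, which is the exceptional R^1 fiber described above.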

If Joy's theory works for S^3 because of the entangled circles of the Hopf fibration, I expect it to have some significant, if not identical effects in R^3 as well. For example, the integrals over S^3 will be the same as those over R^3.

It is true that parallelizability and factorizability are related in the way indicated by Joy. But how exactly can this provide the correlations? How exactly is removing one point, and changing the topology of a single circle, responsible for the jump from a cosine to a linear correlation? Where did Bell assume R^3 instead of S^3, and how precisely did this affect the results? What about the other questions I raised on this forum?

Actually, all circles remain entangled after removing one point, because the line goes to infinity.

Integrals remain the same after removal, because there's just one missing point - of measure 0.

Continuous functions remain the same, because they can be extended by continuity to that point. In R^3 there may in principle be functions which diverge at infinity, but in physics they generally tend to 0.

If the main advantage of S^3 is that it is parallelizable, this holds for R^3 too.

I look at it this way: Bell's theorem explicitly proves that one cannot map quantum configuration space to physical space without a nonlocal model. Joy Christian proves the converse: one cannot map physical space to quantum configuration space without a local model.

That leaves us, actually, at the same impasse -- between quantum discreteness and classical continuity, that has stumped physics for a century -- with at least one critical hitch:

It may get us closer to the ability to define the boundary conditions of the universe at two discrete points in a non-arbitrary manner. Einstein's general relativity, a model finite in time and unbounded in space, chooses boundary conditions "in time" that are limited in space by special relativity (Minkowski space-time). All physics is local within this limit; that's what "local realism" means.

Einstein intended, however, that general relativity would be only intermediate toward a singularity-free theory that could incorporate Mach's principle, i.e., the idea that all physical influences in the universe are dynamically connected. He could not see a way to do this without specifying boundary conditions. Think of the classical problem of getting a particle (a spaceship) from Earth to the moon -- it is basically formulated as a two-point boundary value problem in six dimensions. Out of the infinite paths the ship could take on its journey, one path guarantees the least time and least energy (fuel).

Now suppose we apply a two point boundary to the universe itself. These points do not exist in general relativity because the space is unbounded. To get "true" continuous functions; i.e., where boundary conditions are absolute rather than arbitrarily chosen, we would need a bounded space. Let me try and explain metaphorically -- sit with one leg crossed over the other and position your index and middle finger on the tip of your big toe. Run one finger along the sole of your foot and around the heel, run the other finger over the top of your foot and over your shin. Then reverse direction, and upon returning to your toe, allow your fingers to rotate and exchange position, so that they move along a plane opposite to the one before.

What you've got is an analog of a Moebius strip -- a nonorientable 1-dimensional surface in which two properties exchange position through one complete revolution. In complex analysis, though, points (represented by the tips of your fingers) contain two parts, so that a point in the plane is actually represented by a line. By constraining motion in eight dimensions (S^7), where the octonions live, Joy's lines are always oriented by the initial measurement conditions, because the octonions are noncommutative and nonassociative. The Moebius twist guarantees parallel opposites (cf. spin up/spin down).

It is the property of orientability, not parallelizability, that is critical to Joy Christian's theory, and that makes the difference between R^3 and S^3. R^3 is oriented arbitrarily on the equator of S^3 (+ oo to - oo), yet that is not sufficient for a "local" interpretation of events in spacetime because like general relativity in Minkowski space, the space is unbounded. Joy's space that is bounded by the parallelizability property defines orientation of the two-point boundary as a least-action continuous function -- like the classical problem.

Also like the classical problem, in which the six-dimensional calculus requires only three dimensions of space, Joy's eight-dimensional calculus on S^7 requires only the four-dimensional calculus on S^3. So the physical space of S^3 maps 1 for 1 to the quantum configuration space of S^7, which we can now call "locally real."

I think this is the best place to insert a few additional thoughts, because they tie into my 8 Aug post above.

As I've indicated, while I struggle to understand the physics in your theory (I'm getting there), I think the mathematical model is stunning and spot on.

Ever since you brought it up, I've been bugged by your "2pi vs. 4 pi" null rotation value, because I knew I had scribbled something along those lines somewhere, sometime, and I couldn't find it anywhere in my notes. After I had given up, the right notebook turned up under some books on my night table, with the relevant information on the very first page. The page is dated 2 Aug 2010 and headed "Hyperbolic fixed point projection." There's a couple of figures with callouts, but I think the text is understandable without them. I will transcribe it just as written, and then comment:

equatorial values: +1, -1, i

1^2 + (-1)^2 + (-1) = 1

The y axis is always oriented on kissing equators; the x axis, always on a fixed point of a hyperbolic projection. Therefore:

+1^3 + (-1)^3 + (i)^3 = 1

+1^4 + (-1)^4 + (i)^4 =

1 + 1 + (-1) = 1

(At this point, the obvious struck me, and I wrote on the next page):

(+1, -1, i)^1 = i

(+1, -1, i)^2 = 1 1 + 1 - 1

(+1, -1, i)^3 = 1 1 - 1 + 1

(+1, -1, i)^4 = 1 1 + 1 - 1

(+1, -1, i)^5 = i

[end notebook text]

Even though there are five iterations, we are of course counting only four, corresponding to 0, 1, 2, 3, 4 dimensions with real results in only 3 dimensions.

In my sphere kissing model on S^n Euclidean spheres, the 4pi result corresponds to the kissing number 12 of S^2. I.e., the integral (digital, or floor) value of 4pi is 12. (I explain in my ICCS 2006 paper why only the integral value is significant in this context.)

S^2 is the zeroth member of my sphere kissing group.

Under the assumption of Bell's theorem, 2pi (integral kissing number 6) would apply to domain S^1, whose subset is S^0 with kissing number 2 (pair correlations in the flat plane), while your pair correlations range all over S^3, just as you said -- by recurrence at the S^3 boundary.

It is good to be appreciated. Physicists tend to get bogged down into unnecessary details, but you are looking at my argument with a mathematician's eye, and hence are able to see things that my fellow physicists have missed.

A minor correction to what you write. You write: "...your pair correlations range all over S^3, just as you said -- by recurrence at the S^3 boundary." But S^3 has no boundary. What is happening is more like that old Pac-Man game, in which Pac-Man disappears at one edge of the computer screen only to reappear at the other edge. As far as Pac-Man is concerned, the screen has no boundary!

Recursivity, you say. Well, I think people confuse what the real meaning of being a mathematician is, and politeness won't change this evidence.

Now let's analyze this S3: one says bounded, the other not bounded. Well, you must choose: do you want a die, or a series of recurrences? Finite, precise and universal. But for once, Tom is right: it is bounded. I love this platform.

ps: the entanglement for uniqueness is specific and precise; the main central sphere is the biggest volume, and after it the series of recursivity is universal for all uniqueness. The fractal from this central sphere is finite. The volumes furthermore decrease like the cosmological spheres; see for example the simple logic of our solar system, and even the galaxy, this Milky Way and its central sphere: you shall see that the center of a system of spheres is always the biggest volume. Now of course we continue with the supergroups of galaxies and the mega-groups ... and the center of our universe; the logic is the same for our quantum uniqueness. The system possesses a finite and precise series, like the volumes and the spinal and orbital rotations. It is so important, like the bounded system, universal for all correct thermodynamical correlations.

How could you interpret the real universal entanglement if you do not respect the pure series of universal recurrence about the pure real numbers and their specific distribution? There exist universal series, just as there exist series invented by humans; it depends on what we want to explain. To my knowledge, here it is about a purely universal point of view. So the limits must be precise, and the series also.

In fact, the finite groups are essential. THAT DEPENDS ON WHAT YOU WANT TO DESCRIBE.

Then it depends on what your recurrence series are ... and on their predictions, of course.

And by the way, that same notebook continues on 10 August 2010 (I'm not making this up) with a figure matching a point in C* by hyperbolic projection to a point of measure zero in R^n, with the text: "The 'screen' is a scale invariant field of infinitely self similar random fields. The localized measure, being time-dependent, is the least moment in which the complex network can match point for point between observer and point in the field."

I think I have seen your physical landscape before, and have refused to believe it until now.

The text contains a note from Zaslow, Gowers, p.163 (Tim Gowers's big collection of math essays; great book but I can't remember the title): "Every complex manifold of (complex) dimension n is also a real manifold of (real) dimension M = 2n." I think this relates in a deep way to your 2(sqrt2). I'm going to look into it. You rock, Joy.

Wawww, that rocks indeed, but be sure I will give you a real course in rock. It is ironic.

n dimensions and kissing spheres? No. But frankly, and after that, the multispheres too? No, perhaps??? Incredible. If people don't see the team of copycats, well, then it's serious. Oh God, they steal even that, even the truth.

You aren't a rationalist, nor a universalist, nor a relativist, nor a determinist, nor a rational mathematician. And your recurrences are totally false, in their details and in their generality. Copenhagen is the old school, and it is well like that; and don't say a thing about freethinking. Freedom is the sister of democracy and the mother of transparency; I speak with a real freedom, and it is well like that. The real meaning of freedom is more than you can imagine, but of course freedom is the sister of consciousness; if not, it can be a chaotic thought. We return to a simple evidence: human nature and its own vanity eating it. Ironic is a weak word.

AN EXPLICIT LOCAL VARIABLES TOPOLOGICAL MECHANISM FOR THE EPR CORRELATIONS

It is based on a non-trivial topology (wormholes).

Cut two spheres out of our space, and glue the two resulting boundaries together. This wormhole can be traversed by a source-free electric field, and used to model a pair of electrically charged particles of opposite charges as its mouths (Einstein-Rosen 1935, Misner-Wheeler's charge-without-charge 1957, Rainich 1925).

For EPR we need a wormhole which connects two electrons instead of an electron-positron pair. A wormhole having as mouths two equal charges can be obtained as follows: instead of just gluing together the two spherical boundaries, we first flip the orientation of one of them. Since the electric field is a bivector, the change in orientation changes the sign of the electric field, and the two topological charges have the same sign.

Now associate to the two electrons your favorite local classical description. The communication required to obtain the correlation can be done through the wormhole.

----------------

This may be the basis of a mathematically correct local hidden variable theory. Also, it seems to disprove, or rather circumvent, Bell's theorem. For Bohm's hidden variable theory, it provides a mechanism to get the correlation without faster than light signals. I proposed it here for theoretical purposes only, as an example. My favorite interpretation is another one.

I would like to invite you to test your weapons on the theory of local hidden variables I sketched above :-)

I know that it is a work in progress, but the major elements are already developed: topological charges and Bohmian mechanics. Can you see any major difficulties in applying it to make Bohmian mechanics local?

The main point is: I gave an example of local hidden variables theory.

Q: "Do you really want me to do that?"

A: Someone reading this may believe that you have a decisive and obvious argument, but you are so merciful and don't want to "destroy" me :-))

Don't feel obliged to answer if you don't have arguments.

Q: "if explanation of QM requires wormholes ..."

A: I never said it does. I'll repeat what I already said: I am very happy with the wavefunction ontology in QM. I don't think that we need additional hidden variables. But this is my personal preference... If I find something interesting for the Bohmians, I have to be honest and tell them, even though my views are different. They have a hard life these days: I've seen some websites where they are ridiculed, and hidden variable theories, local or non-local, are considered pseudoscience, simply by taking the conclusions of some no-go theorems to the extreme. Aren't those guys extremists?

To be honest with the Bohmians, I have to acknowledge that I found that, if we combine the idea of topological particles as in geometrodynamics with a hidden variable theory, the principal objection against the latter - the requirement of a faster than light mechanism to ensure the correlations - vanishes.

Q: "if explanation of QM requires wormholes ..."

A: The topological charges are tiny wormholes, which were not invented to solve the quantum puzzles, so it is hard to say that somebody considers that "explanation of QM requires wormholes". They were invented to model particles, especially charged ones, without singularities. They also seem to explain the electromagnetic field as a direct application of cohomology -- the "already unified theory". The electrically charged ones behave like charges without having a charge (hence "charge without charge").

So they were invented for another purpose, and they just happen to provide a convenient spacetime shortcut for the hidden variables.

Q: "... don't you think they will really be all around us?"

According to Einstein, Rosen, Misner and Wheeler, they are all around us -- this is how particles are. Your argument works just as well against standard quantum mechanics:

"don't you think superposition will really be all around us?" :-)

So, even if you don't like wormholes, some may consider them preferable to charge singularities and to non-local interactions required by Bohm's theory, but this is not the main point here.

Before closing, I want to emphasize the main point here, and you didn't touch it:

It is because being and experience and thought/ideas are integrated and interactive that there are limits to the understanding. Where do you think that ideas come from? The requirements, limits, and extensiveness of truth center upon the extent to which thought is similar to sensory experience. We all come from the center of the human body.

AND, the purpose of vision is to advise of the consequences of touch in time. Another great physical truth -- (by Berkeley).

We do not move (naturally) with regard to the Sun and photons because there are no consequences of touch in time. That is, no gravity, black space, death, destruction of vision, inability to discern distance in outer space.

When I pointed out Joy's work to you I was hoping for just the type of result you've produced above. I'd like to thank you for this effort. In particular I enjoyed your clear discussion of contextual and non-contextual. One cannot even evaluate Joy's statements about the relevance of 'contextuality' without knowing what is meant by the terms. Many people are interested in this topic and you have done all of us a favor by clarifying these definitions.

I am also pleased that you are more open on this subject. You earlier claimed that I brought disgrace on my theory by even referring to de Broglie-Bohm or 'hidden variable'. Now you note that "there is no absolute agreement even between experts on the very definitions of what one means by locality, realism, or contextuality, or a no-go theorem for hidden variable theories."

In addition to Joy Christian's analysis challenging the status quo, I've also pointed out recently that weak quantum measurements resulted in the following observation:

"Single-particle trajectories measured in this fashion reproduce those predicted by the Bohm-deBroglie interpretation of quantum mechanics."

In my mind, both of these results, theoretical and experimental, challenge John Bell's test. In addition you have noted that, unlike Bell, Kochen-Specker cannot be put to any experimental test. It is from this perspective that I ask the following:

Is violation of Bell's inequality *the* basis for rejection of local realism in physics?

That is, if Bell had derived Tsirelson's bound, would most physicists today reject local realism in quantum mechanics?

Thanks again for analyzing Joy's work and sharing your analysis with us. I look forward to your answer.

You say: "Single-particle trajectories measured in this fashion reproduce those predicted by the Bohm-deBroglie interpretation of quantum mechanics."

I was not aware of that; however, I do not attach much importance to it, as the regular evolution disagrees strongly with Bohm's trajectories.

Also about: "That is, if Bell had derived Tsirelson's bound, would most physicists today reject local realism in quantum mechanics?"

The name of the bounds and who discovered them is irrelevant; they may equally be called Edwin's and Florin's bounds. What counts is their meaning. Bell's bounds are derived from local realism assumptions, and Tsirelson's bound is derived from QM. There is a third bound, the Popescu-Rohrlich bound, which is derived from non-signaling alone.
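The three bounds can be made concrete with a few lines (a sketch of my own, using the standard CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')): local deterministic strategies reach at most 2, the QM singlet at the optimal angles reaches 2*sqrt(2), and a Popescu-Rohrlich box reaches the non-signaling maximum of 4.

```python
import itertools
import math

def chsh(E):
    # the CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
    return E[0][0] + E[0][1] + E[1][0] - E[1][1]

# local deterministic strategies: Alice and Bob each pre-assign +/-1 outcomes;
# by convexity these bound every local hidden variable model
local_max = max(
    abs(chsh([[A[i] * B[j] for j in range(2)] for i in range(2)]))
    for A in itertools.product([-1, 1], repeat=2)
    for B in itertools.product([-1, 1], repeat=2)
)
print(local_max)  # 2  (Bell's bound)

# the QM singlet, E(x, y) = -cos(x - y), at the standard optimal angles
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
E = [[-math.cos(x - y) for y in (b, bp)] for x in (a, ap)]
print(round(abs(chsh(E)), 4))  # 2.8284  (Tsirelson's bound, 2*sqrt(2))

# a PR box: E(a,b) = E(a,b') = E(a',b) = 1 and E(a',b') = -1
print(chsh([[1, 1], [1, -1]]))  # 4  (the non-signaling maximum)
```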

And by the way, hidden variables are still a dead end. Joy is trying to revive the area, but he is getting no traction. Still, I consider his model worthy of investigation, as his results do increase our knowledge, regardless of the meaning he ascribes to them.

As Joy said, one of the obstacles encountered by the readers of his papers is insufficient understanding of topology. I'll try to help a little bit here.

The particularity of the hypersphere S3 that is supposed to make the difference between the linear and the cosine correlations is its topology.

Other properties remain almost unchanged when one point is removed. For example, the integrals are the same on S3 and on S3-{N}, which is topologically equivalent to R3. The continuous functions on R3 which have a finite limit at infinity are the same as the continuous functions on S3: we can extend such a function from R3 to the point N by continuity and get a continuous function on S3. Hence, so long as only integrals and continuous functions are involved, I find it hard to believe that this extra point N can make such a big difference in the correlation.

So let's take a look at its topology. It is claimed that the distinctive feature is provided by the Hopf fibration, and by the fact that the sphere S3 is parallelizable. Since both S3 and R3 are parallelizable, from the viewpoint of parallelizability there is no difference.

Let's now take a close look at the Hopf fibration. In the case of S3, this expresses the fact that S3 can be covered with circles S1 in a nice way. On the one hand, the circles from the covering don't overlap, but they are entangled. On the other hand, no point of S3 remains uncovered. Also, if we represent each circle S1 as a point, so that circles that are touching each other are represented by points that are touching each other, we get something like the sphere S^2. This construction can be made by using quaternions.

It is claimed that this nice feature of the circles covering S3 is lost by removing one point. The first thing to note is that, even assuming this were true (which it is not), it would still have to be proven that it is relevant to our problem. Not every interesting feature is relevant.

Why does removing one point not break the entanglements of the circles S1? Remember that S3 is the disjoint union of copies of S1 (the number of copies involved equals the number of points of S^2). By removing one point, we break one of these copies of S1, and what we get is S1-{N}, which is topologically equivalent to R1. The other circles remain the same. If one of the remaining copies was entangled with another one, it still remains entangled with it. If it was entangled with the S1 we just broke, it remains entangled with it, because to disentangle them would mean to pass beyond the end of R1, which is now at infinity.

To picture this in 3D, we can use a stereographic projection. You can read this page, which has nice pictures, and watch this nice video to see how the stereographic projection works in this case. All the circles S1 of S3 (except the one containing N) project to circles in R3. The circle containing N projects to a line, because N is projected to the infinity of R3. You can look at the picture from Florin's previous post. The circles are still entangled. We can fill the space by adding circles, but the axis OZ remains uncovered. The axis OZ corresponds to the line obtained by removing the point N. It passes through the interior of all circles of the fibration. None of these circles can escape from having OZ passing through its interior. So, all the entanglement relations are maintained by removing the point N.
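The picture described above can be checked numerically. The sketch below (my own illustration under the standard conventions, not code from the thread) parametrizes Hopf fibers of S^3 as (e^{it} z1, e^{it} z2) with |z1|^2 + |z2|^2 = 1 and projects them stereographically from the pole N = (0,0,0,1): the fiber through N maps to the z-axis of R^3, while the fiber through (1,0,0,0) maps to the unit circle in the xy-plane, which that axis threads through.

```python
import numpy as np

def hopf_fiber(t, z1, z2):
    """Point on the Hopf circle {(e^{it} z1, e^{it} z2)} of S^3,
    with |z1|^2 + |z2|^2 = 1, as real coordinates (x1, x2, x3, x4)."""
    w1, w2 = np.exp(1j * t) * z1, np.exp(1j * t) * z2
    return np.array([w1.real, w1.imag, w2.real, w2.imag])

def stereographic(p):
    """Project S^3 - {N} onto R^3 from the pole N = (0, 0, 0, 1)."""
    return p[:3] / (1.0 - p[3])

ts = np.linspace(0.1, 2 * np.pi - 0.1, 50)  # skip t = 0, where one fiber hits N

# The fiber through N, i.e. (z1, z2) = (0, i), projects to a straight LINE:
line = np.array([stereographic(hopf_fiber(t, 0, 1j)) for t in ts])
# every projected point has x = y = 0, so the image is the OZ axis of R^3

# The fiber through (1,0,0,0), i.e. (z1, z2) = (1, 0), projects to a CIRCLE:
circle = np.array([stereographic(hopf_fiber(t, 1, 0)) for t in ts])
# the unit circle in the xy-plane; the OZ axis passes through its interior,
# so the two image curves remain linked even after N is removed
```

This is exactly the configuration described above: one broken fiber becomes the uncovered axis, and every other fiber stays entangled with it.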

Here I tried to explain why the properties of S3 are very similar to those of R3. There are topological properties which differ: S3 is compact and R3 is not, their cohomology groups and their homotopy groups are different, etc., so I don't claim that S3 is the same as R3.

So I have to ask the following questions:

- Where did Bell assume R3 instead of S3, and how precisely did this affect his results?

- How exactly is the removal of one point, which changes the topology of a single circle, responsible for the jump from a cosine to a linear correlation?

I argued last week that the removal of this point amounts to the assignment of this point as the pole that a stereographic projection sends to infinity. The Hopf fibration is a form of the Desargues projection, or a set of rays forming the heavenly sphere. This is a conformal map, and I argue that the relevant geometry for this physics is given by those circles which remain conformal, which excludes the points at infinity. The conformal theory of QM, SL(2,R), will then necessarily exclude the point removed by the stereographic projection.

"... covering S3 is lost by removing one point." If you did read my essay, you might have rejected the result of my heretical nearly intuitionistic reasoning.

I maintain that for really real numbers, y=|sign(x)|=1 is valid for any x, without an excepted point y=0 at x=0. In other words, plain logic prevents me from removing any point from the line of real numbers, because a point does not have any extension. Euclid defined a point as something that does not have parts. On the other hand, in good old mathematics a continuum is something every part of which has parts. I know this is at odds with the mathematical thinking of those who proclaimed the freedom to arbitrarily define real numbers as the union of the rational and irrational numbers and to ascribe trichotomy to them.

Of course, it requires the capacity for consistent thinking if one is willing to accept that not a single number that exists in a continuum as a potentiality can be addressed or removed if one demands its actually infinite precision. My essay suggested using a limit from the lower side instead. My reasoning allows degeneration into a double root, but not into a singularity. To me, EPR were not far from Buridan's ass.

By the way, I would like you to read my last comment concerning "Evolving Time's Arrow".

I regret that I have only just got around to your arXiv paper ("Comments on 'Disproof of Bell's theorem'"), but things have been flying pretty thick and fast.

I stumbled on this point:

"Joy Christian then attempts a generalization and makes an extraordinary claim without proof [4]: for any Hilbert space H of arbitrary dimension there exists a topological space Omega corresponding to EPR counterfactual elements of reality and a morphism m : H -> Omega such that the following relation holds:

with A_a and A_b two points in the topological space. From this it is not hard to prove that exact local-realistic completion of any arbitrary entangled state is always possible. At this point, it is not clear if Eq. 10

can always be achieved, and this is a very interesting open problem."

I wasn't aware it was an open problem; however, it is simply proved as a consequence of Joy's condition of 4pi (vice 2pi) rotation, q.e.d. Details are in my results in the "Quantum music ..." blog, 16 August, text and attached figure.

His whole framework is, as he has claimed, complete in its generalization.

Joy could derive that in only a few cases related to the division algebras. My point was that I would like to see an actual proof that it works in general. I did not see a complete proof in any of his papers. Remember that the topology of the space Omega could be very convoluted in general. So either show that the only possible topology is the one related to the division algebras, or show that the property holds under any circumstances.

I think Florin has a point here. Explicit calculation for any general quantum state is a formidable task. So that is clearly not a smart way to go about this. However, I now have a much stronger position. I am now convinced that the only relevant topology is that of a parallelized 7-sphere, which of course subsumes all other parallelized spheres. But as I have already shown, for a parallelized 7-sphere Eq. (10) automatically holds, because octonions form a quasi-group. For all other topologies the correlations would be weaker-than-quantum (i.e., unphysical) as well as nonlocal (i.e., unphysical).

Understood. My point is that, for classical pair correlations up to the cosmological limit, clearly physical spacetime (and that is what I mean by "... complete in its generalization") subsumes any quantum configuration space. And Omega does not have to be smooth, only simply connected.

The assumption of physical events on S^3 combined with the topology of S^7 gives the 2-point boundary of continuous observer-observed entanglement a superior role in physical reality over pair correlations by quantum entanglement, because the 3 + 1 spacetime of S^3 oriented by the 7 + 1 spacetime of S^7 orients antipodal points of S^3 at ANY arbitrary distance from the (complex) origin up to the cosmological limit.

It struck me that this is only possible to see, by assuming 4pi rotation over the S^2 complex manifold because, since every point of S^3 is homeomorphic to S^3, 2pi rotation guarantees correlation only 1/2 the time for each event (as quantum entanglement for the Bell-Aspect program predicts), while pairs of events correlated ALL the time by a 2-point continuous function must be calibrated to locate the initial condition. This pair of points is actually a single complex point, sqrt -1, or i, which returns to itself only after four full rotations on the Riemann sphere (four exponential iterations) with its one simple pole at infinity. This corresponds to Joy's "Pacman" analogy -- we can only calculate where we "really" are in spacetime by seeing the event simultaneously disappear and reappear at predicted antipodal points of the horizon, a continuous real function self similar to quantum bilocation. Arbitrary quantum configuration space is subsumed by physically real spacetime.

If you examine my results, and diagram, you should see that hyperbolic projection from the fixed point ( - 1) to all points of S^3 shows perfect correspondence between the quaternion and octonion algebras and real analytical results of Lebesgue measure. Another insight I had to absorb from Joy's research program, in order to be convinced of its validity, is that the calculational artifacts of S^7 only support the physical spacetime of S^3, and are not physically real.

This reply is pretty slapdash; however, my aims are twofold:

1. To provide a basis to determine the exact experimental setup that corresponds to Joy Christian's framework.

2. Lay the groundwork for a theorem that pair correlations in physical spacetime are independent of pair correlations in quantum configuration space.

Your program is different from mine. I think James was trying to tell you this in his own way. For me 7-sphere is every bit as real as the 3-sphere. I am not at all concerned about quantum configuration space, or space-time, or time evolution, or relativity theory. All I care about are the quantum correlations (or expectation values) within the time-honoured tradition of von Neumann and Bell. The evidence of general quantum correlations, routinely observed in the laboratory for arbitrary number of measurement settings, channels, and outcomes, is crying out---at least to me---that 7-sphere is the real deal. I spell out the technical reasons behind this conviction in this paper. Intellectual caution is the only thing that is holding me back from declaring that we live in a parallelized 7-sphere, of which the 3-sphere of EPR correlations is but a single fiber. The exact experimental procedure in the special case of 3-sphere is already described in this paper (cf. eqs. (16) and (17) and the discussion that follows). This procedure can be easily generalized for observing more general correlations within the 7-sphere.

This is something we can definitely agree on - that both the octonion-like 7-sphere (close-packing of which yields the Gosset lattice and E8) and the quaternion-like 3-sphere (close-packing of which yields the 24-cell lattice and F4 (or D4)) are fundamentally important.

No, Joy, your research program does not differ qualitatively from mine. You'll have to listen to James's views in full for yourself, to determine if his approach is scientific.

Having taken time over the past two days, however, to carefully read your wonderful paper "Absolute Being vs. Relative Becoming," I also know to a reasonable certainty that our research programs converge on the generalization of relativity in higher-dimensional spaces. Long ago, I wrote in an unpublished short story, "The only possible question is, 'Has it begun?', to which one answers in the only possible way, 'It has.'"

Our methods differ, not our programs, which are both rooted in metaphysical realism.

Metaphysical realism, 3-sphere, and 7-sphere are all good things to agree about. In a week or so, we---the FQXi members---will be talking about all these things and much more, while on a cruise ship from Bergen to Copenhagen: Setting Time Aright. Only "time" will tell whether I will come back enlightened or disappointed.

I want to add that if you hedge your bets on a nonphysical space, you'll not get a different outcome than Bell-Aspect. I.e., there will be no way to falsify their results and verify yours. This is the same dilemma that string theory is in, experimentally. It explains all the experimental results of classical and quantum physics, without novel and independent predictions of its own.

QM already avers that nonlocality is essential to understanding locally real results. If you want S^7 to merely explain that the assumption of nonlocality is not necessary to that understanding, you'll lose the argument, by the parsimony of Occam's Razor, because the Bell-Aspect program proves the assumption of nonlocality.

On the other hand, S^7 correspondence to physically real spacetime demands *novel* predictions. We have to actually *see* action at a distance and then prove by topological criteria that the distance in conventional interpretations of local realism is irrelevant. Already, in your "Relative Becoming" paper, you have the basis -- in figure 8. Einstein would have approved of the "experimental metaphysics of time," because it provides a physical justification for higher-dimensional analysis (on which he commented in Appendix II of The Meaning of Relativity). The model must be analytic rather than probabilistic, and that demands continuous real-world functions without having to specify arbitrary boundary conditions. That's what a purely physical spacetime looks like, whether in four dimensions or eight. In other words, there's no such thing as a locally real nonphysical spacetime.

I am not hedging my bets on a nonphysical space. 7-sphere IS the physical space, and observed quantum correlations are the evidence for this space. This is what my analysis of the link between the parallelizable spheres and quantum correlations has shown. Moreover, my analysis has also shown why we cannot exceed the upper bound on any quantum correlation. These are bonus results, not at all necessary to refute Bell's theorem. They tell us exactly why the quantum correlations are what they are---they are simply correlations among the points of a parallelized 7-sphere. On the other hand, the actual theorem of Bell concerns only the EPR correlations, which are correlations among the points of a 3-sphere. We do not have to do anything to falsify Bell's prediction for these correlations. The experiments have done that already. Bell's prediction for the EPR correlations does not agree with what is seen in the experiments. My prediction, on the other hand, agrees with the experimental results completely. This means that the original EPR argument for the incompleteness of quantum mechanics is no longer invalid, and the conclusion of "non-locality" by Bell et al. is no longer valid. The rest of my work is therefore just a bonus as far as the refutation of Bell's theorem is concerned. In other words, as far as I am concerned, the horse of non-locality is dead. There is no need to flog it any further.

I'm a supporter of your program -- for subsuming Bell's theorem, not replacing it -- and certainly not "disproving" it. It's not that nonlocality is sacred, it's that nonlocality is ESSENTIAL to quantum mechanics and Bell-Aspect. One cannot simply declare "nonlocality doesn't exist" and then provide a mathematical justification, when the physical evidence has already been presented (as you yourself said, that's like giving evidence for the impossibility of heavier than air flight after the Wright brothers have already done it). Heck, there are a whole lot of things that don't exist physically that can be mathematically modeled (Penrose triangle, e.g.), or modeled in a computer simulation.

You cannot win on these terms.

I AM convinced that you can demonstrate bi-locality in your own terms, and then show why this illusion is identical to what you call the illusion of quantum entanglement. Then "local realism" will really mean something beyond being another mere alternate interpretation of quantum mechanics. The first assumption of EPR is not that nonlocality doesn't exist, rather -- that general relativity is true and spacetime is physically real ("All physics is local").

I agree that the 7-sphere is important. I prefer to think of the 7-sphere as a physical representation of the Octonion, where the radius of the 7-sphere is the octonion 'scalar' component, and the hyper-surface of the 7-sphere is the octonion 'vector' components. If we think about Minkowski-like metrics (obviously an 8-D metric is different from a 4-D metric), then the radius corresponds to a 'time-like' dimension (I think this is the 'proper time' about which Peter was asking), and the 7 hyper-surface dimensions correspond to the 7 'space-like' dimensions. Recall that octonions/ Cayley numbers have that overall 'i' phase (and i^2 is -1) between the scalar and vector components.
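For readers who want to compute with octonions, here is a minimal sketch (my own illustration, not from the thread) of the Cayley-Dickson product (a,b)(c,d) = (ac - d*b, da + bc*), which builds octonions from quaternions. The key property it exhibits is norm multiplicativity, |xy| = |x||y|, which is what keeps products of unit octonions on S^7; the product is, however, not associative.

```python
import numpy as np

def conj(x):
    """Conjugate: negate all imaginary components, keep the scalar part."""
    out = -x.copy()
    out[0] = x[0]
    return out

def cd_mult(x, y):
    """Cayley-Dickson product (a,b)(c,d) = (ac - d*b, da + bc*),
    applied recursively down to the reals: reals -> complexes ->
    quaternions (length 4) -> octonions (length 8)."""
    n = len(x)
    if n == 1:
        return x * y
    a, b = x[:n // 2], x[n // 2:]
    c, d = y[:n // 2], y[n // 2:]
    return np.concatenate([cd_mult(a, c) - cd_mult(conj(d), b),
                           cd_mult(d, a) + cd_mult(b, conj(c))])

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=(3, 8))        # three random octonions

# Norm multiplicativity |xy| = |x||y|: unit octonions are closed under
# multiplication, so the product of two points of S^7 stays on S^7.
lhs = np.linalg.norm(cd_mult(x, y))
rhs = np.linalg.norm(x) * np.linalg.norm(y)

# Octonions are not associative: (xy)z != x(yz) in general.
assoc_gap = np.linalg.norm(cd_mult(cd_mult(x, y), z) - cd_mult(x, cd_mult(y, z)))
```

Norm multiplicativity fails for the 16-dimensional sedenions, which is why the parallelizable spheres stop at S^7.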

I think Tom is saying that the 7-sphere is 8-dimensional, and that String Theory opponents will dislike having so many spatial coordinates that are not directly observable.

I had many conversations with Garrett Lisi about the E8 Gosset lattice being a minimum of 8-dimensional, but Garrett did not want to agree with me. Close-packing of 7-spheres leads directly to the Gosset lattice. Perhaps you and Garrett could bounce ideas off of each other on the cruise. I'm playing with more dimensions - perhaps 52-D, but the 7-sphere and E8 are important ingredients in my 'vegetable soup' TOE.

I am sorry Tom, but we will have to agree to disagree about this. There can be no physical evidence for something like non-locality. All anyone can ever observe is correlations. You may then interpret these correlations as evidence for non-locality, but that interpretation very much depends on the validity of a theorem by Bell. This theorem, however, exists rigorously only for the limited case of the EPR-Bohm correlations. Moreover, as a theorem about physics, Bell's theorem is demonstrably false. Here is the counterexample. So there is now no way to interpret the correlation observed by Aspect et al. as evidence for non-locality. The horse of non-locality is dead.

The Holographic Principle is a fundamentally non-local idea, but tachyons - if they exist (the vacuum has a negative vacuum expectation value that has a tachyonic nature) - can make 'non-local' phenomena appear 'local'. Many people interpreted this incorrectly as a Higgs boson that would seem to travel backwards in time. Particle properties could negate topological/ geometrical properties.

How does the 7-sphere become our observed Spacetime (which resembles a 3-sphere)? If the octonion 'unwinds' into a bivector of quaternions, then there is confusion about what is 'time' vs. 'space'. If the 7-sphere decays by discarding spatial dimensions (in 1's, 2's, 3's or 4's), then we could potentially end up with an M2-brane on which tachyonic Anyons may exist.

You have to distinguish between two very different kinds of non-localities. What you are talking about is what is known as "signal non-locality." The quantum non-locality, however, is a very different kind of non-locality. One cannot send a signal with quantum non-locality. In the context of Bell's theorem one is mainly concerned about this no-signalling quantum non-locality, and not about the physical non-locality you are concerned about. This is why you have consistently found my statements puzzling.

"I'm disappointed that you hold with the notion that a theorem can be disproved."

Your contributions here have been greatly appreciated by me. This point that you have brought up before, I have not really concerned myself with. Since it is important to you, then I assume it should be of importance to me. Can you use this simple example to drive your point home: Does Relativity theory disprove the Pythagorean Theorem?

Let me be explicit about what I mean by the statement "Bell's theorem HAS been disproved." As Bell repeatedly claimed in his book and elsewhere, functions of the local form A(a, lambda) = +1 or -1 and B(b, lambda) = +1 or -1 cannot reproduce correlations of the form < AB > = -a.b, with the usual meanings for the symbols (see Chapter 8 of his book, for example). Bell goes on to stress that: "This is the theorem." But if this is the theorem, then it is simply a false theorem, as you can see for yourself from this 1-page disproof of mine. Bell's theorem, as stated by Bell himself, is simply false. It is a "theorem" that never was. However you put it, Bell's theorem has been disproved.
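Bell's claim quoted above can be illustrated with the standard toy local model he himself discussed: a hidden unit vector lambda distributed uniformly over the sphere, with A(a, lambda) = sign(a.lambda) and B(b, lambda) = -sign(b.lambda). A Monte Carlo sketch (my own illustration, not from the thread) shows that this local model yields a correlation linear in the angle, -1 + 2*theta/pi, rather than the quantum -cos(theta):

```python
import numpy as np

rng = np.random.default_rng(0)

def lhv_correlation(theta, n=200_000):
    """Monte Carlo estimate of <A B> for the toy local model:
    lambda uniform on the unit sphere, A = sign(a.lambda), B = -sign(b.lambda)."""
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)   # uniform on S^2
    a = np.array([0.0, 0.0, 1.0])
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])   # angle theta from a
    A = np.sign(lam @ a)
    B = -np.sign(lam @ b)
    return float(np.mean(A * B))

theta = np.pi / 4
local = lhv_correlation(theta)      # close to -1 + 2*theta/pi = -0.5
quantum = -np.cos(theta)            # about -0.707: the cosine correlation
# The local model reproduces perfect anti-correlation at theta = 0
# but gives the linear law, not the quantum cosine, in between.
```

This gap between the linear and cosine laws at intermediate angles is precisely the content of "This is the theorem."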

I am sincere in my messages to Tom about the value to me of his contributions in this thread. He and I disagree about some theoretical matters. However, whatever I think should have no relevance to the value of your work. When I questioned the introduction of relativity theory concepts (it could have been any other theory's concepts), I needed to find out whether your work was theory dependent. I recognize that there may be theories that find some correlation between your work, or others' work, and themselves. That correlation does not concern me. What concerned me was whether or not your work was theory independent. Is it general enough to be wholly dependent only upon empirical evidence and perhaps the methods of gathering empirical evidence, but not reliant upon one's theoretical interpretation of that evidence?

My work is not theory dependent in any way, because Bell's work is not theory dependent. Both Bell and I are concerned about *possible* theories, not *actual* theories, and our methods are, as you put it, "general enough to be wholly dependent only upon empirical evidence and perhaps the methods of gathering empirical evidence, but, not reliant upon one's theoretical interpretation of that evidence."

I am truly sorry to say that what you provide is not a "disproof," even if there were such a thing. It is also not a proof, since you state no explicit theorem to prove.

If it were a proof, though, it would rely on the technique of double negation, which is much weaker than the proof of Bell's theorem. Bell's theorem is a proof by contradiction: either A is true or B is true. Experimental evidence for nonlocality reconstitutes the proof as a proof by construction, the strongest of proofs.

Bell is also very clear about the space in which his theorem works, which is the space of probability functions. If one wants to reconcile probability with the continuous functions in which your upper and lower bounds live, one must allow the same "at a time" measure criteria that support probability functions. Not even geometric algebra will save you from running into a contradiction, because one of the concepts one has to give up in this case is double negation. Why?

Integration of the bounds in your context does not exist in the probability space of discontinuous functions, and you can't "roll the dice" in your space of continuous functions. As a consequence, one can get no more than Bell's own assumptions about EPR correlations out of this demonstration -- what that amounts to, is to assume a priori what one set out to prove. So there's no proof here that will stand technical scrutiny.

On the other hand, I retract nothing I have said about the significance of your research. You've done many important things. It's just that disproving Bell's theorem isn't one of them.

I beg to differ. I have decisively disproved Bell's theorem. Bell's theorem is no more. Hence there is no such thing as "quantum nonlocality." Bell's claim was based on his inability to understand and treat the physics and mathematics of rotations correctly. Rotations can only be treated correctly within the powerful language of geometric algebra developed by Grassmann, Hamilton, and others. When this language is employed correctly to rotations in the physical space, quantum correlations are easily understood as purely classical, local-realistic correlations without needing the problematic concepts of entanglement or nonlocality. This fully vindicates Einstein and EPR in viewing entanglement as a cover for our ignorance, and shows that the entire body of work inspired by Bell is fundamentally misguided. It has been erected on a false theorem---like a theorem which claims 7-3=2. Moreover, resorting to probability theory to save Bell's theorem is of no use, because probabilities concern ensembles of physical systems, not individual physical systems. Einstein and EPR were concerned about individual physical systems within a physical theory, not ensemble of physical systems within a probabilistic theory. It is high time that we let Bell's non-theorem R.I.P. It has helped me tremendously in understanding the raison d'être of quantum correlations, but it is of no further use in understanding the "final theory of everything."

"Every result that can be called scientific is theory dependent. One has no means to interpret results other than in theory."

No result that can be called scientific is theory dependent. Theory may be involved to a high degree, but, that is due to our needs and not to nature's. Results do not need to be interpreted. Results are observed as empirical evidence. Empirical evidence is not about theory. Results result from natural patterns that exist independent of theory. Humans need something to talk about, but nature does not.

A work that is truly general enough to frame what is possible to know about nature will be clean of theory. Up to this very day, all theories contain inventions of the mind that are obstacles to progress in understanding nature. Work that is truly general enough to be an accurate representation of what nature makes possible, will not suffer from the human injected deficiencies contained in all theories.

This message reflects only my own views and in no way is meant to represent Joy's work. I make this statement so that all will understand that I do not properly understand Joy's work and that I recognize that Joy speaks very well for himself. There is even the risk that the inclusion of my opinion here may be unhelpful. However, it is my opinion that you are wrong about: "Every result that can be called scientific is theory dependent. ..."

Well I probably should not have started this in Joy's thread. Each message usually leads to more messages. My last message contained the sentence "Results do not need to be interpreted." For all interested readers, that sentence was meant to communicate: That results are results. We see what they are. They have great scientific value for us. What they do not need added are guesses about what caused what. Effects are obviously the result of cause. Theorists guess about what causes are. All of the theoretical interpretation that might accompany results is helpful in identifying terms contained in equations and helping the theorist to keep their thoughts straight. However, theory is composed of excess baggage when it comes to understanding nature. In other words, theory is not nature and does not represent nature. Theory represents our needs.

Is it a joke or what??? If you are rational, then I am the Queen of England. Tom, in fact you are the last strategy, and like your friends you fall down simply because you are not skilled. And your repetition of maths words shan't change this reality. You can imply confusions for the pseudo-sciences, never for the real determinists. And you insist furthermore; forget your vanity and your business, after which perhaps you shall be a real thinker, but I doubt it.

I am not going to pursue a distracting discussion unrelated to your work. However, I would like to give some of the reasoning behind the things that I say about theory. The reason is that my own work is theory independent. The manner in which that feat is achieved is to discard all artificially indefinable properties. This requires the work to rely solely on empirical evidence in the form of patterns in changes of velocity.

Every property inferred from that empirical evidence must be expressible in the terms of that empirical evidence. This means there are only two units of measurement, in various combinations, by which to express properties such as force, resistance to force, electric charge, temperature, etc. Those units are meters and seconds. That practice is maintained throughout development of this non-theory 'theory'.

My point does not have to do with pursuing this any further in this thread. It has to do with making clear why some things I say hopefully appear to make good sense, while other things may seem to not make sense. My viewpoint is my own and it is the one from which my statements are drawn. I said that theory is 'excess baggage' that is not helpful for understanding nature. I will conclude this message and this subject by stating that theory is an artifact.

I don't understand this quixotic desire you have to disprove Bell's theorem. I do think I know, however, how to obviate nonlocality in what you call the ultimate TOE (which is something I don't have the ambition for -- my preference is to simply show that nonlocality is a property of quantum mechanics which must be subsumed by a complete physical theory that is nonperturbative without being probabilistic).

Prove this lemma: Pair correlation of nonrelativistic quantum events is independent of continuous topology on S^7, in which correlated vector pairs of octonions correspond to bivectors in the spacetime of S^3, comprising an interval of simply connected information oriented to every point of S^3, and unitary with the initial condition.

Which will lead naturally to the simply stated theorem: All physics is local.

The theorem will carry the implication that quantum mechanics is limited by locality, rather than the other way around. The terms of existence have to be clearly defined, because one cannot simply show that "nonlocality doesn't exist."

I have thought long about Einstein's statement cited in the powerpoint that I linked earlier, "The most incomprehensible thing about the world is that it is comprehensible." That's a corollary to the theorem of locality.

You make a great point. QM has been and continues to be incomprehensible even though its predictions are undeniable.

To some of the founders (Bohr and Heisenberg) it had a mystical interpretation, while others (Einstein and Schrodinger) did not buy into it. It is my hope that Joy's framework will finally provide a comprehensible physical theory beyond QM with which to better understand our world, without resorting to a mystical and inscrutable theory and its interpretations.

I largely agree. I think the worst thing that can happen to Joy's research is for it to be marginalized as another unorthodox interpretation of quantum mechanics.

His framework is far richer than the coin-toss probability of Bell's inequality. There is no basis for an apples-to-apples comparison, between a theory that begs nonlocality and one that doesn't.

Florin's point about the possible convolutions of Joy's Omega topology goes to the nub of the problem -- in a scale invariant space of self similar random fields, no known physical principle prevents locally real pairs from appearing correlated "at a distance" when the defined topological distance accommodates relativity in a dynamic state of evolution. That is truly "subtle, but not malicious."

It should perhaps be clarified that Joy's counterexample at most shows that QM is not non-local for an EPRB type of scenario. I believe that QM could still be non-local due to relativistic and quantum vacuum effects at very short distances, below the electron Compton wavelength.

I have been following both Joy's and Florin's forums from the start. It has been fascinating and it seems to me that your analysis is "spot on". Florin has represented the opposing view admirably, and I believe Joy benefits from the communications he's received from both of you. Unfortunately, my math skills aren't sufficient to contribute or even distinguish between the "subtle" and the "malicious", but it does make me want to learn more.

I miss your participation. Out of ignorance, I ask for further clarification. Joy used geometric algebra. I am still looking into that. However, I assumed that geometric algebra would have already had the connection that you mention here:

"All I would say is that at some point some consistent translation mechanism has to be found from Joy's formulation to standard statistics. I disagree with Joy not on his math, but on his interpretation. And if his math is right I am confident such a mechanism does exist and will be eventually discovered. In other words, if Joy found an equivalent way of formulating QM, Born rule's translation into the new formalism will follow. It is just not done at this point."

Does your point have to do with something other than geometric algebra or are you saying that there is no current link between geometric algebra and standard statistics? If so, how important is this missing link? Missing links seem very important to me. As I have said to Joy, if my questions or remarks are off the mark please just say so and I will study your messages again.

I am actively working on Joy's theory, so please let me answer your question only after I am done. In the meantime I can explain how geometric algebra works; it is really easy.

You start with 3 elements, e1, e2, e3, corresponding to the 3 directions x, y, z. Because rotations are non-commutative, e1, e2, e3 do not commute in geometric algebra. So one can have expressions like 7 e2 + 32 e1e3. There are 2 basic rules to simplify horrible things like e2e2e1e3e1e3e2e1e2:

- rule 1: e1e1 = e2e2 = e3e3 = 1

- rule 2: e1e2 = -e2e1, e2e3 = -e3e2, e1e3 = -e3e1

Applying the rules on e2e2e1e3e1e3e2e1e2 you get: e1e3e1e3e2e1e2 (by e2e2 = 1), then -e3e1e1e3e2e1e2 (by e1e3=-e3e1) then -e3e3e2e1e2 then -e2e1e2 then +e1e2e2 and finally e1
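The two rules above can be applied mechanically. Here is a small sketch of my own (not from the original comment) that encodes a product such as e2e2e1e3e1e3e2e1e2 as a list of indices and reduces it to a sign times a sorted product:

```python
def simplify(indices):
    """Reduce a product of basis vectors to (sign, canonical index list)."""
    sign, seq = 1, list(indices)
    changed = True
    while changed:
        changed = False
        for i in range(len(seq) - 1):
            if seq[i] == seq[i + 1]:        # rule 1: e_k e_k = 1
                del seq[i:i + 2]
                changed = True
                break
            if seq[i] > seq[i + 1]:         # rule 2: e_j e_k = -e_k e_j
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                sign = -sign
                changed = True
                break
    return sign, seq

# The worked example above: e2e2e1e3e1e3e2e1e2 reduces to +e1.
print(simplify([2, 2, 1, 3, 1, 3, 2, 1, 2]))  # (1, [1])
```

Swapping adjacent distinct factors flips the sign (rule 2) until the indices are sorted, and equal neighbours cancel (rule 1), reproducing the step-by-step reduction in the comment above.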

What missing link? There is no "link" missing. The link between observed statistics and my model is already established in my latest paper, which Florin seems to have completely ignored. Geometric algebra is only a tool---a mere representation of the true essence of my model. If we restrict to the EPR correlations, then my model says that they are nothing but correlations between the numbers +1 and -1 occurring within a parallelized 3-sphere, which, in turn, is simply a classical topological space. It is convenient to use geometric algebra to bring these facts out, but only as a tool. Florin's attempt to find quantum mechanics within my model is deeply misguided. In particular, I find the following sentence of his quite bizarre: "...if Joy found an equivalent way of formulating QM, Born rule's translation into the new formalism will follow." What equivalent way? There is no quantum mechanics, non-locality, or contextuality of any kind within my framework, so why would one look for a "Born rule"? A rule to connect what and what? The definite measurement outcomes, +1 or -1, are already well defined by my equations (16) and (17), and the correlations between these outcomes (i.e., between the numbers +1 and -1) are then given by equation (33). Moreover, these equations are easily generalizable for the general case of 7-sphere. So I have no idea what "link" Florin is looking for.

Indeed, I was going to make the same point. Geometric algebra is not a magic wand, and neither has Joy waved it -- or his hands -- in that manner. The real significance of the result is completeness -- mathematical completeness of the same kind which characterizes relativity. The mathematical model is completely independent of experiment, and of quantum mechanics (which is why I continue to regret that it is positioned aside Bell-Aspect).

In the next day or two, I plan to post my own graphic interpretation of Joy's experimental framework, which will show why there are no gaps.

"It looks thin, modulo mathematical details, which would run into scores of pages."

I understand. I am hoping that more is made clear when others evaluate what you have posted. You noticed that I chose a different thread to post my response. I wanted your message to remain visible as long as possible. Thank you for posting it.

Thank you, quoting Florin from 'Part 2 of To Be or Not To Be (a Local Realist)':

"For the interest of time and space let me only discuss Bell’s criticism of von Neumann and Gleason in this paper. Von Neumann had enormous influence in quantum mechanics due to his seminal work on axiomatizing it. He produced a “no-go” theorem on hidden variable theories which Bell found unjustified in its assumptions. In particular, von Neumann required that the linear combination of expectation values is the expectation value of the combination. While true in quantum mechanics, this is too strong to be demanded for any particular value of the hidden variable, as an elementary example can show (from Reflections on Relativity by Kevin Brown):

Suppose we have two variables X and Y and two hidden variables 1 and 2. Suppose for hidden variable 1: X=2, Y=5 and X+Y=5, while for hidden variable 2: X=4, Y=3 and X+Y=9. Averaging over hidden variables 1 and 2, avg(X) = 3, avg(Y) = 4, and avg(X+Y) = 7, so the sum of the averages is the average of the sums. For each hidden variable individually, however, this property does not hold. (If you are worried about the funny math X+Y = 5 when X = 2 and Y = 5: “X”, “Y”, and “X+Y” are three separate experiments requiring separate experimental setups, and the number appearing on the right-hand side of the equal sign is the experimental outcome. The “+” in “X+Y” is only part of the label for an experimental setup and not a genuine plus operator.)

Gleason’s theorem improves on von Neumann’s result because it makes it a mathematical necessity to have a particular form of the average (the trace formalism) which does obey von Neumann’s condition for compatible (commuting) observables. Bell’s criticism of Gleason’s result is much more subtle and only here “Bohr’s contextuality” becomes a mandatory part of the argument. One can argue that von Neumann’s condition from above can be dismissed even without the toy example (which is not part of the paper, but I use it for the clarity of the argument), by pointing out that it is nonsensical for non-commuting variables, which require different experimental setups. However, for compatible commuting variables it is a natural requirement (and this is what is used by Gleason). Here Bell’s analysis becomes subtle. To rule out hidden variables Gleason uses this requirement for commuting observables on spaces of dimensionality higher than two, meaning it has to be applied at least twice: on variables X and Y as well as X and Z, and while X and Y are compatible (commute), and X and Z are compatible as well, Y and Z may not be (and the funny math from above can happen). It is only at this point that Bell says: “the result of an observation may reasonably depend not only on the state of the system (including hidden variables) but also on the disposition of the apparatus.”

So, Bell’s main argument was a defense of the freedom to construct a hidden variable theory. Unjustified requirements were also proven unjustified by showing they violate Bohr’s contextuality (but this is only a secondary supporting idea in the paper)."
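Both technical claims in the quoted passage can be checked numerically. The sketch below is my own illustration (not part of Florin's post): the first part restates Kevin Brown's toy example, where linearity of expectation holds on average but fails for each hidden variable; the second shows that compatibility of observables is not transitive, using a hypothetical two-qubit choice of Pauli operators:

```python
import numpy as np

# --- Kevin Brown's toy example: linearity holds only on average ---
# Outcomes of three *separate* experiments "X", "Y", "X+Y" for two hidden variables.
outcomes = {1: {"X": 2, "Y": 5, "X+Y": 5},
            2: {"X": 4, "Y": 3, "X+Y": 9}}

avg = lambda label: sum(o[label] for o in outcomes.values()) / len(outcomes)
print(avg("X") + avg("Y") == avg("X+Y"))                          # True: 3 + 4 == 7
print([o["X"] + o["Y"] == o["X+Y"] for o in outcomes.values()])   # [False, False]

# --- Compatibility is not transitive ---
# A commutes with both B and C, yet B and C do not commute.
I = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
A = np.kron(sz, I)   # observable on the first qubit
B = np.kron(I, sz)   # observable on the second qubit
C = np.kron(I, sx)   # another observable on the second qubit

commute = lambda P, Q: np.allclose(P @ Q, Q @ P)
print(commute(A, B), commute(A, C), commute(B, C))                # True True False
```

The second part is exactly the situation Bell exploits: applying von Neumann's condition to the compatible pairs (A, B) and (A, C) says nothing about the incompatible pair (B, C).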

This was a quick cramming done by the master bluffer in response to my scathing criticism here:

"Let me begin by pointing out a number of alarming confusions in Florin's reasoning---not only concerning my own work, but also concerning the significance of Bell's theorem itself. The worse of these confusions occurs in his understanding of the notion of contextuality. Florin has painted this notion as something to be avoided at all cost. But that sharply differs from how Bell himself viewed the notion. Bell vigorously defended contextuality in his very first paper. He famously stressed that "the result of an observation may reasonably depend not only on the state of the system (including hidden variables) but also on the disposition of the apparatus." I could not agree more! He goes on in the paper to strongly criticise the theorems by von Neumann, Jauch and Piron, and Gleason for neglecting contextuality, and argues that the demand of strict non-contextuality implicit in these theorems is "quite unreasonable" from the physical point of view. The idea of contextuality, he argues, should not be confused with the idea of realism. These are two completely separate notions."

"Your comments are naive, at least with regard to the sociology of physics. Any new evidence in physics---no matter how starkly presented---is almost never easily accepted. It is often misinterpreted, flatly denied, effectively neutralized, or simply ignored if the physics community does not like it (just as you don't like my ideas because they would invalidate yours).

Recall how von Neumann's theorem against general hidden variables was believed in by the physics community for 30 years (yes, 30 years) despite its clear-cut refutation by Grete Hermann in 1935 (and despite the existence of explicit counterexample to the theorem by Bohm). It was not until John Bell rediscovered Hermann's objection in 1965 that the importance of her work began to be appreciated by the community. You grossly underestimate the power of dogma within physics.

Your second error is to think that my framework, or any hidden variable framework for that matter, implies that quantum mechanics is incorrect. This is your main blind spot."

The so-called "scathing criticism" included the idiotic assertion that general relativity is a contextual theory. But how to say to Joy: hey, you are an imbecile? I had to make a large detour to frame the discussion correctly and clear up his propaganda smoke screen.

If I had it, I'd give it, Florin. I, however, am also concerned with the physics and not with Joy's mortgage. You're starting to sound like Steve -- "Them they are not skillings. They like only the money! What a world!"

I put my comments about contextuality and relativity in a new thread box, just below. General relativity is not contextual. In fact I have this conjecture that quantum physics and general relativity are in some categorical system (e.g., topos theory) either equivalent or partially equivalent. The fact that GR is not contextual might be one aspect of this.

I found these remarks about general relativity as a contextual theory. I think some deeper thought needs to be given to this matter. The concept of epistemological contextualism is that the frame from which one makes an account of knowledge is a determining factor in that knowledge. In quantum mechanics this relates to naïve realism on the nature of observables. A particle, such as the Bohmian beable, has some objective real trajectory and spin, but the observed outcome is contingent upon the setup of the apparatus. The orientation of an SG apparatus determines a measured outcome in relation to that device. However, that outcome is not an intrinsic property of the actual quantum particle. In that way the objective properties of the quantum system are said to remain, while observations pertain to how the wave function or pilot wave determines a relationship between the objective properties of the particle and the apparatus. Quantum mechanics has been found to lack the sort of localism required for this. In the end there is actually no experiment which can be performed to determine whether there is some inner objective reality. Thus quantum mechanics is observer independent, or there is no quantum logic which tells us how the wave function will determine an outcome with a correlation between the objective particle and the observer. IOW, there are no local hidden variables.

What can we say about general relativity? Contextuality in general relativity is not compatible with the basic format of coordinate independent spacetime physics. Contextuality in GR would mean there is some substratum of physics which connects a frame in some objective manner with spacetime. IOW we do not measure the velocity of a particle with respect to space or spacetime, but relative to some other body or particle. The properties of a frame are given by mathematical fictions called connections (Christoffel symbols etc), and the relative physical properties between two frames are a difference or differential of these connections. This is then a curvature, and the proper description of relative motion is the geodesic deviation equation

D^2U^a/ds^2 + R^a_{bcd}V^bU^cV^d = 0

for U^a the vector separating two masses. This is a covariant expression and establishes a differential map between two charts on the manifold. Hence it is not possible to establish a contextual format for spacetime physics.

Curiously, because of this some physical quantities can’t be localized or even determined to have constants of motion. A spacetime with no Killing vector or isometries in a time direction does not provide a conservation law for energy. In addition spacetime momentum as a component of the stress-energy p^a = e_bT^{ab} contained in a region with boundary obeys Stokes’ law

∫p^ads_a = ∫e_bT^{ab}ds_a = ∫∇_a(e_bT^{ab}) dv

where the covariant differential on the e_b will give a connection dependent element. This means momentum can’t in general be localized in a volume without certain conditions on symmetries of the spacetime. The coordinate independent nature of spacetime physics precludes any underlying contextuality.

Lawrence, I used to think exactly as you do. Modern research by Hestenes, Lamport and Christian has changed my mind. Every challenge to classical realism can be met with their results.

Take "What can we say about general relativity? Contextuality in general relativity is not compatible with the basic format of coordinate independent spacetime physics."

If it weren't compatible, Hestenes would not be able to translate GA to spacetime algebra to Minkowski space. This is plenty good enough for me. It's stunning, in fact, and fulfills Hestenes' promise to simplify physics language.

"A spacetime with no Killing vector or isometries in a time direction does not provide a conservation law for energy."

As I said before, you are comparing limited mathematical methods with a mathematically complete theory. That is not a fair comparison -- the space of Joy's theory, based on octonionic algebra, is 8 dimensional. Even Einstein did not object to adding dimensions if there exist good physical reasons to do so.

Ordinary QFT constructs amplitudes at every point on a manifold in a local manner. This is in line with the partial differential equations for QFTs. Diffeomorphism-invariant physical observables are necessarily nonlocal. This is because the manifold itself is subject to quantum fluctuations or is in some superposed configuration, and the uncertainty in light cones, null congruences or horizons means that points are not separable in a local fashion. The Hamiltonian, such as in the ADM formalism, is then a complicated function of the moduli. The moduli space is non-Hausdorff and does not obey elliptic convergence conditions.

In looking at your paper you appear to say something similar to this. The nonlocality of spacetime points, or amplitudes on the manifold, means that quantum gravity is not subject to naive realism. You appear to be actually leading to this point, but now here you say quantum gravity is contextual.

"Ordinary QFT constructs amplitudes at every point on a manifold in a local manner."

Indeed. But it is safe to say that ordinary QFT is only an approximation to the full QG, whatever the latter may be.

"In looking at your paper you appear to say something similar to this. The nonlocality of spacetime points, or amplitudes on the manifold, means that quantum gravity is not subject to naive realism."

This is correct, but it depends on what you mean by quantum gravity. To me quantum gravity does not necessarily mean a quantum theory of gravity. It is an entirely different theory of which both GR and QFT are approximations, and this different theory is perfectly local and perfectly realistic. Needless to say, this is a position of mine, not a proof of any sort.

I think we mean different things by the word "contextual." What I mean by "contextual" is the fact that any measurement result in GR or QG would necessarily depend on the disposition of the measuring apparatus, but not necessarily non-locally or non-realistically. It is a whole different picture from the particle-physics based picture you have in mind.

Lawrence once worked out a quantum mechanics problem at the event horizon of a black hole. If I remember correctly, quantum mechanics is incompatible with time dilation. This "seam" between QM & GR should be explored experimentally. This is where we're going to find a method to warp space-time using electronics.

I wondered if this was the case. Of course there is context in a measurement. In relativity there is a context in setting up a grid or lattice of flagged points. However, in both the case of quantum mechanics and that of relativity, nature does not embody this context. Spacetime physics confers no inherent meaning on how you grid things up. The same holds with quantum physics. The context of a measurement is due to your ability to classically orient the measurement device, such as a polarizer or an SG apparatus. However, quantum mechanics remains fundamentally free of this context.

"The context of a measurement is due to your ability to classically orient the measurement device, such as a polarizer or an SG apparatus. However, quantum mechanics remains fundamentally free of this context."

Both Bohr and Bell disagree with you here:

Bohr: "...the impossibility of any sharp distinction between the behaviour of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear."

Bell: "The result of an observation may reasonably depend not only on the state of the system ... but also on the complete disposition of the [measuring] apparatus..."

I agree with both Bohr and Bell, but with Einstein I see this fact as a result of a limitation (or incompleteness) of quantum theory.

We take measurements so that we can understand how nature works; we define a context so that we can describe the phenomena mathematically. Even if we could solve the event horizon/photon wave-function problem, what good would it do us? But if we performed experiments, we could learn more about the relationship between light and gravity.

Obviously there are no black holes for us to perform experiments on. What we could do is simulate the gravitational time dilation of a black hole. I've been talking about frequency shifting for a long time now. In a way, a frequency shift f(t) = [df/dt]t + f_0 is sort of like a time dilation (at least for that frequency shifted light). Since curvature of space-time can produce frequency shift, will the space-time continuum take any notice of our frequency shifted pulse of light?

Some may argue that the relationship between gravity and light is already well known. Let me be more specific. There is more to learn, by experiment, about phase/frequency shift of light AND gravitational time dilation. Particularly, can we synthesize a frequency shift so perfect that it will grip onto (couple with) the force of gravity?

The statement by Bohr indicates the difficulty, initially pointed out by Heisenberg, with the definition of the classical-quantum cut. As for Bell, if you have many identically prepared states, the outcome is determined up to probability by the apparatus and the basis it selects.

Unfortunately for Einstein he never succeeded in his goal of finding an underlying substrate to QM. Frankly, I think QM consistently resists these attempts. This matter seems to be for now a fact of life, and classical systems seem as fundamental as quantum systems. Dp-branes are classical objects.

"It is still not clear who is right, says John Norton, a philosopher based at the University of Pittsburgh, Pennsylvania. Norton is hesitant to express it, but his instinct - and the consensus in physics - seems to be that space and time exist on their own. The trouble with this idea, though, is that it doesn't sit well with relativity, which describes space-time as a malleable fabric whose geometry can be changed by the gravity of stars, planets and matter."

Jason Wolfe wrote: "Some may argue that the relationship between gravity and light is already well known. Let me be more specific. There is more to learn, by experiment, about phase/frequency shift of light AND gravitational time dilation."

David Morin: "The equivalence principle has a striking consequence concerning the behavior of clocks in a gravitational field. It implies that higher clocks run faster than lower clocks. If you put a watch on top of a tower, and then stand on the ground, you will see the watch on the tower tick faster than an identical watch on your wrist. When you take the watch down and compare it to the one on your wrist, it will show more time elapsed."

Is it true? Clever Einsteinians know that there is no gravitational time dilation. The gravitational redshift arises from the change in the speed of light signals in the presence of gravitation:

Banesh Hoffmann: "In an accelerated sky laboratory, and therefore also in the corresponding earth laboratory, the frequency of arrival of light pulses is lower than the ticking rate of the upper clocks even though all the clocks go at the same rate. (...) As a result the experimenter at the ceiling of the sky laboratory will see with his own eyes that the floor clock is going at a slower rate than the ceiling clock - even though, as I have stressed, both are going at the same rate. (...) The gravitational red shift does not arise from changes in the intrinsic rates of clocks. It arises from what befalls light signals as they traverse space and time in the presence of gravitation."

You said, "Is it true? Clever Einsteinians know that there is no gravitational time dilation. The gravitational redshift arises from the change in the speed of light signals in the presence of gravitation: "

How can we make a technological leap forward if we disagree on fundamental physics? The speed of light is invariant for the observer (and the emitter). Gravitational time dilation was proven experimentally.

http://en.wikipedia.org/wiki/Hafele%E2%80%93Keating_experiment

Hey, if you think you know more about space-time than the experts, then why don't you build your own global positioning system.

In my opinion you are an incompetent mathematician who is not able to do a very elementary and simple calculation. My one page paper is self-contained, and in fact it is only half a page long. Yet you have been claiming that

"Joy is unable to fix the gap between (6) and (7) in his one page paper."

This is a shameless, unadulterated, maliciously generated, and vicious lie.

I have already explained my calculations in detail in some 12 of my papers. I have specifically addressed your incompetent and mistaken reading of my paper in my official response to you (see the attached paper).

Moreover, I have explicitly posted my calculations in several different ways on this very blog. Whenever I post an explanation that brings your straw-man strategy out into the open, you erase my posts from this blog. But let me try to explain, this one last time, the simple and elementary steps that lead to equation (7) from equation (6) in my one-page paper. Nothing we shall need for this derivation differs from what is already contained in the one-page paper.

The right-most part of equation (6) in the paper is

(1) E(a, b) = 1/n Sum_i { a_j B_j(L^i) } { b_k B_k(L^i) },

where large-n limit is understood. The basis elements B_j(L) appearing in this equation are the bivector basis within the algebra Cl(3,0). As such, they satisfy the bivector subalgebra

(2) B_j(L) B_k(L) = -delta_jk - eps_jkl B_l(L),

where L = +/-1 is a fair coin. Substituting (2) into (1) leads to the equation

(3) E(a, b) = -a.b - 1/n Sum_i { eps_jkl a_j b_k B_l(L^i) },

where the delta_jk part of (2) has produced the scalar term -a.b.

Now this equation is written in the random basis { B_j(L) } chosen by Nature. From equations (1) and (2) it is evident that the basis chosen by Alice and Bob to observe their results differs from this basis. The basis chosen by Alice and Bob is in fact { B_j }, not { B_j(L) }. The relationship between these two bases is given by the isomorphism

(4) B_j(L) = L B_j,

where L = +/-1. This is not a new postulate. It has already been defined explicitly in my very first paper, and used in all subsequent papers. In the one-page paper it is stated explicitly, in the very second sentence of the paper, and used in the very first and second equations of the paper. Thus, given the fact that it is Alice and Bob who are evaluating the correlation between the results they have observed, we must re-express equation (3) in the basis { B_j } chosen by Alice and Bob. Using the isomorphism (4) this is easy to do, and the result is

(5) E(a, b) = -a.b - { 1/n Sum_i L^i } eps_jkl a_j b_k B_l.

The correlation, as observed and evaluated by Alice and Bob, is now easy to work out from (5). Since L is a fair coin, the average 1/n Sum_i L^i vanishes in the large-n limit, and (5) reduces to

(7) E(a, b) = -a.b.

Thus the EPR correlations are nothing but local-realistic correlations between the points of a parallelized 3-sphere.
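For what it is worth, the algebraic steps above can be checked numerically in a matrix representation. The sketch below is my own (the choice B_j = i*sigma_j is an assumption, not taken from the paper; it does satisfy the bivector subalgebra (2)). It verifies the subalgebra relations and shows that the coin-weighted bivector term in (5) averages away, leaving E(a, b) = -a.b:

```python
import numpy as np

# Assumed representation of the bivector basis: B_j = i*sigma_j.
# It satisfies B_j B_k = -delta_jk - eps_jkl B_l, e.g. B_1 B_1 = -1, B_1 B_2 = -B_3.
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
B = [1j * s for s in sig]
assert np.allclose(B[0] @ B[0], -np.eye(2))      # rule check: B_1 B_1 = -1
assert np.allclose(B[0] @ B[1], -B[2])           # rule check: B_1 B_2 = -B_3

a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(0.7), np.sin(0.7), 0.0])

# Fair coin L^i = +/-1; its average tends to 0 in the large-n limit.
rng = np.random.default_rng(0)
L = rng.choice([-1, 1], size=200_000)
coin_avg = L.mean()

# Equation (5) in matrix form: E(a,b) = -a.b - (1/n Sum_i L^i) eps_jkl a_j b_k B_l,
# where eps_jkl a_j b_k is the cross product (a x b)_l.
cross = np.cross(a, b)
E = -np.dot(a, b) * np.eye(2) - coin_avg * sum(cross[l] * B[l] for l in range(3))

print(np.allclose(E, -np.dot(a, b) * np.eye(2), atol=0.02))  # True: E -> -a.b
```

Whether this reproduction of the statistics settles the interpretational dispute between Joy and Florin is, of course, exactly what the thread is arguing about; the code only checks the arithmetic of steps (2) through (7).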

Now there are those who will still not be satisfied with this explicit demonstration. For them I must take baby steps to explain the above calculation once again. For these determined sceptics, let us consider only two terms: L^1 = +1 and L^2 = -1. Then equation (3) becomes