
It is not often you get to disagree with a genius. But if you read enough, or attend enough lectures, sooner or later some genius is going to say or write something that you can see is evidently false, or perhaps (being a bit more modest) that you merely think is intuitively false. So the other day I watched a lecture by Nima Arkani-Hamed with the intriguing title “The Morality of Fundamental Physics”. It is a really good lecture; I recommend every young scientist watch it. (The “Arcane” my title alludes to, by the way, is a good thing; look up the word!) It will give you a wonderful sense of the culture of science and a feeling that science is one of the great ennobling endeavours of humanity. The way Arkani-Hamed describes the pursuit of science also gives you comfort as a scientist if you ever think you are not earning enough money in your job, or feel like you are “not getting ahead” — you should simply not care! — because doing science is a huge privilege, a reward unto itself, and little in life can ever be as rewarding as making a truly insightful scientific discovery or observation. No one can pay me enough money to take away that sort of excitement and privilege, and no amount of money can purchase the brain power and wisdom needed to achieve such accomplishments. And among the greatest overwhelming thrills in any field of human endeavour are, firstly, the hint that you are close to turning arcane knowledge into scientific truth, and secondly, the moment you actually succeed in this.

First, let me be deflationary about my contrariness. There is not a lot about fundamental physics that one can honestly disagree with Arkani-Hamed about on an intellectual level, at least not with violent assertions of falsehood. Nevertheless, fundamental physics is rife enough with mysteries that you can always find some point of disagreement between theoretical physicists on the foundational questions. Does spacetime really exist or is it an emergent phenomenon? Did the known universe start with a period of inflation? Are quantum fields fundamental or are superstrings real?

When you disagree on such things you are not truly having a physics disagreement, because these are areas where physics currently has no answers, so provided you are not arguing illogically or counter to known experimental facts, then there is a wide open field for healthy debate and genuine friendly disagreement.

Then there are deeper questions that perhaps physics, or science and mathematics in general, will never be able to answer. These are questions like: Is our universe Everettian? Do we live in an eternal inflation scenario Multiverse? Did all reality begin from a quantum fluctuation, and, if so, what the heck was there to fluctuate if there was literally nothing to begin with? Or can equations force themselves into existence from some platonic reality merely by brute force of their compelling beauty or structural coherence? Is pure information enough to instantiate a physical reality (the so-called “It from Bit” meme)?

Some people disagree on whether such questions are amenable to experiment and hence to science. The Everettian question may some day become scientific, but currently it is not, even though people like David Deutsch seem to think it is (a disagreement I would have with Deutsch). Some of the “deeper” questions turn out to be stupid, like the “It from Bit” and “equations bringing themselves to life” ideas. However, they are still wonderful creative ideas in some sense, since they put our universe in contrast with a dull mechanistic cosmos that looks just like a boring jigsaw puzzle.

The fact our universe is governed (at least approximately) by equations that have an internal consistency, coherence and even elegance and beauty (subjective though those terms may be) is a compelling reason for thinking there is something inevitable about the appearance of a universe like ours. But that is always just an emotion, a feeling of being part of something larger and transcendent, and we should not mistake such emotions for truth. By the same token mystics should not go around mistaking mystical experiences for proof of the existence of God or spirits. That sort of thinking is dangerously naïve and in fact anti-intellectual and incompatible with science. And if there is one truth I have learned over my lifetime, it is that whatever truth science eventually establishes, and whatever truths religions teach us about spiritual reality, wherever these great domains of human thought overlap they must agree, otherwise one or the other is wrong. In other words, whatever truth there is in religion, it must agree with science, at least eventually. If it contradicts known science it must be superstition. And if science contravenes the moral principles of religion it is wrong.

Religion can perhaps be best thought of in this way: it guides us to knowledge of what is right and wrong, not necessarily what is true and false. For the latter we have science. So these two great systems of human civilization go together like the two wings of a bird, or as in another analogy, like the two pillars of Justice, (1) reward, (2) punishment. For example, nuclear weapons are truths of our reality, but they are wrong. Science gives us the truth about the existence and potential for destruction of nuclear weapons, but it is religion which tells us they are morally wrong to have been fashioned and brought into existence, so it is not that we cannot, but just that we should not.

Back to the questions of fundamental physics. Regrettably, people like to think these questions have some grit because they allow one to disbelieve in a God. But that is not a good excuse for intellectual laziness. You have to have some sort of logical foundation for any argument. This often begins with an unproven assumption about reality. It does not matter so much where you start, but you have to start somewhere and then be consistent, otherwise, as elementary logic shows, you would end up being able to prove (and disprove) anything at all. If you start with a world of pure information and then posit that spacetime grows out of it, then (a) you need to supply the mechanism of this “growth”, and (b) you also need some explanation for the existence of the world of pure information in the first place.

Then if you are going to argue for a theory that “all arises from a vacuum quantum fluctuation”, you have a similar scenario: you have not actually explained the universe at all, you have just pushed the existence question back to something more elemental, the vacuum state. But a quantum vacuum is not a literal “Nothingness”; in fact it is quite a complicated sort of thing, and it has to involve a pre-existing spacetime or some other substrate that supports the existence of quantum fields.

Further debate along these lines is for another forum. Today I wanted to get back to Nima Arkani-Hamed’s notions of morality in fundamental physics and then take issue with some private beliefs people like Arkani-Hamed seem to profess, which I think betray a kind of inconsistent (I might even dare say “immoral”) thinking.

Yes, there is a Morality in Science

Arkani-Hamed talks mostly about fundamental physics. But he veers off topic in places and even brings in analogies with morality in music, specifically in lectures by the great composer Leonard Bernstein, in which Bernstein describes the beauty and “inevitability” of passages in great music like Beethoven’s Fifth Symphony. Bernstein even comes close to saying that, after the first four notes of the symphony, almost the entire composition could be thought of as following as an inevitable consequence of logic and musical harmony and aesthetics. I do not think this is flippant hyperbole either, though it is somewhat exaggerated. The cartoon idea of Beethoven’s music following inevitable laws of aesthetics has an awful lot in common with the equally cartoon notion of the laws of physics having, in some sense, their own beauty and harmony, such that it is hard to imagine any other set of laws and principles once you start from the basic foundations.

I should also mention that some linguists would take umbrage at Arkani-Hamed’s use of the word “moral”. Really, most of what he lectures about is aesthetics, not morality. But I am happy to warp the meaning of the word “moral” just to go along with the style of Nima’s lecture. Still, you do get a sense from his lecture that the pursuit of scientific truth has a very close analogy to moral behaviour in other domains of society. So I think he is not totally talking about aesthetics, even though I think the analogy with Beethoven’s music is almost pure aesthetics and has little to do with morality. OK, those niggles aside, let’s review some of Arkani-Hamed’s lecture highlights.

The way Arkani-Hamed tells the story, there are ways of thinking about science that are not just “correct”, but more than correct: the best ways of thinking seem somehow “right”, where he means “right” in the moral sense. He gives some examples of how one can explain a phenomenon (e.g., the apparent forward pivoting of a helium balloon suspended inside a boxed car) where there are many good explanations that are all correct (air pressure effects, etc.) but where often there is a better, deeper, more morally correct way of reasoning (Einstein’s principle of equivalence — gravity is indistinguishable from acceleration, so the balloon has to “fall down”).

It really is entertaining, so please try watching the video. And I think Arkani-Hamed makes a good point. There are “right” ways of thinking in science, and “correct but wrong” ways. I guess, unlike in human behaviour, the scientifically “wrong” ways are not actually spiritually or morally “bad”, as in “sinful”. But there is a case to be made that intellectually the “wrong” ways of thinking (read: the lazy ways of thinking) are in a sense kind of “sinful”. Not that we in science always sin in this sense of using correct but not awesomely deep explanations. I bet most scientists wish they always could think in the morally good (deep) ways! Life would be so much better if we could, and probably no one would wish to think otherwise. It is part of the cultural heritage of science that people like Einstein (and at times Feynman, and others) knew of the morally good ways of thinking about physics, and were experts at finding them.

In brief moments of delight, most scientists will experience fleeting glimpses of the morally good ways of scientific thinking and explanation. But the default way of doing science is, by and large, immoral, because it takes a tremendous amount of patience, and almost mystical insight, to be able always to see the world of physics in the morally correct light — that is, in the deepest, most meaningful ways. It takes great courage too because, as Arkani-Hamed points out, it takes a lot more time and contemplation to find the deeper, morally “better” ways of thinking, and in the rush to advance one’s career and publish research, these morally superior ways of thinking often get by-passed and short-circuited. Einstein was one of the few physicists of the last century who actually managed, much of the time, to be patient and courageous enough to at least try to find the morally good explanations.

This leads to two wonderful quotations Arkani-Hamed offers, one from Einstein, and the other from a lesser known figure of twentieth century science, the mathematician Alexander Grothendieck — who was probably an even deeper thinker than Einstein.

The years of anxious searching in the dark, with their intense longing, their intense alternations of confidence and exhaustion and the final emergence into the light—only those who have experienced it can understand it.
— Albert Einstein, describing some of the intellectual struggle and patience needed to discover the General Theory of Relativity.

“The … analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!

“A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration … the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it … yet it finally surrounds the resistant substance.”
— Alexander Grothendieck, describing the process of grasping for mathematical truths.

Beautiful and foreboding — I have never heard the mathematical unknown likened to a “hard marl” (a lime-rich mudstone) before!

So far all is good. There are many other little highlights in Arkani-Hamed’s lecture, but I will not write about them all; it is much better to hear them explained by the master.

So what is there to disagree with?

The Morally Correct Thinking in Science is Open-Minded

There are a number of characteristics of “morally correct” reasoning in science, or an “intellectually right way of doing things”. Arkani-Hamed seems to list most of the important things:

Trust: trust that there is a universal, invariant, human-independent and impersonal (objective) truth to natural laws.

Honesty: with others (no fraud) but also more importantly you need to be honest with yourself if you want to do good science.

Humility: who you are is irrelevant, only the content of your ideas is important.

Wisdom: we never pretend we have the whole truth, there is always uncertainty.

Perseverance: lack of certainty is not an excuse for laziness, we have to try our hardest to get to the truth, no matter how difficult the path.

Tolerance: it is extremely important to entertain alternative and dissenting ideas and to keep an open mind.

Justice: you cannot afford to be tolerant of dishonest or ill-formed ideas. It is indeed vitally important to be harshly judgemental of dishonest and intellectually lazy ideas. Moreover, one of the hallmarks of a great physicist is often said to be the ability to quickly check and to prove one’s own ideas to be wrong as soon as possible.

In this list I have inserted in bold the corresponding spiritual attributes, which Professor Nima does not identify. But I think it is important to state them explicitly, because they provide a Rosetta Stone of sorts for translating the narrow scientific modes of behaviour into broader domains of human life.

I think that’s a good list. There is, however, one hugely important morally correct way of doing science that Arkani-Hamed misses, and does not even gloss over or hint at. Can you guess what it is?

Maybe it is telling of the impoverishment of science education in our society (the cold, objective, dispassionate retelling of facts) that I think not many scientists will even think of this one. But I do not excuse Arkani-Hamed for leaving it off his list, since in many ways it is the most important moral stance in all of science!

It is,

Love: the most important driver and motive for doing science, especially in the face of adversity or criticism, is a passion and desire for truth, a true love of science, a love of ideas, an aesthetic appreciation of the beauty and power of morally good ideas and explanations.

Well OK, I will concede this is perhaps implicit in Arkani-Hamed’s lecture, but I still cannot give him 10 out of 10 on his assignment, because he should have made it explicit and highlighted it in bold colours.

One could point out many instances of scientists failing at these minimal scientific moral imperatives. Most scientists go through periods of denial, believing vainly in a pet theory and failing to be honest to themselves about the weaknesses of their ideas. There is also a vast cult of personality in science that determines a lot of funding allocation, academic appointments, favouritism, and general low level research corruption.

The point of Arkani-Hamed’s remarks is not that the morally good behaviours are how science is actually conducted in the everyday world, but rather that they are how good science should be conducted, and that from historical experience the “good behaviours” do seem to be rewarded with the best and brightest breakthroughs in deep understanding. And I think Arkani-Hamed is right about this. It is amazing (or perhaps, to the point, not so amazing!) how many Nobel Laureates are “humble” in the above sense of putting greater stock in their ideas than in their personal authority. Ideas win Nobel Prizes, not personalities.

So what’s the problem?

The problem is that while expounding on these simple and no doubt elegant philosophical and aesthetic themes, Arkani-Hamed manages to intersperse his commentary with the claim, “… by the way, I am an atheist”.

OK, I know what you are probably thinking, “what’s the problem?” Normally I would not care what someone thinks regarding theism, atheism, polytheism, or any other “-ism”. People are entitled to their opinions, and all power to them. But as a scientist I have to believe there are fundamental truths about reality, and about a possible reality beyond what we perceive. There must even be truths about a potential reality beyond what we know, and maybe even beyond what we can possibly ever know.

Now some of these putative “truths” may turn out to be negative results. There may not be anything beyond physical reality. But even if so, that is not a conclusion we should here and now commit to believing forever. We should at least be open-minded to the possibility that this outcome is false, and that the truth is rather that there is a reality beyond the physical universe. Remember, open-mindedness was one of Arkani-Hamed’s prime “good behaviours” for doing science.

The discipline of Physics, by the way, has very little to teach us about such truths. Physics deals with physical reality, by definition, and it is an extraordinary disappointment to hear competent, and even “great”, physicists expound their “learned” opinions on theism or atheism and the non-existence of anything beyond physical universes. These otherwise great thinkers are guilty of over-reaching hubris, in my humble opinion, and it depresses me somewhat. Even Feynman had such hubris, yet he managed expertly to cloak it in the garment of humility: “who am I to speculate on metaphysics?” is something he might have said (I paraphrase the great man). Yet by clearly and incontrovertibly stating “I do not believe in God” one is in fact making an extremely bold metaphysical statement. It is almost as if these great scientists had never heard of the concept of agnosticism, and somehow seem to be using the word “atheism” as a synonym. But no educated person would make such a gross semantic mistake. So it just leaves me perplexed and dispirited to hear so many claims of “I am atheist” coming from the scientific establishment.

Part of me wants to just dismiss such assertions, or pretend that these people are not true scientists. But that is not my call to make. Nevertheless, for me, a true scientist almost has to be agnostic. There seems to be very little in the way of any other defensible position.

How on earth would any physicist ever know such things (as non-existence of other realms) are true as articles of belief? They cannot! Yet it is astounding how many physicists will commit quite strongly to atheism, and even belittle and laugh at scientists who believe otherwise. It is a strong form of intellectual dishonesty and corruption of moral thinking to have such closed-minded views about the nature of reality.

So I find it remarkable that people like Nima Arkani-Hamed, who show such gifts and talents in scientific thinking and such awesome skill in analytical problem solving, can have the intellectual weakness to profess any version of atheism whatsoever. I find it very sad and disheartening to hear such strident claims of atheism among people I would otherwise admire as intellectual giants.

Yet I would never want to overtly act to “convert” anyone to my views. I think the process of independent search for truth is an important principle. People need to learn to find things out on their own, read widely, listen to alternatives, and weigh the evidence and logical arguments in the balance of reason and enlightened belief. And even then, once arriving at a believed truth, one should still question, and accept that one’s beliefs can be overturned in the light of new evidence or new arguments. As Nima’s principle of humility has it: “we should never pretend we have the certain truth”.

Is Atheism Just Banal Closed-Mindedness?

The scientifically open mind is really no different from the spiritually open mind, other than in the orientation of its topics of thought. Having an open mind does not mean one has to be non-committal about everything. You cannot truly function well in science or in society without some grounded beliefs, even if you regard them all as provisional. Indeed, contrary to the cold-hearted objectivist view of science, I think most real people, whether they admit it or not (or perhaps lie to themselves), surely practise their science with an idea of a “truth” in mind that they wish to confirm. The fact that they must conduct their science publicly with the Popperian stance of “we only postulate things that can be falsified” is beside the point. It is perfectly acceptable to conduct publicly Popperian science while privately holding a rich metaphysical view of the cosmos that includes all sorts of crazy, and sometimes true, beliefs about the way things are in deep reality.

Here’s the thing I think needs some emphasis: even if you regard your atheism as “merely provisional”, it is still an unscientific attitude! Why? Because questions of higher reality beyond the physical are not in the province of science, not by any philosophical imperative but just by plain definition. So science is by definition agnostic as regards the transcendent and metaphysical. Whatever exists beyond physics is neither here nor there for science. Now many self-proclaimed scientists regard this fact about definitions as good enough reason for believing firmly in atheism. My point is that this is nonsense, and a betrayal of scientific morals (morals, that is, in the sense of Arkani-Hamed: the good ways of thinking that lead to deeper insights). The only defensible, logical and morally good way of reasoning from a purely scientific world view is to be, at the basest level of philosophy, positive in ontology and minimalist in negativity, and agnostic about God and spiritual reality. It is closed-minded, and therefore, I would argue, counter to Arkani-Hamed’s principles of morals in physics, to be a committed atheist.

This is in contrast to being negative about ontology and positively minimalist, which I think is the most mistaken form of philosophy or metaphysics adopted by a majority of scientists, sceptics, or atheists. The stance of positive minimalism, or ontological negativity, adopts as an unproven assumption the position that whatever is not currently needed, or not currently observed, does not in fact exist. Or, to use a crude sound-bite, such philosophy is just plain closed-mindedness. A harsh cartoon version of it is: “what I cannot understand or comprehend I will assume cannot exist”. This may be unfair in some instances, but I think it is a fairly reasonable caricature of general atheistic thought. And I think it is a lot fairer than the often-given argument against religion which points to corruption in religious practice as a good reason not to believe in God. There is of course absolutely no causal or logical connection between human corruption and the existence or non-existence of a putative God.

In my final analysis of Arkani-Hamed’s lecture, I have ended up not worrying too much about the fact he considers himself an atheist. I have to conclude he is a wee bit self-deluded (like most of his similarly minded colleagues, no doubt). Yet, of course, they might ultimately be correct and I might be wrong; my contention is that the way they are thinking is morally wrong, in precisely the sense Arkani-Hamed outlines, even if their conclusions are closer to the truth than mine.

Admittedly, I cannot watch the segments in his lecture where he expresses the beautiful ideas of universality and “correct ways of explaining things” without a profound sense of the divine beyond our reach and understanding. Sure, it is sad that folks like Arkani-Hamed cannot infer from such beauty that there is maybe (even if only possibly) some truth to some small part of the teachings of the great religions. But to me, the ideas expressed in his lecture are so wonderful and awe-inspiring, and yet so simple and obvious, they give me hope that many people, like Professor Nima himself, will someday appreciate the view that maybe there is some Cause behind all things, even if we can hardly ever hope to fully understand it.

My belief has always been that science is our path to such understanding, because through the laws of nature that we, as a civilization, uncover, we can see the wisdom and beauty of creation, and no longer need to think that it was all some gigantic accident or experiment in some mad scientist’s super-computer. Some think such wishy-washy metaphysics has no place in the modern world. After all, we have grown accustomed to the prevalence of evil in our world, and tragedy, and suffering, and surely if any divine Being were responsible then this would be a complete and utter moral paradox. To me though, this is a profound misunderstanding of the nature of physical reality. The laws of physics give us freedom to grow and evolve. Without the suffering and death there would be no growth, no exercise of moral aesthetics, and arguably no beauty. Beauty only stands out when contrasted with ugliness and tragedy. There is a Yin and Yang to these aspects of aesthetics and misery and bliss. But the other side of this is a moral imperative to do our utmost to relieve suffering, to reduce poverty to nothing, to develop an ever more perfect world. For then greater beauty will stand out against the backdrop of something we create that is quite beautiful in itself.

Besides, it is just as wishy-washy to think the universe is basically accidental and has no creative impulse. People would complain either way. My positive outlook is that as long as there is suffering and pain in this world, it makes sense to at least imagine there is purpose in it all. How miserable to adopt Steven Weinberg’s outlook that the noble pursuit of science merely “lifts us up above farce to at least the grace of tragedy”. That is a terribly pessimistic, negative sort of world view. Again, he might be right that there is no grand purpose or cosmic design, but the way he reasons to that conclusion seems, to me, to be morally poor (again, strictly, if you like, in the Arkani-Hamed morality-of-physics conception).

There seems, to me, to be no end to the pursuit of perfections. And given that, there will always be relative ugliness and suffering. The suffering of people in the distant future might seem like luxurious paradise to us in the present. That’s how I view things.

The Fine Tuning that Would “Turn You Religious”

Arkani-Hamed mentions another thing that I respectfully take slight exception to — this is in a separate lecture, at a Philosophy of Cosmology conference, in a talk titled “Spacetime, Quantum Mechanics and the Multiverse”. The amazing coincidence that our universe has just the right cosmological constant to avoid space being empty and devoid of matter, and just the right Higgs boson mass to allow atoms heavier than hydrogen to form stably, is often, Arkani-Hamed points out, given as a kind of anthropic argument (or quasi-explanation) for our universe. The idea is that we see (measure) such parameters for our universe precisely, and really only, because if the parameters were not this way then we would not be around to measure them! Everyone can understand this reasoning. But it stinks! And of course it is not an explanation; such anthropic reasoning reduces to mere observation. Such reasonings are simple banal brute facts about our existence. But there is a setting in metaphysics where such reasoning might be the only explanation, as awful as it smells. If our meta-verse is governed by something like Eternal Inflation (or even by something more ontologically radical, like Max Tegmark’s “Mathematical Multiverse”), whereby every possible universe is, at some place or some meta-time, actually realised by inflationary big-bangs (or by mathematical consequences, in Tegmark’s picture), then it is really boring that we exist in this universe. No matter how infinitesimally unlikely the vacuum state of our universe is within the combinatorial possibilities of all possible inflationary universe bubbles (or of all possible consistent mathematical abstract realities), there is, in these super-cosmic world views, absolutely nothing to prevent our infinitesimally unlikely (“zero probability measure”) universe from eventually coming into being from some amazingly unlikely big-bang bubble.

In a true multiverse scenario we thus get no really deep explanations, just observations. “The universe is this way because if it were not we would not be around to observe it.” The observation becomes the explanation. A profoundly unsatisfying end to physics! Moreover, such infinite possibilities and infinitesimal probabilities make standard probability theory almost impossible to use to compute anything remotely plausible about multiverse scenarios with any confidence (although this has not stopped some from publishing computations about such probabilities).

After discussing these issues, which Arkani-Hamed thinks are the two most glaring fine-tuning or “naturalness” problems facing modern physics, he then says something which at first seems reasonable and straightforward, yet which to my ears also seemed a little enigmatic. To avoid getting it wrong, let me transcribe what he says verbatim:

We know enough about physics now to be able to figure out what universes would look like if we changed the constants. … It’s just an interesting fact that the observed value of the cosmological constant and the observed value of the Higgs mass are close to these dangerous places. These are these two fine-tuning problems, and if I make the cosmological constant more natural the universe is empty, if I make the Higgs more natural the universe is devoid of atoms. If there was a unique underlying vacuum, if there was no anthropic explanation at all, these numbers came out of some underlying formula with pi’s and e’s, and golden ratios, and zeta functions and stuff like that in them, then [all this fine tuning] would be just a remarkably curious fact.… just a very interesting coincidence that the numbers came out this way. If this happened, by the way, I would start becoming religious. Because this would be our existence hard-wired into the DNA of the universe, at the level of the mathematical ultimate formulas.

So that’s the thing that clanged in my ears. Why do people need something “miraculous” in order to justify a sense of religiosity? I think this is a silly and profound misunderstanding about the true nature of religion. Unfortunately I cannot allow myself the space to write about this at length, so I will try to condense a little of what I mean in what will follow. First though, let’s complete the airing, for in the next breath Arkani-Hamed says,

On the other hand from the point of view of thinking about the multiverse, and thinking that perhaps a component of these things have an anthropic explanation, then of course it is not a coincidence, that’s where you’d expect it to be, and we are vastly less hard-wired into the laws of nature.

So I want to say a couple of things about all this fine-tuning and anthropic explanation stuff. The first is that it does not really matter, for a sense of religiosity, whether we occupy a tiny infinitesimal region of the multiverse or a vast space of mathematically determined, inevitable universes. In fact, the Multiverse, in itself, can be considered miraculous. Just as miraculous as a putative formulaically inevitable cosmos. Not because we exist to observe it all, since that, after all, is the chief banality of anthropic explanations: they are boring! But miraculous because a multiverse exists in the first place that harbours all of us, including the infinitely many possible doppelgängers of our universe and subtle and wilder variations thereupon. I think many scientists are careless in such attitudes when they appear to dismiss reality as “inevitable”. Nothing really, ultimately, is inevitable. Even a formulaic universe has an origin in the deep underlying mathematical structure that somehow makes it irresistible for the unseen motive forces of metaphysics to have given birth to its reality.

No scientific “explanation” can ever push back further than the principles of mathematical inevitability. Yet there is always something further to say about the origins of reality. There is always something proto-mathematical beyond. And probably something even more primeval beyond that, and so on, ad infinitum; or, if you prefer a non-infinite causal regression, then something un-caused must, in some atemporal sense, pre-exist everything. Yet scientists routinely dismiss or ignore such metaphysics. Which is why, I suspect, they fail to see the ever-present miracles about our known state of reality. Almost any kind of reality where there is a consciousness that can think and imagine the mysteries of its own existence is a reality that has astounding miraculousness to it. The fact that science seeks to slowly pull back the veils that shroud these mysteries does not diminish the beauty and profundity of it all. In fact, as we have seen science unfold with its explanations for phenomena, it almost always seems elegant and simple, yet amazingly complex in consequences, such that if one truly appreciates it all, then there is no need whatsoever to look for fine-tuning coincidences or formulaic inevitabilities to cultivate a natural and deep sense of religiosity.

I should pause and define loosely what I mean by “religiosity”. I mean nothing much more than what Einstein often articulated: a sense of our existence, our universe, being only a small part of something beyond our present understanding, a sense that maybe there is something more transcendent than our corner of the cosmos. No grand design is in mind here, no grand picture or theory of creation, just a sense of wonder and enlightenment at the beauty inherent in the natural world and in our expanding conscious sphere which interprets the great book of nature. (OK, so this is rather more poetic than what you might hope for, but I will not apologise for that. I think something gets lost if you remove the poetry from definitions of things like spirituality or religion. I think this is because if there really is meaning in such notions, they must have aspects that do ultimately lie beyond the reach of science, and so poetry is one of the few vehicles of communication that can point to the intended meanings, because differential equations or numerics will not suffice.)

OK, so maybe Arkani-Hamed is not completely nuts in thinking there is this scenario whereby he would contemplate becoming “religious” in the Einsteinian sense. And really, nowhere in this essay am I seriously disagreeing with the Professor. I just think that if scientists like Arkani-Hamed thought a little deeper about things, and did not have such materialistic lenses shading their inner vision, perhaps they would be able to see that miracles are not necessary for a deep and profound sense of religiosity or spiritual understanding or appreciation of our cosmos.

* * *

Just to be clear and “on the record”, my own personal view is that there must surely be something beyond physical reality. I am, for instance, a believer in the Platonic view of mathematics: which is that humans, and mathematicians from other sentient civilizations which may exist throughout the cosmos, gain their mathematical understanding through a kind of discovery of eternal truths about realms of axiomatics and principles of numbers and geometry and deeper abstractions, none of which exist in any temporal pre-existing sense within our physical world. Mathematical theorems are thus not brought into being by human minds. They are ideas that exist independently of any physical universe. Furthermore, I happen to believe in something I would call “The Absolute Infinite”. I do not know what this is precisely, I just have an aesthetic sense of It, and It is something that might also be thought of as the source of all things, some kind of universal uncaused cause of all things. But to me, these are not scientific beliefs. They are personal beliefs about a greater reality that I have gleaned from many sources over the years. Yet, amazingly perhaps, physics and mathematics have been among my prime sources for such beliefs.

The fact that I cannot understand such a concept (as the Absolute Infinite) should not give me any pause in wondering whether it truly exists or not. And I feel no less mature, nor any more infantile, for having such beliefs. If anything I pity the intellectually impoverished souls who cannot be open to such beliefs and speculations. I might point out that speculation is not a bad thing either; without speculative ideas where would science be? Stuck with pre-Copernican Ptolemaic cosmology or pre-Eratosthenes physics, I imagine, for speculation was needed to invent gizmos like telescopes and to wonder about how to measure the diameter of the Earth using just the shadow of a tall tower in Alexandria.

To imagine something greater than ourselves is always going to be difficult, and to truly understand such a greater reality is perhaps canonically impossible. So we ought not let such smallness of our minds debar us from truth. It is thus a struggle to keep an open mind about metaphysics, but I think it is morally correct to do so, and to resist the weak temptation to give in to philosophical negativism and minimalism about the worlds that potentially exist beyond ours.

Strangely, many self-professing atheists think they can imagine we live in a super Multiverse. I would ask them how they can believe in such a prolific cosmos and yet not also accept potential existences beyond the physical? And not even “actual existence”, just simply “potential existence”. I would then point out that as long as there is admitted potential reality and plausible truth to things beyond the physical, you cannot honestly commit to any brand of atheism. To my mind, even at my most open-minded, this form of atheism would seem terribly dishonest and self-deceiving.

Exactly how physics and mathematics could inform my spiritual beliefs is hard to explain in a few words. Maybe sometime later there is an essay to be written on this topic. For now, all I will say is that like Nima Arkani-Hamed, I have a deep sense of the “correctness” of certain ways of thinking about physics, and sometimes mathematics too (although mathematics is less constrained). And similar senses of aesthetics draw me in like the unveiling of a Beethoven symphony to an almost inevitable realisation of some version of truth to the reality of worlds beyond the physical, worlds where infinite numbers reside, where the mind can explore unrestrained by bones and flesh and need for food or water. In such worlds greater beauty than on Earth resides.

Most scientists do not enter their chosen fields because the work is easy. They do their science mainly because it is challenging and rewarding when triumphant. Yet few scientists will ever taste the sweet dew drops of triumph — real world-changing success — in their lifetimes. So it is remarkable perhaps that the small delights in science are sustaining enough for the human soul to warrant persistence and hard endeavour in the face of mostly mediocre results and relatively few cutting edge break-throughs.

Still, I like to think that most scientists get a real kick out of re-discovering results that others before them have already uncovered. I do not think there is any diminution for a true scientist in having been late to a discovery and not having publication priority. In fact I believe this to be universally true for people who are drawn into science for aesthetic reasons, people who just want to get good at science for the fun of it and to better appreciate the beauty in this world. If you are of this kind you likely know exactly what I mean. You could tomorrow stumble upon some theorem proven hundreds of years ago by Gauss or Euler or Brahmagupta and still revel in the sweet taste of insight and understanding.

Going even further, I think such moments of true insight are essential in the flowering of scientific aesthetic sensibilities and the instilling of a love for science in young children, or young at heart adults. “So what?” that you make this discovery a few hundred years later than someone else? They had a birth head start on you! The victory is truly still yours. And “so what?” that you have a few extra giants’ shoulders to stand upon? You also saw through the haze and fog of much more information overload and Internet noise and thought-pollution, so you can savour the moment like the genius you are.

Such moments of private discovery go unrecorded and must surely occur many millions of times more frequently than genuinely new discoveries and break-throughs. Nevertheless, every such transient, invisible moment in human history must also be a little boost to the general happiness and welfare of all of humanity. Although only that one person may feel vibrant from their private moment of insight, their radiance surely influences the microcosm of people around them.

I cannot count how many such moments I have had. They are more than I will probably admit, since I cannot easily admit to any! But I think they occur quite a lot, in very small ways. However, back in the mid-1990s I had what I thought was a truly significant glimpse into the infinite. Sadly it had absolutely nothing to do with my PhD research, so I could only write hurriedly rough notes on recycled printout paper during the small hours of the morning when sleep eluded my body. To this day I am still dreaming about the ideas I had back then, and still trying to piece something together to publish. But it is not easy. So I will be trying to leak out a bit of what is in my mind in some of these WordPress pages. Likely what will get written will be very sketchy and denuded of technical detail. But I figure if I put the thoughts out onto the Web maybe, somehow, some bright young person will catch them via Internet osmosis of a sort, and take them to a higher level.

There are a lot of threads to knit together, and I hardly know where to start. I have already started writing perhaps half a dozen manuscripts, none finished, most very sketchy. And this current writing is yet another forum I have begun.

The latest bit of reading I was doing gave me a little shove to start this topic anew. It happens from time to time that I return to studying Clifford Geometric Algebra (“GA” for short). The round-about way this happened last week was this:

Weary from reading a Complex Analysis book that promised a lot but started to get tedious, I searched YouTube for a physics talk as a light break, and found talks on Twistors and Spinors by Sir Roger Penrose. (Twistor Theory is heavily based on Complex Analysis, so it was a natural search to do after finishing a few chapters of the mathematics book.)

I found that the Twistor Diagram efforts of Andrew Hodges have influenced Nima Arkani-Hamed and even Ed Witten to obtain cool new results crossing twistor theory with superstring theory and scattering amplitude calculations (the “Amplituhedron” methods).

That stuff is ok to dip into, but it does not really advance my pet project of exploring topological geon theory. So I looked for some more light reading and rediscovered papers from the Cambridge Geometric Algebra Research Group (Lasenby, Doran, Gull), and started re-reading Gull’s paper on electron paths and tunnelling in the Dirac theory, inspired by David Hestenes’ work.

The Gull paper mentions criticisms of the Dirac theory that I had forgotten. In the geometric algebra it is clear that solving the Dirac equation gives not positively charged anti-electrons, but unphysical negative frequency solutions with negative charge and negative mass. So they are not positrons. It is provoking that the authors claim this problem is not fully resolved by second quantisation, but rather perhaps just gets glossed over? I am not sure what to think of this. (If the negative frequencies get banished by second quantisation, why not just conclude first quantisation is not nature’s real process?)

Still, whatever the flaws in Dirac theory, the electron paths paper has tantalising similarities with the Bohm pilot wave theory electron trajectories. And there is also a reference to the Statistical Interpretation of Quantum Mechanics (SIQM) due to Ballentine (and attributed also as Einstein’s preferred interpretation of QM).

It gets me thinking again of how GA might be helpful in my problems with topological geons. But I shelve this thought for a bit.

Reading Ballentine’s paper is pretty darn interesting. It dates from 1970, but it is super clear and easy to read. I love that in a paper. The gist of it is that an absolute minimalist interpretation of quantum mechanics would drop Copenhagen ideas and view the wave function as more like a description of what could happen in nature; that is, the wave functions are descriptions of statistical ensembles of identically prepared experiments or systems in nature. (Sure, no two systems are ever prepared in the exact same initial state, but that hardly matters when you are only doing statistics rather than precise deterministic modelling.)

So Ballentine was suggesting the wave function is not a complete description of an individual particle, but rather is better thought of as a description of an ensemble of identically prepared states.

This is where I ended up, opening my editor to draft a OneOverEpsilon post.

So here’s the thing I like about the ensemble interpretation and how the geometric algebra reworking of Dirac theory adds a glimmer of clarity about what might be happening with the deep physics of our universe. For a start, the ensemble interpretation is transparently not a complete theoretical framework: since it is a statistical theory it does not pretend to be a theory of reality. Whatever is responsible for the statistical behaviour of quantum systems is still an open question in SIQM. The Bohm-like trajectories that the geometric algebra solutions to the Dirac theory are able to compute as streamline plots are illuminating in this respect, since they seem to clearly show that what the Dirac wave equation is modelling is almost certainly not the behaviour of a single particle. (One could guess this from Schrödinger theory as well, but I guess physicists were already lured into believing in the literal wave-particle duality meme well before Bohm was able to influence anyone’s thinking.)
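To see what the ensemble view actually claims, here is a minimal toy sketch in Python (entirely my own invented example; the two-level state and the amplitudes are illustrative assumptions, not anything from Ballentine’s paper). The wave function fixes only the Born-rule statistics of many identically prepared runs; it says nothing about any single run:

```python
import random

# Toy ensemble reading of a wave function: the state psi = a|0> + b|1>
# is taken to describe not one particle but the statistics of many
# identically prepared single-measurement runs.
a, b = 0.6, 0.8              # real amplitudes for simplicity; a**2 + b**2 == 1
p0 = a * a                   # Born-rule probability of outcome |0>

rng = random.Random(42)
N = 100_000                  # size of the "identically prepared" ensemble
n0 = sum(1 for _ in range(N) if rng.random() < p0)

freq0 = n0 / N
print(f"Born rule p(0) = {p0:.3f}, observed ensemble frequency = {freq0:.4f}")
```

On the SIQM reading, the convergence of that frequency to |a|² is the entire empirical content of the wave function; what any individual particle is doing between preparation and detection remains an open question.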

Also, it is possible (I do not really know for sure) that the negative frequency solutions in Dirac theory can be viewed as merely an artifact of the statistical ensemble framework. No single particle acts truly in accordance with the Dirac wave equation. So there is no real reason to get one’s pants in a twist about the awful appearance of negative frequencies.

(For those in-the-know: the Dirac theory negative frequency solutions turn out to have particle currents in the reverse spatial direction to their momenta, so that’s not a backwards time propagating anti-particle, it is a forwards in time propagating negative mass particle. That’s a particle that’d fall upwards in a gravitational field if the principle of equivalence holds universally. As an aside note: it is a bit funky that this cannot be tested experimentally since no one can yet clump enough anti-matter together to test which way it accelerates in a gravitational field. But I presume the sign of particle inertial mass can be checked in the lab, and, so far, all massive particles known to science at least are known to have positive inertial mass.)

And as a model of reality the Dirac equation therefore has certain limitations and flaws. It can get some of the statistics correct for particular experiments, but a statistical model always has limits of applicability. This is neither a defence nor a critique of Dirac theory. My view is that it would be a bit naïve to regard Dirac theory as the theory of electrons, and naïve to think it should have no flaws. At best such wave-function models are merely a window frame for a particular narrow view out into our universe. Maybe I am guilty of a bit of sophistry or rhetoric here, but that’s ok for a WordPress blog I think … just puttin’ some ideas “out there”.

Then another interesting confluence is that one of Penrose’s big projects in Twistor theory was to do away with the negative frequency solutions in 2-Spinor theory. And I think, from recall, he succeeded in this some time ago with the extension of twistor space to include the two off-null halves. Now I do not know how this translates into real-valued geometric algebra, but in the papers of Doran, Lasenby and Gull you can find direct translations of twistor objects into geometric algebra over real numbers. So there has to be in there somewhere a translation of Penrose’s development in eliminating the negative frequencies.

So do you feel a new research paper on Dirac theory in the wind just there? Absolutely you should! Please go and write it for me will you? I have my students and daughters’ educations to deal with and do not have the free time to research off-topic too much. So I hope someone picks up on this stuff. Anyway, this is where maybe the GA reworking of Dirac theory can borrow from twistor theory to add a little bit more insight.

There’s another possible confluence with the main unsolved problem in twistor theory. The twistor theory programme has been held back (stalled?) for some 40 years by what Penrose whimsically calls the “googly problem”. The issue is one of trying to find self-dual solutions of Einstein’s vacuum equations (as far as I can tell; I find it hard to fathom twistor theory, so I am not completely sure what the issue is). In essence it is the problem of “finding right-handed interacting massless fields (positive helicity) using the same twistor conventions that give rise to left-handed fields (negative helicity)”. Penrose may have a solution, dubbed Palatial Twistor Theory, which you might be able to read about in “On the geometry of palatial twistor theory” by Roger Penrose, and in lighter form in “Michael Atiyah’s Imaginative State of Mind” by Siobhan Roberts in Quanta Magazine.

If you do not want to read those articles then the synopsis, I think, is that twistor theory has some problematic issues in gravitation theory when it comes to chirality (handedness). That is indeed a problem, since obtaining a closer connection between relativity and quantum theory was a prime motive behind the development of twistor theory. So if twistor theory cannot fully handle left- and right-handed solutions to Einstein’s equations, it might be said to have failed to fulfil one of its main animating purposes.

So ok, to my mind there might be something the geometric algebra translation of twistor theory can bring to bear on this problem, because general relativity is solved in fairly standard fashion with geometric algebra (that’s because GA is a mathematical framework for doing real space geometry, and handles Lorentzian metrics as simply as Euclidean ones; no artificially imposed complex analytic structure is required). So if the issues with twistor theory are reworked in geometric algebra then some bright spark should be able to do the job twistor theory was designed to do.

By the way, the great beauty and advantage Penrose sees in twistor theory is the grounding of twistor theory in complex numbers. The Geometric Algebra Research Group have pointed out that this is largely a delusion. It turns out that complex analysis and holomorphic functions are just a sector of full spacetime algebra. Spacetime algebra, and in fact higher dimensional GA, have a concept of monogenic functions which entirely subsume the holomorphic (analytic) functions of 2D complex analysis. Complex numbers are also completely recast for the better as encodings of even sub-algebras of the full Clifford–Geometric Algebra of real space. In other words, by switching languages to geometric algebra the difficulties that arise in twistor theory should (I think) be overcome, or at least clarified.
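That last claim is easy to check concretely. The following throwaway sketch (my own toy code, not anything from the Research Group’s papers) builds the geometric product for the 2D plane algebra with basis {1, e1, e2, e12}, and verifies that the even part, spanned by the scalar and the bivector e12, multiplies exactly like the complex numbers:

```python
# Minimal sketch (not a library): multivectors of the 2D geometric algebra
# as 4-tuples (scalar, e1, e2, e12), with e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1.
def gp(a, b):
    """Geometric product of two multivectors a = (s, x, y, B), b likewise."""
    s1, x1, y1, b1 = a
    s2, x2, y2, b2 = b
    return (
        s1 * s2 + x1 * x2 + y1 * y2 - b1 * b2,   # scalar part
        s1 * x2 + x1 * s2 - y1 * b2 + b1 * y2,   # e1 part
        s1 * y2 + y1 * s2 + x1 * b2 - b1 * x2,   # e2 part
        s1 * b2 + b1 * s2 + x1 * y2 - y1 * x2,   # e12 part
    )

I = (0, 0, 0, 1)                     # the unit bivector e1^e2
print(gp(I, I))                      # (-1, 0, 0, 0): e12 squares to -1

# Even multivectors s + B*e12 multiply exactly like complex numbers s + B*i:
z, w = (3, 0, 0, 4), (1, 0, 0, 2)    # "3+4i" and "1+2i" in disguise
print(gp(z, w))                      # (-5, 0, 0, 10), matching (3+4j)*(1+2j) = -5+10j
```

The bivector e12 plays the role of i, but here it is a real, oriented plane element rather than a formally “imaginary” quantity.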

If you look at the Geometric Algebra Research Group papers you will see how doing quantum mechanics or twistor theory with complex numbers is really a very obscure way to do physics. Using complex analysis and matrix algebra tends to make everything a lot harder to interpret and more obscure. This is because matrix algebra is a type of encoding of geometric algebra, but it is not a favourable encoding, it hides the clear geometric meanings in the expressions of the theory.
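A small illustrative check of that “matrix algebra is an encoding of geometric algebra” point (again my own toy code, not anyone’s published example): the Pauli matrices are a 2×2 complex-matrix encoding of the three orthonormal basis vectors of 3D space, with the matrix product standing in for the geometric product, and the quantum-mechanical i appearing as the disguised pseudoscalar e1e2e3:

```python
# The Pauli matrices as a matrix *encoding* of the 3D geometric algebra
# basis vectors e1, e2, e3 (matrix product = geometric product).
def mm(A, B):
    """Product of two 2x2 complex matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

# Vector axioms of the geometric product: e_k^2 = 1, and e_j e_k = -e_k e_j
assert mm(s1, s1) == I2 and mm(s2, s2) == I2 and mm(s3, s3) == I2
assert mm(s1, s2) == [[-x for x in row] for row in mm(s2, s1)]

# The pseudoscalar e1 e2 e3 encodes as i times the identity matrix:
# the "imaginary unit" of quantum mechanics in geometric disguise.
print(mm(mm(s1, s2), s3) == [[1j, 0], [0, 1j]])   # True
```

Notice that the geometric content (which products are vectors, which are bivectors) is invisible in the raw matrix entries, which is exactly the obscurity being complained about.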

* * *

So far all I have described is a breezy re-awakening of some old ideas floating around in my head. I rarely get time these days to sit down and hack these ideas into a reasonable shape. But there are more ideas I will try to write down later that are part of a patch-work that I think is worth exploring. It is perhaps sad that over the years I had lost the nerve to work on topological geon theory. Using spacetime topology to account for most of the strange features of quantum mechanics is however still my number one long term goal in life. Whether it will meet with success is hard to discern, perhaps that is telling: if I had more confidence I would simply abandon my current job and dive recklessly head-first into geon theory.

Before I finish up this post I want to thus outline, very breezily and incompletely, the basic idea I had for topological geon theory. It is fairly simplistic in many ways. There is however new impetus from the past couple of years of developments in the Black Hole firewall paradox debates: the key idea from this literature has been the “ER=EPR” correspondence hypothesis, which is that quantum entanglement (EPR) might be almost entirely explained in terms of spacetime wormholes (ER: Einstein–Rosen bridges). This ignited my interest because back in 1995/96 I had the idea that Planck scale wormholes in spacetime can allow all sorts of strange and gnarly advanced-causation effects on the quantum (Planckian) space and time scales. It seemed clear to me that such “acausal” dynamics could account for a lot of the weird correlations and superpositions seen in quantum physics, and fairly simply so, by using pure geometry and topology. It was also clear that if advanced causation (backwards time travel or closed timelike curves) is admitted into physics, even if only at the Planck scale, then you cannot have a complete theory of predictive physics. Yet physics would be deterministic and basically like general relativity in the 4D block universe picture, but with particle physics phenomenology accounted for by topological properties of localised regions of spacetime (topological 4-geons). The idea, roughly speaking, is that fundamental particles are non-trivial topological regions of spacetime: geons are not 3D slices of space, but are (hypothetically) fully 4-dimensional creatures of raw spacetime topology. Particles are not apart from spacetime. Particles are not “fields that live in spacetime”, no! Particles are part of spacetime. At least that was the initial idea of Geon Theory.

Wave mechanics, or even quantum field theory, are often perceived to be mysterious because they either have to be interpreted as non-deterministic (when one deals with “wave function collapse”) or as semi-deterministic but incomplete and statistical descriptions of fundamental processes. When physicists trace back where the source of all this mystery lies they are often led to some version of non-locality. And if you take non-locality at face value it does seem rather mysterious, given that all the models of fundamental physical processes involve discrete localised particle exchanges (Feynman diagrams or their stringy counterparts). One is forced to use tricks like sums over histories to obtain numerical calculations that agree with experiments. But no one understands why such calculational tricks are needed, and it leads to a plethora of strange interpretations, like Many Worlds Theory, Pilot Waves, and so on. A lot of these mysteries, I think, dissolve away when the ultimate source of non-locality is found to be deep non-trivial topology in spacetime which admits closed time-like curves (advanced causation, time travel). To most physicists such ideas appear nonsensical and outrageous. With good reason of course: it is very hard to make sense of a model of the world which allows time travel, as decades of scifi movies testify! But geon theory does not propose unconstrained advanced causation (information from the future influencing events in the past). On the contrary, geon theory is fundamentally limited in outrageousness by the assumption that closed time-like curves are restricted to something like the Planck scale. I should add that this is a wide open field of research. No one has worked out much at all on the limits and applicability of geon theory. For any brilliant young physicists or mathematicians this is a fantastic open playground to explore.

The only active researcher I know in this field is Mark Hadley. It seemed amazing to me that after publishing his thesis (also around 1994/95, independently of my own musings) no one seemed to take up his ideas and run with them. Not even Chris Isham, who refereed Hadley’s thesis. The write-up of Hadley’s thesis in New Scientist seemed to barely cause a micro-ripple in the theoretical physics literature. I am sure sociologists of science could explain why, but to me, at the time, having already discovered the same ideas, I was perplexed.

To date no one has explicitly spelt out how all of quantum mechanics can be derived from geon theory. Although Hadley, I surmise, completed 90% of this project! The final 10% is incredibly difficult though — it would necessitate deriving something like the Standard Model of particle physics from pure 4D spacetime topology — no easy feat when you consider high dimensional string theory has not really managed the same job despite hundreds of geniuses working on it for over 35 years. My thinking has been that string theory involves a whole lot of ad hockery and “code bloat”, to borrow a term from computer science! If string theory were recast in terms of topological geons living as part of spacetime, rather than as separate to spacetime, then I suspect great advances could be made. I really hope someone will see these hints and connections and do something momentous with them. Maybe some maverick like that surfer dude Garrett Lisi might be able to weigh in and provide some fire power?

In the meantime, geometric algebra has not yet been applied to geon theory, but GA blends in with these ideas since it seems, to me, to be the natural language for geometric physics. If particle phenomenology boils down to spacetime topology, then the spacetime algebra techniques should find exciting applications. The obstacle is that so far spacetime algebra has only been developed for physics in spaces with trivial topology.

Another connection is with “combinatorial spacetime” models — the collection of ideas for “building up spacetime” from discrete combinatorial structures (spin foams, causal networks, causal triangulations, and all that stuff). My thinking is that all these methods are unnecessary, but hint at interesting directions where geometry meets particle physics, because (I suspect) such combinatorial structure approaches to quantum gravity are really only gross approximations to the spacetime picture of topological geon theory. It is from the algebra which arises from non-trivial spacetime topology and its associated homology that (I suspect) the combinatorial spacetime pictures derive their use.

Naturally I think the combinatorial structure approaches are not fundamental. I think topology of spacetime is what is fundamental.

* * *

That probably covers enough of what I wanted to get off my chest for now. There is a lot more to write, but I need time to investigate these things so that I do not get too speculative and vague and vacuously philosophical.

What haunts me most nights when I try to dream up some new ideas to explore for geon theory (and desperately try to find some puzzles I can actually tackle) is not that someone will arrive at the right ideas before me, but simply that I never will get to understand them before I die. I do not want to be first. I just want to get there myself without knowing how anyone else has got to the new revolutionary insights into spacetime physics. I had the thrill of discovering geon theory by myself, independently of Mark Hadley, but now there has been this long hiatus and I am worried no one will forge the bridges from geon theory to particle physics while I am still alive.

I have this plan for what I will do when/if I do hear such news. It is the same method my brother Greg is using with Game of Thrones. He is on a GoT television and social media blackout until the books come out. He’s a G.R.R. Martin purist, you see. But he still wants to watch the TV adaptation later on for amusement (the books are waaayyy better! So he says.) It is surprisingly easy to enforce such a blackout. Sports fans will know how. Any follower of All Black rugby who misses an AB test match knows the skill of doing a media blackout until they get to watch their recording or replay. It is impossible to watch an AB game if you know the result ahead of time. Rugby is darned exciting, but a 15-a-side game has too many stops and starts to warrant sitting through it all when you already know the result. But when you do not know the result the build-up and tension are terrific. I think US Americans have something similar in their version of football; since American Football has even more stop/start, it would be excruciatingly boring to sit through it all if you knew the result. But strangely intense when you do not know!

So knowing the result of a sports contest ahead of time is more catastrophic than a movie or book plot spoiler. It would be like that if there is a revolution in fundamental physics involving geon theory ideas. But I know I can do a physics news blackout fairly easily now that I am not lecturing in a physics department. And I am easily enough of an extreme introvert to be able to isolate my mind from the main ideas, all I need is a sniff, and I will then be able to work it all out for myself. It’s not like any ordinary friend of mine is going to be able to explain it to me!

If geon theory turns out to have any basis in reality I think the ideas that crack it all open to the light of truth will be among the few great ideas of my generation (the post Superstring generation) that could be imagined. If there are greater ideas I would be happy to know them in time, but with the bonus of not needing a physics news blackout! If it’s a result I could never have imagined then it’d be worth just savouring the triumph of others.

At this stage of life a dude like me can enter a debate about the foundations of quantum mechanics with little trepidation. There is a chance someone will put forward proposals that are just too technically difficult to understand, but there is a higher chance of getting either something useful out of the debate or obtaining some amusement and hilarity. The trick is to be a little detached and open-minded while retaining a decent dose of scepticism.

Inescapable Non-locality

Recently I was watching a lecture by Sheldon Goldstein (a venerable statesman of physics) who was speaking about John Stewart Bell’s contributions to the foundations of quantum mechanics. Bell was, like Einstein, sceptical of the conventional interpretations, which either gave too big a role to “observers” and the “measurement process” or swept such issues aside by appealing to Many Worlds or other fanciful untestable hypotheses.

What Bell ended up showing was a theory for a class of experiments that could prove the physics of our universe is fundamentally non-local. Bell was actually after experimental verification that we cannot have local hidden variable theories, hidden variables being things in physics that we cannot observe. Bell hated the idea of unobservable physics (and Einstein would have agreed; me too, but that’s irrelevant). The famous “Bell’s Inequalities” are a set of relations referring to experimental results that will give clearly different numbers for the outcomes of experiments depending on whether our universe’s physics is inherently non-local or classical-with-hidden-variables. The hidden variables are used to model the weirdness of quantum mechanics.
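To make the quantitative content of those inequalities concrete, here is a minimal sketch (my own illustration, nothing from Goldstein’s lecture) of the CHSH form of Bell’s inequality: any local hidden variables theory must satisfy |S| ≤ 2, while the quantum prediction for singlet correlations, E(a,b) = −cos(a − b), pushes |S| up to 2√2 at suitably chosen detector angles:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for a spin singlet pair measured
    # along detector directions at angles a and b (radians).
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    # Any local hidden variables theory must satisfy |S| <= 2.
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Angles that maximise the quantum violation:
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, exceeding the local bound of 2
```

That gap between 2 and 2√2 is exactly the “clearly different numbers” the experiments go looking for.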

Hidden variable theories attempt to use classical physics, and possibly strict locality (no signals, and even no propagation of information, faster than light) to explain fundamental physical processes. David Bohm came up with the most complete ideas for hidden variables theories, but his, and all subsequent attempts, had some very strange features that seemed always to be needed in order to explain the results of the particular types of experiments that John Bell had devised. In Bohm’s theories he uses a feature called a Pilot Wave, which is an information-carrying wave that physicists can only indirectly observe via its influence on experimental outcomes. We only get to see the statistics and probabilities induced by Bohm’s pilot waves. They spread out everywhere, and they thus link space-like separated regions of the universe between which no signals faster than light could ever travel. This has the character of non-locality but without requiring relativity-violating information signalling faster than light, so the hope was one could use pilot waves to get a local hidden variables theory that would agree with experiments.

Goldstein tells us that Bell set out to show it was impossible to have a local hidden variables theory, but he ended up showing you could not have any local theory — at all! — all theories have to have some non-locality. Or rather, what the Bell Inequalities ended up proving (via numerous repeated experiments testing them) was that the physics in our universe can never be local: whatever theory one devises to model reality, it has to be non-local. So it has to have some way for information to get from one region to another faster than light.

That is what quantum mechanics assumes, but without giving us any mechanism to explain it. A lot of physicists would just say, “It’s just the way our world is”, or they might use some exotic fanciful physics, like Many Worlds, to try to explain non-locality.

History records that Bell’s theorems were tested in numerous types of experiments, some with photons, some with electrons, some with entire atoms, and all such experiments have confirmed quantum mechanics and non-locality and have disproven hidden variables and locality. For the record, one may still believe in hidden variables, but the point is that if even your hidden variables theory has to be non-local then you lose all the motivation for believing in hidden variables. Hidden variables were designed to try to avoid non-locality. That was almost the only reason for postulating them. Why would you want to build in to the foundations of a theory something unobservable? Hidden variables were a desperate measure in this sense, a crazy idea designed to do mainly just one thing — remove non-locality. So Bell and the experiments showed this project has failed.

Now would you agree so far? I hope not. Hidden variables are not much more crazy than any of the “standard interpretations” of quantum mechanics, of which there are a few dozen varieties, all fairly epistemologically bizarre. Most other interpretations have postulates that are considerably more radical than hidden variables postulates. Indeed, one of the favourable things about a non-local hidden variables theory is that it would give the same predictions as quantum mechanics but without a terribly bizarre epistemology. Nevertheless, HV theories have fallen out of favour because people do not like nature to have hidden things that cannot be observed. This is perhaps an historical prejudice we have inherited from the school of logical positivism, and maybe for that reason we should be more willing to give it up! But the prejudice is quite persistent.

Quantum Theory without Observers

Goldstein raises some really interesting points when he starts to talk about the role of measurement and the role of observers. He points out that physicists are mistaken when they appeal to observers and some mysterious “measurement process” in their attempts to rectify the interpretations of quantum mechanics. It’s a great point that I have not heard mentioned very often before. According to Goldstein, a good theory of physics should not mention macroscopic entities like observers or measurement apparatus, because such things should be entirely dependent upon—and explained by—fundamental elementary processes.

This demand seems highly agreeable to me. It is a nice general Copernican principle to remove ourselves from the physics needed to explain our universe. And it is only a slightly stronger step to also remove the very vague and imprecise notion of “measurement”.

The trouble is that in basic quantum mechanics one deals with wave functions or (more generally) quantum fields that fundamentally cannot account for the appearance of our world of experience. The reason is that these tools only give us probabilities for all the various ways things can happen over time; we get probabilities and nothing else from quantum theory. What actually happens in time is not accounted for by just giving the probabilities. This is often called the “Measurement Problem” of quantum mechanics. It is not truly a problem. It is a fundamental incompleteness: standard quantum theory has absolutely no mechanism for explaining the appearance of the classical reality that we observe.
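A toy sketch of what I mean (my own illustration): even for a single qubit, the formalism hands back only the Born-rule probabilities for each outcome, and stays silent about which outcome actually occurs on any given run:

```python
# A qubit state |psi> = alpha|0> + beta|1>, written as complex amplitudes.
alpha = 3 / 5
beta = 4j / 5

# Normalisation check: the total probability must be 1.
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < 1e-12

# The Born rule gives probabilities |amplitude|^2 -- and nothing else.
p0 = abs(alpha) ** 2  # probability of seeing outcome 0
p1 = abs(beta) ** 2   # probability of seeing outcome 1
print(p0, p1)  # ≈ 0.36 and ≈ 0.64; which one happens is left unexplained
```

The theory ends at those two numbers; the single definite outcome we actually see is not in the formalism.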

So this helps explain why a lot of quantum interpretation philosophy injects the notions of “observer” and “measurement” into the foundations of physics. It seems to be necessary for providing an account of the real semi-classical appearance of our world. We are not all held in ghostly superpositions because we all observe and “measure” each other, constantly. Or maybe our body cells are enough, they are “observing each other” for us? Or maybe a large molecule has “observational power” and is sufficient? Goldstein, correctly IMHO, argues this is all bad philosophy. Our scientific effort should be spent on trying to complete quantum theory, or to find a better, more complete theory or framework for fundamental physics.

Here’s Goldstein encapsulating this:

It’s not that you don’t want observers in physics. Observers are in the real world and physics better account for the fact that there are observers. But observers, and measurement, and vague notions like that, and, not just vague, even macroscopic notions, they just seem not to belong in the very formulation of what could be regarded as a fundamental physical theory.

There should be no axioms about “measurement”. Here is one passage that John Bell wrote about this:

The concept of measurement becomes so fuzzy on reflection that it is quite surprising to have it appearing in physical theory at the most fundamental level. … Does not any analysis of measurement require concepts more fundamental than measurement? And should not the fundamental theory be about these more fundamental concepts?

Rise of the Wormholes

I need to explain one more set of ideas before making the note for this post.

There is so much to write about ER=EPR, and I’ve written a few posts about ER=EPR so far, but not enough. The gist of it, recall, is that the fuss in recent decades over the “Black Hole Information Paradox” and the “Black Hole Firewall” has been incredibly useful in leading a group of theoreticians towards a basic, dim, inchoate understanding that the non-locality in quantum mechanics is somehow related to wormhole bridges in spacetime. Juan Maldacena and Leonard Susskind have pioneered this approach to understanding quantum information.

A lot of the weirdness of quantum mechanics turns out to be just geometry and topology of spacetime.

The “EPR” stands for the Einstein-Podolsky-Rosen-Bohm thought experiments: precisely the genesis of the ideas for which John Bell devised his Bell Inequalities for testing quantum theory, and which prove that physics involves fundamentally non-local interactions.

The “ER” stands for Einstein-Rosen wormhole bridges. Wormholes are a science fiction device for time travel or fast interstellar travel. The idea is that you might imagine creating a spacetime wormhole by pinching off a thread of spacetime like the beginnings of a black hole, but then reconnecting the pinched end somewhere else in space, maybe separated by a long time or distance, and keeping the pinched end open at this reconnection region. So you can make this wormhole bridge a short-cut in space or time between two perhaps vastly separated regions of spacetime.

It seems that if you have an extremal version of a wormhole that is essentially shrunk down to zero radius, so it cannot be traversed by any mass, then this minimalistic wormhole still acts as a conduit of information. These provide the non-local connections between spacelike separated points in spacetime. Basically the ends of the ER=EPR wormholes are like particles, and they are connected by a wormhole that cannot be traversed by any actual particle.

Entanglement and You

So now we come to the little note I wanted to make.

I agree with Goldstein that we ought not artificially inject the concept of an observer or a “measurement process” into the heart of quantum mechanics. We should avoid such desperate moves, and instead seek to expand our theory to encompass better explanations of the classical appearances of our world.

The interesting thing is that when we imagine how ER=EPR wormholes could influence our universe, by connecting past and future, we might end up with something much more profound than “observers” and “measurements”. We might end up with an understanding of how human consciousness and our psychological sense of the flow of time emerges from fundamental physics. All without needing to inject such transcendent notions into the physics. Leave the physics alone, let it be pristine, but get it correct and then maybe amazing things can emerge.

I do not have such a theory worked out. But I can give you the main idea. After all, I would like someone to be working on this, and I do not have the time or technical ability yet, so I do not want the world of science to wait for me to get my act together.

First: it would not surprise me if, in future, a heck of a lot of quantum theory “weirdness” was explained by ER=EPR like principles. If you abstract a little and step back from any particular instance of “quantum weirdness”, (like wave-particle duality or superposition or entanglement in any particular experiment) then what we really see is that most of the weirdness is due to non-locality. Now, this might take various guises, but if there is one mechanism for non-locality then it is a good bet something like this mechanism is at work behind most instances of non-locality that arise in quantum mechanics.

Secondly: the main way in which ER=EPR wormholes account for non-local effects is via pure information connecting regions of spacetime via the extremal wormholes. And what is interesting about this is that this makes a primitive form of time travel possible. Only information can “time travel” via these wormholes, but that might be enough to explain a lot of quantum mechanics.

Thirdly: although it is unlikely time travel effects can ever propagate up to macroscopic physics, because we just cannot engineer large enough wormholes, the statistical effects of the minimalistic ER=EPR wormholes might be enough to account for enough correlation between past and future that we might eventually be able to prove, in principle, that information gets to us from our future, at least at the level of fundamental quantum processes.

Now here’s the more speculative part: I think what might emerge from such considerations is a renewed description of the old Block Universe concept from Einstein’s general relativity (GR). Recall, in GR, time is more or less placed on an equal theoretical footing with space. This means past and future are all connected and exist whether we know it or not. Our future is “out there in time” and we just have not yet travelled into it. And we cannot travel back to our past because the bridges are not possible; the only wormhole bridges connecting past to future over macroscopic times are those minimal extremal ER=EPR wormholes that provide the universe with quantum entanglement phenomena and non-locality.

So I do not know what the consequences of such developments will be. But I can imagine some possibilities. One is that although we cannot access our future, or travel back to our past, the information from such regions in the Block Universe is tenuously connected to us nonetheless. Such connections are virtually impossible for us to exploit usefully, because we could never confirm what we are dealing with until the macroscopic future “arrives”, so to speak. So although we know it is not complete, we will still have to end up using quantum mechanics probability amplitude mathematics to make predictions about physics. In other words, quantum mechanics models our situation with respect to the world, not the actual state of the world from an atemporal Block Universe perspective. It’s the same problem with the time travel experiment conducted in 1994 in the laboratory under the supervision of Günter Nimtz, whose lab sent analogue signals encoding Mozart’s 40th Symphony into the future (by a few nanoseconds).

For that experiment there are standard explanations using Maxwell’s theory of electromagnetism that show no particles travel faster than light into the future. Nevertheless, Nimtz’s laboratory got a macroscopic recording of bits of information from Mozart’s 40th Symphony out of one back-end of a tunnelling apparatus before it was sent into the front-end of the apparatus. The interesting thing to me is not about violation of special relativity or causality. (You might think the physicists could violate causality because one of them could wait at the back-end and when they hear Mozart come out they could tell their colleague to send Beethoven instead, thus creating a paradox. But they could not do this because they could not send a communication fast enough in real time to warn their colleague to send Beethoven’s Fifth instead of Mozart.) Sadly that aspect of the experiment was the most controversial, but it was not the most interesting thing. Many commentators argued about the claimed violations of SR, and there are some good arguments about photon “group velocity” being able to transmit a signal faster than light without any particular individual photon needing to go faster than light.

(Actually Nimtz’s experiments mostly used microwave photons tunnelling through evanescent barrier regions, but the general principles are the same for any quantum tunnelling.)

All the “wave packet” and “group velocity” explanations of Nimtz’s time travel experiments are, if you ask me, merely attempts to reconcile the observations with special relativity. They all, however, use collective phenomena, either waves or group packets. But we all know photons are not waves, they are particles (many still debate this, but just bear with my argument). The wave behaviour of fundamental particles is in fact a manifestation of quantum mechanics. Maxwell’s theory is, thus, only phenomenological. It describes electromagnetic waves, and photons get interpreted (unfortunately) as modes of such waves. But this is mistaken. Photons collectively can behave as Maxwell’s waves, but Maxwell’s theory is describing a fictional reality. Maxwell’s theory only approximates what photons actually do. Photons do not, in Maxwell’s theory, impinge on photon detectors like discrete quanta. And yet we all know this is what light actually does! It violates Maxwell’s theory every day!

So what, I think, is truly interesting about Nimtz’s experiments is that they were sensitive enough to give us a window into wormhole traversal. Quantum tunnelling is nothing more than information traversal through ER=EPR type wormholes. At least that’s my hypothesis. It is a non-classical effect, and Maxwell’s theory only accounts for it via the fiction that photons are waves. A wrong explanation can often fully explain the facts, of course!
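For contrast, the standard textbook account assigns tunnelling a bare transmission probability and says nothing about any conduit. A minimal sketch of the usual rectangular-barrier formula, in natural units (ħ = m = 1; the numbers are illustrative only):

```python
import math

def transmission(E, V, L):
    # Standard transmission probability for a particle of energy E < V
    # tunnelling through a rectangular barrier of height V and width L,
    # in natural units (hbar = m = 1):
    #   T = 1 / (1 + V^2 sinh^2(kappa L) / (4 E (V - E))),
    #   kappa = sqrt(2 (V - E))
    kappa = math.sqrt(2.0 * (V - E))
    return 1.0 / (1.0 + (V ** 2) * math.sinh(kappa * L) ** 2
                  / (4.0 * E * (V - E)))

# Transmission falls off steeply with barrier width, yet is never zero:
for L in (0.5, 1.0, 2.0):
    print(L, transmission(1.0, 2.0, L))
```

The formula predicts how often the crossing happens, but it is silent on the mechanism, which is the gap my hypothesis is aimed at.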

Letting Things Be

What Goldstein, and Bohm, and later John Stewart Bell wanted to do is explain the world. They knew quantum field theory does not explain the world. It does not tell us why things come to be what they are: why a measurement pointer ends up pointing in a particular direction rather than any one of the other superposed states of pointer orientation the quantum theory tells us it ought to be in. Such outcomes or predictions involve what John Bell referred to as “local beables”. Goldstein explains more in his seminar: “John Bell and the Foundations of Quantum Mechanics”, Sesto, Italy 2014, (https://www.youtube.com/watch?v=RGbpvKahbSY).

My favourite idea, one I have been entertaining for over twenty years, in fact ever since 1995 when I read Kip Thorne’s book about classical general relativity and wormholes, is that wormholes (or technically “closed timelike curves”) are where all the ingredients are for explaining quantum mechanics from a classical point of view. Standard twentieth century quantum theory does not admit wormholes. But if you ignore quantum theory and start again from classical dynamics, but allow ER=EPR wormholes to exist, then I think most of quantum mechanics can be recovered without the need for unexplained axiomatic superpositions and wave-function collapse (the conventional explanation for “measurements” and classical appearances). In other words, quantum theory, like Maxwell’s EM theory, is only a convenient fictional model of our physics. You see, when you naturally have information going backwards and forwards in time you cannot avoid superpositions of state. But when a stable time-slice emerges or “crystallizes” out of this mess of acausal dynamics, then it should look like a measurement has occurred. But no such miracle happens; it simply emerges or crystallizes naturally from the atemporal dynamics. (I use the term “crystallize” advisedly here: it is not a literal crystallization, but something abstractly similar, and George Ellis uses it in a slightly different take on the Block Universe concept, so I figure it is a fair term to use.)

Also, is it possible that atemporal dynamics will tend to statistically “crystallize” something like Bohm’s pilot wave guide potential? If you know a little about Bohmian mechanics you know the pilot wave is postulated as a real potential, something that just exists in our universe’s physics. Yet it has no other model like it: it is not a quantum field, it is not a classical field, it is what it is. But what if there is no need for such a postulate? How could it be avoided? My idea is that maybe the combined statistical effects of influences propagating forward and backward in time give rise to an effective potential much like the Bohm pilot wave or Schrödinger wave function. Either way, both constructs in conventional or Bohmian quantum mechanics might be just necessary fictions we need to describe, in one way or another, the proper complete Block Universe atemporal spacetime dynamics induced by the existence of spacetime wormholes. I could throw around other ideas, but the main one is that wormholes endow spacetime with a really gnarly stringy sort of topology that has, so far, not been explored enough by physicists.

Classically you get non-locality when you allow wormholes. That’s the quickest summary I can give you. So I will end here.

Do you like driving? I hate it. Driving fast and dangerous in a computer game is ok, but a quick and ephemeral thrill. But for real driving, to and from work, I have a long commute, and no amount of podcasts or music relieves the tiresomeness. Driving around here I need to be on constant alert: there are so many cockroaches (motor scooters) to look out for, and here in Thailand over half the scooter drivers do not wear helmets. I cannot drive 50 metres before seeing a young child driven around on a scooter without a helmet; neither parent nor child will have one. Mothers even cradle infants while hanging on at the rear of a scooter. It might not be so bad if the speeds were slow, but they are not. That’s partly why I find driving exhausting. It is stressful to be so worried about so many other people.

True to the title it was illuminating. Watching Witten’s popular lectures is always good value. Mostly everything he presents I have heard or read about elsewhere, but never in so much seemingly understandable depth and insight. It is really lovely to hear Witten talk about the φ³ quantum field theory as a natural result of quantising gravity in 1 dimension. He describes this as one of nature’s rhymes: patterns at one scale or domain get repeated in others.

Then he describes how the obstacle to a quantum gravity theory in spacetime via a quantum field theory is the fact that in quantum mechanics states do not correspond to operators. He draws this as a Feynman diagram where a deformation of spacetime is indicated by a kink in a Feynman graph line. That’s an operator. Whereas states in quantum mechanics do not have such deformations, since they are points.

An operator describing a perturbation, like a deformation in the spacetime metric, appears as an internal line in a Feynman diagram, not an external line.

So that’s really nice isn’t it?

I had never heard the flaw of point particle quantum field theory given in such a simple and eloquent way. (The ultraviolet divergences are mentioned later by Witten.)

Then Witten does a similar thing for my understanding of how 2D conformal field theory relates to string theory and quantised gravity. In 2-dimensions there is a correspondence between operators and states in the quantum theory, and it is illustrated schematically by the conformal mapping that takes a point in a 2-manifold to a tube sticking out of the manifold.

The point being (excuse the pun) the states are the slices through this conformal geometry, and so deformations of the states are now equivalent to deformations of operators, and we have the correspondence needed for a quantum theory of gravity.

This is all very nice, but 3/4 of the way through his talk it still leaves some mystery to me.

I still do not quite grok how this makes string theory background-free. The string world sheet is quantizable, and from this you get either a conformal field theory or quantum gravity, but how is this background-independent quantum gravity?

I find I have to rewind and watch Witten’s talk a number of times to put all the threads together, and I am still missing something. Since I do not have any physicist buddies at my disposal to bug and chat to about this I either have to try physicsforums or stackexchange or something to get some more insight.

So I rewound a few times and I am pretty certain Witten starts out using a Riemannian metric on a string, and then on a worldsheet. Both are already embedded in a spacetime. So he is not really describing quantum gravity in spacetime. He is describing a state-operator correspondence in a quantum gravity performed on string world sheets. Maybe in the end this comes out in the wash as equivalent to quantising general relativity? I cannot tell. In any case, everyone knows string theory yields a graviton. So in some sense you can say, “case closed up to phenomenology”, haha! Still, a lovely talk and a nice pre-bedtime diversion. But I persisted through to the end of the lecture — delayed sleep experiment.

My gut reaction was that Witten is using some sleight of hand. The Conformal Field Theory maybe is background-free, since it is derived from quantum mechanics of the string world sheets. But the stringy gravity theory still has the string worldsheet fluffing around in a background spacetime. Does it not? Witten is not clear on this, though I’m sure in his mind he knows what he is talking about. Then, as if he read my mind, Witten does give a partial answer to this.

What Witten gets around to saying is that if you go back earlier in his presentation, where he starts with a quantum field theory on a 1D line and then on a 2D manifold, the spacetime he uses, he claims, was arbitrary. So this partially answers my objections. He is using a background spacetime to kick-start the string/CFT theory, which he admits. But then he does the sleight of hand and says

“what is more fundamental is the 2d conformal field theory that might be described in terms of a spacetime but not necessarily.”

So my take on this is that what Witten is saying is (currently) most fundamental in string theory is the kick-starter 2d conformal field theory, or the 2d manifold that starts out as the thing you quantise deformations on to get a phenomenological field theory including quantised gravity. But this might not even be the most fundamental structure. You start to get the idea that string/M-theory is going to morph into a completely abstract model. The strings and membranes will end up not being fundamental. Which is perhaps not too bad.

I am not sure what else you need to start with a conformal field theory. But surely some kind of proto-primordial topological space is needed. Maybe it will eventually connect back to spin foams or spin networks or twistors. Haha! Wouldn’t that be a kick in the guts for string theorists, to find their theory is really built on top of twistor theory! I think twistors give you quite a bit more than a 2d conformal field, but maybe a “bit more” is what is needed to cure a few of the other ills that plague string theory phenomenology.

* * *

For what it’s worth, I actually think there is a need in fundamental physics to explain even more fundamental constructs, such as why we need to start with a Lagrangian and then sum its action over all paths (or topologies if you are doing a conformal field theory). This entire formalism, in my mind, needs some kind of more primitive justification.

Moreover, I think there is a big problem in field theory per se. My view is that spacetime is more fundamental than the fields. Field theory is what should “emerge” from a fundamental theory of spacetime physics, not the other way around. Yet “the other way round” (i.e., fields first, then spacetime) seems to be what a lot of particle or string theorists are suggesting. I realize this is thoroughly counter to the main stream of thought in modern physics, but I cannot help it; I’m really a bit of a classicist at heart. I do not try to actively swim against the stream, it’s just that in this case that’s where I find my compass heading. Nevertheless, Witten’s ideas and the way he elaborates them are pretty insightful. Maybe I am unfair. I have heard Weinberg mention that the fields are perhaps not fundamental.

* * *

OK, that’s all for now. I have to go and try to tackle Juan Maldacena’s talk now. He is not as easy to listen to, but since this will be a talk for a general audience it might be comprehensible. Witten might be delightfully nerdy, but Maldacena is thoroughly cerebral and hard to comprehend. I am hoping he takes it easy on his audience.

I have a post prepared to upload in a bit that will announce a possible hiatus from this WordPress blog. The reason is just that I found a cool book I want to try to absorb, The Princeton Companion to Mathematics by Gowers, Barrow-Green and Leader. Doubtless I will not be able to absorb it all in one go, so I will likely return to blogging periodically. But there is also teaching and research to conduct, so this book will slow me down. The rest of this post is a lightweight brain-dump of some things that have been floating around in my head.

Recently, while watching a lecture on topology I was reminded that a huge percentage of the writings of Archimedes were lost in the siege of Alexandria. The Archimedean solids were rediscovered by Johannes Kepler, and we all know what he was capable of! Inspiring Isaac Newton is not a bad epitaph to have for one’s life.

The general point about rediscovery is a beautiful thing. Mathematics, more than other sciences, has this quality whereby a young student can take time to investigate previously established mathematics but then take breaks from it to rediscover theorems for themselves. How many children have rediscovered Pythagoras’ theorem, or the Golden Ratio, or Euler’s Formula, or any number of other simple theorems in mathematics?

Most textbooks rely on this quality. It is also why most “Exercises” in science books are largely theoretical. Even in biology and sociology they are basically all mathematical, because you cannot expect a child to go out and purchase a laboratory set-up to rediscover experimental results. So much textbook teaching is mathematical for this reason.

I am going to digress momentarily, but will get back to the education theme later in this article.

The entire cosmos itself has sometimes been likened to an eternal rediscovery. The theory of Eternal Inflation postulates that our universe is just one bubble in a near endless ocean of baby and grandparent and all manner of other universes. Although, recently, Alexander Vilenkin and Audrey Mithani found that a wide class of inflationary cosmological models are unstable, meaning they could not have been eternal into the past; there had to be an initial seed. This kind of destroys the “eternal” in eternal inflation. Here’s a Discover magazine account: “What Came Before the Big Bang? — Cosmologist Alexander Vilenkin believes the Big Bang wasn’t a one-off event”. Or you can click this link to hear Vilenkin explain his ideas himself: FQXi: Did the Universe Have a Beginning? Vilenkin seems to be having a rather golden period of originality over the past decade or so; I regularly come across his work.

If you like the idea of inflationary cosmology you do not have to worry too much though. You still get the result that infinitely many worlds could bubble out of an initial inflationary seed.

Below is my cartoon rendition of eternal inflation in the realm of human thought:

Oh to be a bubble thoughtoverse of the Wittenesque variety.

Quantum Fluctuations — Nothing Cannot Fluctuate

One thing I really get a bee in my bonnet about is the endless recounting in the popular literature, whenever the beginning of the universe comes up, of the naïve idea that no one needs to explain the origin of the Big Bang and inflatons because “vacuum quantum fluctuations can produce a universe out of nothing”. This sort of pseudo-scientific argument is so annoying. It is a cancerous argument that plagues modern cosmology. And even a smart person like Vilenkin suffers from this disease. Here I quote him from an article on the PBS NOVA website:

Vilenkin has no problem with the universe having a beginning. “I think it’s possible for the universe to spontaneously appear from nothing in a natural way,” he said. The key there lies again in quantum physics—even nothingness fluctuates, a fact seen with so-called virtual particles that scientists have seen pop in and out of existence, and the birth of the universe may have occurred in a similar manner.
Source: http://www.pbs.org/wgbh/nova/blogs/physics/2012/06/in-the-beginning/

At least you have to credit Vilenkin with the brains to have said it is only “possible”. But even that caveat is fairly weaselly. My contention is that out of nothing you cannot get anything, not even a quantum fluctuation. People seem to forget quantum field theory is a background-dependent theory, it requires a pre-existing spacetime. There is no “natural way” to get a quantum fluctuation out of nothing. I just wish people would stop insisting on this sort of non-explanation for the Big Bang. If you start with not even spacetime then you really cannot get anything, especially not something as loaded with stuff as an inflaton field. So one day in the future I hope we will live in a universe where such stupid arguments are nonexistent nothingness, or maybe only vacuum fluctuations inside the mouths of idiots.

There are other types of fundamental theories, background-free theories, where spacetime is an emergent phenomenon. And proponents of those theories can get kind of proud about having a model inside their theories for a type of eternal inflation. Since their spacetimes are not necessarily pre-existing, they can say they can get quantum fluctuations in the pre-spacetime stuff, which can seed a Big Bang. That would fit with Vilenkin’s ideas, but without the silly illogical need to postulate a fluctuation out of nothingness. But this sort of pseudo-science is even more insidious. Just because they do not start with a presumption of a spacetime does not mean they can posit quantum fluctuations in the structure they start with. I mean they can posit this, but it is still not an explanation for the origins of the universe. They still are using some kind of structure to get things started.

Probably still worse are folks who go around flippantly saying that the laws of physics (the correct ones, when or if we discover them) “will be so compelling they will assert their own existence”. This is basically an argument saying, “This thing here is so beautiful it would be a crime if it did not exist, in fact it must exist since it is so beautiful, if no one had created it then it would have created itself.” There really is nothing different about those two statements. It is so unscientific it makes me sick when I hear such statements touted as scientific philosophy. These ideas go beyond thought mutation and into a realm of lunacy.

I think the cause of these thought cancers is the immature fight in society between science and religion. These are tensions in society that need not exist, yet we all understand why they exist. Because people are idiots. People are idiots where their own beliefs are concerned, by and large, even myself. But you can train yourself to be less of an idiot by studying both sciences and religions and appreciating what each mode of human thought can bring to the benefit of society. These are not competing belief systems. They are compatible. But so many believers in religion are falsely following corrupted teachings, they veer into the domain of science blindly, thinking their beliefs are the trump cards. That is such a wrong and foolish view, because everyone with a fair and balanced mind knows the essence of spirituality is a subjective view-point about the world, one deals with one’s inner consciousness. And so there is no room in such a belief system for imposing one’s own beliefs onto others, and especially not imposing them on an entire domain of objective investigation like science. And, on the other hand, many scientists are irrationally anti-religious and go out of their way to try and show a “God” idea is not needed in philosophy. But in doing so they are also stepping outside their domain of expertise. If there is some kind of omnipotent creator of all things, It certainly could not be comprehended by finite minds. It is also probably not going to be amenable to empirical measurement and analysis. I do not know why so many scientists are so virulently anti-religious. Sure, I can understand why they oppose current religious institutions, we all should, they are mostly thoroughly corrupt. But the pure abstract idea of religion and ethics and spirituality is totally 100% compatible with a scientific worldview. Anyone who thinks otherwise is wrong! (Joke!)

Also, I do not favour inflationary theory for other reasons. There is no good theoretical justification for the inflaton field other than inflation’s prediction of the homogeneity and isotropy of the CMB. You’d like a good theory to have more than one trick! You know. Like how gravity explains both the orbits of planets and the way an apple falls to the Earth from a tree. With inflatons you have this quantum field that is theorised to exist for one and only one reason, to explain homogeneity and isotropy in the Big Bang. And don’t forget, the theory of inflation does not explain the reason the Big Bang happened, it does not explain its own existence. If the inflaton had observable consequences in other areas of physics I would be a lot more predisposed to taking it seriously. And to be fair, maybe the inflaton will show up in future experiments. Most fundamental particles and theoretical constructs began life as a one-trick sort of necessity. Most develop to be a touch more universal and will eventually arise in many aspects of physics. So I hope, for the sake of the fans of cosmic inflation, that the inflaton field does have other testable consequences in physics.

In case you think that is an unreasonable criticism, there are precedents for fundamental theories having a kind of mathematically built-in explanation. String theorists, for instance, often appeal to the internal consistency of string theory as a rationale for its claim as a fundamental theory of physics. I do not know if this really flies with mathematicians, but the string physicists seem convinced. In any case, to my knowledge the inflaton does not have this sort of quality, it is not a necessary ingredient for explaining observed phenomena in our universe. It does have a massive head start on being a candidate sole explanation for the isotropy and homogeneity of the CMB, but so far that race has not yet been completely run. (Or if it has then I am writing out of ignorance, but … you know … you can forgive me for that.)

Anyway, back to mathematics and education.

You have to love the eternal rediscovery built-in to mathematics. It is what makes mathematics eternally interesting to each generation of students. But as a teacher you have to train the nerdy children to not bother reading everything. Apart from the fact there is too much to read, they should be given the opportunity to read a little then investigate a lot, and try to deduce old results for themselves as if they were fresh seeds and buds on a plant. Giving students a chance to catch old water as if it were fresh dewdrops of rain is a beautiful thing. The mind that sees a problem afresh is blessed, even if the problem has been solved centuries ago. The new mind encountering the ancient problem is potentially rediscovering grains of truth in the cosmos, and is connecting spiritually to past and future intellectual civilisations. And for students of science, the theoretical studies offer exactly the same eternal rediscovery opportunities. Do not deny them a chance to rediscover theory in your science classes. Do not teach them theory. Teach them some theoretical underpinnings, but then let them explore before giving the game away.
With so much emphasis these days on educational accountability and standardised tests there is a danger of not giving children these opportunities to learn and discover things for themselves. I recently heard an Intelligence Squared debate on academic testing. One crazy woman from the UK government was arguing that testing, testing, and more testing — “relentless testing” were her words — was vital and necessary and provably increased student achievement.

Yes, practising tests will improve test scores, but it is not the only way to improve test scores. And relentless testing will improve student gains in all manner of mindless jobs out there in society that are drill-like and amount to going through routine work, like tests. But there is less evidence that relentless testing improves imagination and creativity.

Let’s face it though. Some jobs and areas of life require mindlessly repetitive tasks. Even computer programming has modes where for hours the normally creative programmer will be doing repetitive but possibly intellectually demanding chores. So we should not agitate and jump up and down wildly proclaiming tests and exams are evil. (I have done that in the past.)

Yet I am far more inclined towards the educational philosophy of the likes of Sir Ken Robinson, Neil Postman, and Alfie Kohn.

My current attitude towards tests and exams is the following:

Tests are incredibly useful for me with large class sizes (120+ students), because I get a good overview of how effective the course is for most students, as well as a good look at the tails. Here I am using the fact that test scores (for well designed tests) correlate well with student academic aptitudes.

My use of tests is mostly formative, not summative. Tests give me a valuable way of improving the course resources and learning styles.

Tests and exams suck as tools for assessing students because they do not assess everything there is to know about a student’s learning. Tests and exams correlate well with academic aptitudes, but not well with other soft skills.

Grading in general is a bad practice. Students know when they have done well or not. They do not need to be told. At schools if parents want to know they should learn to ask their children how school is going, and students should be trained to be honest, since life tends to work out better that way.

Relentless testing is deleterious to the less academically gifted students. There is a long tail in academic aptitude, and the students in this tail will often benefit from a kinder and more caring mode of learning. You do not have to be soft and woolly about this, it is a hard core educational psychology result: if you want the best for all students you need to treat them all as individuals. For some, tests are great, terrific! For others, tests and exams are positively harmful. You want to try and figure out who is who, at least if you are lucky enough to have small class sizes.

For large class sizes, like at a university, do still treat all students individually. You can easily do this by offering a buffet of learning resources and modes. Do not, whatever you do, provide a single-mode style of lecture+homework+exam course. That is ancient technology, medieval. You have the Internet, use it! Gather vast numbers of resources of all different manners of approach to your subject you are teaching, then do not teach it! Let your students find their own way through all the material. This will slow down a lot of students — the ones who have been indoctrinated and trained to do only what they are told — but if you persist and insist they navigate your course themselves then they should learn deeper as a result.

Solving the “do what I am told” problem is in fact the very first job of an educator in my opinion. (For a long time I suffered from lack of a good teacher in this regard myself. I wanted to please, so I did what I was told, it seemed simple enough. But … Oh crap, … the day I found out this was holding me back, I was furious. I was about 18 at the time. Still hopelessly naïve and ill-informed about real learning.) If you achieve nothing else with a student, transitioning them from being an unquestioning sponge (or oily duck — take your pick) to being self-motivated and self-directed in their learning is the most valuable lesson you can ever give them. So give them it.

So I use a lot of tests. But not for grading. For grading I rely more on student journal portfolios. All the weekly homework sets are quizzes though, so you could criticise the fact I still use these for grading. As a percentage though, the Journals are more heavily weighted (usually 40% of the course grade). There are some downsides to all this.

It is fairly well established in research that grading using journals or subjective criteria is prone to bias. So unless you anonymise student work, you have a bias you need to deal with somehow before handing out final grades.
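For what it is worth, the anonymising step is cheap to automate. Here is a minimal sketch (my own illustration, not any official tool; all names are made up) that maps student identifiers to pseudonymous codes via a salted hash, so the mapping is reproducible after grading but graders never see names:

```python
import hashlib

def anonymise(student_id: str, salt: str) -> str:
    """Map a student ID to a short pseudonymous grading code.

    The salt stays secret with the instructor, so the mapping is
    reproducible (for de-anonymising after grading) but not guessable.
    """
    digest = hashlib.sha256((salt + student_id).encode("utf-8")).hexdigest()
    return digest[:8]  # 8 hex characters is plenty for a class-sized roster

# Grade journals under the codes, then invert the mapping afterwards.
roster = ["alice@uni.example", "bob@uni.example", "carol@uni.example"]
salt = "keep-this-secret"
code_to_student = {anonymise(sid, salt): sid for sid in roster}
```

The point of the salted hash rather than random labels is that the same roster always produces the same codes, so you never need to store a lookup file that could leak.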

Grading weekly journals, even anonymously, takes a lot of time, about 15 to 20 times the hours that grading summative exams takes. So that’s a huge time commitment. So you have to use it wisely by giving very good quality early feedback to students on their journals.

I still haven’t found out how to test the methods easily. I would like to know quantitatively how much more effective journal portfolios are compared to exam based assessments. I am not a specialist education researcher, and I research and write a about a lot of other things, so this is taking me time to get around to answering.

I have not solved the grading problem, for now it is required by the university, so legally I have to assign grades. One subversive thing I am following up on is to refuse to submit singular grades. As a person with a physicist’s world-view I believe strongly in the role of sound measurement practice, and we all know a single letter grade is not a fair reflection of a student’s attainment. At a minimum a spread of grades should be given to each student, or better, a three-point summary, LQ, Median, UQ. Numerical scaled grades can then be converted into a fairer letter grade range. And GPA scores can also be given as a central measure and a spread measure.
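As a sketch of what I mean by a three-point summary, assuming you have a list of a student’s scaled assessment scores (the numbers here are purely illustrative):

```python
from statistics import quantiles

def grade_summary(scores):
    """Three-point summary (LQ, Median, UQ) of one student's assessment
    scores, instead of collapsing everything into a single letter grade."""
    lq, med, uq = quantiles(scores, n=4)  # the three quartile cut points
    return lq, med, uq

# A student with a wide spread: a single grade hides most of this.
scores = [55, 60, 70, 80, 90]
lq, med, uq = grade_summary(scores)
```

Each of the three numbers can then be mapped through the usual letter-grade scale, giving a grade range rather than a single point, which is a fairer measurement report.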

I can imagine many students will have a large to moderate assessment spread, and so it is important to give them this measure, one in a few hundred students might statistically get very low grades by pure chance, when their potential is a lot higher. I am currently looking into research on this.

OK, so in summary: even though institutions require a lot of tests you can go around the tests and still give students a fair grade while not sacrificing the true learning opportunities that come from the principle of eternal rediscovery. Eternal rediscovery is such an important idea that I want to write an academic paper about it and present at a few conferences to get people thinking about the idea. No one will disagree with it. Some may want to refine and adjust the ideas. Some may want concrete realizations and examples. The real question is, will they go away and truly inculcate it into their teaching practices?

Rovelli has me confused when he tries to explain the apparent low entropy Big Bang cosmology. He uses his own brand of relational quantum mechanics I think, but it comes out sounding a bit circular or anthropomorphic. Yet earlier in his lectures he often takes pains to deny anthropomorphic views.

So it is quite perplexing when he tries to explain our perception of an arrow of time by claiming that, “it is what makes us us.” Let me quote him, so you can see for yourself. He starts out by claiming the universe starts in a low entropy state only from our relative point of view. Entropy is an observer dependent concept. It depends on how you coarse grain your physics. OK, I buy that. We couple to the physical external fields in a particular way, and this is what determines how we perceive or coarse grain our slices of the universe. So how we couple to the universe supposedly explains the apparent entropy we perceive. If by some miracle we coupled more like antiparticles effectively travelling in the reverse time direction then we’d see entropy quite differently, one imagines. So anyway, Rovelli then summarizes:

[On slides: Entropy increase (passage of time) depend on the coarse graining, hence the subsystem, not the microstate of the world.] … “Those depend on the way we couple to the rest of the universe. Why do we couple to the rest of the universe in this way? Because if we didn’t couple to the rest of the universe this way we wouldn’t be us. Us as things, as biological entities that very much live in time coupled in a manner such that the past moves towards the future in a precise sense … which sense? … the one described by the Second Law of Thermodynamics.”

You see what I mean?
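The coarse-graining point itself is easy to make concrete. Here is a toy numerical illustration (my own construction, not Rovelli’s actual model): one fixed microscopic history, two different choices of macrostates, and the Boltzmann entropy S = log W runs opposite ways for the two choices.

```python
import math

# One deterministic microscopic history: the system visits states 0..5 in order.
trajectory = [0, 1, 2, 3, 4, 5]

# Two coarse-grainings (partitions of the 6 microstates into macrostates).
partition_A = [{0}, {1, 2}, {3, 4, 5}]  # entropy rises along the history
partition_B = [{5}, {3, 4}, {0, 1, 2}]  # entropy falls along the same history

def boltzmann_entropy(state, partition):
    """S = log W, where W is the size of the macrostate containing `state`."""
    (cell,) = [c for c in partition if state in c]
    return math.log(len(cell))

S_A = [boltzmann_entropy(s, partition_A) for s in trajectory]
S_B = [boltzmann_entropy(s, partition_B) for s in trajectory]
# S_A is non-decreasing and S_B is non-increasing: the "arrow" lives in the
# choice of macroscopic variables, not in the microscopic history itself.
```

So far so good: this much of Rovelli’s claim is uncontroversial. My complaint, as I explain below, is with what he builds on top of it.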

Maybe I am unfairly pulling this out of a rushed conference presentation, and to be more balanced and fair I should read his paper instead. If I have time I will. But I think a good idea deserves a clear presentation, not a rush job with a lot of vague wishy-washy babble, or obscuring in a blizzard of words and jargon.

OK, so here’s an abstract from an arxiv paper where Rovelli states things in written English:

” Phenomenological arrows of time can be traced to a past low-entropy state. Does this imply the universe was in an improbable state in the past? I suggest a different possibility: past low-entropy depends on the coarse-graining implicit in our definition of entropy. This, in turn depends on our physical coupling to the rest of the world. I conjecture that any generic motion of a sufficiently rich system satisfies the second law of thermodynamics, in either direction of time, for some choice of macroscopic observables. The low entropy of the past could then be due to the way we couple to the universe (a way needed for us doing what we do), hence to our natural macroscopic variables, rather than to a strange past microstate of the world at large.”

That’s a little more precise, but still no clearer on import. He is still really just giving an anthropocentric argument.

I’ve always thought science was at its best when removing the human from the picture. The problem for our universe should not be framed as one of “why do we see an arrow of time?” because, as Rovelli points out, for complex biological systems like ourselves there really is no other alternative. If we did not perceive an arrow of time we would be defined out of existence!

The problem for our universe should be simply, “why did our universe begin (from any arbitrary sentient observer’s point of view) with such low entropy?”

But even that version has the whiff of observer about it. Also, if you just define the “beginning” as the end that has the low entropy, then you are done, no debate. So I think there is a more crystalline version of what cosmology should be seeking an explanation for, which is simply, “how can any universe ever get started (from either end of a singularity) in a low entropy state?”

But even there you have a notion of time, which we should remove, since “start” is not a proper concept unless one already is talking about a universe. So the barest question of all perhaps, (at least the barest that I can summon) is, “how do physics universes come to exist?”

This does not even explicitly mention thermodynamics or an arrow of time. But within the question those concepts are embedded. One needs to carefully define “physics” and “physics universes”. But once that is done then you have a slightly better philosophy of physics project.

More hard core physicists however will never stoop to tackle such a question. They will tend to drift towards something where a universe is already posited to exist and has had a Big Bang, and then they will fret and worry about how it could have a low entropy singularity.

It is then tempting to take the cosmic Darwinist route. But although I love the idea, it is another one of those insidious memes that is so alluring but in the cold dead hours of night, when the vampires of popular physics come to devour your life blood seeking converts, seems totally unsatisfying and anaemic. The Many Worlds Interpretation has its fangs sunk into a similar vein, which I’ve written about before.

* * *

Going back to Rovelli’s project, I have this problem for him to ponder. What if there is no way for any life, not even in principle, to couple to the universe other than via the way we humans do, through interaction with strings (or whatever they are) via Hamiltonians and mass-energy? If this is true, and I suspect it is, then is not Rovelli’s “solution” to the low entropy Big Bang a bit meaningless?

I have a pithy way of summarising my critique of Rovelli. I would just point out:

The low entropy past is not caused by us. We are the consequence.

So I think it is a little weak for Rovelli to conjecture that the low entropy past is “due to the way we couple to the universe.” It’s like saying, “I conjecture that before death one has to be born.” Well, … duuuuhhh!

The reason my photo is no longer on Facebook is due to the way I coupled to my camera.

I am an X-gener due to the way my parents coupled to the universe.

You see what I’m getting at? I might be over-reaching into excessive sarcasm, but my point is just that none of this is good science. They are not explanations. It is just story-telling. Still, Rovelli does give an entertaining story if you are a physics geek.

So I had a read of Rovelli’s paper and saw the more precise statement of his conjecture:

Rovelli’s Conjecture: “Any generic microscopic motion of a sufficiently rich system satisfies the second law (in either time direction) for a suitable choice of macroscopic observables.”

That’s the sort of conjecture that says nothing. The problem is the “sufficiently rich” clause together with the “suitable choice” clause. You can generate screeds of conjectures with such a pair of clauses. The conjecture only has “teeth” if you define what you mean by “sufficiently rich” and if a “suitable choice” can be identified or motivated as plausible. Because otherwise you are not saying anything useful. For example, “Any sufficiently large molecule will be heavier than a suitably chosen bowling ball.”

* * *

Rovelli does provide a toy example to illustrate his notions in classical mechanics. He has yellow balls and red balls. The yellow balls have an attractor which gives them a natural second law of thermodynamic arrow of time. The same box also has red balls with a different attractor which gives them the opposite arrow of time according to the second law. (Watching the conference video for this is better than reading the arxiv paper.) But “so what?”

Rovelli has constructed a toy universe that has entities that would experience opposite time directions if they were conscious. But there are so many things wrong with this example it cannot be seriously considered as a bulwark for Rovelli’s grander project. For starters, what is the nature of his Red and Yellow attractors? If they are going to act complicated enough to imbue the toy universe with anything resembling conscious life then the question of how the arrow of time arises is not answered, it just gets pushed back to the properties of these mysterious Yellow and Red attractors.

And if you have only such a toy universe without any noticeable observers then what is the point of discussing an arrow of time? It is only a concept that a mind external to that world can contemplate. So I do not see the relevance of Rovelli’s toy model for our much more complicated universe which has internal minds that perceive time.

You could say, in principle the toy model tells us there could be conscious observers in our universe who are experiencing life but in the reverse time direction to ourselves, they remember our future but not our past, we remember their future but not their past. Such dual time life forms would find it incredibly hard to communicate, due to this opposite wiring of memory.

But I would argue that Rovelli’s model does not motivate such a possibility, for the same reason as before. Constructing explicit models of different categories of billiard balls each obeying a second law of thermodynamics in opposite time directions in the same system is one thing, but not much can be inferred from this unless you add in a whole lot of further assumptions about what Life is, metabolism, self-replication, and all that. But if you do this the toy model becomes a lot less toy-like and in fact terribly hard to explicitly construct. Maybe Stephen Wolfram’s cellular automata can do the trick? But I doubt it.

I should stop harping on this. Let me just record my profound dissatisfaction with Rovelli’s attempt to demystify the arrow of time.

* * *

If you ask me, we are not at a sufficiently mature juncture in the history of cosmology and physics to be able to provide a suitable explanation for the arrow of time.

So I have Smith’s Conjecture:

“At any sufficiently advanced juncture in the history of science, enough knowledge will have accumulated to enable physicists to provide a suitable explanation for the arrow of time.”

Facetiousness aside, I really do think that trying to explain the low entropy big bang is a bit premature. It would be much better to be patient and wait for more information about our universe before attempting to launch into the arrow of time project. The reason I believe so is because I think the ultimate answers about such cosmological questions are external to our observable universe.

But even whether they are external or internal there is a wider problem to do with the nature of time and our universe. We do not know if our universe actually had a beginning, a true genesis, or whether it has always existed.

If the universe had a beginning then the arrow of time problem is the usual low entropy puzzle. But if the universe had no beginning then the arrow of time problem becomes a totally different question. There is even a kind of intermediate problem that occurs if our universe had a start but within some sort of wider meta-cosmos. Then the problem is much harder, that of figuring out the laws of this putative metaverse. Imagine the hair-pulling of cosmologists who discover this latter possibility as a fact about their universe (but I would envy them the sheer ability to discover the fact, it’d be amazing).

So until we know such a fundamental question I do not see a lot of fruitfulness in pursuing the arrow of time puzzle. It’s a counting your chickens before they hatch situation. Or should I say, counting your microstates before they batch.

Aside: While searching for a nice picture to illuminate this post I came across a nice freehand SVG sketch of Shaun Maguire’s. He’s a postdoc at Caltech and writes nicely in a blog there: Quantum Frontiers. If you are more a physics/math geek than a philosophy/physics geek then you will enjoy his blog. I found it very readable, not stunning poetic prose, but easy-going and sufficiently high on technical content to hold my interest.

That has to do with black hole firewalls, which digresses away from Wald’s talk.

It is not true to say Wald’s talk is plain and simple, since the topic is advanced, only a second course on general relativity would cover the details. And you need to get through a lot of mathematical physics in a first course of general relativity. But what I mean is that Wald is such a knowledgeable and clear thinker that he explains everything crisply and understandably, like a classic old-school teacher would. It is not flashy, but damn! It is tremendously satisfying and enjoyable to listen to. I could hit the pause button and read his slides then rewind and listen to his explanation and it just goes together so sweetly. He neither repeats his slides verbatim, nor deviates from them confusingly. However, I think if I were in the audience I would be begging for a few pauses of silence to read the slides. So the advantage is definitely with the at-home Internet viewer.

Now if you are still reading this post you should be ashamed! Why did you not go and download the talk and watch it?

I loved Wald’s lucid discussion of the Generalised Second Law (which is basically a redefinition of entropy: generalised entropy should be the sum of ordinary thermodynamic entropy plus black hole entropy, i.e. black hole surface area.)
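In symbols, the generalised entropy is the standard Bekenstein–Hawking combination,

```latex
S_{\mathrm{gen}} \;=\; S_{\mathrm{outside}} \;+\; \frac{k_B\, c^3 A}{4 G \hbar},
\qquad \frac{\mathrm{d}S_{\mathrm{gen}}}{\mathrm{d}t} \;\ge\; 0,
```

where $A$ is the total horizon area and $S_{\mathrm{outside}}$ is the ordinary thermodynamic entropy of matter outside the horizon; the Generalised Second Law is the statement that this sum never decreases.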

Then he gives a few clear arguments that provide strong reasons for regarding the black hole area formula as equivalent to an entropy, one of which is that in general relativity dynamic instability is equivalent to thermodynamic instability, hence the link between the dynamic process of black hole area increase is directly connected to black hole entropy. (This is in classical general relativity.)

But then he puts the case that the origin of black hole entropy is not perfectly clear, because black hole entropy does not arise out of the usual ergodicity in statistical mechanics systems, whereby a system in an initial special state relaxes via statistical processes towards thermal equilibrium. Black holes are non-ergodic. They are fairly simple beasts that evolve deterministically. “The entropy for a black hole arises because it has a future horizon but no past horizon,” is how Wald explains it. In other words, black holes do not really “equilibrate” like classical statistical mechanics gases. Or at least, they do not equilibrate to a thermal temperature ergodically like a gas, they equilibrate dynamically and deterministically.

Wald’s take on this is that, maybe, in a quantum gravity theory, the detailed microscopic features of gravity (foamy spacetime?) will imply some kind of ergodic process underlying the dynamical evolution of black holes, which will then heal the analogy with statistical mechanics gas entropy.

This is a bit mysterious to me. I get the idea, but I do not see why it is a problem. Entropy arises in statistical mechanics, but you do not need statistically ergodic processes to define entropy. So I did not see why Wald is worried about the different equilibration processes viz. black holes versus classical gases. They are just different ways of defining an entropy and a Second Law, and it seems quite natural to me that they therefore might arise from qualitatively different processes.

But hold onto your hats. Wald next throws me a real curve ball.

Smaller than the Planck Scale … What?

Wald’s next concern about a breakdown of the analogy between statistical gas entropy and dynamic black hole entropy is a doozie. He worries about the fact the vacuum fluctuations in a conventional quantum field theory are basically ignored in statistical mechanics, yet they cannot (or should not?) be ignored in general relativity, since, for instance, the ultra-ultra-high energy vacuum fluctuations in the early universe get red-shifted by the expansion of the universe into observable features we can now measure.

Wald is talking here about fluctuations on a scale smaller than the Planck length!

To someone with my limited education you begin by thinking, “Oh, that’s ok, we all know (one says knowingly not really knowing) that stuff beyond the Planck scale is not very clearly defined and has this sort of ‘all bets are off’ quality about it. So we do not need to worry about it yet until there is a theory covering the Planck scale.”

But if I understand it correctly, what Wald is saying is that what we see in the cosmic background radiation, or maybe in some other observations (Wald is not clear on this), corresponds to such red shifted modes, so we literally might be seeing fluctuations that originated on a scale smaller than the Planck length if we probe the cosmic background radiation to highly ultra-red shifted wavelengths.
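The back-of-envelope arithmetic makes the claim at least plausible. A minimal sketch, assuming a round figure of 70 e-folds of inflation (the numbers here are purely illustrative and model-dependent, not anything Wald states):

```python
import math

# Back-of-envelope: how far does a Planck-length wavelength get
# stretched by inflation alone?
l_planck = 1.616e-35   # Planck length in metres
N_efolds = 70          # a typical figure quoted for inflation

stretch = math.exp(N_efolds)        # expansion factor during inflation
final_wavelength = l_planck * stretch
# A mode that started at (or below) the Planck length ends up at a
# macroscopic scale after inflation alone, before ordinary Hubble
# expansion stretches it further. This is the sense in which observed
# modes might "remember" trans-Planckian origins.
```

So a mode starting at the Planck length comes out of inflation at tens of micrometres in this crude estimate, which at least shows why trans-Planckian modes cannot simply be waved away as unobservable.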

That was a bit of an eye-opener for me. I was previously not aware of any physics that potentially probed beyond the Planck scale. I wonder if anyone else found this surprising? Maybe if I updated my physics education I’d find out that it is not so surprising.

In any case, Wald does not discuss this, since his point is about the black hole case where at the black hole horizon a similar shifting of modes occurs with ultra-high energy vacuum fluctuations near the horizon getting red shifted far from the black hole into “real” observable degrees of freedom.

Wald talks about this as a kind of “creation of new degrees of freedom”. And of course this does not occur in statistical gas mechanics where there are a fixed number of degrees of freedom, so again the analogy he wants between black hole thermodynamics and classical statistical mechanics seems to break down.

There is some cool questioning going on here though. The main problem with the vacuum fluctuations Wald points out is that one does not know how to count the states in the vacuum. So the implicit idea there, which Wald does not mention, is that maybe there is a way to count states of the vacuum, which might then heal the thermodynamics analogy Wald is pursuing. My own (highly philosophical, and therefore probably madly wrong) speculation would be that quantum field theory is only an effective theory, and that a more fundamental theory of physics with spacetime as the only real field and particle physics states counted in a background-free theory kind of way, might, might yield some way of calculating vacuum states.

Certainly, I would imagine that if field theory is not the ultimate theory, then the whole idea of vacuum field fluctuations gets called into suspicion. The whole notion of a zero-point background field vacuum energy becomes pretty dubious altogether if you no longer have a field theory as the fundamental framework for physics. But of course I am just barking into the wind hoping to see a beautiful background-free framework for physics.

Like the previous conundrum of ergodicity and equilibration, I do not see why this degree of freedom issue is a big problem. It is a qualitative difference which breaks the strong analogy, but so what? Why is that a pressing problem? Black holes are black holes, gases are gases, and they ought to be qualitatively distinct in their respective thermodynamics. The fact that there is a strong analogy, revealed by Bekenstein, Hawking, Carter, and others, is beautiful and does reveal general universality properties, but I do not see it as an area of physics where a complete unification is either necessary or desired.

What I do think would be awesome, and super-interesting, would be to understand the universality better. This would be to ask further (firstly) why there is a strong analogy, and (secondly) explain why and how it breaks down.

* * *

This post was interrupted by an apartment moving operation, so I ran out of steam on my stream of consciousness and will wrap it up here.

After spending a week debating with myself about various Many Worlds philosophy issues and other quantum cosmology questions, today I saw Joel Primack’s presentation at the Philosophy of Cosmology International Conference, on the topic of Cosmological Structure Formation. And so for a change I was speechless.

Thus I doubt I can write much that illumines Primack’s talk better than if I tell you just to go and watch it.

He, and colleagues, have run supercomputer simulations of gravitating dark matter in our universe. From their public website Bolshoi Cosmological Simulations they note: “The simulations took 6 million cpu hours to run on the Pleiades supercomputer — recently ranked as seventh fastest of the world’s top 500 supercomputers — at NASA Ames Research Center.”

MD4 Gas density distribution of the most massive galaxy cluster (cluster 001) in a high resolution resimulation, x-y-projection. (Kristin Riebe, from the Bolshoi Cosmological Simulations.)

The filamentous structure formation is awesome to behold. At times the structures look like living cells in the movies that Primack has produced, although the time steps in his simulations are probably about 1 million years each. For example, one simulation is called the Bolshoi-Planck Cosmological Simulation — Merger Tree of a Large Halo. If I am reading the page correctly, the unit they say they resolve is “10^10 M⊙ halos”. Astronomers use the symbol M⊙ for one solar mass (our Sun’s mass), so a 10^10 M⊙ halo would be a clump of dark matter with the mass of about ten billion Suns, which is roughly dwarf-galaxy scale rather than anything Sun-sized. This is dark matter they are visualizing, so the stars and planets we can see just get completely obscured in these simulations (since the star-like matter is less than a few percent of the mass).
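Since that halo unit confused me, here is a back-of-envelope sanity check (my own hedged sketch, not from Primack: the solar mass value is the standard one, and the rough Milky Way mass of about 10^12 M⊙ including its dark matter halo is an assumption for the comparison):

```python
# Back-of-envelope scale check for a 10^10 solar-mass halo
M_SUN_KG = 1.989e30              # one solar mass in kg
halo_mass_kg = 1e10 * M_SUN_KG   # mass of one resolved halo

# Compare to the Milky Way (~1e12 solar masses including dark matter)
fraction_of_milky_way = 1e10 / 1e12

print(f"halo mass ~ {halo_mass_kg:.2e} kg")          # ~2e40 kg
print(f"about {fraction_of_milky_way:.0%} of a Milky Way")
```

So a single resolution element is already a respectable chunk of a galaxy, nothing like a solar system.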

True to my word, that’s all I will write for now about this piece of beauty. I need to get my speech back.

* * *

Oh, but I do just want to hasten to say the image above I pasted in there is NOTHING compared to the movies of the simulations. You gotta watch the Bolshoi Cosmology movies to see the beauty!

OK, last post I was a bit hasty saying Simon Saunders undermined Max Tegmark. Saunders eventually finds his way to recover a theory of probability from his favoured Many Worlds Interpretation. But I do think he over-analyses the theory of probability. Maybe he is under-analysing it too in some ways.

What the head-scratchers seem to want is a Unified Theory of Probability: something that captures what we intuitively know probability is, but which we cannot mathematically formalise in a way that deals with all of reality. Well, I think this is a bit of a chimera. Sure, I’d like a unified theory too. But sometimes you have to admit reality, even abstract mathematical Platonic reality, does not always present us with a unified framework for everything we can intuit.

What’s more, I think probability theorists have come pretty close to a unified framework for probability. It might seem patchwork, it might merge frequentist ideas with Bayesian ideas, but if you require consistency across domains and apply the patchwork so that the pieces agree on their overlaps, then I suspect (I cannot be sure) that probability theory as experts understand it today is fairly comprehensive. Arguing that frequentism should always work is a bit like arguing that Archimedean calculus should always work. Pointing out deficiencies in Bayesian probability does not mean there is no overarching framework for probability, since where Bayesianism does not work, probably frequentism, or some other combinatorics, will.
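The claim that the patchwork agrees on its overlaps can be illustrated with a toy sketch (mine, not from any of the talks; the coin bias of 0.7 and the uniform prior are assumptions): for a simple biased coin, the frequentist relative-frequency estimate and the Bayesian posterior mean converge to the same answer as data accumulates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 10,000 flips of a coin with assumed bias 0.7
flips = rng.random(10_000) < 0.7
heads, n = int(flips.sum()), flips.size

freq_est = heads / n                  # frequentist: relative frequency
bayes_est = (heads + 1) / (n + 2)     # Bayesian: Beta(1,1) prior, posterior mean

print(freq_est, bayes_est)            # both close to 0.7
```

The two frameworks disagree about foundations, but on this shared domain the numbers differ only at the fourth decimal place, which is the sort of overlap agreement I mean.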

Suppose you even have to deal with a space of transfinite cardinality and there is ignorance about where you are, then I think in the future someone will come up with measures on infinite spaces of various cardinality. They might end up with something that is a bit trivial (all probabilities become 0 or 1 for transfinite measures, perhaps?), but I think someone will do it. All I’m saying is that it is way too early in the history of mathematics to say we need to throw up our hands and appeal to physics and Many Worlds.

* * *

That was a long intro. I really meant to kick off this post with a few remarks about Max Tegmark’s second lecture at the Oxford conference series on Cosmology and Quantum Foundations. He claims to be a physicist, but puts on a philosopher’s hat when he claims, “I am only my atoms”. Meaning he believes consciousness arises or emerges merely from some “super-complex processes” in brains.

I like Max Tegmark, he seems like a genuinely nice guy, and is super smart. But here he is plain stupid. (I’m hyperbolising naturally, but I still think it’s dopey what he believes.)

It is one thing to say your totality is your atoms, but quite another to take consciousness seriously as a phenomenon and claim it is just physics. Especially, I think, if your interpretation of quantum reality is the MWI. Why is that? Because MWI has no subjectivity. But if you are honest, or if you have thought seriously about consciousness at all and what the human mind is capable of, then without being arrogant or anthropocentric you have to admit that whatever consciousness is (and let me say I do not know what it is), it is an intrinsically subjective phenomenon.

You can find philosophers who deny this, but most of them are just denying the subjectiveness of consciousness in order to support their pet theory of consciousness (which is often grounded in physics). So those folks have very little credibility. I am not saying consciousness cannot be explained by physics. All I am saying is that if consciousness is explained by physics then our notion of physics needs to expand to include subjective phenomena. No known theories of physics have such ingredients.

It is not like you need a Secret Sauce to explain consciousness. But whatever it is that explains consciousness, it will have subjective sauce in it.

OK, I know I can come up with a MWI rebuttal. In a MWI ontology all consistent realities exist due to Everettian branching. So I get behaviour that is arbitrarily complex in some universes. In those universes am I not bound to feel conscious? In other branches of the Everett multiverse I (not me actually, but my doppelgänger, one who branched from a former “me”) do too many dumb things to be considered consciously sentient in the end, even though up to a point they seemed pretty intelligent.

The problem with this sort of “anything goes”, so that in some universe consciousness will arise, is that it is naïve or ignorant. It commits the category error of assuming behaviour equates to inner subjective states. Well, that’s wrong. Maybe in some universes behaviour maps perfectly onto subjective states, and so there is no way to prove the independent reality of subjective phenomena. But even that is no argument against the irreducibility of consciousness. Because any conscious agent who knows of (at least) their own subjective reality will know that their universe’s branch is either not fully explained by physics, or that physics must admit some sort of subjective phenomenon into its ontology.

Future philosophers might describe it as merely a matter of taste, one of definitions. But for me, I like to keep my physics objective. Ergo, for me, consciousness (at least the sort I know I have, I cannot speak for you or Max Tegmark) is subjective, at least in some aspects. It sure manifests in objective physics thanks to my brain and senses, but there is something irreducibly subjective about my sort of consciousness. And that is something objectively real physics cannot fully explain.

What irks me most though, are folks like Tegmark who claim folks like me are arrogant in thinking we have some kind of secret sauce (by this presumably he means a “soul” or “spirit” that guides conscious thought). I think quite the converse. It is arrogant to think you can get consciousness explained by conventional physics and objective processes in brains. Height of physicalist arrogance really.

For sure, there are people who take the view human beings are special in some way, and a lot of such sentiments arise from religiosity.

But people like me come to the view that consciousness is not special, but it is irreducibly subjective. We come to this believing in science. But we also come without prejudices. So, in my humble view, if consciousness involves only physics you can say it must be some kind of special physics. That’s not human arrogance. Rather, it is an honest assessment of our personal knowledge about consciousness and more importantly about what consciousness allows us to do.

To be even more stark. When folks like Tegmark wave their hands and claim consciousness is probably just some “super complex brain process”, then I think it is fair to say that they are the ones using implicit secret sauce. Their secret sauce is of the garden variety, atoms and molecules, of course. You can say, “well, we are ignorant and so we cannot know how consciousness can be explained using just physics”. And that’s true. But (a) it does not avoid the problem of subjectivity, and (b) you can be just as ignorant about whether physics is all there is to reality. Over the years I have developed a sense that it is far more arrogant to think physical reality is the only reality. I’ve tried to figure out how sentient subjective consciousness, and mathematical insight, and ideal Platonic forms in my mind can be explained by pure physics. I am still ignorant. But I do strongly postulate that there has to be some element of subjective reality involved in at least my form of consciousness. I say that in all sincerity and humility. And I claim it is a lot more humble than the position of philosophers who echo Tegmark’s view on human arrogance.

Thing is, you can argue no one understands consciousness, so no one can be certain what it is, but we can be fairly certain about what it isn’t. What it is not is a purely objectively specifiable process.

A philosophical materialist can then argue that consciousness is an illusion, a story the brain replays to itself. I’ve heard such ideas a lot, and they seem to be very popular at present even though Daniel Dennett and others wrote about them more than 20 years ago. And the roots of the meme “consciousness is an illusion” are probably centuries older than that, which you can confirm if you scour the literature.

The problem is you can then clearly discern a difference in definitions. The “consciousness is an illusion” folks use quite a different definition of consciousness compared to more ontologically open-minded philosophers.

* * *

On to other topics …

* * *

Is Decoherence Faster than Light? (… yep, probably)

There is a great sequence in Max Tegmark’s talk where he explains why decoherence of superpositions and entanglement is just about “the fastest process in nature!” He presents an illustration with a sugar cube dissolving in a cup of coffee. The characteristic times for the relevant physical processes go as follows,

Fluctuations — changes in correlations between clusters of molecules.

Dissipation — time for about half the energy added by the sugar to be turned into heat. Scales by roughly the number of molecules in the sugar, so it takes on the order of N collisions on average.

Dynamics — changes in energy.

Information — changes in entropy.

Decoherence — takes only one collision. So about 10^25 times faster than dissipation.

(I’m just repeating this with no independent checks, but this seems about right.)
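As a rough order-of-magnitude check on that factor (my own arithmetic, not Tegmark’s: I am assuming dissipation needs on the order of one collision per molecule, and a roughly 200 g cup of coffee that is mostly water):

```python
# If decoherence takes ~1 collision and dissipation ~N collisions,
# the speed ratio is just N, the number of molecules in the coffee.
AVOGADRO = 6.022e23
coffee_grams = 200.0          # assumed mass of a cup of coffee
water_molar_mass = 18.0       # g/mol for water

N = coffee_grams / water_molar_mass * AVOGADRO
print(f"N ~ {N:.1e} molecules")   # ~7e24, i.e. on the order of 10^25
```

That lands within a factor of a few of Tegmark’s 10^25, which is as good as order-of-magnitude arguments get.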

This also gives a nice characterisation of classical versus quantum regimes:

Mostly Classical — when τdeco ≪ τdyn ≤ τdiss.

Mostly Quantum — when τdyn≪τdeco, τdiss.

See if you can figure out why this is a good characterisation of regimes?

Here’s a screenshot of Tegmark’s characterisations:

The explanation is that in a quantum regime you have entanglement and superposition, uncertainty is high, and dynamics evolves without any change in information, hence also with essentially no dissipation. Classically you get a disturbance in the quantum and all coherence is lost almost instantaneously. And yes, it goes faster than light, because with decoherence nothing physical is “going”; it is not a process. Rather, decoherence refers to a state of possible knowledge, and that can change instantaneously without any signal transfer, at least according to some interpretations like MWI or Copenhagen.

I should say that in some models decoherence is a physically mediated process, and in such theories it would take a finite time, but it is still fast. Such environmental decoherence is a feature of gravitational collapse theories for example. Also, the ER=EPR mechanism of entanglement would have decoherence mediated by wormhole destruction, which is probably something that can appear to happen instantaneously from the point of view of certain observers. But the actual snapping of a wormhole bridge is not a faster than light process.

I also liked Tegmark’s remark that,

“We realise the reason that big things tend to look classical isn’t because they are big, it’s just because big things tend to be harder to isolate.”

* * *

And in case you got the wrong impression earlier, I really do like Tegmark. In his sugar cube in coffee example his faint Swedish accent gives way for a second to a Feynmanesque “cawffee”. It’s funny. Until you hear it you don’t realise that very few physicists actually have a Feynman accent. It’s cool Tegmark has a little bit of it, and maybe not surprising as he often cites Feynman as one of his heroes (ah, yeah, what physicist wouldn’t? Well, actually I do know a couple who think Feynman was a terrible influence on physics teaching, believe it or not! They mean well, but are misguided of course! ☻).

* * *

The Mind’s Role Play

Next up: Tegmark’s take on explaining the low entropy of our early universe. This is good stuff.

Background: Penrose and Carroll have critiqued Inflationary Big Bang cosmology for not providing an account of why there is an arrow of time, i.e., why the universe started in an extremely low entropy state.

(I have not seen Carroll’s talk, but I think it is on my playlist. So maybe I’ll write about it later.) But I am familiar with Penrose’s ideas. Penrose takes a fairly conservative position. He takes the Second Law of Thermodynamics seriously. He cannot see how even the Weyl Curvature Hypothesis explains the low entropy Big Bang. (I think the WCH is just a description, not an explanation.)

Penrose does have a few ideas about how to explain things with his Conformal Cyclic Cosmology. I find them hugely appealing. But I will not discuss them here. Just go read his book.

What I want to write about here is Tegmark and his Subject-Object-Environment troika. In particular, why does he need to bring the mind and observation into the picture? I think he could give his talk and get across all the essentials without mentioning the mind.

But here is my problem. I just do not quite understand how Tegmark goes from the correct position on entropy, which is that it is a coarse-graining concept, to his observer-measurement dependence. I must be missing something in his chain of reasoning.

So first: entropy is classically a measure of the multiplicity of a system, i.e., how many microstates in an ensemble are compatible with a given macroscopic state. And there is a suitable generalisation to quantum physics given by von Neumann.

If you fine grain enough then most possible states of the universe are unique, and so entropy measured on such scales is extremely low. Basically, you only pick up contributions from degenerate states. Classically this entropy never really changes, because classically an observer is irrelevant. Now, substitute for “observer” the more general “any process that results in decoherence”. Then you get a reason why quantum mechanically entropy can decrease. To wit: in a superposition there are many states compatible with prior history. When a measurement is made (for “measurement” read “any process resulting in decoherence”) then entropy naturally will decrease on average (except for perhaps some unusual, highly atypical cases).
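The entropy drop under measurement can be made concrete with the von Neumann entropy S = −Tr(ρ ln ρ) that I mentioned above. A minimal numerical sketch (mine, for illustration): a maximally mixed qubit, the caricature of “many states compatible with prior history”, has entropy ln 2, while the pure state left after a measurement picks one outcome has entropy zero.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S = -Tr(rho ln rho), computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerically-zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

rho_mixed = np.eye(2) / 2                 # before measurement: maximally mixed
rho_pure = np.diag([1.0, 0.0])            # after measurement: one definite outcome

print(von_neumann_entropy(rho_mixed))     # ln 2 ≈ 0.693
print(von_neumann_entropy(rho_pure))      # 0.0
```

So on this toy accounting, measurement takes the system from S = ln 2 down to S = 0, exactly the on-average decrease being described.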

Here’s what I am missing. All that I just said previously is local. Whereas, for the universe as a whole, globally, what is decoherence? It is not defined, and so what is global entropy then? There is no “observer” (read: “measurement process”) that collapses or decoheres our whole universe. At least none we know of. So it all seems nonsense to talk about entropy on a cosmological scale.

To me, perhaps terribly naïvely, there is a meaning for entropy within a universe in localised sub-systems where observations can in principle be made on the system. “Counting states” to put it crudely. But for the universe (or Multiverse if you prefer) taken as a whole, what meaning is there to the concept of entropy? I would submit there is no meaning to entropy globally. The Second Law triumphs right? I mean, for a closed isolated system you cannot collapse states and get decoherence, at least not from without, so it just evolves unitarily with constant entropy as far as external observers can tell, or if you coarse grain into ensembles then the Second Law emerges, on average, even for unitary time evolution.

Perhaps what Tegmark was on about was that if you have external observer disruptions then entropy reduces (you get information about the state). But does this not just increase entropy globally, since the observer’s system is now entangled with the previously closed and isolated system? But who ever bothers to compute this global entropy? My guess is it would obey the Second Law. I have no proof, just my guess.

Of course, with such thoughts in my head it was hard to focus on what Tegmark was really saying, but in the end his lecture seems fairly simple. Inflation introduces decoherence and hence lowers quantum mechanical entropy. So if you do not worry about classical entropy, and just focus on the quantum states, then apparently inflationary cosmology can “explain” the low entropy Big Bang.

Only, if you ask me, this is no explanation. It is just “yet another” push-back. Because inflationary cosmology is incomplete, it does not deal with the pre-inflationary universe. In other words, the pre-inflationary universe has to also have some entropy if you are going to be consistent in taking Tegmark’s side. So however much inflation reduces entropy, you still have the initial pre-inflationary entropy to account for, which now becomes the new “ultimate source” of our arrow of time. Maybe it has helped to push the unexplained entropy a lot higher? But then you get into the realm of, “what is ‘low’ entropy in cosmological terms?” What does it mean to say the unexplained pre-inflationary entropy is high enough to not worry about? I dunno’. Maybe Tegmark is right? Maybe pre-inflation entropy (disorder) is so high by some sort of objectively observer independent measure (is that possible?) that you literally no longer have to fret about the origin of the arrow of time? Maybe inflation just wipes out all disorder and gives us a proverbial blank slate?

But then I do fret about it. Doesn’t Penrose come in at this point and give baby Tegmark a lesson in what inflation can and cannot do to entropy? Good gosh! It’s just about enough confusion to drive one towards the cosmological anthropic principle out of desperation for closure.

So despite Tegmark’s entertaining and informative lecture, I still don’t think anyone other than Penrose has ever given a no-push-back argument for the arrow of time. I guess I’ll have to watch Tegmark’s talk again, or read a paper on it for greater clarity and brevity.

Continuing my ad hoc review of Cosmology and Quantum Foundations, I come to Max Tegmark and Simon Saunders, who were the two main champions of Many Worlds Interpretations present at this conference. But before discussing ideas arising from their talks, I want to mention an addendum to the Hidden Variables and de Broglie-Bohm pilot wave theory that I totally coincidentally came across the night after writing the previous post (“Gaddamit! Where’d You Put My Variables”).

Fluid Dynamics and Oil Droplets Model de Broglie-Bohm Pilot Waves

Oil droplets surfing ripples on a fluid surface exhibit two-slit interference. Actually not! They follow chaotic trajectories that reproduce interference patterns only statistically, but there is no superposition at all for the oil droplet, only for the wave ripples. Remarkably similar qualitatively to de Broglie-Bohm pilot wave theory.

You delicately place oil droplets on an immiscible fluid surface (water I suppose) and the droplets bounce around creating waves in the fluid surface. Then, lo and behold! Send an oil droplet through a double slit barrier and it goes through one slit right! Shocking! But then hold on to your skull … after traversing the slit the oil droplet then chaotically meanders around surfing on the wave ripples spreading out from the double slit that the oil droplet was actually responsible for generating before it got to the slits.

Do this for many oil droplets and you will see the famous statistical build-up of an interference pattern at a distant screen, but here with classical oil droplets that can be observed to smithereens without destroying the superposition of the fluid waves, so you get purely classical double-slit interference. Just like the de Broglie-Bohm pilot wave theory predicts for the Bohmian mechanics view of quantum mechanics. I say “just like” because clearly this is macroscopic in scale and the mechanism of pilot waves is totally different from the quantum regime. Nonetheless, it is a clear condensed matter physics model for pilot wave Bohmian quantum mechanics.
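The statistical build-up is easy to caricature in code. This is emphatically not the fluid dynamics, just a toy model of my own: each “droplet” is a classical particle that goes through one slit, but its landing position is drawn from a two-slit wave intensity (the wavenumber, slit separation, and envelope are all made-up numbers). Individually every droplet has a definite path; the fringes exist only in the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Screen coordinates and an assumed two-slit intensity pattern:
# cos^2 fringes modulated by a single-slit sinc^2 envelope.
x = np.linspace(-10, 10, 401)
k, d = 2.0, 1.5                              # assumed wavenumber, slit separation
intensity = np.cos(k * d * x / 2) ** 2 * np.sinc(x / 4) ** 2
p = intensity / intensity.sum()              # normalise to a probability

# Each droplet lands at one definite spot, sampled from the wave's pattern
hits = rng.choice(x, size=20_000, p=p)
counts, edges = np.histogram(hits, bins=40)  # the ensemble shows the fringes
```

One droplet tells you nothing; twenty thousand of them draw the fringes, which is the whole pilot-wave punchline.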

(There is a recent-decades trend in condensed matter physics where phenomena qualitatively similar to quantum mechanics or black hole phenomenology, or even string theory, can be modelled in solid state or condensed matter systems. It’s a fascinating thing. No one really has an explanation for such quasi-universality in physics. I guess, when different systems of underlying equations give similar asymptotic behaviour, then you have a chance of observing such universality in disparate and seemingly unrelated physical systems. One example Susskind mentions in his Theoretical Minimum lectures is the condensed matter systems that model Majorana fermions. It’s just brilliantly fascinating stuff. I was going to write a separate article about this. Maybe later. I’ll just mention that although such condensed matter models have to be taken with a grain of salt, to whatever extent they can recapitulate the physics of quantum systems you have this tantalising possibility of being able to construct low energy desktop experiments that might, might, be able to explore extreme physics such as superstring regimes and black hole phenomenology, only with safe and relatively affordable experiments. I’m no futurist, but as protein biology promises to be the biology of the 21st century, maybe condensed matter physics is poised to take over from particle accelerators as the main physics laboratory of the 21st century? It’d be kinda’ cool wouldn’t it?)

The oil droplet experiments are not a perfect model for Bohmian mechanics since these pilot waves do not carry other quantum degrees of freedom like spin or charge.

Normally I would scoff at this and say, “nice, but so what?” Physics, and science in general, is rife with examples of disparate systems that display similarity or universality. It does not mean the fundamental physics is the same. And in the oil droplet pilot wave experiments we clearly have a hell of a lot of quantum mechanics phenomenology absent.

But I did not scoff at this one.

The awesome thing about this oil droplet interference experiment is that there is a clear mechanism that could recapitulate a lot of the same phenomenology at the Planck scale, and hence it offers an intriguing and tantalising alternative explanation for quantum mechanics as an effective theory that emerges from a more fundamental theory of Planck scale spacetime dynamics (geometrodynamics, to borrow the terminology of Wheeler and Misner). Hell, I will not even mention “quantum gravity”, since that’d take me too far afield, but dropping that phrase in here is entirely appropriate.

The clear Planck scale phenomenology I am speaking of is the model of spacetime as a superfluid. It will support non-dissipative pilot waves, which are therefore nothing less than subatomic gravitational waves of a sort. Given the weakness of gravity you can imagine how fragile are the superpositions of these spacetime or gravitational pilot waves. Not hard to destroy coherent states.

Then, of course, we already have the emerging theory of ER=EPR which explains entanglement using a type of geometrodynamics. If you start to package together everything that you can get out of geometrodynamics then you begin to see a jigsaw puzzle filling in, hinting that maybe the whole gamut of quantum physics phenomenology at the Planck scale can be adequately explained using spacetime geometry and topology.

One big gap in geometrodynamics is the phenomenology of particle physics. Gauge symmetries, charges, and the rest. It will take a brave and fortified physicist to tackle all these problems. If you read my blog you will realise I am a total fan of such approaches. Even if they are wrong, I think they are huge fun to contemplate and play with, even if only as mathematical diversions. So I encourage any young mathematically talented physicists to dare to go in to active research on geometrodynamics.

The Many Worlds Guys

So what about Tegmark and Saunders? Well, by this point I kind of exhausted myself today and forgot what I was going to write about. Saunders mentioned something about frequentist probability having serious issues and that Frequentism could not be a philosophical basis for probability theory. I think that’s a bit unfair. Frequentism works in many practical cases. I don’t think it has to be an over-arching theory of probability. It works when it works.

Same in lots of science. Fourier transforms work on periodic signals, and FT’s can compress non-periodic signals too, but not perfectly. Newtonian physics works bloody well in many circumstances, but is not an all-encompassing theory of mechanics. Natural selection works to explain variation and speciation in living systems, but it is not the whole story, it cannot happen without some supporting mechanism like DNA replication and protein synthesis. You cannot explain speciation using Natural selection alone, it’s just not possible, Natural selection is too general and weak to be a full explanatory theory.

It’s funny too. Saunders seems to undermine a lot of what Tegmark was trying to argue in the previous talk at the conference. Tegmark was explicitly using frequentist counting in his arguments that Copenhagen is no better or worse than Many Worlds from a probabilistic perspective. I admit I do not really know what Saunders was on about. If you can engineer a proper measure then you can do probability. I think maybe Tegmark can justify some sort of MWI space measures. Again, I do not really know much about measure theory for MWI space. Maybe it is an open problem and Tegmark is stretching credibility a bit?