Semi-annual exam grading this week. I am trying to migrate more of my courses each semester to journal portfolio grading. This semester I managed to get approval for exams worth 0% of course grades. But I made them Pass/Fail, which is probably a bit rough on students. So I also had an “earned pass” criterion: students had to complete weekly journals, forum discussions, and homework quiz sets to “earn a pass” in case they failed both exams. This works quite well.

The downside is that with 15 weeks of journals to review and forum posts to read and send feedback on, for every student, the total hours I spend on assessment exceed the time I am paid for lecturing. (It is about 450 hours for a class of 60 students. And I estimate I am only paid for 60 hours of assessment work, because that is all the office time I am given to submit grades after final exams are over. It seems most other lecturers work some magic to finish their grading in about 12 hours; I do not know how they do it.)

So next semester I am going to request dropping exams altogether, and instead getting quality control through short weekly tests held in lecture under simulated exam conditions. This will force me to grade tests each week, so the end-of-term grading will not take so long. But it does not reduce the assessment hours; in fact I think it will increase my overall workload. So I will also need to scale back journal portfolios from weekly to fortnightly. I will probably need to make the short tests fortnightly too, since, with 120 students, grading tests every week would overload my hours.

The problem is not that I dislike being under-paid for my work; I could not care less about money. What I do not like is wasting time, and not being able to spend more time on research, course quality improvements, and developing better educational software. Actually, I do not consider assessment a waste of time, but it is tedious and sometimes depressing work. So I really just think I need to be smarter about how I allocate my time: overloading on assessment is eating into the time I could spend on course quality improvements, so ultimately I am hindering student learning by spending too much time on assessment.

That’s enough moaning! What I really want to blog about today is the problem with tests and exams as assessments, and some of the issues of freedom in learning that are stifled by tests and exams, and how to do things better without abandoning the good uses for tests.

So ok, I think I have been subjected to enough education to exercise my opinion!

To get you warmed up, consider what you are doing as a teacher if you have a prescribed syllabus with prescribed materials and resources and no freedom of selection for students. When students are not permitted to fire up Firefox or Chrome to search for their own learning resources, what is this? What you are doing then is called censorship. And that is probably the most polite word for it.

In the past it was not censorship, it was in fact liberation! But times have changed. Teachers used to be the fountains of wisdom and guidance. They would gather resources, or purchase textbooks, and thereby give students access to a wide world. But now there is no need for that, and teachers who continue prescribing textbooks and using the same resources for all students are now, ironically, the censors. They are limiting student freedom. The Internet has changed the world this much! It has turned liberators into censors overnight. Amazing.

So please, if you are a teacher, read this and share it. If you are studying to become a teacher, then please do not become a censor: learn how to give your students freedom and structured guidance. If you are already a teacher, please do not continue being a censor.

Teaching to the Tests, “Hello-oh!?”

One interesting thing I have learned (or rather had confirmed) is that university teaching is far superior to high school teaching in a few ways.

You, the lecturer, get to structure the course however you want, provided you meet fairly minimal general university requirements.

Because of that structural freedom you can teach to the tests! This is a good thing!

“What’s that?” you say. How can teaching to the tests be a good thing? Hell, it is something I wrote dozens of paragraphs railing against when I was doing teacher training courses, and in later blog posts. And despite not liking to admit it, it is what most high school teachers end up doing in New Zealand. It is a tragedy. But why? And when, and how, can teaching to the tests actually be a good thing?

The answer, and I think the only way teaching to tests is natural and good, is when the teacher has absolute control over both the test format and the classroom atmosphere and methods.

First of all, I like using tests or exams to get feedback about what basics students have learned. But I do not use these results to judge students. A three-hour exam is only a snapshot. I can never fit all the course content into such a short exam, so it would be unfair to use the exam to judge students who did well in learning topics that will not appear in my exam papers. And students could be “having a bad day”; if I tested them another day their score could go up or down significantly. So I realise exams and tests are terrific for gathering course outcome quality information. But you are a bit evil, in my opinion, if you use exams and tests as summative assessments. Exams and tests should give feedback to students, not be used for grading or judgemental purposes. Instead, the only fair way to grade and judge students is by using quality weekly or “whole semester” assessments.

Secondly, if a teacher is biased then “whole semester” assessments (like journal portfolios) can be terribly insecure and unreliable. So you need to try to anonymise work before you grade it, so as to eliminate overt bias. And you might think you are not biased, but believe me, the research will tell you that you are most certainly biased; you cannot help it, it is subconscious and therefore beyond your immediate conscious control. But you can proactively, consciously control bias by eliminating its source, which is knowing whose work you are currently grading.

You can later think about “correcting” such anonymised grades on a case-by-case basis by allowing for known student learning impairments. But you should not bias your grades a priori by knowing which student you are grading at the time. A’ight?! Teacher bias is well-documented. Teachers need to be close to students and form strong relationships; that is a proven requirement for good learning. But it works against accurate and unbiased assessment. So you need to anonymise student work prior to grading. This could mean getting rid of hand-written work in favour of electronic submissions.
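As a concrete sketch of what I mean (the student ID format and helper names here are invented for illustration, not any real system), electronic submissions can be pseudonymised with a keyed hash before grading, while keeping a lookup table so grades can be re-attached to students afterwards:

```python
import hashlib
import hmac
import secrets

# Secret key held only by the teacher; regenerating it each term
# stops codes from being recognisable across semesters.
SECRET_KEY = secrets.token_bytes(16)

def anonymise(student_id: str, key: bytes = SECRET_KEY) -> str:
    """Return a short pseudonymous code for a student ID.

    A keyed hash (HMAC) means a grader cannot invert a code back to
    a student without the key, but the mapping is repeatable, so the
    same student always gets the same code within a term.
    """
    digest = hmac.new(key, student_id.encode(), hashlib.sha256).hexdigest()
    return digest[:8]

# Lookup table for re-attaching grades after all the anonymised
# work has been marked.
students = ["s1234567", "s7654321", "s1111111"]
code_to_student = {anonymise(s): s for s in students}
```

The same idea works for filenames: rename each submission to its code before grading, then join grades back against the lookup table at the end.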

If you use tests wisely you can use them as both student and teacher assessment vehicles. Students should not feel too much stress with short weekly tests. They should not be swotting for them; the tests should naturally extend learning done in class or from the previous week’s homework. If you control the format and content of tests then you can design your teaching to match. So if you like highly creative and cognitive learning styles you can administer cognitive testing with lots of imagination required. If you prefer a more kinesthetic learning style for another topic you can make the test kinesthetic. You can tailor your teaching style to naturally match the topic and then also the follow-up tests.

This sort of total control is not possible in schools under present-day, state-run standards-based exams. That’s why such exam regimes are evil and inefficient and terrible for promoting good learning.

With teacher-run lessons + tests you get the best of all worlds. If one teacher is slack, their students get disadvantaged for sure, but they would anyway under a standards-based regime. The difference with teacher-run courses is that the teacher’s exams and course content can be examined, rather than the students getting examined, and so ultimate education quality control rests upon the administrator who should get to examine the teacher resources and test formats and content. That’s the way to run state-based exams. You examine the teachers, not the students.

There can even be a second tier of filtering and quality control. The school itself can assess teacher quality. Then slack teachers can be referred to state-wide assessment authorities. We need to remember that the state employees are the teachers, not the students. So we should at least first worry about assessing teacher quality, not student quality. Our present school systems, around the world, backwards all this have. 😉 I know educators mean well. But they need to listen to Sir Ken Robinson and Alfie Kohn a bit harder.

So in the foreseeable future, sadly, I will not be returning to secondary school teaching. Never under the present national standards regime, anyway. It would basically make me an ordinary teacher. But I have extraordinary talents. The NZQA-run system would effectively dull my talents and mask them from expression. Under the current NZQA system, which most schools are mandated to follow, I would be a really horrid teacher. I would not be teaching to the tests, and my students would likely not acquire grades that reflect their learning.

It is not impossible to teach students creatively, with fun and inspiration, and still help them acquire good grades under NCEA. But it is really, really hard, and I am not that good a teacher. The real massive and obvious flaw in New Zealand is that teachers think they can all do this. But they cannot. Either they end up teaching to the tests, and their students get reasonable grades but average learning, or they buck the system and teach however they damn please and their students get poor grades. I would guess only about 1% to 3% of teachers have the genius, skill, and hard-won expertise to run a truly creative and imaginative learning experience and also produce students who can ace the NCEA exams.

If, as a nation of people who love education, we cannot have every teacher be such a genius, and if it takes exceptionally gifted teachers to pull this off, then why oh why are we forcing them to use the NCEA or similar exam regimes? If not all teachers are such geniuses, then, I think, you are morally and ethically bound not to use a standards-based summative assessment system for judging students. You instead need to unleash the raw talent of all teachers by giving them freedom to teach in a style they enjoy, because this will naturally be reflected in the brightness and happiness and learning of their students. And to check on the quality of your education system you must assess these teachers, not their students.

The tragedy is, for me, that I think I would enjoy secondary school teaching a lot more than university lecturing if the free-to-learn system I propose was in place. The younger children have a brightness and brilliance that is captivating. So it is a real pleasure to teach them and guide them along their way. These bright lights seem to become dulled when they become young adults. Or maybe that’s just the effect that school has on them?

* * *

So, the thing is, I see no reason why high school teaching cannot be more like university teaching. Please give the teachers the control over both their course style and their assessments. This will make everyone happier and less stressed. Test teacher quality ahead of student quality at the national level. Make education about empowering students to discover their interests, not about following by rote the content provided by the teachers. And definitely not content dictated and mandated by a state-run government institution. If the government desires accountability of schools, it should look at teacher quality, not student quality. With good teachers you can trust them to get the most from their students, right! That’s a statement, not a question!

There are many good references I should provide, but I will just give you one that hits most points I made above:

Do you like driving? I hate it. Driving fast and dangerously in a computer game is ok, but it is a quick and ephemeral thrill. But for real driving, to and from work, I have a long commute, and no amount of podcasts or music relieves the tiresomeness. Driving around here I need to be on constant alert; there are so many cockroaches (motor scooters) to look out for, and here in Thailand over half the scooter drivers do not wear helmets. I cannot drive 50 metres without seeing a young child driven around on a scooter without a helmet. Neither parent nor child will have a helmet. Mothers even cradle infants while hanging on at the rear of a scooter. It might not be so bad if the speeds were slow, but they are not. That’s partly why I find driving exhausting. It is stressful to be so worried about so many other people.

True to the title it was illuminating. Watching Witten’s popular lectures is always good value. Mostly I find everything he presents I have heard or read about elsewhere, but never in so much seemingly understandable depth and insight. It is really lovely to hear Witten talk about φ³ quantum field theory as a natural result of quantising gravity in one dimension. He describes this as one of nature’s rhymes: patterns at one scale or domain get repeated in others.

Then he describes how the obstacle to a quantum gravity theory in spacetime via a quantum field theory is the fact that in quantum mechanics states do not correspond to operators. He draws this as a Feynman diagram where a deformation of spacetime is indicated by a kink in a Feynman graph line. That’s an operator. Whereas states in quantum mechanics do not have such deformations, since they are points.

An operator describing a perturbation, like a deformation in the spacetime metric, appears as an internal line in a Feynman diagram, not an external line.

So that’s really nice isn’t it?

I had never heard the flaw of point particle quantum field theory given in such a simple and eloquent way. (The ultraviolet divergences are mentioned later by Witten.)

Then Witten does a similar thing for my understanding of how 2D conformal field theory relates to string theory and quantised gravity. In two dimensions there is a correspondence between operators and states in the quantum theory, and it is illustrated schematically by the conformal mapping that takes a point in a 2-manifold to a tube sticking out of the manifold.

The point being (excuse the pun) the states are the slices through this conformal geometry, and so deformations of the states are now equivalent to deformations of operators, and we have the correspondence needed for a quantum theory of gravity.
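Schematically, the standard 2d CFT statement of this correspondence (my gloss, not Witten’s exact notation) is that radial quantisation maps a local operator inserted at a point to a state on a small circle surrounding it:

```latex
\lvert V \rangle \;=\; \lim_{z \to 0} V(z)\, \lvert 0 \rangle
```

So a deformation acting on the operator $V$ translates directly into a deformation of the state $\lvert V \rangle$, which is exactly the correspondence that fails in the point-particle picture above.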

This is all very nice, but 3/4 of the way through his talk it still leaves some mystery to me.

I still do not quite grok how this makes string theory background-free. The string world sheet is quantisable, and from this you get either a conformal field theory or quantum gravity, but how is this background-independent quantum gravity?

I find I have to rewind and watch Witten’s talk a number of times to put all the threads together, and I am still missing something. Since I do not have any physicist buddies at my disposal to bug and chat to about this I either have to try physicsforums or stackexchange or something to get some more insight.

So I rewound a few times and I am pretty certain Witten starts out using a Riemannian metric on a string, and then on a worldsheet. Both are already embedded in a spacetime. So he is not really describing quantum gravity in spacetime. He is describing a state-operator correspondence in a quantum gravity performed on string world sheets. Maybe in the end this comes out in the wash as equivalent to quantising general relativity? I cannot tell. In any case, everyone knows string theory yields a graviton. So in some sense you can say, “case closed up to phenomenology”, haha! Still, a lovely talk and a nice pre-bedtime diversion. But I persisted through to the end of the lecture — delayed sleep experiment.

My gut reaction was that Witten is using some sleight of hand. The conformal field theory may well be background-free, since it is derived from the quantum mechanics of string world sheets. But the stringy gravity theory still has the string worldsheet fluffing around in a background spacetime. Does it not? Witten is not clear on this, though I’m sure in his mind he knows what he is talking about. Then, as if he read my mind, Witten gives a partial answer to this.

What Witten gets around to saying is that if you go back earlier in his presentation, where he starts with a quantum field theory on a 1D line and then on a 2d manifold, the spacetime he uses, he claims, was arbitrary. So this partially answers my objections. He is using a background spacetime to kick-start the string/CFT theory, which he admits. But then he does the sleight of hand and says

“what is more fundamental is the 2d conformal field theory that might be described in terms of a spacetime but not necessarily.”

So my take on this is that what Witten is saying is (currently) most fundamental in string theory is the kick-starter 2d conformal field theory. Or the 2d manifold that starts out as the thing you quantise deformations on to get a phenomenological field theory including quantised gravity. But this might not even be the most fundamental structure. You start to get the idea that string/M-theory is going to morph into a completely abstract model. The strings and membranes will end up not being fundamental. Which is perhaps not too bad.

I am not sure what else you need to start with a conformal field theory. But surely some kind of proto-primordial topological space is needed. Maybe it will eventually connect back to spin foams or spin networks or twistors. Haha! Wouldn’t that be a kick in the guts for string theorists, to find their theory is really built on top of twistor theory! I think twistors give you quite a bit more than a 2d conformal field, but maybe a “bit more” is what is needed to cure a few of the other ills that plague string theory phenomenology.

* * *

For what it’s worth, I actually think there is a need in fundamental physics to explain even more fundamental constructs, such as why we need to start with a Lagrangian and then sum its action over all paths (or topologies, if you are doing a conformal field theory). This entire formalism, in my mind, needs some kind of more primitive justification.
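To be concrete about the formalism I mean, this is the standard sum-over-histories package (nothing novel here): you choose a Lagrangian density, form the action, and weight every field configuration (or every worldsheet topology, in the string case) by a phase:

```latex
Z \;=\; \int \mathcal{D}\phi \; e^{\,i S[\phi]/\hbar},
\qquad
S[\phi] \;=\; \int \mathrm{d}^4 x \;\mathcal{L}\big(\phi, \partial_\mu \phi\big)
```

It is this whole package, the action principle plus the sum over histories, that I think wants a more primitive justification.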

Moreover, I think there is a big problem in field theory per se. My view is that spacetime is more fundamental than the fields. Field theory is what should “emerge” from a fundamental theory of spacetime physics, not the other way around. Yet “the other way round” (i.e., fields first, then spacetime) seems to be what a lot of particle and string theorists are suggesting. I realise this is thoroughly counter to the main stream of thought in modern physics, but I cannot help it; I’m really a bit of a classicist at heart. I do not try to actively swim against the stream, it’s just that in this case that’s where I find my compass heading. Nevertheless, Witten’s ideas and the way he elaborates them are pretty insightful. Maybe I am unfair. I have heard Weinberg mention that fields are perhaps not fundamental.

* * *

OK, that’s all for now. I have to go and try to tackle Juan Maldacena’s talk now. He is not as easy to listen to, but since this will be a talk for a general audience it might be comprehensible. Witten may be delightfully nerdy, but Maldacena is thoroughly cerebral and hard to comprehend. I am hoping he takes it easy on his audience.

I have a post prepared to upload in a bit that will announce a possible hiatus from this WordPress blog. The reason is just that I found a cool book I want to try to absorb, The Princeton Companion to Mathematics by Gowers, Barrow-Green and Leader. Doubtless I will not be able to absorb it all in one go, so I will likely return to blogging periodically. But there is also teaching and research to conduct, so this book will slow me down. The rest of this post is a lightweight brain-dump of some things that have been floating around in my head.

Recently, while watching a lecture on topology I was reminded that a huge percentage of the writings of Archimedes were lost in the siege of Alexandria. The Archimedean solids were rediscovered by Johannes Kepler, and we all know what he was capable of! Inspiring Isaac Newton is not a bad epitaph to have for one’s life.

The general point about rediscovery is a beautiful thing. Mathematics, more than other sciences, has this quality whereby a young student can take time to investigate previously established mathematics but then take breaks from it to rediscover theorems for themselves. How many children have rediscovered Pythagoras’ theorem, or the Golden Ratio, or Euler’s Formula, or any number of other simple theorems in mathematics?

Most textbooks rely on this quality. It is also why most “Exercises” in science books are largely theoretical, even in biology and sociology. They are basically all mathematical, because you cannot expect a child to go out and purchase a laboratory set-up to rediscover experimental results. So much textbook teaching is mathematical for this reason.

I am going to digress momentarily, but will get back to the education theme later in this article.

The entire cosmos itself has sometimes been likened to an eternal rediscovery. The theory of eternal inflation postulates that our universe is just one bubble in a near endless ocean of baby and grandparent and all manner of other universes. Although, recently, Alexander Vilenkin and Audrey Mithani found that a wide class of inflationary cosmological models are unstable, meaning they could not have arisen from a pre-existing seed. There had to be an initial seed. This kind of destroys the “eternal” in eternal inflation. Here’s a Discover magazine account: “What Came Before the Big Bang? — Cosmologist Alexander Vilenkin believes the Big Bang wasn’t a one-off event”. Or you can click this link to hear Vilenkin explain his ideas himself: FQXi: Did the Universe Have a Beginning? Vilenkin seems to be having a rather golden period of originality over the past decade or so; I regularly come across his work.

If you like the idea of inflationary cosmology you do not have to worry too much though. You still get the result that infinitely many worlds could bubble out of an initial inflationary seed.

Below is my cartoon rendition of eternal inflation in the realm of human thought:

Oh to be a bubble thoughtoverse of the Wittenesque variety.

Quantum Fluctuations — Nothing Cannot Fluctuate

One thing I really get a bee in my bonnet about is the naïve idea, endlessly recounted in the popular literature on the beginning of the universe, that no one needs to explain the origin of the Big Bang and inflatons because “vacuum quantum fluctuations can produce a universe out of nothing”. This sort of pseudo-scientific argument is so annoying. It is a cancerous argument that plagues modern cosmology. And even a smart person like Vilenkin suffers from this disease. Here I quote him from another article, on the PBS NOVA website:

Vilenkin has no problem with the universe having a beginning. “I think it’s possible for the universe to spontaneously appear from nothing in a natural way,” he said. The key there lies again in quantum physics—even nothingness fluctuates, a fact seen with so-called virtual particles that scientists have seen pop in and out of existence, and the birth of the universe may have occurred in a similar manner.
Source: http://www.pbs.org/wgbh/nova/blogs/physics/2012/06/in-the-beginning/

At least you have to credit Vilenkin with the brains to have said it is only “possible”. But even that caveat is fairly weaselly. My contention is that out of nothing you cannot get anything, not even a quantum fluctuation. People seem to forget quantum field theory is a background-dependent theory: it requires a pre-existing spacetime. There is no “natural way” to get a quantum fluctuation out of nothing. I just wish people would stop insisting on this sort of non-explanation for the Big Bang. If you start with not even spacetime then you really cannot get anything, especially not something as loaded with stuff as an inflaton field. So one day in the future I hope we will live in a universe where such stupid arguments are nonexistent nothingness, or maybe only vacuum fluctuations inside the mouths of idiots.

There are other types of fundamental theories, background-free theories, where spacetime is an emergent phenomenon. And proponents of those theories can get kind of proud about having a model inside their theories for a type of eternal inflation. Since their spacetimes are not necessarily pre-existing, they can say they can get quantum fluctuations in the pre-spacetime stuff, which can seed a Big Bang. That would fit with Vilenkin’s ideas, but without the silly illogical need to postulate a fluctuation out of nothingness. But this sort of pseudo-science is even more insidious. Just because they do not start with a presumption of a spacetime does not mean they can posit quantum fluctuations in the structure they start with. I mean they can posit this, but it is still not an explanation for the origins of the universe. They still are using some kind of structure to get things started.

Probably still worse are folks who go around flippantly saying that the laws of physics (the correct ones, when or if we discover them) “will be so compelling they will assert their own existence”. This is basically an argument saying, “This thing here is so beautiful it would be a crime if it did not exist, in fact it must exist since it is so beautiful, if no one had created it then it would have created itself.” There really is nothing different about those two statements. It is so unscientific it makes me sick when I hear such statements touted as scientific philosophy. These ideas go beyond thought mutation and into a realm of lunacy.

I think the cause of these thought cancers is the immature fight in society between science and religion. These are tensions in society that need not exist, yet we all understand why they exist. Because people are idiots. People are idiots where their own beliefs are concerned, by and large, even myself. But you can train yourself to be less of an idiot by studying both sciences and religions and appreciating what each mode of human thought can bring to the benefit of society. These are not competing belief systems. They are compatible. But so many believers in religion are falsely following corrupted teachings; they veer into the domain of science blindly, thinking their beliefs are the trump cards. That is such a wrong and foolish view, because everyone with a fair and balanced mind knows the essence of spirituality is a subjective view-point about the world, one that deals with one’s inner consciousness. And so there is no room in such a belief system for imposing one’s own beliefs onto others, and especially not imposing them on an entire domain of objective investigation like science. And, on the other hand, many scientists are irrationally anti-religious and go out of their way to try and show a “God” idea is not needed in philosophy. But in doing so they are also stepping outside their domain of expertise. If there is some kind of omnipotent creator of all things, It certainly could not be comprehended by finite minds. It is also probably not going to be amenable to empirical measurement and analysis. I do not know why so many scientists are so virulently anti-religious. Sure, I can understand why they oppose current religious institutions, we all should, they are mostly thoroughly corrupt. But the pure abstract idea of religion and ethics and spirituality is totally 100% compatible with a scientific worldview. Anyone who thinks otherwise is wrong! (Joke!)

Also, I do not favour inflationary theory, for other reasons. There is no good theoretical justification for the inflaton field other than inflation’s prediction of the homogeneity and isotropy of the CMB. You’d like a good theory to have more than one trick! You know, like how gravity explains both the orbits of planets and the way an apple falls to Earth from a tree. With inflatons you have a quantum field that is theorised to exist for one and only one reason: to explain homogeneity and isotropy in the Big Bang. And don’t forget, the theory of inflation does not explain the reason the Big Bang happened; it does not explain its own existence. If the inflaton had observable consequences in other areas of physics I would be a lot more predisposed to taking it seriously. And to be fair, maybe the inflaton will show up in future experiments. Most fundamental particles and theoretical constructs began life as a one-trick sort of necessity. Most develop to be a touch more universal and eventually arise in many aspects of physics. So I hope, for the sake of the fans of cosmic inflation, that the inflaton field does have other testable consequences in physics.

In case you think that is an unreasonable criticism, there are precedents for fundamental theories having a kind of mathematically built-in explanation. String theorists, for instance, often appeal to the internal consistency of string theory as a rationale for its claim as a fundamental theory of physics. I do not know if this really flies with mathematicians, but the string physicists seem convinced. In any case, to my knowledge the inflaton does not have this sort of quality; it is not a necessary ingredient for explaining observed phenomena in our universe. It does have a massive head start as a candidate sole explanation for the isotropy and homogeneity of the CMB, but so far that race has not been completely run. (Or if it has then I am writing out of ignorance, but … you know … you can forgive me for that.)

Anyway, back to mathematics and education.

You have to love the eternal rediscovery built into mathematics. It is what makes mathematics eternally interesting to each generation of students. But as a teacher you have to train the nerdy children not to bother reading everything. Apart from the fact there is too much to read, they should be given the opportunity to read a little then investigate a lot, and try to deduce old results for themselves as if they were fresh seeds and buds on a plant. Giving students a chance to catch old water as if it were fresh dewdrops of rain is a beautiful thing. The mind that sees a problem afresh is blessed, even if the problem was solved centuries ago. The new mind encountering the ancient problem is potentially rediscovering grains of truth in the cosmos, and is connecting spiritually to past and future intellectual civilisations. And for students of science, the theoretical studies offer exactly the same eternal rediscovery opportunities. Do not deny them a chance to rediscover theory in your science classes. Do not teach them theory. Teach them some theoretical underpinnings, but then let them explore before giving the game away.

With so much emphasis these days on educational accountability and standardised tests, there is a danger of not giving children these opportunities to learn and discover things for themselves. I recently heard an Intelligence Squared debate on academic testing. One crazy woman from the UK government was arguing that testing, testing, and more testing — “relentless testing” were her words — was vital and necessary and provably increased student achievement.

Yes, practising tests will improve test scores, but it is not the only way to improve test scores. And relentless testing will no doubt prepare students for all manner of mindless jobs out there in society that are drill-like and amount to routine work, much like tests. But there is far less evidence that relentless testing improves imagination and creativity.

Let’s face it though. Some jobs and areas of life require mindlessly repetitive tasks. Even computer programming has modes where for hours the normally creative programmer will be doing repetitive but possibly intellectually demanding chores. So we should not agitate and jump up and down wildly proclaiming tests and exams are evil. (I have done that in the past.)

Yet I am far more inclined towards the educational philosophy of the likes of Sir Ken Robinson, Neil Postman, and Alfie Kohn.

My current attitude towards tests and exams is the following:

Tests are incredibly useful for me with large class sizes (120+ students), because I get a good overview of how effective the course is for most students, as well as a good look at the tails. Here I am relying on the fact that test scores (for well-designed tests) correlate well with student academic aptitudes.

My use of tests is mostly formative, not summative. Tests give me a valuable way of improving the course resources and learning styles.

Tests and exams suck as tools for assessing students because they do not assess everything there is to know about a student’s learning. Tests and exams correlate well with academic aptitudes, but not well with other soft skills.

Grading in general is a bad practice. Students know when they have done well or not; they do not need to be told. At school, if parents want to know, they should learn to ask their children how school is going, and students should be trained to be honest, since life tends to work out better that way.

Relentless testing is deleterious to the less academically gifted students. There is a long tail in academic aptitude, and the students in this tail will often benefit from a kinder and more caring mode of learning. You do not have to be soft and woolly about this; it is a hard-core educational psychology result: if you want the best for all students you need to treat them all as individuals. For some, tests are great, terrific! For others, tests and exams are positively harmful. You want to try to figure out who is who, at least if you are lucky enough to have small class sizes.

For large class sizes, like at a university, still treat all students individually. You can do this by offering a buffet of learning resources and modes. Do not, whatever you do, provide a single-mode lecture+homework+exam course. That is ancient technology, medieval. You have the Internet, use it! Gather vast numbers of resources covering all different manners of approach to the subject you are teaching, and then do not teach it! Let your students find their own way through the material. This will slow down a lot of students — the ones who have been indoctrinated and trained to do only what they are told — but if you persist and insist they navigate the course themselves, they should learn more deeply as a result.

Solving the “do what I am told” problem is in fact the very first job of an educator, in my opinion. (For a long time I suffered from the lack of a good teacher in this regard myself. I wanted to please, so I did what I was told; it seemed simple enough. But … oh crap … the day I found out this was holding me back, I was furious. I was about 18 at the time, still hopelessly naïve and ill-informed about real learning.) If you achieve nothing else with a student, transitioning them from being an unquestioning sponge (or oily duck — take your pick) to being self-motivated and self-directed in their learning is the most valuable lesson you can ever give them. So give it to them.

So I use a lot of tests, but not for grading. For grading I rely more on student journal portfolios. All the weekly homework sets are quizzes, though, so you could criticise the fact that I still use these for grading. As a percentage, though, the journals are more heavily weighted (usually 40% of the course grade). There are some downsides to all this.

It is fairly well established in the research that grading journals, or grading against subjective criteria generally, is prone to bias. So unless you anonymise student work, you have a bias you need to deal with somehow before handing out final grades.
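One low-tech way to anonymise, sketched here as my own illustration rather than anyone's standard practice, is to grade against keyed tokens instead of names, keeping the per-term secret sealed until grades are finalised:

```python
import hashlib
import hmac

def anon_token(student_id: str, term_secret: bytes) -> str:
    """Map a student ID to a short anonymous grading token.

    A keyed hash (HMAC) means nobody can recompute or reverse the
    mapping without the per-term secret.
    """
    return hmac.new(term_secret, student_id.encode(), hashlib.sha256).hexdigest()[:8]

secret = b"keep-this-sealed-until-grades-are-in"  # hypothetical per-term secret
print(anon_token("student-042", secret))  # same ID always yields the same token
```

You grade the journals under the tokens, then re-join tokens to names only at submission time.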

Grading weekly journals, even anonymously, takes a lot of time: about 15 to 20 times the hours that grading summative exams takes. That is a huge time commitment, so you have to use it wisely by giving students very good quality early feedback on their journals.

I still have not found a way to test these methods easily. I would like to know quantitatively how much more effective journal portfolios are compared with exam-based assessment. I am not a specialist education researcher, and I research and write about a lot of other things, so it is taking me a while to get around to answering this.

I have not solved the grading problem. For now grades are required by the university, so legally I have to assign them. One subversive thing I am following up on is to refuse to submit singular grades. As a person with a physicist’s world-view I believe strongly in sound measurement practice, and we all know a single letter grade is not a fair reflection of a student’s attainment. At a minimum a spread of grades should be given to each student, or better, a three-point summary: lower quartile, median, upper quartile. Numerical scaled grades can then be converted into a fairer letter-grade range. GPA scores can likewise be reported as a central measure plus a spread measure.
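As a sketch of what I mean (the letter bands here are illustrative, not my university's actual cut-offs), the three-point summary is just the quartiles of one student's numerical scores across the semester:

```python
from statistics import quantiles

def three_point_summary(scores):
    """Return (LQ, median, UQ) for one student's numerical assessment scores."""
    lq, med, uq = quantiles(scores, n=4, method="inclusive")
    return lq, med, uq

def letter(score):
    """Convert a numerical score to a letter grade (hypothetical bands)."""
    bands = [(85, "A"), (70, "B"), (55, "C"), (0, "D")]
    return next(grade for cutoff, grade in bands if score >= cutoff)

scores = [55, 60, 64, 68, 71, 75, 79, 84, 90]  # one student's semester of scores
lq, med, uq = three_point_summary(scores)
print(lq, med, uq)                      # 64.0 71.0 79.0
print(f"{letter(lq)} to {letter(uq)}")  # C to B
```

Reporting “C to B” instead of a single letter carries the spread information with almost no extra work.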

I can imagine many students will have a moderate to large assessment spread, so it is important to give them this measure. One in a few hundred students might statistically get very low grades by pure chance, when their potential is a lot higher. I am currently looking into research on this.
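A back-of-envelope version of this worry (the numbers here are purely illustrative assumptions of mine): if each assessment has some chance of going uncharacteristically badly for a student, then a single high-stakes exam misrepresents students far more often than an average over several independent components does.

```python
q = 0.25  # assumed chance one assessment goes uncharacteristically badly (illustrative)
k = 4     # number of independent assessment components averaged into the grade

# A student only ends up with a very low overall grade by pure chance
# if every component goes badly:
p_all_bad = q ** k
print(f"single exam: 1 in {round(1 / q)} students misrepresented")  # 1 in 4
print(f"{k} components: 1 in {round(1 / p_all_bad)} students")      # 1 in 256
```

Which lands at the “one in a few hundred” order of magnitude, at least under these made-up numbers.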

OK, so in summary: even though institutions require a lot of tests, you can work around the tests and still give students a fair grade, without sacrificing the true learning opportunities that come from the principle of eternal rediscovery. Eternal rediscovery is such an important idea that I want to write an academic paper about it and present it at a few conferences to get people thinking about the idea. No one will disagree with it. Some may want to refine and adjust the ideas. Some may want concrete realizations and examples. The real question is: will they go away and truly inculcate it into their teaching practices?

* * *

Rovelli has me confused when he tries to explain the apparent low entropy Big Bang cosmology. He uses his own brand of relational quantum mechanics, I think, but it comes out sounding a bit circular or anthropomorphic. Yet earlier in his lectures he often takes pains to deny anthropomorphic views.

So it is quite perplexing when he tries to explain our perception of an arrow of time by claiming that “it is what makes us us.” Let me quote him, so you can see for yourself. He starts out by claiming the universe starts in a low entropy state only from our relative point of view. Entropy is an observer-dependent concept; it depends on how you coarse grain your physics. OK, I buy that. We couple to the external physical fields in a particular way, and this determines how we perceive or coarse grain our slices of the universe. So how we couple to the universe supposedly explains why we see the entropy we do. If by some miracle we coupled more like antiparticles, effectively travelling in the reverse time direction, then we would see entropy quite differently, one imagines. So anyway, Rovelli then summarizes:

[On slides: Entropy increase (passage of time) depend on the coarse graining, hence the subsystem, not the microstate of the world.] … “Those depend on the way we couple to the rest of the universe. Why do we couple to the rest of the universe in this way? Because if we didn’t couple to the rest of the universe this way we wouldn’t be us. Us as things, as biological entities that very much live in time coupled in a manner such that the past moves towards the future in a precise sense … which sense? … the one described by the Second Law of Thermodynamics.”

You see what I mean?

Maybe I am unfairly pulling this out of a rushed conference presentation, and to be more balanced and fair I should read his paper instead. If I have time I will. But I think a good idea deserves a clear presentation, not a rush job with a lot of vague wishy-washy babble, or obscured by a blizzard of words and jargon.

OK, so here’s an abstract from an arxiv paper where Rovelli states things in written English:

“Phenomenological arrows of time can be traced to a past low-entropy state. Does this imply the universe was in an improbable state in the past? I suggest a different possibility: past low-entropy depends on the coarse-graining implicit in our definition of entropy. This, in turn depends on our physical coupling to the rest of the world. I conjecture that any generic motion of a sufficiently rich system satisfies the second law of thermodynamics, in either direction of time, for some choice of macroscopic observables. The low entropy of the past could then be due to the way we couple to the universe (a way needed for us doing what we do), hence to our natural macroscopic variables, rather than to a strange past microstate of the world at large.”

That’s a little more precise, but still not much clearer in import. He is still really just giving an anthropocentric argument.

I’ve always thought science is at its best when removing the human from the picture. The problem for our universe should not be framed as “why do we see an arrow of time?” because, as Rovelli points out, for complex biological systems like ourselves there really is no alternative. If we did not perceive an arrow of time we would be defined out of existence!

The problem for our universe should be simply, “why did our universe begin (from any arbitrary sentient observer’s point of view) with such low entropy?”

But even that version has the whiff of observer about it. Also, if you just define the “beginning” as the end that has the low entropy, then you are done, no debate. So I think there is a more crystalline version of what cosmology should be seeking an explanation for, which is simply, “how can any universe ever get started (from either end of a singularity) in a low entropy state?”

But even there you have a notion of time, which we should remove, since “start” is not a proper concept unless one already is talking about a universe. So the barest question of all perhaps, (at least the barest that I can summon) is, “how do physics universes come to exist?”

This does not even explicitly mention thermodynamics or an arrow of time. But within the question those concepts are embedded. One needs to carefully define “physics” and “physics universes”. But once that is done then you have a slightly better philosophy of physics project.

Most hard-core physicists, however, will never stoop to tackle such a question. They will tend to drift towards something where a universe is already posited to exist and has had a Big Bang, and then they will fret and worry about how it could have a low entropy singularity.

It is then tempting to take the cosmic Darwinist route. But although I love the idea, it is another one of those insidious memes that is so alluring, yet in the cold dead hours of night, when the vampires of popular physics come to devour your life blood seeking converts, seems totally unsatisfying and anaemic. The Many Worlds Interpretation has its fangs sunk into a similar vein, which I’ve written about before.

* * *

Going back to Rovelli’s project, I have this problem for him to ponder. What if there is no way for any life, not even in principle, to couple to the universe other than via the way we humans do, through interaction with strings (or whatever they are) via Hamiltonians and mass-energy? If this is true, and I suspect it is, then is not Rovelli’s “solution” to the low entropy Big Bang a bit meaningless?

I have a pithy way of summarising my critique of Rovelli. I would just point out:

The low entropy past is not caused by us. We are the consequence.

So I think it is a little weak for Rovelli to conjecture that the low entropy past is “due to the way we couple to the universe.” It’s like saying, “I conjecture that before death one has to be born.” Well, … duuuuhhh!

The reason my photo is no longer on Facebook is due to the way I coupled to my camera.

I am an X-gener due to the way my parents coupled to the universe.

You see what I’m getting at? I might be over-reaching into excessive sarcasm, but my point is just that none of this is good science. They are not explanations. It is just story-telling. Still, Rovelli does give an entertaining story if you are a physics geek.

So I had a read of Rovelli’s paper and saw the more precise statement of his conjecture:

Rovelli’s Conjecture: “Any generic microscopic motion of a sufficiently rich system satisfies the second law (in either time direction) for a suitable choice of macroscopic observables.”

That’s the sort of conjecture that says nothing. The problem is the “sufficiently rich” clause together with the “suitable choice” clause. You can generate screeds of conjectures with such a pair of clauses. The conjecture only has “teeth” if you define what you mean by “sufficiently rich” and if a “suitable choice” can be identified or motivated as plausible. Because otherwise you are not saying anything useful. For example, “Any sufficiently large molecule will be heavier than a suitably chosen bowling ball.”

* * *

Rovelli does provide a toy example to illustrate his notions in classical mechanics. He has yellow balls and red balls. The yellow balls have an attractor which gives them a natural second law of thermodynamic arrow of time. The same box also has red balls with a different attractor which gives them the opposite arrow of time according to the second law. (Watching the conference video for this is better than reading the arxiv paper.) But “so what?”
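To make the “so what?” concrete, here is a crude stand-in of my own — not Rovelli’s actual attractor dynamics — where one population of balls diffuses while the other follows the exact time-reverse of a diffusing trajectory, so a coarse-grained (binned) entropy rises for one and falls for the other in the same time coordinate:

```python
import math
import random

def coarse_entropy(positions, bins=10):
    """Shannon entropy of positions in [0, 1] coarse-grained into equal bins."""
    counts = [0] * bins
    for x in positions:
        counts[min(int(x * bins), bins - 1)] += 1
    n = len(positions)
    return -sum(c / n * math.log(c / n) for c in counts if c)

random.seed(1)

# Yellow balls: start tightly clustered, then diffuse, so entropy rises.
balls = [0.5 + random.gauss(0, 0.01) for _ in range(2000)]
yellow_traj = [list(balls)]
for _ in range(200):
    balls = [min(max(x + random.gauss(0, 0.02), 0.0), 1.0) for x in balls]
    yellow_traj.append(list(balls))

# Red balls: the same kind of motion played backwards, so entropy falls.
red_traj = yellow_traj[::-1]

assert coarse_entropy(yellow_traj[-1]) > coarse_entropy(yellow_traj[0])
assert coarse_entropy(red_traj[-1]) < coarse_entropy(red_traj[0])
```

The only point of the sketch is that “entropy increase” here is a statement about the coarse graining plus the trajectory, not about the microstate; it says nothing whatsoever about observers, which is exactly my complaint.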

Rovelli has constructed a toy universe that has entities that would experience opposite time directions if they were conscious. But there are so many things wrong with this example it cannot be seriously considered as a bulwark for Rovelli’s grander project. For starters, what is the nature of his Red and Yellow attractors? If they are going to act complicated enough to imbue the toy universe with anything resembling conscious life then the question of how the arrow of time arises is not answered, it just gets pushed back to the properties of these mysterious Yellow and Red attractors.

And if you have only such a toy universe without any noticeable observers then what is the point of discussing an arrow of time? It is only a concept that a mind external to that world can contemplate. So I do not see the relevance of Rovelli’s toy model for our much more complicated universe which has internal minds that perceive time.

You could say, in principle the toy model tells us there could be conscious observers in our universe who are experiencing life but in the reverse time direction to ourselves, they remember our future but not our past, we remember their future but not their past. Such dual time life forms would find it incredibly hard to communicate, due to this opposite wiring of memory.

But I would argue that Rovelli’s model does not motivate such a possibility, for the same reason as before. Constructing explicit models of different categories of billiard balls each obeying a second law of thermodynamics in opposite time directions in the same system is one thing, but not much can be inferred from this unless you add in a whole lot of further assumptions about what Life is, metabolism, self-replication, and all that. But if you do this the toy model becomes a lot less toy-like and in fact terribly hard to explicitly construct. Maybe Stephen Wolfram’s cellular automata can do the trick? But I doubt it.

I should stop harping on this. Let me just record my profound dissatisfaction with Rovelli’s attempt to demystify the arrow of time.

* * *

If you ask me, we are not at a sufficiently mature juncture in the history of cosmology and physics to be able to provide a suitable explanation for the arrow of time.

So I have Smith’s Conjecture:

“At any sufficiently advanced juncture in the history of science, enough knowledge will have accumulated to enable physicists to provide a suitable explanation for the arrow of time.”

Facetiousness aside, I really do think that trying to explain the low entropy big bang is a bit premature. It would be much better to be patient and wait for more information about our universe before attempting to launch into the arrow of time project. The reason I believe so is because I think the ultimate answers about such cosmological questions are external to our observable universe.

But whether they are external or internal, there is a wider problem to do with the nature of time and our universe. We do not know whether our universe actually had a beginning, a true genesis, or whether it has always existed.

If the universe had a beginning then the arrow of time problem is the usual low entropy puzzle. But if the universe had no beginning then the arrow of time problem becomes a totally different question. There is even a kind of intermediate problem that occurs if our universe had a start but within some sort of wider meta-cosmos. Then the problem is much harder: figuring out the laws of this putative metaverse. Imagine the hair-pulling of cosmologists who discover this latter possibility as a fact about their universe (though I would envy them the sheer ability to discover the fact; it would be amazing).

So until we settle such a fundamental question I do not see a lot of fruitfulness in pursuing the arrow of time puzzle. It is a counting-your-chickens-before-they-hatch situation. Or should I say, counting your microstates before they batch.

Aside: While searching for a nice picture to illuminate this post I came across a nice freehand SVG sketch of Shaun Maguire’s. He’s a postdoc at Caltech and writes nicely in a blog there: Quantum Frontiers. If you are more a physics/math geek than a philosophy/physics geek then you will enjoy his blog. I found it very readable, not stunning poetic prose, but easy-going and sufficiently high on technical content to hold my interest.

The sketch itself has to do with black hole firewalls, which is a digression from Wald’s talk.

It is not true to say Wald’s talk is plain and simple, since the topic is advanced: only a second course on general relativity would cover the details, and you need to get through a lot of mathematical physics even in a first course on general relativity. But what I mean is that Wald is such a knowledgeable and clear thinker that he explains everything crisply and understandably, like a classic old-school teacher would. It is not flashy, but damn! It is tremendously satisfying and enjoyable to listen to. I could hit the pause button and read his slides, then rewind and listen to his explanation, and it just goes together sweetly. He neither repeats his slides verbatim, nor deviates from them confusingly. However, I think if I were in the audience I would be begging for a few pauses of silence to read the slides. So the advantage is definitely with the at-home Internet viewer.

Now if you are still reading this post you should be ashamed! Why did you not go and download the talk and watch it?

I loved Wald’s lucid discussion of the Generalised Second Law, which rests on a redefinition of entropy: generalised entropy is the sum of ordinary thermodynamic entropy plus black hole entropy, the latter being proportional to black hole horizon area.
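In standard Bekenstein-Hawking form (my transcription of the textbook formula, not a quote from Wald’s slides), the generalised entropy is

```latex
S_{\mathrm{gen}} \;=\; S_{\mathrm{outside}} \;+\; \frac{k_B\, c^3}{4 G \hbar}\, A ,
```

where $S_{\mathrm{outside}}$ is the ordinary thermodynamic entropy of matter outside horizons and $A$ is the total black hole horizon area; the Generalised Second Law then asserts $\Delta S_{\mathrm{gen}} \geq 0$.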

Then he gives a few clear arguments that provide strong reasons for regarding the black hole area formula as equivalent to an entropy. One of these is that in general relativity dynamic instability is equivalent to thermodynamic instability, so the dynamic process of black hole area increase is directly connected to black hole entropy. (This is in classical general relativity.)

But then he puts the case that the origin of black hole entropy is not perfectly clear, because black hole entropy does not arise out of the usual ergodicity in statistical mechanics systems, whereby a system in an initial special state relaxes via statistical processes towards thermal equilibrium. Black holes are non-ergodic. They are fairly simple beasts that evolve deterministically. “The entropy for a black hole arises because it has a future horizon but no past horizon,” is how Wald explains it. In other words, black holes do not really “equilibrate” like classical statistical mechanics gases. Or at least, they do not equilibrate to a thermal temperature ergodically like a gas, they equilibrate dynamically and deterministically.

Wald’s take on this is that, maybe, in a quantum gravity theory, the detailed microscopic features of gravity (foamy spacetime?) will imply some kind of ergodic process underlying the dynamical evolution of black holes, which will then heal the analogy with statistical mechanics gas entropy.

This is a bit mysterious to me. I get the idea, but I do not see why it is a problem. Entropy arises in statistical mechanics, but you do not need statistically ergodic processes to define entropy. So I did not see why Wald is worried about the different equilibration processes viz. black holes versus classical gases. They are just different ways of defining an entropy and a Second Law, and it seems quite natural to me that they therefore might arise from qualitatively different processes.

But hold onto your hats. Wald next throws me a real curve ball.

Smaller than the Planck Scale … What?

Wald’s next concern about a breakdown of the analogy between statistical gas entropy and dynamic black hole entropy is a doozie. He worries about the fact the vacuum fluctuations in a conventional quantum field theory are basically ignored in statistical mechanics, yet they cannot (or should not?) be ignored in general relativity, since, for instance, the ultra-ultra-high energy vacuum fluctuations in the early universe get red-shifted by the expansion of the universe into observable features we can now measure.

Wald is talking here about fluctuations on a scale smaller than the Planck length!

With my limited education, you begin by thinking, “Oh, that’s OK, we all know (one says knowingly, not really knowing) that stuff beyond the Planck scale is not very clearly defined and has this sort of ‘all bets are off’ quality about it. So we do not need to worry about it until there is a theory covering the Planck scale.”

But if I understand it correctly, what Wald is saying is that what we see in the cosmic background radiation, or maybe in some other observations (Wald is not clear on this), corresponds to such red-shifted modes. So we might literally be seeing fluctuations that originated on a scale smaller than the Planck length, if we probe the cosmic background radiation at highly red-shifted wavelengths.
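A back-of-envelope check (my arithmetic, not Wald’s) of why trans-Planckian modes are even on the table: the number of e-folds of expansion needed to stretch a Planck-length mode up to a CMB-scale wavelength is not outlandish by inflationary standards.

```python
import math

planck_length = 1.616e-35  # metres
cmb_wavelength = 1.06e-3   # metres, roughly the peak of the CMB spectrum today

# e-folds of expansion needed to red-shift a Planck-length mode to CMB scale:
efolds = math.log(cmb_wavelength / planck_length)
print(round(efolds, 1))  # roughly 73
```

Inflationary models commonly posit 60-odd e-folds, and the post-inflation expansion adds many more, so modes that began below the Planck length being stretched to observable scales is at least numerically plausible.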

That was a bit of an eye-opener for me. I was previously not aware of any physics that potentially probed beyond the Planck scale. I wonder if anyone else finds this surprising? Maybe if I updated my physics education I would find out that it is not so surprising after all.

In any case, Wald does not discuss this, since his point is about the black hole case where at the black hole horizon a similar shifting of modes occurs with ultra-high energy vacuum fluctuations near the horizon getting red shifted far from the black hole into “real” observable degrees of freedom.

Wald talks about this as a kind of “creation of new degrees of freedom”. And of course this does not occur in statistical gas mechanics where there are a fixed number of degrees of freedom, so again the analogy he wants between black hole thermodynamics and classical statistical mechanics seems to break down.

There is some cool questioning going on here though. The main problem with the vacuum fluctuations, Wald points out, is that one does not know how to count the states in the vacuum. The implicit idea, which Wald does not mention, is that maybe there is a way to count states of the vacuum, which might then heal the thermodynamic analogy Wald is pursuing. My own (highly philosophical, and therefore probably madly wrong) speculation would be that quantum field theory is only an effective theory, and that a more fundamental, background-free theory of physics, with spacetime as the only real field and particle states counted accordingly, might, just might, yield some way of calculating vacuum states.

Certainly, I would imagine that if field theory is not the ultimate theory, then the whole idea of vacuum field fluctuations gets called into suspicion. The whole notion of a zero-point background field vacuum energy becomes pretty dubious altogether if you no longer have a field theory as the fundamental framework for physics. But of course I am just barking into the wind hoping to see a beautiful background-free framework for physics.

Like the previous conundrum of ergodicity and equilibration, I do not see why this degree of freedom issue is a big problem. It is a qualitative difference which breaks the strong analogy, but so what? Why is that a pressing problem? Black holes are black holes, gases are gases, they ought to be qualitatively distinct in their respective thermodynamics. The fact there is the strong analogy revealed by Bekenstein, Hawking, Carter, and others is beautiful and does reveal general universality properties, but I do not see it as an area of physics where a complete unification is either necessary or desired.

What I do think would be awesome, and super-interesting, would be to understand the universality better. This would be to ask further (firstly) why there is a strong analogy, and (secondly) explain why and how it breaks down.

* * *

This post was interrupted by an apartment-moving operation, and I ran out of steam on my stream of consciousness, so I will wrap it up here.

A great cartoon of Aaron giving everyone access to JSTOR. Whoever drew this needs crediting, but I can only make out their last name “Pinn”. I grabbed this from Google image search. So thanks Mr or Ms Pinn.

So that this is not a large departure from my recent trend in blog topics, I wanted to share a few thoughts about similar “easy arguments” in quite different fields.

The “Nothing to Show” Argument Against Publishing

This is an argument I’ve used all my life to avoid publishing. I hate people criticising my work, so I normally tell supervisors or colleagues that I have nothing of interest to publish. This is an extraordinarily self-destructive thing to do in academia; it basically kills one’s career. But there are a few reasons I do not worry.

Firstly, I truly do not like publishing for the sake of academic advancement. Secondly, I have a kind of inner repulsion against publishing anything I think is stupid or trivial or boring. Thirdly, I am quite lazy, and if I am going to fight to get something published it should be worth the fight, or should be such good quality work that it will not be difficult to publish somewhere. Fourth, I dislike being criticised so much I will sometimes avoid publishing just to avoid having to deal with reviewer critiques. That’s a pretty immature and childish sensitivity, and death for an academic career, but with a resigned sigh I have to admit that’s who I am, at least for now, a fairly childish immature old dude.

There might be a few other reasons. A fifth I can think of is that I wholeheartedly agree with Aaron Swartz’s Guerilla Open Access Manifesto, which proclaims the credo of free and open access to publicly funded research for all peoples of all nations. That is not a trivial manifesto. You could argue that the public of the USA funds research that should then be free and open, but only to the public of the USA, and likewise for other countries. But Swartz was saying that the taxpayers of the respective countries have already paid for the research, the researchers have been fully compensated, and scientists do not get any royalties from journal articles anyway; therefore their research results should be free for all people of all nations to use. This matters because of the democratising of knowledge, and perhaps more importantly the unleashing of human potential and creativity. If someone in Nigeria is denied access to journals in the USA, then that person is denied the chance to use that research and contribute to the sum total of human knowledge. We should not restrict anyone’s rights in this way.

OK, that was a bit of a diversion. The point is, I would prefer to publish my work in open-access journals. I forget why that’s related to my lack of publishing … I did have some reason in mind before I went on that rant.

I’ve read a lot of total rubbish in journals, and I swear to never inflict such excrement on other people’s eyes. So anything I publish would be either forced by a supervisor, or will be something I honestly think is worth publishing, something that will help to advance science. It is not out of pure altruism that I hesitate to publish my work, although that is part of it. The impulse against publishing is closer to a sense of aesthetics. Not wanting to release anything in my own name that is un-artful. I’m not an artist, but I have been born or raised with an artistic temperament, much to my detriment I believe. Artless people have a way of getting on much better in life. But there it is, somewhere in my genes and in my nurturing.

So I should resolve to never use the “Nothing to Show” argument. I have to get my research out in the open, let it be criticised, maybe some good will come of it.

The “Nothing to Fear” Argument Against Doing Stupid Stuff

Luckily I am not prone to this argument. If you truly have nothing to fear, then by all means … but often this sort of argument means you personally do not mind suffering whatever is in store, and that use of the argument can be fatal. So if you ever hear your inner or outer voice proclaiming “I have nothing to fear …”, take a breath and pause, and make sure there truly is nothing to fear (but then, why would you be saying this out loud?). There is not much more to write about it. But feel free to add comments.

The “Nothing to Lose” Argument in Favour of Being Bold

This is normally a very good argument and perhaps the best use of the “Nothing to …” genre. If you truly have nothing to lose then you are not confounding this with the “Nothing to Fear” stupidity. So what more needs to be said?

After spending a week debating with myself about various Many Worlds philosophy issues and other quantum cosmology questions, today I saw Joel Primack’s presentation at the Philosophy of Cosmology International Conference, on the topic of Cosmological Structure Formation. And so for a change I was speechless.

Thus I doubt I can write much that illumines Primack’s talk better than if I tell you just to go and watch it.

He and his colleagues have run supercomputer simulations of gravitating dark matter in our universe. On their public website, Bolshoi Cosmological Simulations, they note: “The simulations took 6 million cpu hours to run on the Pleiades supercomputer — recently ranked as seventh fastest of the world’s top 500 supercomputers — at NASA Ames Research Center.”

MD4 Gas density distribution of the most massive galaxy cluster (cluster 001) in a high resolution resimulation, x-y-projection. (Kristin Riebe, from the Bolshoi Cosmological Simulations.)

The filamentous structure formation is awesome to behold. At times the structures look like living cells in the movies Primack has produced, only the time steps in his simulations are probably around a million years each. For example, one simulation is called the Bolshoi-Planck Cosmological Simulation — Merger Tree of a Large Halo. If I am reading the page correctly, the smallest units these simulations resolve are “10^10 Msun halos”. Astronomers often use the symbol M⊙ to represent one solar mass (our Sun’s mass), so a 10^10 M⊙ halo is a dark matter clump of about ten billion solar masses, roughly the mass of a dwarf galaxy; that would be the finest structure resolvable in their movie still images. This is dark matter they are visualizing, so the stars and planets we can see do not appear in these simulations at all (since the star-like matter is only a few percent of the mass).

True to my word, that’s all I will write for now about this piece of beauty. I need to get my speech back.

* * *

Oh, but I do just want to hasten to say the image above I pasted in there is NOTHING compared to the movies of the simulations. You gotta watch the Bolshoi Cosmology movies to see the beauty!

OK, last post I was a bit hasty saying Simon Saunders undermined Max Tegmark. Saunders eventually finds his way to recover a theory of probability from his favoured Many Worlds Interpretation. But I do think he over-analyses the theory of probability. Maybe he is under-analysing it too in some ways.

What the head-scratchers seem to want is a Unified Theory of Probability. Something that captures what we intuitively know a probability to be, but which we cannot mathematically formalise in a way that deals with all of reality. Well, I think this is a bit of a chimera. Sure, I’d like a unified theory too. But sometimes you have to admit reality, even abstract mathematical Platonic reality, does not always present us with a unified framework for everything we can intuit.

What’s more, I think probability theorists have come pretty close to a unified framework for probability. It might seem patchwork, it might merge frequentist ideas with Bayesian ideas, but if you require consistency across domains and apply the patchwork so that the pieces agree on their overlaps, then I suspect (I cannot be sure) that probability theory as experts understand it today is fairly comprehensive. Arguing that frequentism should always work is a bit like arguing that Archimedean calculus should always work. Pointing out deficiencies in Bayesian probability does not mean there is no overarching framework for probability, since where Bayesianism does not work, frequentism, or some other combinatorics, probably will.

Suppose you even have to deal with a space of transfinite cardinality, with ignorance about where in it you are; even then, I think someone in the future will come up with measures on infinite spaces of various cardinalities. They might end up with something a bit trivial (all probabilities become 0 or 1 for transfinite measures, perhaps?), but I think someone will do it. All I’m saying is that it is way too early in the history of mathematics to say we need to throw up our hands and appeal to physics and Many Worlds.

* * *

That was a long intro. I really meant to kick off this post with a few remarks about Max Tegmark’s second lecture at the Oxford conference series on Cosmology and Quantum Foundations. He claims to be a physicist, but puts on a philosopher’s hat when he claims, “I am only my atoms”, meaning he believes consciousness arises or emerges merely from some “super-complex processes” in brains.

I like Max Tegmark, he seems like a genuinely nice guy, and is super smart. But here he is plain stupid. (I’m hyperbolising naturally, but I still think it’s dopey what he believes.)

It is one thing to say your totality is your atoms, but quite another to take consciousness seriously as a phenomenon and claim it is just physics. Especially, I think, if your interpretation of quantum reality is the MWI. Why is that? Because MWI has no subjectivity. But if you are honest, or if you have thought seriously about consciousness at all, and about what the human mind is capable of, then without being arrogant or anthropocentric you have to admit that whatever consciousness is (and let me say I do not know what it is), it is an intrinsically subjective phenomenon.

You can find philosophers who deny this, but most of them are just denying the subjectiveness of consciousness in order to support their pet theory of consciousness (which is often grounded in physics). So those folks have very little credibility. I am not saying consciousness cannot be explained by physics. All I am saying is that if consciousness is explained by physics then our notion of physics needs to expand to include subjective phenomena. No known theories of physics have such ingredients.

It is not like you need a Secret Sauce to explain consciousness. But whatever it is that explains consciousness, it will have subjective sauce in it.

OK, I know I can come up with a MWI rebuff. In a MWI ontology all consistent realities exist due to Everettian branching. So I get behaviour that is arbitrarily complex in some universes. In those universes am I not bound to feel conscious? In other branches of the Everett multiverse I (not me actually, but my doppelgänger really, one who branched from a former “me”) do too many dumb things to be considered consciously sentient in the end, even though up to a point they seemed pretty intelligent.

The problem with this sort of “anything goes, so in some universe consciousness will arise” argument is that it is naïve or ignorant. It commits the category error of assuming behaviour equates to inner subjective states. Well, that’s wrong. Maybe in some universes behaviour maps perfectly onto subjective states, and so there is no way to prove the independent reality of subjective phenomena. But even that is no argument against the irreducibility of consciousness. Because any conscious agent who knows of (at least) their own subjective reality will know their universe’s branch is either not fully explained by physics, or physics must admit some sort of subjective phenomenon into its ontology.

Future philosophers might describe it as merely a matter of taste, one of definitions. But for me, I like to keep my physics objective. Ergo, for me, consciousness (at least the sort I know I have, I cannot speak for you or Max Tegmark) is subjective, at least in some aspects. It sure manifests in objective physics thanks to my brain and senses, but there is something irreducibly subjective about my sort of consciousness. And that is something objectively real physics cannot fully explain.

What irks me most though, are folks like Tegmark who claim folks like me are arrogant in thinking we have some kind of secret sauce (by this presumably he means a “soul” or “spirit” that guides conscious thought). I think quite the converse. It is arrogant to think you can get consciousness explained by conventional physics and objective processes in brains. Height of physicalist arrogance really.

For sure, there are people who take the view human beings are special in some way, and a lot of such sentiments arise from religiosity.

But people like me come to the view that consciousness is not special, but it is irreducibly subjective. We come to this believing in science. But we also come without prejudices. So, in my humble view, if consciousness involves only physics you can say it must be some kind of special physics. That’s not human arrogance. Rather, it is an honest assessment of our personal knowledge about consciousness and more importantly about what consciousness allows us to do.

To be even more stark: when folks like Tegmark wave their hands and claim consciousness is probably just some “super complex brain process”, then I think it is fair to say that they are the ones using implicit secret sauce. Their secret sauce is of the garden variety, atoms and molecules, of course. You can say, “well, we are ignorant and so we cannot know how consciousness can be explained using just physics”. And that’s true. But (a) it does not avoid the problem of subjectivity, and (b) you can be just as ignorant about whether physics is all there is to reality. Over the years I have developed a sense that it is far more arrogant to think physical reality is the only reality. I’ve tried to figure out how sentient subjective consciousness, and mathematical insight, and ideal Platonic forms in my mind can be explained by pure physics. I am still ignorant. But I do strongly postulate that there has to be some element of subjective reality involved in at least my form of consciousness. I say that in all sincerity and humility. And I claim it is a lot more humble than the position of philosophers who echo Tegmark’s view on human arrogance.

Thing is, you can argue no one understands consciousness, so no one can be certain what it is, but we can be fairly certain about what it isn’t. What it is not is a purely objectively specifiable process.

A philosophical materialist can then argue that consciousness is an illusion, a story the brain replays to itself. I’ve heard such ideas a lot, and they seem to be very popular at present even though Daniel Dennett and others wrote about them more than 20 years ago. And the roots of the meme “consciousness is an illusion” are probably centuries older than that, as you can confirm if you scour the literature.

The problem is you can then clearly discern a difference in definitions. The consciousness-is-an-illusion folks use quite a different definition of consciousness compared to more ontologically open-minded philosophers.

* * *

On to other topics …

* * *

Is Decoherence Faster than Light? (… yep, probably)

There is a great sequence in Max Tegmark’s talk where he explains why decoherence of superpositions and entanglement is just about, “the fastest process in nature!” He presents an illustration with a sugar cube dissolving in a cup of coffee. The characteristic times for relevant physical processes go as follows,

Fluctuations — changes in correlations between clusters of molecules.

Dissipation — time for about half the energy added by the sugar to be turned into heat. Scales roughly with the number N of molecules in the sugar, so it takes on the order of N collisions on average.

Dynamics — changes in energy.

Information — changes in entropy.

Decoherence — takes only one collision. So about 10^25 times faster than dissipation.

(I’m just repeating this with no independent checks, but this seems about right.)

This also gives a nice characterisation of classical versus quantum regimes:

Mostly Classical — when τdeco ≪ τdyn ≤ τdiss.

Mostly Quantum — when τdyn ≪ τdeco, τdiss.

See if you can figure out why this is a good characterisation of the regimes.
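Those inequalities can be restated as a trivially literal bit of code. The timescale numbers below are illustrative stand-ins, not values from the talk; the only inputs from the talk are that decoherence takes roughly one collision while dissipation takes roughly N ≈ 10^25 of them.

```python
# A toy classifier for Tegmark's regimes, stated directly from the
# timescale inequalities above. All numbers are illustrative only.

def regime(tau_deco: float, tau_dyn: float, tau_diss: float) -> str:
    """Classify a system by its decoherence, dynamical, and dissipation times."""
    if tau_deco < tau_dyn <= tau_diss:
        return "mostly classical"  # coherence is gone long before dynamics acts
    if tau_dyn < tau_deco and tau_dyn < tau_diss:
        return "mostly quantum"    # dynamics evolves before decoherence kicks in
    return "intermediate"

tau_collision = 1e-12  # hypothetical single-collision time, in seconds
# Sugar in coffee: decoherence after ~1 collision, dissipation after ~1e25.
print(regime(tau_collision, 1.0, 1e25 * tau_collision))  # coffee cup
print(regime(1.0, 1e-6, 1e25 * tau_collision))           # well-isolated system
```

The point of the sketch is just that the classical/quantum distinction here is a statement about orderings of timescales, not about size.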

Here’s a screenshot of Tegmark’s characterisations:

The explanation is that in a quantum regime you have entanglement and superposition, uncertainty is high, and the dynamics evolves without any change in information, hence also with essentially no dissipation. Classically, you get a disturbance in the quantum and all coherence is lost almost instantaneously. And yes, it can go faster than light, because with decoherence nothing physical is “going”; it is not a process so much as a change in the state of possible knowledge, and that can change instantaneously without any signal transfer, at least according to some interpretations like MWI or Copenhagen.

I should say that in some models decoherence is a physically mediated process, and in such theories it would take a finite time, but it is still fast. Such environmental decoherence is a feature of gravitational collapse theories for example. Also, the ER=EPR mechanism of entanglement would have decoherence mediated by wormhole destruction, which is probably something that can appear to happen instantaneously from the point of view of certain observers. But the actual snapping of a wormhole bridge is not a faster than light process.

I also liked Tegmark’s remark that,

“We realise the reason that big things tend to look classical isn’t because they are big, it’s just because big things tend to be harder to isolate.”

* * *

And in case you got the wrong impression earlier, I really do like Tegmark. In his sugar cube in coffee example his faint Swedish accent gives way for a second to a Feynmanesque “cawffee”. It’s funny. Until you hear it you don’t realise that very few physicists actually have a Feynman accent. It’s cool Tegmark has a little bit of it, and maybe not surprising as he often cites Feynman as one of his heroes (ah, yeah, what physicist wouldn’t? Well, actually I do know a couple who think Feynman was a terrible influence on physics teaching, believe it or not! They mean well, but are misguided of course! ☻).

* * *

The Mind’s Role Play

Next up: Tegmark’s take on explaining the low entropy of our early universe. This is good stuff.

Background: Penrose and Carroll have critiqued Inflationary Big Bang cosmology for not providing an account of why there is an arrow of time, i.e., why the universe started in an extremely low entropy state.

(I have not seen Carroll’s talk, but I think it is on my playlist, so maybe I’ll write about it later.) But I am familiar with Penrose’s ideas. Penrose takes a fairly conservative position. He takes the Second Law of Thermodynamics seriously. He cannot see how even the Weyl Curvature Hypothesis explains the low entropy Big Bang. (I think the WCH is just a description, not an explanation.)

Penrose does have a few ideas about how to explain things with his Conformal Cyclic Cosmology. I find them hugely appealing. But I will not discuss them here. Just go read his book.

What I want to write about here is Tegmark and his Subject-Object-Environment troika. In particular, why does he need to bring the mind and observation into the picture? I think he could give his talk and get across all the essentials without mentioning the mind.

But here is my problem. I just do not quite understand how Tegmark goes from the correct position on entropy, which is that it is a coarse-graining concept, to his observer-measurement dependence. I must be missing something in his chain of reasoning.

So first: entropy is classically a measure of the multiplicity of a system, i.e., how many microstates in an ensemble are compatible with a given macroscopic state. And there is a suitable generalisation to quantum physics given by von Neumann.

If you fine grain enough then most possible states of the universe are unique, and so entropy measured on such scales is extremely low; basically, you only pick up contributions from degenerate states. Classically this entropy never really changes, because classically an observer is irrelevant. Now, substitute for “observer” the more general “any process that results in decoherence”. Then you get a reason why quantum mechanically entropy can decrease. To wit: in a superposition there are many states compatible with prior history. When a measurement is made (for “measurement” read “any process resulting in decoherence”) then entropy will naturally decrease on average (except perhaps for some unusual, highly atypical cases).
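To make the quantum version concrete, here is a minimal numerical sketch of von Neumann entropy, S(ρ) = −Tr(ρ ln ρ). A maximally mixed qubit has entropy ln 2; a projective measurement with a known outcome (standing in, crudely, for a decoherence event) leaves a pure state with entropy zero. The states and projector are toy illustrations, not anything from Tegmark’s talk.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # convention: 0 * ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

# Maximally mixed qubit: the observer knows nothing, entropy = ln 2.
rho_mixed = np.eye(2) / 2

# Projective measurement with outcome |0>: the post-measurement state is pure.
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho_post = P0 @ rho_mixed @ P0
rho_post /= np.trace(rho_post)

print(von_neumann_entropy(rho_mixed))  # ln 2, about 0.693
print(von_neumann_entropy(rho_post))   # 0: measurement reduced the entropy
```

The measurement outcome carries information, which is exactly why the entropy assigned to the system drops.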

Here’s what I am missing. All that I just said previously is local. Whereas for the universe as a whole, globally, what is decoherence? It is not defined. And so what is global entropy then? There is no “observer” (read: “measurement process”) that collapses or decoheres our whole universe. At least none we know of. So it seems nonsense to talk about entropy on a cosmological scale.

To me, perhaps terribly naïvely, there is a meaning for entropy within a universe in localised sub-systems where observations can in principle be made on the system (“counting states”, to put it crudely). But for the universe (or Multiverse if you prefer) taken as a whole, what meaning is there to the concept of entropy? I would submit there is no meaning to entropy globally. The Second Law triumphs, right? I mean, for a closed isolated system you cannot collapse states and get decoherence, at least not from without, so it just evolves unitarily with constant entropy as far as external observers can tell; or, if you coarse grain into ensembles, then the Second Law emerges, on average, even for unitary time evolution.
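The claim that unitary (closed-system) evolution leaves entropy constant is easy to check numerically, since von Neumann entropy depends only on the eigenvalues of ρ, which unitary conjugation preserves. Here is a minimal sketch with a random 4-level mixed state; everything in it is a toy illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# A random 4-level mixed state rho (positive semi-definite, unit trace).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# A random unitary U, obtained by QR-decomposing a random complex matrix.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
rho_evolved = U @ rho @ U.conj().T

# Closed-system evolution: the two entropies agree to machine precision.
print(von_neumann_entropy(rho), von_neumann_entropy(rho_evolved))
```

Any entropy change has to come from outside the unitary dynamics, from coarse graining or from entangling with an environment, which is the point being argued above.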

Perhaps what Tegmark was on about was that if you have external observer disruptions then entropy reduces (you get information about the state). But does this not just increase entropy globally, since the observer’s system is now entangled with the previously closed and isolated system? But who ever bothers to compute this global entropy? My guess is it would obey the Second Law. I have no proof, just my guess.

Of course, with such thoughts in my head it was hard to focus on what Tegmark was really saying, but in the end his lecture seems fairly simple. Inflation introduces decoherence and hence lowers quantum mechanical entropy. So if you do not worry about classical entropy, and just focus on the quantum states, then apparently inflationary cosmology can “explain” the low entropy Big Bang.

Only, if you ask me, this is no explanation. It is just “yet another” push-back. Because inflationary cosmology is incomplete: it does not deal with the pre-inflationary universe. In other words, the pre-inflationary universe has to also have some entropy if you are going to be consistent in taking Tegmark’s side. So however much inflation reduces entropy, you still have the initial pre-inflationary entropy to account for, which now becomes the new “ultimate source” of our arrow of time. Maybe it has helped to push the unexplained entropy a lot higher? But then you get into the realm of, “what is ‘low’ entropy in cosmological terms?” What does it mean to say the unexplained pre-inflationary entropy is high enough not to worry about? I dunno. Maybe Tegmark is right? Maybe pre-inflation entropy (disorder) is so high by some sort of objectively observer-independent measure (is that possible?) that you literally no longer have to fret about the origin of the arrow of time? Maybe inflation just wipes out all disorder and gives us a proverbial blank slate?

But then I do fret about it. Doesn’t Penrose come in at this point and give baby Tegmark a lesson in what inflation can and cannot do to entropy? Good gosh! It’s just about enough confusion to drive one towards the cosmological anthropic principle out of desperation for closure.

So despite Tegmark’s entertaining and informative lecture, I still don’t think anyone other than Penrose has ever given a no-push-back argument for the arrow of time. I guess I’ll have to watch Tegmark’s talk again, or read a paper on it for greater clarity and brevity.

Continuing my ad hoc review of Cosmology and Quantum Foundations, I come to Max Tegmark and Simon Saunders, who were the two main champions of Many Worlds Interpretations present at this conference. But before discussing ideas arising from their talks, I want to mention an addendum to the Hidden Variables and de Broglie-Bohm pilot wave theory that I totally coincidentally came across the night after writing the previous post (“Gaddamit! Where’d You Put My Variables”).

Fluid Dynamics and Oil Droplets Model de Broglie-Bohm Pilot Waves

Oil droplets surfing ripples on a fluid surface exhibit two-slit interference. Actually not! They follow chaotic trajectories that reproduce interference patterns only statistically, and there is no superposition at all for the oil droplet, only for the wave ripples. Remarkably similar, qualitatively, to de Broglie-Bohm pilot wave theory.

You delicately place oil droplets on a vibrating fluid bath (a bath of the same silicone oil, in the actual experiments) and the droplets bounce around, creating waves in the fluid surface. Then, lo and behold! Send an oil droplet through a double slit barrier and it goes through one slit, right? Shocking! But hold on to your skull: after traversing the slit, the oil droplet chaotically meanders around, surfing on the wave ripples spreading out from the double slit, ripples the oil droplet itself generated before it got to the slits.

Do this for many oil droplets and you will see the famous statistical build-up of an interference pattern at a distant detection region, but here with classical oil droplets that can be observed to smithereens without destroying the superposition of the fluid waves, so you get purely classical double slit interference. Just like the de Broglie-Bohm pilot wave theory predicts in the Bohmian mechanics view of quantum mechanics. I say “just like” because clearly this is macroscopic in scale and the mechanism of the pilot waves is totally different from the quantum regime. Nonetheless, it is a clear condensed matter physics model for pilot wave Bohmian quantum mechanics.
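To see how single-point arrivals can build up fringes statistically, here is a toy Monte Carlo sketch. It has nothing to do with the actual fluid dynamics: the fringe spacing, envelope width, and droplet count are all made-up illustrative numbers. Each “droplet” lands at one point drawn from a two-slit intensity pattern, and the fringes only appear in the accumulated histogram.

```python
import numpy as np

# Toy statistical build-up of a two-slit pattern. Each "droplet" lands at a
# single point; fringes appear only in the histogram of many landings.
# All parameters below are made-up illustrative numbers.
rng = np.random.default_rng(0)

x = np.linspace(-10, 10, 2001)          # detector coordinate, arbitrary units
fringes = np.cos(1.5 * x) ** 2          # two-slit interference term
envelope = np.exp(-x**2 / 30)           # single-slit diffraction envelope
p = fringes * envelope
p /= p.sum()                            # normalise into a probability vector

hits = rng.choice(x, size=50_000, p=p)  # one landing point per droplet
counts, edges = np.histogram(hits, bins=50, range=(-10, 10))

# Bright-fringe bins collect far more hits than dark-fringe bins.
print(counts.max(), counts.min())
```

One droplet tells you nothing; the ensemble reproduces the wave’s intensity pattern, which is exactly the sense in which the droplet experiments (and Bohmian trajectories) are “statistically” wave-like.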

(There is a trend in recent decades in condensed matter physics where phenomena qualitatively similar to quantum mechanics, or black hole phenomenology, or even string theory, can be modelled in solid state or condensed matter systems. It’s a fascinating thing. No one really has an explanation for such quasi-universality in physics. I guess when different systems of underlying equations give similar asymptotic behaviour, then you have a chance of observing such universality in disparate and seemingly unrelated physical systems. One example Susskind mentions in his Theoretical Minimum lectures is the condensed matter systems that model Majorana fermions. It’s just brilliantly fascinating stuff. I was going to write a separate article about this. Maybe later. I’ll just mention that although such condensed matter models have to be taken with a grain of salt, to whatever extent they can recapitulate the physics of quantum systems you have this tantalising possibility of being able to construct low energy desktop experiments that might, might, be able to explore extreme physics such as superstring regimes and black hole phenomenology, only with safe and relatively affordable experiments. I’m no futurist, but as protein biology promises to be the biology of the 21st century, maybe condensed matter physics is poised to take over from particle accelerators as the main physics laboratory of the 21st century? It’d be kinda’ cool wouldn’t it?)

The oil droplet experiments are not a perfect model for Bohmian mechanics since these pilot waves do not carry other quantum degrees of freedom like spin or charge.

Normally I would scoff at this and say, “nice, but so what?” Physics, and science in general, is rife with examples of disparate systems that display similarity or universality. It does not mean the fundamental physics is the same. And in the oil droplet pilot wave experiments we clearly have a hell of a lot of quantum mechanics phenomenology absent.

But I did not scoff at this one.

The awesome thing about this oil droplet interference experiment is that there is a clear mechanism that could recapitulate a lot of the same phenomenology at the Planck scale, and hence it offers an intriguing and tantalising alternative explanation of quantum mechanics as an effective theory that emerges from a more fundamental theory of Planck scale spacetime dynamics (geometrodynamics, to borrow the terminology of Wheeler and Misner). Hell, I will not even mention “quantum gravity”, since that’d take me too far afield, but dropping that phrase in here is entirely appropriate.

The clear Planck scale phenomenology I am speaking of is the model of spacetime as a superfluid. It will support non-dissipative pilot waves, which are therefore nothing less than subatomic gravitational waves of a sort. Given the weakness of gravity you can imagine how fragile are the superpositions of these spacetime or gravitational pilot waves. Not hard to destroy coherent states.

Then, of course, we already have the emerging theory of ER=EPR, which explains entanglement using a type of geometrodynamics. If you start to package together everything that you can get out of geometrodynamics, then you begin to see a jigsaw puzzle filling in, hinting that maybe the whole gamut of quantum physics phenomenology at the Planck scale can be largely and adequately explained using spacetime geometry and topology.

One big gap in geometrodynamics is the phenomenology of particle physics: gauge symmetries, charges, and the rest. It will take a brave and fortified physicist to tackle all these problems. If you read my blog you will realise I am a total fan of such approaches. Even if they are wrong, I think they are huge fun to contemplate and play with, even if only as mathematical diversions. So I encourage any young mathematically talented physicists to dare to go into active research on geometrodynamics.

The Many Worlds Guys

So what about Tegmark and Saunders? Well, by this point I had kind of exhausted myself and forgot what I was going to write about. Saunders mentioned something about frequentist probability having serious issues, and that frequentism could not be a philosophical basis for probability theory. I think that’s a bit unfair. Frequentism works in many practical cases. I don’t think it has to be an over-arching theory of probability. It works when it works.

The same goes for lots of science. Fourier transforms work on periodic signals, and FTs can compress non-periodic signals too, but not perfectly. Newtonian physics works bloody well in many circumstances, but is not an all-encompassing theory of mechanics. Natural selection works to explain variation and speciation in living systems, but it is not the whole story; it cannot happen without some supporting mechanism like DNA replication and protein synthesis. You cannot explain speciation using natural selection alone; it is too general and weak to be a full explanatory theory on its own.

It’s funny too. Saunders seems to undermine a lot of what Tegmark was trying to argue in the previous talk at the conference. Tegmark was explicitly using frequentist counting in his arguments that Copenhagen is no better or worse than Many Worlds from a probabilistic perspective. I admit I do not really know what Saunders was on about. If you can engineer a proper measure then you can do probability. I think maybe Tegmark can justify some sort of measure on the space of MWI worlds. Again, I do not really know much about measure theory for MWI space. Maybe it is an open problem and Tegmark is stretching credibility a bit?

The basic idea Valentini proposes is that we could be living in a deterministic cosmos, but we are somehow trapped in a region of phase space where quantum indeterminism reigns. In our present epoch’s region there are hidden variables, but they cannot be observed, not even indirectly, so they have no observable consequences; thus Bell’s Theorem and Kochen-Specker and the rest of the “no-go” theorems associated with quantum logic hold true. Fine, you say, then really you’re saying there effectively are no Hidden Variables (HV) theories that describe our reality? No, says Valentini. The hidden variables would be observable if the universe were in a different state, the other phase. How might this happen? And what are the consequences? And is this even remotely plausible?

Last question first: Valentini thinks it is testable using the microwave cosmic background radiation. Which I am highly sceptical about. But more on this later.

The idea of non-equilibrium Hidden Variable theory in cosmology: the early universe violates the Born Rule and hidden variables are not hidden. But the violent history of the universe has erased all pilot wave details, and so now we only see non-local hidden variables, which is no different from conventional QM. (Apologies for the low res image; it was a screenshot.)

How Does it Work?

How it might have happened is that the universe as a whole might have (at least) two sorts of regimes, one of which is highly non-equilibrium and extremely low entropy. In this region or phase the hidden variables would be apparent and Bell’s Theorem would be violated. In the other type of phase the universe is in equilibrium, high entropy, and hidden variables cannot be detected, and Bell’s Theorem remains true (for QM). Valentini claims that early during the Big Bang the universe may have been in the non-equilibrium phase, and so some remnants of this HV physics should exist in the primordial CMB radiation. But you cannot just say this and get hidden variables to be unhidden. There has to be some plausible mechanism behind the phase transition, or the “relaxation” process as Valentini describes it.

The idea being that the truly fundamental physics of our universe is not fully observable because the universe has relaxed from non-equilibrium to equilibrium. The statistics in the equilibrium phase get all messed up and HV’s cannot be seen. (You understand that in the hypothetical non-equilibrium phase the HV’s are no longer hidden, they’d be manifest ordinary variables.)

Further Details from de Broglie-Bohm Pilot Wave Theory

Perhaps the most respectable HV theory is the (more or less original) de Broglie-Bohm pilot wave theory. It treats Schrödinger’s wave function as a real potential in a configuration space which somehow guides particles along deterministic trajectories. Sometimes people postulate Schrödinger time evolution plus an additional pilot wave potential. (I’m a bit vague about it, since it’s a long time since I read any pilot wave theory.) But to explain all manner of EPR experiments you have to go to extremes and imagine this putative pilot wave as really an all-pervading information storage device. It has to guide not only trajectories but also orientations of spin and units of electric charge and so forth: basically any quantity that can get entangled between relativistically isolated systems.

This seems like unnecessary ontology to me. Be that as it may, the Valentini proposal is cute and something worth playing around with I think.

So anyway, Valentini shows that if there is indeed an equilibrium ensemble of states for the universe then details of particle trajectories cannot be observed and so the pilot wave is essentially unobservable, and hence a non-local HV theory applies which is compatible with QM and the Bell inequalities.

It’s a neat idea.

My bet would be that more conventional spacetime physics which uses non-trivial topology can do a better job of explaining non-locality than the pilot wave. In particular, I suspect requiring a pilot wave to carry all relevant information about all observables is just too much ontological baggage. Like a lot of speculative physics thought up to try to solve foundational problems, I think the pilot wave is a nice explanatory construct, but it is still a construct, and I think something still more fundamental and elementary can be found to yield the same physics without so many ad hoc assumptions.

To relate this to very different ideas: what the de Broglie-Bohm pilot wave reminds me of is the inflaton field postulated in inflationary Big Bang models. I think the inflaton is a fictional construct, yet its predictive power has been very successful. My understanding is that instead of an inflaton field you can use fairly conventional and uncontroversial physics to explain inflationary cosmology, for example Penrose’s CCC (Conformal Cyclic Cosmology) idea. This is not popular, but it is conservative physics and requires no new assumptions. As far as I can tell, CCC “only” requires a long but finite lifetime for electrons, which should eventually decay by very weak processes. (If I recall correctly, in the Standard Model the electron does not decay.) The Borexino experiment in Italy has measured the lower limit on the electron lifetime as longer than 66,000 yottayears (about 6.6 × 10^28 years), but currently there is no upper limit.

And for the de Broglie-Bohm pilot wave I think the idea can be replaced by spacetime with non-trivial topology, which again is not very trendy or politically correct physics, but it is conservative and conventional and requires no drastic new assumptions.

What Are the Consequences?

I’m not sure what the consequences of cosmic HVs are for current physics. The main consequence seems to be an altered understanding of the early universe, but nothing dramatic for our current and future condition. In other words, I do not think there is much practical use for cosmic HV theory.

Philosophically I think there is some importance, since the truth of cosmic HVs could fill in a lot of gaps in our civilisation’s understanding of quantum mechanics. It might not be practically useful, but it would be intellectually very satisfying.

Is There Any Evidence for These Cosmic HVs?

According to Valentini, supposing at some time in the early Big Bang there was non-equilibrium, hence more or less classical physics, then there should be classical perturbations from this period frozen into the cosmic microwave background. This follows from a well-known result in astrophysics: perturbations on so-called “super-Hubble” length scales tend to be frozen, i.e., they will still exist in the CMB.

Technically, what Valentini et al. predict is a low-power anomaly at large angles in the spectrum of the CMB. That’s all fine and good, but (contrary to what Valentini might hope) it is not evidence of non-equilibrium quantum mechanics with pilot waves. Why not? Simply because a hell of a lot of other things can account for observed low-power anomalies. Still, it’s not all bad: any such evidence would count as Bayesian support for pilot wave theory. Such weak evidence abounds in science, and it would not count as a major breakthrough, unfortunately (because who doesn’t enjoy a good breakthrough?). I’m sure researchers like Valentini, in any science, who lack solid evidence for a theory will admit behind closed doors the desultory status of such evidence, but they do not often advertise it as such.
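To make the “weak Bayesian support” point concrete, here is a toy Bayesian update. Every number is hypothetical: a small prior for a non-mainstream theory, an anomaly the theory predicts strongly, but which rival mechanisms also predict fairly well. The posterior barely moves.

```python
# Toy Bayesian update for a binary hypothesis. All numbers are made up
# for illustration; this is not a real analysis of the CMB anomaly.

def posterior(prior, p_obs_given_theory, p_obs_given_alternatives):
    """Bayes' rule: P(theory | observation)."""
    evidence = (p_obs_given_theory * prior
                + p_obs_given_alternatives * (1 - prior))
    return p_obs_given_theory * prior / evidence

prior = 0.05                 # hypothetical small prior for the theory
p_anomaly_if_true = 0.9      # theory predicts the low-power anomaly
p_anomaly_otherwise = 0.7    # but many rival mechanisms predict it too

post = posterior(prior, p_anomaly_if_true, p_anomaly_otherwise)
print(f"posterior: {post:.3f}")  # prints "posterior: 0.063"
```

The observation is real support, but because the anomaly is nearly as likely under the alternatives, belief only creeps from 5% to about 6%. That is what weak, non-exclusive evidence looks like in numbers.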

It seems to me so many things can be “explained” by statistical features in the CMB data. I think a lot of theorists might be conveniently ignoring the uncertainties in that data. You cannot just take the data raw, look for patterns and correlations, and then claim they support your pet theory. At a minimum you need to propagate the uncertainties in the CMB data and allow for the fact that your theory is not really supported by the CMB when alternative theories are also compatible with it.

I cannot prove it, but I suspect a lot of researchers are using the CMB data in this way. That is, they can get the correlations they need to support their favourite theory, but if they included the uncertainties the same data would support no correlations, and you would get a null, inconclusive result overall. I do not believe in HV theories, but I do sincerely wish Valentini well in his search for hard evidence. Getting good support for non-mainstream theories in physics is damn exciting.
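The worry about ignoring uncertainties can be illustrated with a toy Monte Carlo. This is not a real CMB analysis and every number is invented: correlate a template with noisy “sky” data, then re-draw the data within its error bars many times and look at how wide the spread of the correlation coefficient becomes.

```python
# Toy illustration: an apparent correlation in noisy data can become
# inconclusive once per-point measurement uncertainties are propagated.
import random
random.seed(1)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical theory "template" and noisy observations of it.
template = [i / 20 for i in range(20)]
observed = [0.3 * t + random.gauss(0, 1) for t in template]
print("raw correlation:", round(correlation(template, observed), 2))

# Propagate uncertainties: resample each point within its error bar
# and collect the resulting correlation coefficients.
sigma = 1.0  # assumed per-point measurement uncertainty
rs = []
for _ in range(2000):
    resampled = [o + random.gauss(0, sigma) for o in observed]
    rs.append(correlation(template, resampled))
rs.sort()
lo, hi = rs[50], rs[-50]  # roughly a 95% interval
print(f"correlation with uncertainties: [{lo:.2f}, {hi:.2f}]")
```

If that interval comfortably straddles zero, the single raw correlation number was never meaningful support for the theory, which is the null-result scenario described above.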

* * *

Epilogue — Why HV? Why not MWI? Why not …

At the same conference Max Tegmark polled the audience on their favoured interpretations of QM. The very fact that people can conduct such polls among smart people is evidence of a real science of scientific anthropology. It’s interesting, right?! The results: Undecided = 24, Many Worlds = 15, Bohm (HV) = 5, Copenhagen = 2, Relational = 2, Modified dynamics (GRW) = 0, Consistent Histories = 0, Modal = 0. Undecided was the most popular by a wide margin.

This made me pretty happy. To me, undecidability is the only respectable position one can take at this juncture in the history of physics. I do understand, of course, that many physicists are just voting for their favourites; hardly any would stake their life on their view being correct. Still, it was heart-warming to see so many taking the sane option seriously.

I will sign off for now by noting a similarity between HV and MWI. There is not really all that much they have in common, but both ask us to accept realities well beyond what a conservative, interpretation-free quantum mechanics demands. By interpretation-free I just mean minimalism: whatever modelling you need to actually make quantum mechanical predictions for experiments, that is, the minimal structure any metaphysical interpretation sitting on top of QM must explain or account for. There is, of course, no such interpretation, which is why I can call it interpretation-free. You simply admit the possibility (not even “suppose”) that the universe IS this Hilbert space, and that our reality IS a cloud of vectors in this space that periodically expands and contracts, consistently with observed measurement data and unitary evolution, so that it all hangs together and a consistent story can be told about the evolution of vectors in this state space, which we take as representing our (possibly shared) reality. No need for solipsism.

I will say one nice thing about MWI: it is a clean theory! It requires a hell of a lot more ontology, but in some sense nothing new is added either. The writer who most convinces me I could believe in MWI is David Deutsch; perhaps logically his ideas are the most coherent. But what holds me back, and forces me to remain agnostic for now (and yes, debates over interpretations of QM are a bit quasi-religious, in the bad sense of religious, not the good), is that I still think people simply have not explored enough normal physics to be able to unequivocally rule out a very ordinary explanation for quantum logic in our universe.

I guess there is something about being human that desires an interpretation beyond this minimalism. I am certainly prey to this desire. But I cannot force myself to swallow either HV (Bohm) or MWI; they ask me to accept more ontology than I am prepared to admit into my mind space for now. I do prefer to seek a minimalist-leaning theory, though not a wholly interpretation-free one. Not for the sake of minimalism itself, but because I think there is some beauty in minimalism akin to the mathematical idea of a Proof from the Book.