Saturday, August 17, 2019

How we know that Einstein's General Relativity cannot be quite right

Today I want to explain how we know that the way Einstein thought about gravity cannot be correct.

Einstein’s idea was that gravity is not a force, but is really an effect caused by the curvature of space and time. Matter curves space-time in its vicinity, and this curvature in turn affects how matter moves. This means that, according to Einstein, space and time are responsive. They deform in the presence of matter, and not only matter, but really all types of energies, including pressure and momentum flux and so on.

Einstein called his theory “General Relativity” because it’s a generalization of Special Relativity. Both are based on “observer-independence”, that is the idea that the laws of nature should not depend on the motion of an observer. The difference between General Relativity and Special Relativity is that in Special Relativity space-time is flat, like a sheet of paper, while in General Relativity it can be curved, like the often-named rubber sheet.

General Relativity is an extremely well-confirmed theory. It predicts that light rays bend around massive objects, like the sun, which we have observed. The same effect also gives rise to gravitational lensing, which we have also observed. General Relativity further predicts that the universe should expand, which it does. It predicts that time runs more slowly in gravitational potentials, which is correct. General Relativity predicts black holes, and it predicts just how the black hole shadow looks, which is what we have observed. It also predicts gravitational waves, which we have observed. And the list goes on.
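The gravitational time dilation mentioned here is not only measurable but practically important: GPS satellite clocks must be corrected for it every day. As a back-of-envelope illustration (standard textbook numbers and the weak-field approximation, not figures taken from this post):

```python
# Rough check of "time runs more slowly in gravitational potentials"
# using GPS satellites. Weak-field approximation, SI units throughout.
GM = 3.986e14        # Earth's GM, m^3/s^2
C = 2.998e8          # speed of light, m/s
R_EARTH = 6.371e6    # radius of a ground clock, m
R_GPS = 2.656e7      # GPS orbital radius, m

# Gravitational term: the satellite clock runs fast relative to the ground.
grav = GM / C**2 * (1 / R_EARTH - 1 / R_GPS)
# Special-relativistic term: orbital motion makes it run slow.
v2 = GM / R_GPS                  # circular-orbit speed squared
kinematic = v2 / (2 * C**2)

day = 86400
grav_us = grav * day * 1e6       # microseconds gained per day
kin_us = kinematic * day * 1e6   # microseconds lost per day
net_us = grav_us - kin_us
print(f"+{grav_us:.1f} us/day (gravity), -{kin_us:.1f} us/day (motion), "
      f"net +{net_us:.1f} us/day")
```

The net effect of roughly +38 microseconds per day would, left uncorrected, accumulate into kilometers of positioning error within a single day.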

So, there is no doubt that General Relativity works extremely well. But we already know that it cannot ultimately be the correct theory for space and time. It is an approximation that works in many circumstances, but fails in others.

We know this because General Relativity does not fit together with another extremely well-confirmed theory, namely quantum mechanics. It’s one of these problems that’s easy to explain but extremely difficult to solve.

Here is what goes wrong if you want to combine gravity and quantum mechanics. We know experimentally that particles have some strange quantum properties. They obey the uncertainty principle and they can do things like being in two places at once. Concretely, think about an electron going through a double slit. Quantum mechanics tells us that the particle goes through both slits.
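What "goes through both slits" means operationally is that the amplitudes for the two paths are added before squaring. A minimal numerical sketch (the electron wavelength and slit geometry below are illustrative values, not taken from the post):

```python
import numpy as np

# Far-field (Fraunhofer) two-slit pattern: sum the amplitude for
# "went through slit 1" and "went through slit 2", then square.
wavelength = 5e-12   # illustrative de Broglie wavelength of a fast electron, m
d = 1e-6             # illustrative slit separation, m
theta = np.linspace(-2e-5, 2e-5, 2001)   # observation angles, rad

# Path difference d*sin(theta) gives each slit's relative phase.
phase = 2 * np.pi * d * np.sin(theta) / wavelength
amp = 1 + np.exp(1j * phase)      # slit 1 + slit 2 (idealized point slits)
intensity = np.abs(amp) ** 2      # oscillates between 0 and 4: fringes

print(f"max {intensity.max():.3f}, min {intensity.min():.2e}")
```

Dropping either of the two terms in `amp` removes the fringes entirely at this level of idealization, which is the sense in which the particle "uses" both slits.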

Now, electrons have a mass, and masses generate a gravitational pull by bending space-time. This brings up the question: to which place does the gravitational pull go if the electron travels through both slits at the same time? You would expect the gravitational pull to also go to two places at once. But this cannot be the case in General Relativity, because General Relativity is not a quantum theory.

To solve this problem, we have to understand the quantum properties of gravity. We need what physicists call a theory of quantum gravity. And since Einstein taught us that gravity is really about the curvature of space and time, what we need is a theory for the quantum properties of space and time.

There are two other reasons why we know that General Relativity can’t be quite right. Besides the double-slit problem, there is the issue with singularities in General Relativity. Singularities are places where both the curvature and the energy density of matter become infinitely large; at least that’s what General Relativity predicts. This happens, for example, inside of black holes and at the beginning of the universe.
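For the Schwarzschild black hole, this divergence can be made explicit with the curvature invariant K = 48 G²M²/(c⁴r⁶), the Kretschmann scalar. A quick numerical check, using solar-mass parameters purely for illustration:

```python
G = 6.674e-11    # Newton's constant, SI units
C = 2.998e8      # speed of light, m/s
M = 1.989e30     # one solar mass, kg (illustrative choice)

r_s = 2 * G * M / C**2   # Schwarzschild radius, ~3 km

def kretschmann(r):
    """Curvature invariant K = 48 G^2 M^2 / (c^4 r^6) of the
    Schwarzschild solution: finite at the horizon, divergent at r = 0."""
    return 48 * G**2 * M**2 / (C**4 * r**6)

for frac in (1.0, 1e-3, 1e-6):
    print(f"r = {frac:g} r_s: K = {kretschmann(frac * r_s):.2e} m^-4")
```

Note that K is perfectly finite at the horizon; only the central point r = 0 is singular, which is why the horizon itself is not where General Relativity breaks down.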

In any other theory that we have, singularities are a sign that the theory breaks down and has to be replaced by a more fundamental theory. And we think the same has to be the case in General Relativity, where the more fundamental theory to replace it is quantum gravity.

The third reason we think gravity must be quantized is the trouble with information loss in black holes. If we combine quantum theory with general relativity but without quantizing gravity, then we find that black holes slowly shrink by emitting radiation. This was first derived by Stephen Hawking in the 1970s and so this black hole radiation is also called Hawking radiation.
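The standard formulas make the scales involved concrete. A rough sketch (SI units; a solar-mass hole is chosen only as an example):

```python
import math

# Back-of-envelope Hawking temperature and evaporation time for a
# solar-mass black hole, using the standard textbook formulas.
G = 6.674e-11      # Newton's constant
C = 2.998e8        # speed of light
HBAR = 1.055e-34   # reduced Planck constant
K_B = 1.381e-23    # Boltzmann constant
M_SUN = 1.989e30   # solar mass, kg

# T = hbar c^3 / (8 pi G M k_B): the heavier the hole, the colder it is.
T = HBAR * C**3 / (8 * math.pi * G * M_SUN * K_B)

# t = 5120 pi G^2 M^3 / (hbar c^4): lifetime against Hawking emission.
t_sec = 5120 * math.pi * G**2 * M_SUN**3 / (HBAR * C**4)
t_years = t_sec / 3.156e7   # seconds per year

print(f"T ~ {T:.2e} K, evaporation time ~ {t_years:.1e} years")
```

At roughly 10⁻⁷ K, a solar-mass black hole is far colder than the cosmic microwave background, so today it absorbs more than it radiates; the ~10⁶⁷-year lifetime only applies once the universe has cooled below the hole's temperature.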

Now, it seems that black holes can entirely vanish by emitting this radiation. Problem is, the radiation itself is entirely random and does not carry any information. So when a black hole is entirely gone and all you have left is the radiation, you do not know what formed the black hole. Such a process is fundamentally irreversible and therefore incompatible with quantum theory. It just does not fit together. A lot of physicists think that to solve this problem we need a theory of quantum gravity.

So this is how we know that General Relativity must be replaced by a theory of quantum gravity. This problem has been known since the 1930s. Since then, there have been many attempts to solve the problem. I will tell you about this some other time, so don’t forget to subscribe.

189 comments:

This amounts to a bit of historical trivia regarding GR/QM incompatibility. However, it is trivia that originated from the mind of Paul A. M. Dirac, one of the most remarkable minds in the history of quantum physics, so I think it's worth a mention.

As recorded in the nifty little book Lectures on Quantum Mechanics, Dirac made this comment near the end of his four lectures:

"So for the Born-Infeld electrodynamics, the consistency conditions for the quantum theory on flat surfaces are fulfilled, while they are not fulfilled on curved surfaces. Physically that means that we can set up the basic equations for a quantum theory of the Born-Infeld electrodynamics agreeing with special relativity, but we should have difficulties if we wanted to have this quantum theory agreeing with general relativity."

In their very early (1930s) non-linear electrodynamics theory, Born and Infeld suggested that in analogy to the c limit of velocity, there is also a maximum electric field strength with a self-energy that then defines the electron mass. In the Standard Model that did not arrive until decades later, this mass would be described by the Higgs mechanism acting on two chiralities of the electron.

(For reasons I won't even try to get into here, I don't think those two seemingly very different views are as incompatible as they sound. Many and fascinating are the equivalent dynamic/kinematic faces of QM, as Robert Spekkens has aptly observed.)

For this discussion I simply wish to point out that Dirac thought intensely about the relationship between quantization and space, and came to the intriguing conclusion that for at least some theories of particles and mass, quantum behavior tends to break down in highly curved space. My own poor reading of his work is that phenomena such as entanglement over large distances simply may not be capable of persisting in sharply curved space.

For anyone interested in a nicely detailed mathematical analysis of why curved space might severely stress quantum phenomena, I highly recommend this short book by Dirac. Dirac was one of those remarkable people whose thoughts should never be dismissed too casually. I offer as proof the entire history of Feynman's version of QED and the Nobel Prize he got for it, all of which emerged from a single brief paper on Lagrangian methods that Dirac once published (and promptly forgot) in a very obscure Russian journal.

Sabine, the overall conclusion is of course correct: there needs to be more than just the GR that Einstein proposed. But your examples have different statuses.

The double-slit one is a low-energy phenomenon. It is really no different from the electromagnetic field of an electron during the double slit. Your conclusion "But this cannot be the case in general relativity, because general relativity is not a quantum theory." is not correct, as GR and QM work normally at low energy.

But the other two are high-energy problems. The singularity issue is the ultimate high-energy problem. The information paradox is a bit more debatable, but still boils down to the problems of the ultimate evaporation of the BH (as you do say), which is also a high-energy problem.

So it is important to differentiate between things that can be solved by having a new theory supplanting GR at high energy, and low-energy phenomena that can presumably be handled by the effective field theory. I do not know of any effect at ordinary energies which indicates a problem with the compatibility of GR and QM. But high-energy effects signal the need for a modification. It is important to make this distinction.

John Donoghue

"Concretely, think about an electron going through a double slit. Quantum mechanics tells us that the particle goes through both slits."

As far as I can tell QM does not say such a thing. It predicts that electrons that encounter a barrier with two slits will produce bands on a screen that resemble an interference pattern. Why do you think such a pattern cannot be the result of electrons passing through a single slit but being influenced by the EM fields produced by the electrons and nuclei of the whole barrier?

"Now, it seems that black holes can entirely vanish by emitting this radiation. Problem is, the radiation itself is entirely random and does not carry any information."

On the topic of going through both slits, Reichenbach had a whole book on how "slit A OR slit B" and "slit A AND slit B" are both acceptable variants of Copenhagen.

"Why do you think such a pattern cannot be the result of electrons passing from a single slit but being influenced by the EM fields produced by the electrons and nuclei from the whole barrier?"

Because people have tried to make sense of this. Some super rare textbooks might cover this, but typically this is an oral tradition passed down through the generations of experimenters, unless you specifically ask for it: the interference pattern seems to be extremely independent of whether the barrier is metal or plastic or some other material. It is, however, extremely sensitive to the geometry of how the slit is cut. This points to a fundamental behaviour. After all, even if you control for the metal surface type, the various different work-function energies should affect the interference pattern. Yet, it seems not to matter as much as small changes in the geometries.

A reference would be nice. As far as I know the only classical model used to show the failure of classical physics in regards to the two-slit experiment is Newtonian mechanics of the rigid body (bullets, billiard balls, etc.). This is ridiculous given the fact that we are in the presence of an electromagnetic phenomenon. You cannot describe e-m induction using bullets, so why in the world would one expect them to be a good model for QM? A field theory, on the other hand, might very well succeed because in such a theory there is no wonder that a particle passing through slit A can be influenced by slit B.

"The interference pattern seems to be extremely independent of whether the barrier is metal or plastic or what other material."

We have in all cases the same type of particles (electrons and quarks) interacting in the same way (e-m interaction).

"It is, however, extremely sensitive to the geometry of how the slit is cut. This points to a fundamental behavior."

Sure, the field acting on the electron depends on the distribution of field sources. Nothing unusual here.

If you make the claim that classical physics cannot explain the two slit experiment, the burden of proof is on you to substantiate this claim with the necessary calculations, or an impossibility proof of some sort. If you did not perform them you are in no position to make the claim.

It is very easy to make unjustified claims and wait for others to disprove them. Can you prove that, according to QM, ducks should quack? Show me the detailed calculations or agree QM has been falsified. :)

You are literally ignoring the scientific method and insisting that your case has not been tried when the opposite is true.

In the modern era it is literally trivial to code up a computer simulation of electrons passing through a double slit. You can do numerical, you can do fields, you can do everything you like and dislike.

You will also find out that conductors vs. insulators ought to be extremely different and treated totally differently.

Andrei, the double slit experiment has been done with photons (which are neutral), neutrons (ditto), even buckyballs (likewise). As far as I know, the interference patterns observed are the same, which is not what one would expect from some sort of em interaction between slits and particles.

Andrei wrote to me:>If you make the claim that classical physics cannot explain the two slit experiment, the burden of proof is on you to substantiate this claim with the necessary calculations, or an impossibility proof of some sort. If you did not perform them you are in no position to make the claim.

Hmmmm..... where exactly did you see me make that claim??? Exact quote please.

I merely pointed out that, to the best of my knowledge, no one has ever succeeded in doing what you claim can be done.

I myself am one of those physicists who have tried to do what you suggest, and I too have not succeeded.

You make it sound as if you think it is simple. Fine: show us. I am willing to be convinced.

As I said above, "But, you could be the first. So give it a try."

Go ahead. Try it.

I take it you are not actually a physicist? If that is correct, you might want to first learn some physics.

In all sincerity, good luck. QM bugs me as it bugs Sabine, my old prof Steve Weinberg, and lots of other physicists. So, make us all really happy and show us how QM is really just classical physics.

Andrei wrote to BF:>If it has been tried please provide a reference! I was unable to find such a study.

Andrei, as we keep trying to tell you, most of us did not publish our attempts to give classical explanations of QM because we all failed.

Physics journals tend not to accept papers whose basic point is, "I tried what I thought was a clever idea, but, alas, I was not able to make it work at all."

Maybe you think the journals should publish all of our obvious failures, but all I can say is that then we would all certainly have a much longer publication list -- all theoretical physicists are extremely good at coming up with failed ideas.

PhysicistDave is entirely right, of course. If you think you can do it, then please do it, but just declaring that it's simple doesn't convince anyone. If you look at it in more detail, you will almost certainly find out you didn't understand the problem in the first place.

This discussion is meaningless without committing to a definition of "classical electrodynamics" (CE). Point-particle CE is either ill defined, or else manifestly contradicts energy-momentum conservation (arXiv:0902.4606 [quant-ph]). The only well-defined version of CE (in the sense that it is mathematically well defined, particles have finite energies, and it respects Maxwell's equations and local energy-momentum conservation) is, to the best of my knowledge, ECD. This mathematically nontrivial theory has various unexpected properties rendering it a plausible candidate for an ontology underlying QM's statistical description. In particular, it is impossible to do statistics of ECD solutions directly; ECD does not come equipped with a natural measure on the space of its solutions. QM, then, is a complementary, fundamental statistical theory of the ECD energy-momentum tensor, on an equal footing with ECD itself. The "burden of proof" only applies to the compatibility of the two theories (arXiv:1804.00509 [quant-ph]).

Extending this rationale to curved spacetime, "quantum gravity" is just a statistical description of the (generally covariant) ECD energy-momentum tensor.

"Hmmmm..... where exactly did you see me make that claim??? Exact quote please."

Please notice the "if" at the beginning of my quote. If you do not deny my claims, then sure, it does not apply.

"I myself am one of those physicists who have tried to do what you suggest, and I too have not succeeded."

I would be interested to take a look at your study.

"You make it sound as if you think it is simple. Fine: show us. I am willing to be convinced."

I am not saying it is simple. I think it is extremely complex. Any simulation involving more than a few particles is difficult. But my point is that if such simulations cannot be done one cannot conclude that the theory is wrong.

"I take it you are not actually a physicist? If that is correct, you might want to first learn some physics."

I am not a physicist, indeed, but I have formal studies in physics, including QM as part of my chemistry degree. I know the arguments against classical physics and I also know that at least some of them have been shown to be wrong. Can you please point out the errors that I made here and what you think I should learn? After all, I did not make the claim that classical EM can explain the two-slit experiment; I only asked Sabine her reasons for dismissing this hypothesis. I have not seen any clear answer from her, just an attempt to shift the burden of proof.

I would expect, therefore, that true physicists like Weinberg, yourself, and Sabine would present a proof that classical physics cannot explain the two-slit experiment. All I could find is a reiteration of the stupid example involving bullets. I would ask those physicists to first explain the workings of a planetary system in terms of bullets. Does the failure of such an approach prove that classical physics cannot describe planetary systems? If not, why should I accept this argument in the case of electromagnetism?

"Andrei, as we keep trying to tell you, most of us did not publish our attempts to give classical explanations of QM because we all failed."

OK, but what does this prove? If you can generalize your results to the whole class of classical field theories, such a result would certainly deserve to be published. But if only the simplified model you tried did not work, no conclusions are obvious. You probably need to change the model. And why should you not publish such a work? We could still learn something from it, and other physicists may find a way to improve the model.

It is also worth mentioning that another great physicist, 't Hooft claims that QM can actually be derived from a deterministic, discrete, classical model. What do you think about his claims?

When presented with an interesting experiment, like the two-slit, one should first try to make sense of it using known physics, will you agree with that?

An electron passing through a large system of electrons and quarks is a problem of electromagnetism, not a Newtonian rigid body problem, do you agree? Yet, every discussion I could find was centered around bullets. Feynman was also very fond of them. I just cannot understand why. There may be some proof somewhere that the best way to describe the motion of charged particles in the field of other charged particles is using bullets, but I failed to find such a proof.

I have asked you a simple question. Please disclose the reasons for dismissing the possibility that classical em or some improved classical field theory of the same sort could provide an explanation for this experiment. If you have no such reasons then I would say that the most obvious research direction is this one. If you have them please provide a link or a reference so that I can understand where my mistake originates.

Now that you have disclosed your identity as a chemist, I think you'll appreciate chapter 3 of arXiv:1201.5281. And while I agree with your point regarding neutrons and fullerenes, this argument doesn't apply to `photons'. To get photons from (well-defined) classical electrodynamics is less obvious. In essence, `photons' are said to be detected whenever advanced waves adjunct to a particle converge on it and jolt it. One of the nice features of ECD is that one cannot impose an arrow of time by excluding advanced solutions; the balance between advanced and retarded solutions is determined by the self-consistency of a solution, which therefore predicts `photons'.

As far as I can tell, E-M waves, like radio waves can be discussed in classical EM. Sure, if we restrict to atomic emission this adds the problem of correctly representing atoms.

I have tried to understand a little bit about your theory, which is not very easy given my rudimentary math skills. One issue I have is that ECD is not deterministic, in the sense that the present state cannot be used to compute the future state, and it is retrocausal. This seems to reduce its predictive power and looks a little ad hoc. Are these features inevitable?

In regards to the two-slit experiment I am not sure that classical EM, as it is, is useless. After all the Standard Model is also mathematically ill-defined. I wonder what one could get even from a crude simulation (the electron represented as a charged sphere, the material in the barrier as a large group of dipoles). If this fails one could increase the realism until the level where the structure of the electron is required.

I prefer the name IVP (Initial Value Problem) for the paradigm you (and others) seem to hold sacred for no real reason. That so many of our successful theories are IVP is due to the fact that the local energy-momentum conservation part of the basic tenets (the other part being Maxwell's equations) renders the coarse-grained description of almost any theory effectively IVP. For example, what else can a hockey puck do but move with a constant velocity if it is to conserve momentum? Add Maxwell's equations and you'll get the Lorentz force equation.

Perhaps one can also build an IVP model for an electron, respecting the basic tenets, but then this electron would be subject to Bell's argument against it being the ontology underlying QM statistics. Furthermore, whence would charge quantization come? (ECD ontology is trivially quantized.) Most importantly in my opinion, that representation would not be scale covariant, as short-range forces would need to counter the internal Coulomb repulsion (ECD is scale covariant).

So, you could model matter as you wish, but don't expect nature to cooperate :)

Andrei wrote to me:>I would be interested to take a look at your study.

Andrei, I have thought about it for nearly fifty years, I have scribbled down stuff, nothing worked out. There is no "study" for you to look at. Everything I have tried failed, and the same is true for all other physicists I know who have tried to do what you want us to do.

Andrei also wrote:>Can you please point out the errors that I made here and what you think I should learn? After all, I did not make the claim that classical EM can explain the two-slit experiment; I only asked Sabine her reasons for dismissing this hypothesis. I have not seen any clear answer from her, just an attempt to shift the burden of proof.

Ahhh.... For some reason you seem unwilling to accept that the only reason is that a lot of very bright people have tried very hard to do what you suggest and have failed. So, we have a reasonable expectation that the next guy who tries will also fail.

But maybe not. Our expectation can certainly be proven wrong.

In all honesty, sometimes when I am falling asleep, I think as you do and figure there just may be some way to make it work. And, sometimes the next morning I wake up full of energy and try.

And every time I fail.

Andrei also wrote:>I would expect, therefore, that true physicists like Weinberg, yourself, and Sabine would present a proof that classical physics cannot explain the two-slit experiment.

Well, you expect wrongly. Again, we have no such proof. I doubt anyone ever will.

For some reason, you just will not believe us.

What we do have is failure after failure.

Andrei also wrote:>And why should you not publish such a work? We could still learn something from it, and other physicists may find a way to improve the model.

Again, you are not taking us at our word. I do not have a model that was pretty good but sort of failed. I've got nada, nothing, zilch. Nothing came even close to working. You want to see some pages with random incomprehensible scribblings of mine and assume there is some hidden work of genius therein, even though I tell you there is not?

Andrei also asked:>It is also worth mentioning that another great physicist, 't Hooft claims that QM can actually be derived from a deterministic, discrete, classical model. What do you think about his claims?

I've looked at his work on this, and I think he is wrong. But, of course, 't Hooft is smarter than me, so maybe I am the one who is wrong.

Did you try to perform computer simulations of this experiment, treating the incoming electron as a small charge and describing the material of the barrier as an array of dipoles or some other classical approximation?

I do not think that such a problem can be evaluated analytically. Even if you start with just 100 dipoles and fix their centers (nuclei are not expected to migrate inside a solid) while allowing them to rotate freely (so that they can arrange themselves in an energetically favorable configuration and respond to the approaching charge), the problem would require some significant computer power.

Do you think the above example is overly stupid? Was it tried?

To be honest I thought about trying such an idea but I was unable to find a software that would allow me to perform such a simulation.
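For what it's worth, a crude toy version of the kind of simulation discussed here fits in a few lines of Python. Everything below is a hypothetical illustration, not a serious model of a barrier: a 1 keV electron, a 1 μm row of randomly oriented ~1 debye point dipoles standing in for one slit wall, and a generous upper bound on each dipole's field strength.

```python
import numpy as np

# All parameter values below are rough, hypothetical choices for illustration.
K = 8.988e9        # Coulomb constant, N m^2 / C^2
Q_E = 1.602e-19    # elementary charge, C
M_E = 9.109e-31    # electron mass, kg
P_DIP = 3.34e-30   # ~1 debye molecular dipole moment, C m

v0 = np.sqrt(2 * 1.0e3 * Q_E / M_E)   # speed of a 1 keV electron

# One slit wall modeled as a 1 um row of point dipoles 0.3 nm apart,
# randomly oriented toward/away from the electron (fixed seed for
# reproducibility); the electron flies past at b = 100 nm.
rng = np.random.default_rng(0)
xs = np.arange(-0.5e-6, 0.5e-6, 0.3e-9)
signs = rng.choice([-1.0, 1.0], size=xs.size)
b = 100e-9

# Crude Euler integration of the transverse kick, taking |E| ~ 2 K p / r^3
# as a generous upper bound on each dipole's field.
dt = 1e-16
x, vy = -1e-6, 0.0
while x < 1e-6:
    r2 = (x - xs) ** 2 + b ** 2
    Fy = np.sum(signs * 2 * K * P_DIP * Q_E / r2 ** 1.5)
    vy += (Fy / M_E) * dt
    x += v0 * dt

deflection_angle = abs(vy) / v0
diffraction_angle = 6.626e-34 / (M_E * v0) / 1e-6   # de Broglie lambda / slit spacing
print(f"classical kick: {deflection_angle:.1e} rad vs "
      f"diffraction angle: {diffraction_angle:.1e} rad")
```

In this toy setup the classical electromagnetic kick comes out far smaller than the observed diffraction angle, though a model this crude obviously cannot settle the argument either way.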

It's not that I don't believe that you tried, but I have no clue about what you tried, so I cannot abandon an idea just because it is possible that you tried it and it failed.

"Perhaps one can also build an IVP model for an electron, respecting the basic tenets, but then this electron would be subject to Bell's argument against it being the ontology underlying QM statistics."

In my opinion Bell's theorem is ineffective against field theories like classical em, so this problem should not bother you. Let me explain:

The independence assumption in Bell's theorem requires that the hidden variable does not depend on the measurement settings; in other words, the details of the emission of the particle pair should be independent of the details of their detection. I see no reason to hold such an assumption in classical EM, and in fact in any field theory, because such theories do not allow one to split a system (source + detectors) into independent subsystems. The subsystems always interact via electric and magnetic fields, so the state of one subsystem will always be represented in the description of the other subsystem.

Andrei wrote to me:>Did you try to perform computer simulations of this experiment treating the incoming electron as a small charge and describing the material of the barrier as an array of dipoles or some other classical approximation?

Can't be done: too computationally intensive.

But we do know how to do that to a very good approximation: Maxwell et al. showed us how -- mu and epsilon and H and D and all that.

And, if you do that, you will not get QM. How do I know? Well... what we call physical intuition -- it's just plain obvious. I'd risk my life on it.

But, no doubt you don't trust my physical intuition. Good. You shouldn't. I'm wrong sometimes. Even Einstein was wrong a few times.

So do the simulation yourself. Why on earth would I waste my time on a simulation when I am quite sure I already know the result?

But you disagree. Good. Really good. This is how science progresses: do the work yourself.

Andrei also asked:>Do you think the above example is overly stupid?

Yes, I truly do think it is incredibly stupid. But prove me wrong: do it yourself.

Andrei also wrote:>To be honest I thought about trying such an idea but I was unable to find a software that would allow me to perform such a simulation.

Perhaps because it is computationally infeasible and/or just not worth doing.

But then create the requisite software yourself.

With all due respect, the sign of true crackpottery in science is some guy who has some wonderful brilliant idea, but instead of actually doing it himself, he spends all his time trying to get other guys to work out all the details.

With all due respect, I think your idea is almost certainly nonsense; you don't. So it is obvious who has the incentive to actually pursue your idea.

"With all due respect, the sign of true crackpottery in science is some guy who has some wonderful brilliant idea, but instead of actually doing it himself, he spends all his time trying to get other guys to work out all the details."

This is ridiculous! I did not claim to have any brilliant idea. I just asked you if you know what our only theory that can describe the motion of a charged particle in a field produced by other charged particles (classical em) predicts for the case of the two-slit experiment.

It's clear that you do not know.

"But, no doubt you don't trust my physical intuition."

There is no way you could intuitively evaluate what the solution of an N-body problem, where N is about 10^20, looks like. By your own admission, even for N=100 it cannot be numerically solved.

You have provided no argument at all that such a solution cannot look like an interference pattern. It's just a belief.

So, let's examine the available choices:

1. As the electron passes through the slits it experiences a Lorentz force due to the electric and magnetic fields associated with the charged particles in the barrier. This force depends on the charge distribution, therefore it is expected that the observed pattern will change when the number of slits changes.

2. The electron splits in two.

3. The universe splits in two.

4. There is an instantaneous (non-local) effect on the electron.

It's quite obvious that option 1 is the only one that has some positive evidence. Classical EM is a very well tested theory. We know that particles behave that way, at least to a good approximation, when looking at the tracks left in a bubble chamber for example. We have never seen a charge split, let alone the whole world and locality is a pillar of modern physics.

Every piece of evidence we have points to 1. There is no evidence for 2,3 and 4. OK, your intuition tells you that 1 is false. What does your intuition tell you about 2,3 or 4? Do you have another option?

Regarding: "What happens in a black hole when a particle meets its antipode?"

It depends on what the antipode is. The antipode of a photon is a photon with negative frequency. The photon's antipode will only annihilate with the specific photon that it is entangled with when the source of the photon pair is Hawking radiation. When the photon and its antipode are separated at the event horizon, the characterization of the antipode is identical to all the other photons that are frozen at the event horizon. It does not annihilate any other photon that it encounters.

Sorry for the late reply and maybe everyone else has moved on from the discussion but I thought I'd inject some actual rebuttals to Andrei's comments for future readers so they don't get the impression that physics doesn't have an answer to his questions.

Andrei, on a very basic level, the two slit experiment would give a vastly different diffraction pattern if the electron (or whatever "bullet" you choose) passes through the barrier. In fact, I choose the word 'diffraction' very specifically BECAUSE this is how we perform (and understand) the very basic analytical technique of x-ray diffraction... and a similar method of analysis can be performed with electrons as well.

If the electron passes through the material (very unlikely - as I'll explain below), it will diffract from its interactions (in a classical model: collisions) with the bonded atoms to give a pattern that can be modelled into a symmetry and thus the orientation of the atom arrangement may be determined. So, as I said above, the diffraction pattern would be significantly different from the interference pattern obtained from the double slit experiment.

Furthermore, as someone above mentioned, changing the distance between the slits removes the interference pattern entirely, meaning that it is the geometry available that is giving the interference result.
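That geometry dependence is easy to check against the standard Fraunhofer formula for two slits. The following is a hedged sketch with illustrative numbers: the wavelength and slit dimensions are assumptions of the order used in electron double-slit experiments, not values taken from the discussion above.

```python
import math

# Fraunhofer intensity for a double slit: the two-slit interference term
# cos^2(pi*d*sin(theta)/lambda) times the single-slit diffraction envelope.
# All values below are illustrative assumptions, not from a real experiment.
def intensity(theta, wavelength, d, a):
    beta = math.pi * d * math.sin(theta) / wavelength   # slit-separation term
    alpha = math.pi * a * math.sin(theta) / wavelength  # slit-width term
    envelope = 1.0 if alpha == 0 else (math.sin(alpha) / alpha) ** 2
    return math.cos(beta) ** 2 * envelope

lam = 50e-12  # ~50 pm electron de Broglie wavelength (assumed)
a = 60e-9     # slit width (assumed)
d = 270e-9    # slit separation (assumed)

# central maximum, and the first interference minimum at sin(theta) = lam/(2d)
print(intensity(0.0, lam, d, a))
theta_min = math.asin(lam / (2 * d))
print(intensity(theta_min, lam, d, a))
# the angular fringe spacing is ~ lam/d: change d and the pattern rescales
print(lam / d, lam / (d / 2))
```

Changing d rescales the fringe spacing λ/d, which is the sense in which the pattern is fixed by the slit geometry rather than by any force from the barrier.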

Quite frankly, the barrier material (in a classical and mostly in a QM approach) is a solid, opaque wall that the electron cannot pass through. Despite the distance between the constituent atoms, their bonding structure and the sheer *number* of atoms (remember Avogadro!) mean that unless the barrier is very thin, you would not expect many electrons fired at the barrier to pass through, just based on pure statistical knowledge.

Secondly, your repeated belief that the passing electron *must* experience some sort of electrostatic or electromagnetic force is completely lacking any logical thought and I'm surprised this wasn't directly addressed by anyone above for the ridiculousness that it is. (Seriously, it's not helpful for the wider scientific community to let ignorant ideas propagate and go unchallenged on the premise of - "go and understand it yourself".)

From a PURELY classical perspective, Andrei, it is nonsense to expect an electron moving at relativistic speeds (aka near "c") to have any meaningful interaction with the distributed and near-zero charge of a bonded structure. Similarly, it is nonsense to think that electrostatic forces would have any effect on the motion of a particle passing nearby. A simple look at the energy curve of the bonding potential completely dispels this notion:

For one, the electron or "bullet" never comes close enough to interact with any significant potential charge (as a chemist I'm surprised you're even entertaining the notion that a chemically/ionically bonded material would have a strong (or even weak) polar moment). Two, even if the electron came very close to the surface of the wall of one of the slits, the energy in its motion is so high that the interaction has no effect on its direction of movement.


Let's put it this way, you can perform the double slit experiment near electrical equipment, at the Earth's surface or in space and the result won't change.

The fact that you're focussing on the charge of the barrier but completely ignoring all the other *STRONGER* forces acting on the electron has left me dumbfounded.

Let me be clear here - I am a chemist too. However, this is stuff I was taught in secondary school and in the first year of the chemistry course I took years ago. This is basic, uncomplicated stuff. I don't know what sort of speciality you have in chemistry, but it is clear you do not understand these concepts.

Sabine Hossenfelder is right again. General relativity rests on two postulates: 1. Einstein's gravitational field equation; 2. The geodesic motion postulate. The geodesic motion postulate applies to point mass particles, and if the point mass particles are elementary particles, their self-energy must be suppressed "by hand". And here is the problem of quantum gravity, unifying general relativity with quantum mechanics: General relativity belongs to the Poincaré group of possible theories, while quantum mechanics belongs to the unitary group of theories.

Re “information loss in black holes”: But what happens to all the living information a person once possessed when that person dies? While the physical molecules, atoms and particles are still all there, or we can understand what happened to them, the “living information” seemingly only ever existed as relationships. While these relationships genuinely existed, they are seemingly only representable as equations and/or algorithms. Seemingly, it's these relationships that break down when a person dies. Can the “information” that is lost in black holes be thought of as relationship loss?

Behind the “chemical reactions” are information relationships, including law of nature relationships, which are seemingly representable (e.g. in computer simulations) as algorithms, equations and numbers.

And seemingly, the “living”/ conscious information experienced by a person can also be thought of as being representable by algorithms, equations and numbers.

When a person dies, it's seemingly (what we might represent as) the top-level algorithmic relationships that die: the law of nature equations don't die, and all the physical matter (molecules, atoms and particles) can be accounted for.

So, I was wondering: Do (what we represent as) law of nature relationships and numbers die in a black hole?

I have a problem with a particle being in two places at once. One obvious solution is to describe the particle not as a point, but as having a nonzero size (that helps with tunneling as well) - but the math is much simpler for point-like particles. Even then, the double-slit experiment is described only in terms of statistical averages. To observe an average particle passing through two slits at once is as difficult as observing an average family with 2.6 children.

Doesn't help with the double slit experiment ... there are many variants which involve "looking" at one slit, e.g. placing a detector immediately in front of or behind the slit. As far as I know, the results are unequivocal: if you "look", there is no interference pattern.

Dr. Hossenfelder, I sincerely hope that whatever you do after your current employment ends involves teaching; these last few posts have been great explanations of complicated topics I have always had difficulty understanding.

We can see in part how this problem arises if we assume there is a mass in a superposition of positions. We can write the line element and consider the g_{00} portion as carrying the gravity content. So we have the metric, or line element, for proper time:

ds^2 = Adt^2 - dx^2 - dy^2 - dz^2

where A = 1 - 2Gm/(rc^2). Now, for the two-slit experiment with the slits separated by a distance d, taking the coordinate origin at one slit gives this metric coefficient A, and at the other slit we have the metric coefficient B = 1 - 2Gm/(√(r^2 + d^2 + 2rd cosθ) c^2). These metric coefficients are associated with quantum states for the mass passing through the slits, e^{ik·r}|1) and e^{ik·(r + d)}|2). Well, let's be bold, damn the torpedoes and full steam ahead, and consider the A and B metric elements as coefficients assigned to quantum states |g1) and |g2), for this superposed bimetric gravity. So we can have the entangled state

Ae^{ik·r}|1)|g1) + Be^{ik·(r + d)}|2)|g2)

So there we have it, but for one problem --- actually two. Quantum states transform by unitary group operations, while metric coefficients transform by the Lorentz group. The unitary groups, such as U(1) or SU(2), and the orthogonal groups, SO(8) etc., have Euclidean or elliptic structure and are compact. In fancy language, the moduli space is such that every modulus can be reached by a Cauchy sequence of gauge transformations. What about gravity? This involves the group SO(3,1), which has hyperbolic structure. The corresponding elliptic group would be SO(4), which is perfectly fine for QM, but its hyperbolic variant is not. Also, that nice structure of the moduli is gone; in the language of point-set topology it is not Hausdorff, and is a bit crazy. These hyperbolic groups lead to negative probabilities, which are oddities to say the least, as probabilities are defined to lie between zero and one. Also, the metric coefficients do not transform according to unitary groups, with the exception of pure spatial rotations with SO(3), if that happens to be the gauge group transformation the quantum system obeys.
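For concreteness, the size of these metric coefficients can be evaluated numerically. Here is a hedged sketch; the mass and distances are made-up illustrative values, not from the comment above.

```python
import math

G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def h00(m, r):
    # the deviation 2Gm/(r c^2) of the metric coefficient from 1,
    # so that A = 1 - h00(m, r)
    return 2.0 * G * m / (r * c**2)

def h00_other_slit(m, r, d, theta):
    # the same deviation referred to the other slit, displaced by d,
    # so that B = 1 - h00_other_slit(m, r, d, theta)
    rp = math.sqrt(r**2 + d**2 + 2.0 * r * d * math.cos(theta))
    return 2.0 * G * m / (rp * c**2)

# a mesoscopic mass at micron-scale distances (illustrative assumptions)
m, r, d = 1e-14, 1e-6, 1e-7
h1 = h00(m, r)
h2 = h00_other_slit(m, r, d, math.pi / 2)
print(h1, h2, h1 - h2)  # both ~1e-35: utterly tiny, but formally nonzero
```

The deviations come out around 10^-35, far below anything measurable, which is part of why there is no experimental guidance on how such a superposed metric should behave.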

There is one potential way to cast this that maybe avoids these troubles. To start, we consider complex variables, such as z = x + iy. Now consider the function f(z) = u + iv. How do we differentiate this? We have to consider a limit that converges the same way from every direction around a point, which requires

∂u/∂x - ∂v/∂y + i(∂v/∂x + ∂u/∂y) = 0

where the real and imaginary parts must vanish independently. These are the Cauchy-Riemann conditions for analytic functions in the complex plane. Now do this for the quaternion

q = a + xi + yj + zk

with the function Q = f(q). Impose the analogous condition, that the derivative converges the same way from every direction, and you will find a matrix identical to the stress-energy tensor for the electromagnetic field. It takes about a page or two of calculation, but it is not terribly difficult. Why mention this? If we work with quaternionic quantum mechanics we might overcome some of these problems. The metric can also be written in a quaternionic manner.
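As a concrete handle on the quaternion algebra used above, here is a minimal sketch of the multiplication rule (nothing specific to the stress-energy claim):

```python
# Quaternions q = a + x*i + y*j + z*k as 4-tuples (a, x, y, z), with the
# Hamilton relations i^2 = j^2 = k^2 = i*j*k = -1 built into the product.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))           # (0, 0, 0, 1), i.e. i*j = k
print(qmul(j, i))           # (0, 0, 0, -1), i.e. j*i = -k: non-commutative
print(qmul(qmul(i, j), k))  # (-1, 0, 0, 0), i.e. i*j*k = -1
```

The non-commutativity is what makes quaternionic quantum mechanics structurally different from the usual complex case.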

We now have a CP^4 quaternionic vector space. If we set the scalar part of the quaternion to zero we have CP^3, all the “magnetic field parts” of the EM stress-energy tensor and this leads into twistor geometry. In doing this we have both aspects of quantum operators and spacetime.

There is Adler's quaternionic QM, but for ordinary quantum problems it is not useful. For quantum gravity this move seems promising. I sketch this out for anyone to fill in. This is not terribly difficult, but I have to leave it here, as I am probably pushing the character limit.

I have to write a sort of erratum on this. I was free-form thinking when I did this. At the end I said this was CP^4, but it really is CP^1. The quaternions are a pair of complex variables, and so is C^2. The constraint defines P(C^2) = CP^1. C^2 has 4 real dimensions. Within C^2 there is a self-dual subspace, and there is a Poincaré duality between 3-chains and 1-chains, so CP^1 is dual to CP^3. This is how one defines twistor space.

It occurred to me some time ago that another possible criticism of GR is the existence of closed timelike loops in the Gödel metric (an exact solution of Einstein's field equations), since it goes against all physical notions of causality. While textbooks mention this unusual property of the Gödel metric, they rarely explicitly mention that it is potentially problematic, despite the fact that causality is deeply embedded in physical thinking. I'd be curious whether any other authors have mentioned this, even in passing, and whether there have been attempts to modify GR to exclude such solutions.

General relativity is a framework that permits these solutions. They all violate the Hawking-Penrose conditions: T^{00} ≥ 0, the weak form, or the stronger T_{ab}U^aU^b ≥ 0. These serve as ancillary postulates which remove these odd solutions.

It does have to be pointed out that with the Kerr black hole solution, the inner region has such extreme curvature that closed timelike curves exist. Now, some argue the inner horizon is a mass-inflation singularity that seals this off, since this is a Cauchy horizon where null geodesics "pile up." However, anything approaching it "gently enough" may find the blue-shifted radiation there survivable, so that this region is traversable. So an observer might, in principle, actually be able to time travel in this region.

I read your blog often and this is my first comment (Wooohoo!). As a fellow physicist, I would say that even many physicists do not know the seminal papers explaining the three points you mentioned above. These subjects are also - sadly - often not part of the standard physics curriculum. I know it is quite some task, but I think this post would be even more helpful for many people interested in the subject if you added references to seminal papers (or great books) that deal with the points you mentioned in more detail than is possible in a blog post. It would be a great help for further studies if you could add those.

I'm afraid I find more to disagree with here than usual. First, with regard to General Relativity:

...according to Einstein, space and time are responsive.

Well no, that is only according to the modern interpretation of GR. Einstein's last word on the subject was unequivocal on the matter of the space and time:

There is no such thing as an empty space, i.e. a space without field. Space-time does not claim existence on its own, but only as a structural quality of the field.

- Albert Einstein, Relativity The Special and General Theories, 15th edition, Appendix 5 (Note there are numerous editions available online. Appendix 5 only appears in the 15th edition of 1952, a few years before Einstein's death.)

Substantival space and time are therefore not inherent features of GR; not to mention the fact that there is no empirical evidence for the existence of a substantival space and time.

General Relativity predicts black holes...

No it does not. The Schwarzschild solution to the GR equations predicts black holes because it mistakenly holds the speed of light constant in a gravitational field. The speed of light is not constant in a gravitational field. This is an observationally confirmed fact (Shapiro delay). Einstein was very clear about this:

...according to the general theory of relativity, the law of the constancy of the velocity of light in vacuo, which constitutes one of the two fundamental assumptions in the special theory of relativity and to which we have already frequently referred, cannot claim any unlimited validity. A curvature of rays of light can only take place when the velocity of propagation of light varies with position. Now we might think that as a consequence of this, the special theory of relativity and with it the whole theory of relativity would be laid in the dust. But in reality this is not the case. We can only conclude that the special theory of relativity cannot claim an unlimited domain of validity: its results hold only so long as we are able to disregard the influences of gravitational fields on the phenomena (e.g. of light).

- Ibid. Chapter 22

You cannot disregard the gravitational field when a massive body is undergoing gravitational collapse.

With regard to Quantum mechanics:

The electron passage through two slits at once is at best an arguable interpretation of QM, not an established physical fact. Further, QM itself can be considered incomplete, inasmuch as it provides no discernible information about the underlying physics that produces the outcomes for which the theory provides only stochastic information.

The whole "everything is quantum" craze has no sound logical basis because the model is incomplete. By comparison GR is a much more robust theory, having made numerous, very specific predictions that were subsequently observationally confirmed.

The idea that GR needs to be quantized, on the apparent assumption that QM is somehow fundamental despite its incompleteness, seems a dubious project at best. Not to mention the fact that there is no evidence for the existence of a gravitational effect on the quantum scale.

Most of GR's problems can be attributed to bad analytical framing (the "universal" FLRW metric). Essentially it is QM that desperately needs work to bring it into some degree of congruence with physical reality.

Substantival space and time are therefore not inherent features of GR; not to mention the fact that there is no empirical evidence for the existence of a substantival space and time. (Bud)

This (the lack of evidence for substantival space) led Kant to the introduction of abstract space and time. Before doing so, Kant had to realize first that different parts of (Euclidean) space are indistinguishable; therefore, no "origin" can be identified, and therefore all positions and motions are relative to each other. (This led him to the cosmological principle and further to a theory about the development of stars and galaxies.) So, in what space does the "vacuum state" live? It lives in the abstract space. More or less, Quantum Mechanics is an ontology (or part of it) rather than a physical theory.

Therefore, QM and GR can't contradict each other. The "fault" is on QM's side. However, QM is a (very) useful ontology.

“The electron passage through two slits at once is at best an arguable interpretation of QM, not an established physical fact.” Exactly. From a realistic perspective no one can buy this. If a photon is passing a beam splitter, the beam (and support) of the photon gets spatially divided into two beams. The support of the photon can't be described with certainty. This can, e.g., be seen with the Mach-Zehnder interferometer. In the second BS, the two beams together reproduce the photon in a single beam. However, if we measure the photon in one of the divided beams, it can always be found in one of them. This is a contradiction.

> The speed of light is not constant in a gravitational field. This is an observationally confirmed fact (Shapiro delay).

It’s not. The observed fact is that the time light takes when passing near massive objects is longer than when passing through ‘empty’ space. The consensus interpretation is not that the speed of light is different, but that spacetime dilates (space is stretched, so the light covers more distance).

The speed of light is defined in a local Lorentz frame. The speed of light is always unity, say one light second per second. For curved spacetime there are different intervals that correspond to proper times or paths. So light can take different paths that we measure to be traversed at different times. This does not say the speed of light is somehow different.

General relativity is a local theory in that it patches locally flat spacetime regions together in a way that defines a tangent bundle and, from there, curvature. This in no way means that light has different speeds, or that somehow the expansion of the universe is such that for galaxies with z > 1 there is a violation of special relativity.

You need to show, quantitatively, how the non-constancy of the speed of light avoids black-hole formation.

I assumed anyone who knew Relativity Theory well enough to read this blog post would fill in the obvious quantitative piece of the argument. Let me help you out. Here is the world's most famous equation, rewritten for clarity in this context:

E/m=c^2

Now Phillip, see if you can combine that equation with the fact that c declines in a steepening gravitational well, and come up with a reason to think that GR implies gravitational collapse is self-limiting. Show me you know how to think - about physics. If you want to disagree with that argument, feel free, but you had better have something more substantive than faux-scientist posturing.

The speed of light is defined in a local Lorentz frame. The speed of light is always unity, say one light second per second.

Defining the speed of light tautologically as a constant was one of the great unforced errors of 20th century science, only compounded by the constant's promiscuous use in non-inertial frames. There are, in observed physical reality, no true inertial frames, only local approximations thereof.

In fact scientists were unable to fix the speed of light within a meter because it is a variable in even low-gravity fields like the earth's, so the value was set by fiat and victory declared. For calculational purposes in near-inertial conditions this was not unreasonable, but as physics it is simply wrong.

When applied to the large scale gravitating systems we observe it is egregiously wrong, and leads to physical absurdities like black holes.

For curved spacetime there are different intervals that correspond to proper times or paths. So light can take different paths that we measure to be traversed at different times. This does not say the speed of light is somehow different.

So you make two counter-assertions to Einstein's arguments as cited above, without offering any arguments of your own. Why should I take those assertions (of a substantival spacetime and constant light speed) seriously? Such bare assertions are not logically or scientifically compelling.

I am not interested in looping through the statement of your beliefs (substantival spacetime, the universal constancy of the speed of light). I am specifically challenging those beliefs with science-based arguments. If the only defense you have is to repeat your beliefs over and over, you have lost the debate by default.

I know that in the isolated realm of your mathematical imagination, axioms and postulates are true by definition, but that is not true when you port your model into the realm of physics.

In the realm of physics, any axioms and postulates concerning the nature of physical reality are merely assumptions subject to empirical verification. The assumptions of your model are refuted by the empirical evidence.

And just for the record, I cited Einstein in support of my arguments, not Newton.

It is not about beliefs. The postulates of relativity are similar to mathematical axioms, and so far they work very well. That is all that is important. The speed of light is defined in a local Lorentz frame where, if the region is small enough, special relativity holds. In the case of the Earth, the surface very locally appears flat, but globally the surface is a sphere. In a local Lorentz frame the speed of light is constant. In a curved spacetime, particles, including light, follow geodesics, and light rays are tangent to light cones. Those light cones can be oriented in different ways "out there," so light is redshifted and there is time dilation. An observer in a frame associated with such a light cone observes no slowing down of light.

It is not about beliefs. The postulates of relativity are similar to mathematical axioms and so far they work very well.

So, you're claiming that substantival spacetime and a constant velocity c are valid postulates of relativity theory? Not according to Einstein, except for a constant c in the limiting case of SR. But we're discussing GR conditions and you're not defending those postulates here except to assert them. Both are empirically false.

But the resultant model works you say? Only in the Ptolemaic sense that the model can be massaged, via the injudicious use of free parameters, to agree with observations. And just like Ptolemy's model, the structure of standard model of cosmology bears no resemblance to the observed structure of the physical reality it supposedly represents. Modern cosmology is a mess and your comments here demonstrate why.

The speed of light is defined in a local Lorentz frame. To talk about the speed of light in any general spacetime is confusion. I am not sure what you mean by substantival spacetime. Does spacetime in some way exist for practical purposes? I would say so, even if it is ultimately built up from entanglements.

To compare things with the Ptolemaic system, a prescientific conjecture though very dense in Euclidean geometry, is silly. Sure, there are reasons to think general relativity fails for small dimensions or, equivalently, extreme curvatures. That does not compare to the ancient-medieval cosmology of Ptolemy.

To talk about the speed of light in any general spacetime is confusion.

The confusion is all yours. My original comment was that the variable nature of the speed of light cannot be ignored in the vicinity of a massive object undergoing gravitational collapse. Your only response has been to repeatedly state the "speed of light is defined in a local Lorentz frame", which is a non sequitur. There is no physically meaningful Lorentz frame for an object undergoing gravitational collapse.

Substantival means that spacetime has physical properties that affect and are affected by matter and energy. There is no empirical evidence for the existence of such a substantival spacetime. The necessity of substantival spacetime to your model only means that the model does not accurately represent the nature of physical reality, because physical reality contains no such entity.

The usefulness of the Ptolemaic model to this, and similar discussions, is that its example proves unequivocally that it is entirely possible to construct a mathematical model on the basis of completely erroneous assumptions about the nature of an underlying physical system. And despite that inaccuracy, such a model can be mathematically massaged to make reasonably accurate predictions regarding empirical observations of the system.

This means that there can be no inferential claim that a model's empirically baseless assumptions are physically meaningful, simply because the model "works". The Ptolemaic example proves that such a claim is not justified.

I am increasingly doubtful there is anything I or anyone else can do to dislodge this idea of a variable speed of light from your head. Prior to relativity and general relativity there were ideas along these lines. However, in curved spacetime the light cones in local Lorentz frames have different orientations in spacetime. Go to Google images and look up an Eddington-Finkelstein diagram. The local frames are such that light is tangent to these light cones and has a constant speed c = 299,792 km/sec. However, because these light cones, if parallel transported along a spatial region, are not commensurate with each other, the definition of the speed of light is not applicable on an entire manifold. It is a constant defined in all local Lorentz frames.

I will have to drop this discussion about the speed of light with this.

I suppose I would, at least within the framework of general relativity or classical gravitation, have to disagree with your assessment about spacetime not affecting anything physical. Distant black holes have coalesced, and the resulting gravitational waves, as traveling waves of space, rattled the LIGO and Virgo interferometers.

My problem with Ptolemaic cosmogony is that it is really pre-scientific. The ancient Greeks did the best they could with the intellectual tools they had. However, the model does not conform to Galileo's, and more firmly Newton's, idea of locality of measurements. By this it is meant that the universe everywhere has the same local principles. Ptolemy proposed a centrality to the Earth, with no concept of there being principles “out there” that match principles “down here.” This is my main difficulty. String and M-theory may be wrong, but a lot of analysis of, say, D-branes is just Gauss' law and Stokes' theorem in greater generality. Even if these are wrong, at least they match with the idea of locality of measurements as a principle for universality.

Lawrence, you are quite right. We are not going to come to an agreement here, but not because you are unable to dislodge an erroneous idea from my head. It is because you have an unscientific proclivity to believe that your mathematical imaginings take precedence over empirical reality. They do not - not in science. Or at least they are not supposed to.

Unfortunately yours is a widely-held proclivity in theoretical physics, and the belief that mathematical models supersede physical reality is the root cause of the so-called "crisis in physics". Mathematics is not physics; it is not science; it is not the determinant of physical reality.

Mathematics is a fundamental and absolutely essential modeling tool of science - but it is only a tool. Mathematics does not have a causal relationship with physical reality; it is simply a very useful product of the human imagination, but the human imagination is an unreliable guide to physical reality, unless it is constrained by scientific empiricism.

Those who believe otherwise, such as you apparently do, are mathematicists. Mathematicism is an old, half-baked philosophy that has no basis in science and should have no place there either. Unfortunately, mathematicism has, over the last century, become the default operating paradigm in theoretical physics. It is precisely for that reason we have a crisis in physics.

You have been arguing extensively against the observed fact that the speed of light in a gravitational field is variable. You achieve this dubious feat by invoking a simplification, for the sake of mathematical convenience, that is only valid in near-inertial frames. In the vicinity of a gravitational field with a steep gradient that simplification is completely useless, not to mention invalid. It is like arguing that the earth can't be round because your backyard is approximately flat.

Mathematicism has made an unscientific mess of theoretical physics. Until theoretical physicists get their heads out of their math and begin constructing models based on actual observations and events, rather than imaginary entities and events, modern theoretical physics will remain an inert, scientific dead-end.

"I argue that cosmological data from the epoch of primordial inflation is catalyzing the maturation of quantum gravity from speculation into a hard science. I explain why quantum gravitational effects from primordial inflation are observable. I then review what has been done, both theoretically and observationally, and what the future holds. I also discuss what this tells us about quantum gravity."

"… that is the idea that the laws of nature should not depend on the motion of an observer. "

Is (or was) there anybody who claims otherwise? And why should this lead to the two theories of Einstein?

In "Ist die Trägheit eines Körpers von seinem Energiegehalt abhängig?" made Einstein the tremendous fault to compare the same process and its kinetic energy in two different coordinate systems to find out two different kinetic energies for the same thing.

https://www.zbp.univie.ac.at/dokumente/einstein4.pdf

A system cannot change its energy content just because someone changes their view of it.

Energy is both a property of the measurer and a property of the object. It's all about interactions. And yes, you as the measurer define the zero level for energy. Changes of energy, forces, masses and accelerations are more invariant.

Have you ever considered mass as a continuous structure of change of energy state? Not only as energy storage? At least you should try it out.

Re double slit experiment and gravity (hence GR?): in the textbook setup (and many a high school or undergrad lab), the slits and the detector are in vertical planes, and the interference result generally observed as a horizontal line. So the variation in gravitational potential is exceedingly small.

Rotate the setup, so that the slit and detector planes are still vertical, but that we read the interference pattern vertically, i.e. one slit is above the other, rather than being (horizontally) beside it. Now the gravitational potential is different at the top slit than it is at the bottom one.

Is the interference pattern different? If so, how, and why?

Make the distance between the slits ~10m instead of ~mm. With a suitable choice of particle, will the change in interference pattern be obvious in an undergrad lab version of this? How different is this from the Pound-Rebka experiment?

Has anyone done a version of this "vertical double slit experiment" as I may call it?
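Something close to this has been done: the Colella-Overhauser-Werner experiment (1975) sent neutrons along two paths at different heights in an interferometer and observed the gravitationally induced phase shift. Here is a hedged back-of-envelope estimate of the size of the effect; all numbers below are assumed, illustrative values.

```python
# The phase difference between two horizontal paths whose heights differ by d
# is the action difference over hbar: delta_phi ~ m * g * d * t / hbar,
# with t the flight time. Illustrative values are assumed throughout.
hbar = 1.054571817e-34  # J s
g = 9.81                # m/s^2
m_n = 1.674927e-27      # neutron mass, kg
v = 2200.0              # thermal neutron speed, m/s (assumed)
path = 1.0              # horizontal flight path, m (assumed)
d = 1e-3                # vertical separation of the two paths, m (assumed)

t = path / v
delta_phi = m_n * g * d * t / hbar
print(delta_phi)  # ~70 radians: many fringes even for a millimetre separation
```

So the gravitational potential difference does shift the pattern; it just needs nothing beyond Newtonian gravity plus ordinary quantum mechanics to describe, which is why it does not probe quantum gravity.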

You have given us a list of successful (and exclusive?) results of General Relativity. But many of them can be explained classically. For instance, the well-known cosmologist Roman Sexl (among others) deduced the deflection of light at the sun as a classical refraction process, using the known dependency of c on the gravitational field. The calculation not only yields the same numerical result as GR, but also the same analytical one. And this of course also covers the phenomenon of gravitational lensing.

I could give you further examples of calculations, the results of which conform to GR but are achieved classically. Couldn’t this be a direction to solve the conflict? At least the mentioned calculation of light deflection at a star by refraction has to my knowledge no problems with QM.

Do you have a reference for this ("Roman Sexl (besides others) deduced the deflection of light at the sun as a classical refraction process using the known dependency of c on the gravitational field")?

This seems wrong, or poorly stated: "the known dependency of c on the gravitational field." For example, it seems inconsistent with Pound-Rebka.

Gravitational lensing also involves time: photons on different trajectories arrive at different times (at detectors on telescopes). This has been used to estimate the Hubble constant, directly (i.e. no need for any ladders). How does Sexl treat this?

We have to distinguish between the local speed and the coordinate speed. The local speed is the nominal c in the gravitational field, where the measurement process is itself subject to gravity. The coordinate speed is the speed as seen from outside the field. This latter speed was measured in the Shapiro experiment.

The coordinate speed in the field is given by the Schwarzschild metric through the following equation: c(r) = c(0) * [1 - R/r]^P.

Here the exponent P is 1 or ½ depending on the direction of the beam, and R is the Schwarzschild radius. Differentiating this equation with respect to the components of the velocity vector yields a deflection, which has to be integrated over the full path of the light beam. The result is precisely that of Einstein's GR.
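To illustrate the refraction calculation (a numerical sketch of my own, not taken from Sexl's book): in the weak field the refraction picture amounts to an effective index n(r) ≈ 1 + R/r with R the Schwarzschild radius (a Newtonian-style n ≈ 1 + R/2r would give only half the bending; the factor of two is the famous GR doubling). Integrating the transverse gradient of n along a straight ray grazing the sun reproduces the GR result 4GM/(c²b):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
R_s = 2*G*M_sun/c**2        # Schwarzschild radius of the sun, ~2953 m
b = 6.957e8                 # impact parameter: the solar radius, m

# alpha = integral along the straight ray of the transverse gradient of
# n(r) = 1 + R_s/r, i.e. R_s*b/r^3 with r = sqrt(x^2 + b^2).
# Substitute x = b*tan(t) to map the infinite path onto t in (-pi/2, pi/2).
N = 200_000
t = -np.pi/2 + (np.arange(N) + 0.5)*np.pi/N   # midpoint grid
x = b*np.tan(t)
r = np.hypot(x, b)
dx_dt = b/np.cos(t)**2
alpha = np.sum(R_s*b/r**3 * dx_dt) * (np.pi/N)   # radians

alpha_gr = 2*R_s/b                   # closed-form GR value 4GM/(c^2 b)
arcsec = np.degrees(alpha)*3600      # ~1.75 arcsec at the solar limb
```

The numerical integral lands on the classic 1.75 arcseconds at the solar limb.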

Regarding the textbook by Sexl, I do not have it at hand and will have to find it again. (And as I remember, it is in German.) Originally I made this calculation on my own because I suspected it would give this result. When I presented it at a conference, I was told that it already exists in Sexl.

I do not see a conflict with Pound-Rebka. That experiment shows the slowing of time in a gravitational field, and time is connected to the speed of light.

To what do you refer? The light deflection achieved by refraction is an easy calculation with an exact result.

The criterion for the acceptance of an approach or a theory is not the sophistication of its formalism but the conformance of its results to the observations. And within this constraint the simpler approach is the better one.

If one follows the relativity of Lorentz rather than Einstein and extends it to GR, one comes to similar results in an easier and more physical way. And problems like Dark Energy do not exist.

I don't really see why you need quantum gravity to compute an amplitude in the case of the double slit experiment. Can't you just naively apply Feynman's path integral method by summing over all classical paths, weighting each with a phase, and taking into account the gravitational field at each point of the paths?

You describe gravity as not a force, but as an effect being caused by the curvature of space and time. But in his 1972 book on Gravitation, Weinberg expresses doubt about this. In particular, on page 147: " ... the geometric interpretation of the theory of gravitation has dwindled to a mere analogy ...". Isn't it possible that gravitation IS a force, mediated by the graviton? And that spacetime is just the unstructured nothingness between events?

You need to check whether Weinberg has changed his opinion since 1972.

Weinberg is an interesting character because he is one of the few people who has actually worked in elementary-particle physics and in GR/cosmology/astrophysics. (There are many "astroparticle physicists", but this is usually particle physics in some astrophysical context, e.g. cosmic rays, BBN, inflation, etc., whereas Weinberg has worked in purely classical GR/cosmology/astrophysics.)

As far as I know, Weinberg never changed his opinion on this. But as Sabine often points out, opinions are not what counts. The point is, we simply don't know whether gravity is a) a force - as described in Weinberg's book, b) some effect of physically curved spacetime, or c) something else (entropy, etc.)

Regarding the irreversibility of black hole info loss: is that really any worse than the irreversibility of wavefunction collapse in measurement, which quantum theory has even without adding GR?

Also, is it quite accurate to say that "General Relativity is an extremely well-confirmed theory", when it fails to predict galactic structure? Why is everyone so sure that we're missing most of the universe, rather than assuming a problem with GR?

Yes, it is, because black hole evaporation is irreversible already prior to making a measurement.

The issue of galactic structures is a long-distance problem. Quantum gravity is prima facie a short-distance problem. It is possible the two are related, which is basically Verlinde's idea, but it's hard to make this idea work.

As Sabine has also mentioned several times, GR says nothing about sources. Do you have a problem with non-baryonic dark matter? If you do, then it can't be because GR predicts baryons. The idea that galaxy rotation curves falsify GR is one of the wrongest tropes around. (Perhaps there is some MOND-like modification of GR at low accelerations or whatever, and perhaps rotation curves are indicating this, but this is far from the "GR is wrong, case closed" argument that some people want to derive.)

Your statement has some resonance and is something I brooded about as an undergraduate student when pondering the issue of quantum measurement and then reading Hawking's original paper on black hole radiance. In a measurement the density matrix ρ = |ψ⟩⟨ψ| is reduced so that trρ = sum_i p_i = 1 is maintained. This still conserves probability. The density matrix with a measurement and Hawking radiation evolves so that trρ^2 < 1. The evolution of a pure state is such that trρ^n = 1 for all n, but with a measurement and Hawking radiation there is a failure to describe the evolution of a pure state, and this leads to all sorts of problems.

What does this have to do with information? The von Neumann-Shannon-Khinchin formula for quantum entropy is S = -tr(ρ logρ), and the unitary evolution of the density matrix is ρ' = U^†ρU. We can apply this to the entropy:

S' = -tr(ρ' logρ') = -tr(U^†ρU log(U^†ρU)) = -tr(U^†ρU U^†(logρ)U) = -tr(U^†(ρ logρ)U).

In the last step I used the Taylor series of the logarithm, which gives log(U^†ρU) = U^†(logρ)U. A bit of algebra with UU^† = U^†U = 1, together with the cyclic property of the trace, should convince you that the argument in the trace is just a unitary conjugation of ρ logρ, so that S' = S. Entropy, and by corollary information, is invariant. But if with measurement and Hawking radiation we have trρ^n < 1, then entropy is not invariant. This means information is not conserved.
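The invariance of S under unitary evolution, and its failure under non-unitary evolution, is easy to check numerically. A small sketch of my own (random states, nothing specific to black holes): conjugating a density matrix by a random unitary leaves the von Neumann entropy fixed, while dephasing it (a crude stand-in for measurement-like decoherence) raises it:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 4   # Hilbert-space dimension

def entropy(rho):
    # von Neumann entropy S = -tr(rho log rho), from the eigenvalues
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w*np.log(w)).sum())

# a random mixed state: positive semidefinite with unit trace
A = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# a random unitary from the QR decomposition of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))
rho_unitary = Q.conj().T @ rho @ Q      # rho' = U^dagger rho U

# a non-unitary map: drop the off-diagonals (dephasing in a fixed basis)
rho_dephased = np.diag(np.diag(rho))

S, S_u, S_d = entropy(rho), entropy(rho_unitary), entropy(rho_dephased)
# S_u equals S; S_d comes out strictly larger for a generic state
```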

As Sabine says, this happens prior to any measurement. We might actually measure black hole quantum radiance, and analogues of this are being studied in the lab. So is this in some way parallel to quantum measurement? Decoherence of quantum states into decoherent sets of mixed states happens when there is some interaction between a quantum system with a small number of quantum numbers and another that has a large number. Decoherence occurs when the quantum phase of a system is taken up by entanglements with states in a reservoir of states. In the case of quantum measurement this is the large number of quantum states, say many moles of states, in an apparatus. In the spacetime case the entanglement is with spacetime. The quantum states of interest are on the horizon, and these holographically define states in the one-dimension-larger bulk. Spacetime then, as a massive entanglement of states, may be taking up the quantum phase of the black hole, and the radiation we observe bears little or none of this.

Some of course state that information is not really conserved. Many Petrov types of spacetimes have no time-like Killing vectors and so conservation of energy is not really definable. If we are to have a quantum theory of gravitation we need some sort of “anchor,” and information appears to be the most reasonable. If this and anything else are really not conserved then frankly physics is finished, history and kaput. We might as well just join the growing, and I mean growing, marijuana industry and shrug off all this physics. At least this would be for research into the foundations of physics.

Why is everyone so sure that we're missing most of the universe, rather than assuming a problem with GR?

You are right, there is nothing missing in the cosmos, but the fault does not lie with General Relativity. Rather, it is the deeply flawed analytical models to which GR is applied that result in GR solutions that are discordant with physical reality.

Your casual invocation of the "universe" is a case in point. The concept of an all-inclusive "universe" is inherently classical, as it invokes a universal frame. Applying GR to a non-relativistic conceptual model not surprisingly produced a nonsensical standard model of cosmology.

The situation with galactic dynamics is also a product of strikingly bad, one is tempted to say lazy, analytics. That physicists used the Keplerian method, developed for the solar system, to model galactic rotation curves leaves one with the sense that late 20th century theoretical physics was completely devoid of any sense of how physical systems actually behave. Modern cosmology is a mess, but GR is not the culprit.

The dynamics of galaxies, or the motion of stars in a galaxy, is essentially Newtonian. Relative to the galactic coordinate system stars move at a few 100 km/sec, which is small compared to the speed of light c ≈ 300,000 km/sec. In a special relativistic setting this has a gamma factor of γ ≈ 1.0000005. If you try to find a gravitational factor you find even smaller contributions.
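A quick check on that gamma factor (a trivial sketch, assuming a stellar speed of 300 km/s):

```python
import math

c = 2.998e8   # speed of light, m/s
v = 300e3     # a fast star in the galactic frame, m/s

gamma = 1.0/math.sqrt(1.0 - (v/c)**2)
# gamma - 1 is approximately v^2/(2 c^2), about 5e-7, i.e. gamma ~ 1.0000005
```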

Of course the orbit of Mercury is comparable, but this is one fairly isolated mass in a central gravity field. Galaxies are "messy" and it is impossible to extract any tiny influence from GR physics.

A graduate student I knew did modelling of accretion disks. The gravity field model was Newtonian again. Here the GR aspects are far greater than what happens with galaxies, but they were small enough to be ignored.

MOND-type ideas propose that for small gravitational fields there are these funny deviations. General relativity for small masses and velocities recovers Newton, and for really small accelerations in Newtonian gravity it is proposed there are further deviations.

The paper by Cooperstock is somewhat dense and would require some time to read. However, I have not heard of any great pronouncement that GR somehow fixes this problem. The paper is dated 2005, and in the 14 years since, the controversy has remained.

The curvature of spacetime associated with a galaxy is extended over some 100,000 light years. In a small region or frame the curvature is essentially zero. A galaxy may then deflect light from a distant background, where null geodesics are deviated over the long distances around the galaxy.

Anyway, if Cooperstock is right here this would indicate there is no incompleteness of GR with respect to galactic rotations.

Regarding: “The third reason we think gravity must be quantized is the trouble with information loss in black holes. If we combine quantum theory with general relativity but without quantizing gravity, then we find that black holes slowly shrink by emitting radiation. This was first derived by Stephen Hawking in the 1970s and so this black hole radiation is also called Hawking radiation.“

People have been debating what goes on at the event horizon for 70 years, and that debate provides no end of controversy. But one solid prediction in that debate has been experimentally verified recently, when entangled particles were observed coming from an analog black hole.

One of the points in that debate is the nature of time at the event horizon, and how that nature is dictated by the frame of reference in which the time is measured.

The entangled nature of Hawking radiation connects these two reference frames together, so that the photon energy of the Hawking radiation observed by an outside observer is the same as that observed by the observer at the event horizon.

As time stops at the horizon, energy of the photons falling into the horizon goes to zero as red shift of that radiation goes to infinity. The energy of the entangled photon that is outside the event horizon must also fall to zero. The entangled nature of the Hawking radiation speaks against any evaporation of the black hole caused by that radiation.

If there is no evaporation going on of the black hole, then quantum theory protects the black hole from losing information as seen by an outside observer.

For the 2-slit electron diffraction experiment it's probably worth pointing out that the impact of the electron's charge passing through a slit is roughly a gazillion times (give or take a boo-coo) more powerful than its gravitational impact. For the narrow slits needed in electron diffraction, the degree of transient dielectric polarization of the molecules on the inside surface of the slits should easily be large enough to make the passage of the electron detectable, especially if you choose to make the slits out of materials such as LVNO ceramics that are particularly susceptible. In sharp contrast, there is no conceivable form of instrumentation that will ever make the gravitational impact of that same electron detectable.
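The "gazillion" can actually be pinned down (a quick sketch of my own): for two electrons, the ratio of the Coulomb force to the gravitational force is independent of separation and comes out around 4 × 10^42:

```python
# Ratio of electrostatic to gravitational force between two electrons.
# The separation cancels, since both forces fall off as 1/r^2.
e   = 1.602e-19   # electron charge, C
m_e = 9.109e-31   # electron mass, kg
k   = 8.988e9     # Coulomb constant, N m^2/C^2
G   = 6.674e-11   # Newton's constant, N m^2/kg^2

ratio = k*e**2 / (G*m_e**2)   # ~4e42
```

Because distance cancels, this ratio holds for any slit geometry.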

So let's do it!

You make the raw slits from LVNO, take the measurements, and Lo! the diffraction is still there! So you say ah ha, I've got you now!, and take the next step of wiring up the LVNO to send you a data signal whenever one or the other side reacts enough to show that the electron went through.

And it works!

You check your data, and sure enough, every single time you get a specific side for where the electron went through. It was always just one electron going through one side after all, no weird splits or anything like that, HAH!, classical reality WINS!

Except…

Oops! You check your interference data again, the same data that in your earlier LVNO slit experiments showed beautiful quantum interference and a peak between the two slits… and all of that is completely gone! You now just have two peaks sitting right behind the two slits, as classical as baseballs being whacked between nets in practice. But you are using exactly the same slits and slit materials in both cases! What went wrong?

Very simple, really: You recorded the reaction of the LVNO, instead of allowing the material to rebound elastically and invisibly to its original state after the electron passed. By recording the LVNO response you created real information, real history, and irreversible causality. This is literally the definition of classical physics, and also the deepest and most irrevocable difference between the quantum and classical worlds.

You can in fact fully define quantum mechanics in the following remarkably compact fashion: It is the physics of processes for which no information yet exists anywhere in the universe on exactly how those processes took place. Thus an electron circles a proton in an infinite number of ways all at once, because until you poke it hard with that photon there exists no record anywhere in the universe to say which way it was circling. So the universe defaults to all of them at once, and we call the wave-like sum of all those possibilities an orbital instead of an orbit. The name change reflects that an orbital is a bit of desperately information-starved quantum mechanics, as opposed to the excruciatingly well-documented, information-rich classical orbit of an earth satellite.

Anyone interested in finding out more about this remarkable relationship between universal ignorance and quantum behavior can look at either (or both of) Feynman's delightful little book QED, which is written without math (!!), or his Lectures on Physics, Vol. III, Chapter 3, "Probability amplitudes."

For gravity slit experiments, current quantum mechanics actually does a pretty good job at predicting a result. It says that if you can manage to get a sufficiently massive compact particle into a quantum state (not easy!), then it will pass through only one slit if and only if it shuffles atoms in the slit in some irreversible fashion… but it will remain quantum, with an interference pattern, if instead its gravitational nudge on those atoms stays fully elastic and reversible.

Stretching this a bit, or a lot, one might say that in the absence of information, many universes exist in superposition, but only locally, only in the region where there is this void of information. That puts a slightly different spin, no pun intended (I lie), on many worlds.

Replying to my reply is like talking to myself, a familiar thing... So, stretching this a LOT more, one might suppose a primal or initial state in which every possible universe exists in superposition, until some event (a quantum fluctuation?) triggers differentiation of all these superposed universes into definite realms with definite values of fundamental constants. One might also suppose that once such a process begins it proceeds very, very rapidly, like a big bang, or like inflation. In this picture, when a "measurement" is made, broadly defined as a definite value appearing anywhere in any region, no branching universe appears, since they were all there from the start. What happens instead is that a small region that lacked definite information is now defined, filled in, across all the existing universes, in all the still-shared local regions where definite values for some physical process have not yet been fixed, where the primal superposition still holds. One supposes then that every possible value for that measurement congeals across that vast body of universes, a multiverse that always existed.

Rick, what you just said is a pretty good summary of the situation: Without recorded data to the contrary, all possible universes coexist, with the qualifier that their likelihood varies hugely according to a well-defined set of rules.

Richard Feynman in QED has a quite delightful example of just how extreme this "weird universes" thing gets by showing that even light going in a straight line is not strictly forbidden in other universes! It's just that when you add all of those possible paths in all those possible universes together, it's only the ones that "sync up" (are in phase) that are appreciably likely to happen. But on a small enough scale, where choices are fewer, both the path of light and even the velocity of light come "up for grabs." In other universes the speed of light could be infinite, a concept that on a large scale would wreak havoc with classical time and causality. Yet it turns out that this particular possibility must be taken into account in order to calculate precisely the exact probabilities for how light and matter interact at the smallest scales. On this point, the Marvel Universe with Tony Stark's mostly insanely tangential (EPR? OMG!) comments about things quantum is actually pretty close to what the math requires to make real theories accurately predictive.

It amuses me also that while I sound like I've been going on some kind of rant proposing crazy universes where light can go in circles at infinite speed (another possible but unlikely amplitude in Feynman diagrams), all I'm doing is referencing remarkable writing by the same fellow who got a Nobel Prize for coming up with all of this to create an extraordinarily accurate predictive method for calculating how the universe actually behaves. If you have not read QED by Feynman, I heartily recommend it. In that book in particular, he makes the paradoxes of things quantum accessible to folks with no particular mathematical background, using mostly clock analogies (!).

As for puns, I'd claim that I like to make them about algebras defined locally on the surface of an n-dimensional manifold, but that would be a Lie.

Rick, my reply above was to your first comment. Your second comment is even more interesting and I think deeply insightful.

The event you have so aptly described is a self-perpetuating information explosion, more often called the Big Bang, since information on a massive scale is the same thing as heat.

The critical concept for how such an explosive event could emerge from smooth, everything-is-possible quantum symmetry is the emergence of persistence. For example, if one side of a virtual spin pair gets too tangled up with other complex entities, the pair runs the risk of becoming unable to recombine, and thus of becoming persistent. Such persistence is the simplest form of observation, and it requires nothing more exotic than a bit of statistically irreversible thermodynamic complexity.

Somewhere early on, as you described, one of those almost infinite combinations of possible rules hit on something new: persistence through asymmetry, an inability to null itself out. (Actually, this almost certainly occurred at multiple levels of complexity, slowly building up rules upon rules.) Since nothing is more fragile than a perfectly symmetric house of cards, the arrival of symmetry-breaking information would have spread explosively, creating a wealth of heat and yet another innovation: a well-defined arrow for entropic, classical time.

Finally, if it is true that the universe literally came into existence as a direct result of the formation and subsequent explosive growth of the first bit of information, then debates on whether the world is really "quantum" or "classical" may be missing the point. The universe might better be described as layers upon layers of information and rules for using information, which express themselves experimentally as both classical and quantum physics.

I thought you were thinking in terms of light following a geodesic and expressing that in an odd way. Yet another thing I want to look at: this idea of establishing an arrow of time. Maybe that should be taken more literally. Imagine a suite of vectors appearing in a superspace containing multiple or all possible universes, each vector is a literal time's arrow and all diverge but perhaps not in random ways. Here I'm thinking of gauge symmetry in the standard model. Interesting if one could classify universes entirely in terms of these vectors, and find some link to the gauge symmetries we are familiar with. Enough. If I don't get to work I'll get distracted by something else. No more here on this topic.

A couple of paragraphs above I asserted that "… debates on whether the world is really 'quantum' or 'classical' may be missing the point. The universe might better be described as layers upon layers of information and rules for using information, which express themselves experimentally as both classical and quantum physics."

I would like to suggest a name for this view: the psifist ("see-fist") interpretation of physics. It's from the Greek word ψηφίο, psifio, for "digit", meaning a digit of information in any base (not necessarily binary). In contrast to the classical-first and quantum-first extrema of physics interpretations, the psifist interpretation makes mundane classical information into the deepest foundation of reality.

This is not just a philosophical difference, since a psifist interpretation of physics unavoidably makes specific experimental predictions. One pointed example is that in the psifist interpretation the current massive investments to build entanglement-based quantum computers will all fail. That is because the MWI worlds upon which such computing gains depend (or as the brilliant algorithm designer Jarek Duda might say, the time reversal parts of such algorithms) cannot even exist in psifist unless first elaborated using real, classical, costly, non-quantum information processes that are solidly embedded in classical entropic time. An easy-to-recall name for this assertion that all quantum computing investments will fail is Traci, for Time reversals are computationally inaccessible. I was very skeptical of Traci's opinion at first, but increasingly I suspect she may well be correct.

Another psifist difference is that the entire vacuum density problem goes poof. The virtual pairs that create the problem become pure potentials only, with elaboration into real pairs occurring only to the degree that real energy and information enable them. There has to be a way that this can be made into a specific test! One could argue that the very existence of mostly flat space is a result that supports psifist physics, but it needs to be stated much more sharply than that.

On the GR side, wormholes simply cannot exist in psifist physics for similar reasons. That is, the rules for using natural information become enough like a naturally occurring form of software (an idea I call pavis, for physics as virtual instantiation software) that you can't go into the past for exactly the same reason that you can't program a computer to change its own inputs from two days ago. Psifist and pavis thus predict that there is an error somewhere in current GR math, more specifically that there is some form of overgeneralization that burps when time tips over to fully vertical… which if you think about it is probably not that different from saying there is a hidden divide-by-zero bug in GR math. It may be much like the in-your-face, explicit divide-by-zero (I love that!) formalism invoked by SK coordinates to enable the existence of gravitational singularities. Speaking of that, singularities (ever seen one?) are also forbidden in psifist physics, where the event horizons of black holes instead become densely populated spherical momentum spaces that hold all of the infallen mass and remain connected to the outside universe by Fourier (holographic) transforms. No info is ever lost, but in such dark mirrors information does get really, really badly scrambled.

Enough. At least now folks can easily say "OMG, Terry is a %&$& psifist!" :)

And quantum gravity naturally leads to the MWI as the correct interpretation, since in the wavefunctional formalism you end up with amplitudes assigned to entire universes and no external observers who could collapse the wavefunction.

I wonder how you can call an interpretation "correct" without having successfully conducted some experiment that delivers data to support it. Leaving out this step means that one stops doing science. That does not change if one says that there is no possible experiment that could prove it.

Why do you say quantum gravity leads to MWI? With the similarities in the reduction of a wave function in a measurement and Hawking radiation, read my post above, and some aspects of M-theory with D-branes as condensates or classical-like objects, I think a fair pitch could be made for Bohr and Copenhagen Interpretation.

The reduction of a quantum state is a sort of quantum analogue of an update of a Bayesian prior. There are some problems I think with this idea, for state reductions occur outside of unitarity, and that is a big cornerstone. Also, quantum probabilities are not to my mind quite the subjective probabilities in Bayes' theorem. Bayesian updates are done on paper, and I fail to see how nature does it. Conversely, to use Kant's program, MWI shifts the collapse from the noumena to the phenomena, where local observers are sent along a branch to the exclusion of witnessing all others. Maybe nature is not collapsed, but the observer knows no better either way.

I am not disposed either way, and I think there is no decidable procedure for determining whether any quantum interpretation is correct.

If wavefunction collapse is not merely an effective macroscopic description of the underlying unitary time evolution, then that implies new physics. So, it is up to the people who reject the MWI in favor of collapse models to point to experiments that can be conducted to prove the existence of a real collapse as opposed to merely an effective collapse.

No matter how you look at it from a phenomenological perspective there is still a stochastic quantum jump, or what we call collapse. With Bohr and CI the replacement of a quantum state by a quantum basis state corresponding to a measured eigenvalue is considered to be axiomatic. With MWI there is this idea that on a deep quantum level there is no such violence committed, the world simply appears according to a local measurement by an observer who is in a sense "quantum frame dragged" along a reduced world based on that measured eigenvalue. From an empirical perspective there is no way to discern one from the other. They also both involve a sort of stochastic jump.

If I have the time and temerity to jump into this cauldron of trouble, I may work out a conjecture I have that this ultimately involves quantum states encoding quantum states. This runs into Gödel-type incompleteness. The implication is that there is no decidable way to determine whether QM is ψ-epistemic, say in the sense of Bohr and the Copenhagen Interpretation and Fuchs's QBism, or whether it is ψ-ontic in the sense of MWI and GRW and ... .

All of these have problems. Bohr and CI posit quantum and classical domains as fundamental. However, as pointed out by Heisenberg early on, there is an ambiguous boundary between these two domains. MWI proposes a splitting of worlds, but there is no definitive meaning to a spatial surface, or a set of them according to probabilities. Bohm's idea really only makes sense in the nonrelativistic domain, but that could hold for physics on a holographic screen, so while this has serious weaknesses I will not throw it completely under the bus. There are, at last count I read, over 50 interpretations. Collapses are not the result of unitary evolution and so appear inconsistent with the evolution of quantum states. I think this is some sort of incompleteness in the axioms or postulates of QM. It might be compared to the fifth axiom of Euclid in geometry. I think in the end, or if I am right, this is simply a state of affairs that exists; there is no causal process behind it, and we are best to just accept this and press on. Maybe this is Mermin's "shut up and calculate," and this search for interpretations is a waste of time.

About GR and black holes: the singularity sits inside the Schwarzschild radius, but there was some discussion early on in the history of GR about the physical reality of what's inside the Schwarzschild radius. Spacetime is a (pseudo-)Riemannian manifold, and it could well be that the points with coordinates inside the radius sit outside of the manifold (and therefore have no physical existence). This was argued as early as 1923 by the physicist Marcel Brillouin (an English translation of his paper by Antoci is available on arXiv). So what evidence do we have that what's inside the Schwarzschild radius is really real?

The Schwarzschild metric really is what mathematicians would call a bad chart. There is a coordinate singularity at r = 2m. At first it was thought this was a real singularity, and the physics community was in a bit of a kerfuffle over that. Einstein argued that conservation of phase space volume would prevent this. Of course, if we think of phase space according to g_{ij} for a spatial metric and π^{ij} as the conjugate momentum metric, this objection does not hold. Einstein refused to believe in these objects.

Kruskal found a way around the problem with a coordinate transformation that does not have this coordinate singularity. So at least classically there is no "bump" at the horizon as computed by general relativity. There is the firewall problem, but that should not affect black holes now. The firewall emerges around when the Page time occurs, which is 7/8ths of the duration of a black hole. A solar mass black hole should endure about 10^{67} years and a supermassive BH up to 10^{100} years. So any slight perturbation due to the ambiguity of entanglement monogamy should be insignificant.
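Those lifetimes follow from the standard Hawking evaporation estimate t ≈ 5120·π·G²·M³/(ħ·c⁴); here is a quick sketch checking the orders of magnitude (the 7/8 figure corresponds to the half-mass point, since the remaining lifetime scales as M³):

```python
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.0546e-34
M_SUN = 1.989e30
YEAR = 3.156e7   # seconds

def evaporation_time_years(M):
    # standard Hawking estimate t = 5120*pi*G^2*M^3/(hbar*c^4)
    return 5120*math.pi*G**2*M**3/(hbar*c**4)/YEAR

t_solar = evaporation_time_years(M_SUN)        # ~2e67 years
t_smbh = evaporation_time_years(1e11*M_SUN)    # ~2e100 years

# Page time at 7/8 of the lifetime: half the mass is gone, and the
# remaining lifetime scales as M^3, so only (1/2)^3 = 1/8 remains.
page_fraction = 1 - 0.5**3   # 0.875
```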

So if you have a way of getting to Sgr A*, there is a 4 million solar mass black hole. It is only 27 thousand light years away, a mere pittance of cosmological distance, so what are we waiting for? Oh yeah, the Voyager spacecraft are over a hundred astronomical units out (1 AU = distance from sun to Earth), which is only about 2×10^{-3} light years. We have a problem with rocket technology. So Sgr A* is out, and the closer ones, the closest around 3000 light years away, are too small, and you would be shredded before entering them.

So we are going to have to rely on indirect evidence. Anyway, I would not jump into a black hole. I have hiked the Grand Canyon Kaibab trail a number of times, and there are places where one little jump would send you down hundreds of meters to a fate not so different from entering a black hole. And even if you did enter a black hole, a report back to this world would be a bit like a message from beyond the grave.

What I find so bad about the Schwarzschild metric is not so much the singularity at the Schwarzschild radius, but the fact that in the expression for the line element ds^2 the signs of the coefficients of dr^2 and dt^2 change. This, I believe, is why you can read in popular expositions expressions such as: "time becomes space and space becomes time". This is also what worried physicists such as Marcel Brillouin almost 100 years ago, and strongly suggests (by reductio ad absurdum) that points with radial coordinate r < 2m do not belong to the solution manifold. As you pointed out, there is a way of removing the singularity at r = 2m by a change of coordinates. What is not so well known is that there is another change of coordinates which removes *both* singularities (r = 0 and r = 2m). Did you know about this?
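The sign flip is easy to exhibit numerically. A minimal sketch in geometric units (G = c = 1, with m = 1 chosen purely for illustration):

```python
# Sign flip of the Schwarzschild metric coefficients across r = 2m.
# ds^2 = g_tt dt^2 + g_rr dr^2 + r^2 dOmega^2, geometric units G = c = 1.

def schwarzschild_coefficients(r, m=1.0):
    """Return (g_tt, g_rr) for the Schwarzschild line element."""
    f = 1.0 - 2.0 * m / r
    return -f, 1.0 / f

g_tt_out, g_rr_out = schwarzschild_coefficients(r=3.0)  # outside the horizon
g_tt_in, g_rr_in = schwarzschild_coefficients(r=1.0)    # inside the horizon

print(g_tt_out < 0 < g_rr_out)  # outside: dt direction timelike, dr spacelike
print(g_rr_in < 0 < g_tt_in)    # inside: the two roles are exchanged
```

Both coefficients change sign together at r = 2m, which is exactly the origin of the popular "time becomes space" phrasing quoted above.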

In physics we often do these analytic continuations into the complex plane, or rotations such as Wick rotations. In the old ict approach to relativity the interior is a case where ict → ct and r → ir. Generally we like to have solutions operate without these sorts of cut-offs. However, Polchinski looked at p-branes or black-branes to derive Hawking radiation by ignoring the interior. I heard Susskind give a talk about holography where he indicated a disdain for thinking of the radial direction as time in black hole interiors. In the Eddington-Finkelstein metric this sort of language can be avoided. Whether one wants to think of the radial direction as time, or to avoid that, depends upon the coordinate chart one uses.
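For reference, the ingoing Eddington-Finkelstein form mentioned above is the standard textbook expression (geometric units, with v = t + r + 2m ln|r/2m - 1|):

```latex
ds^2 = -\left(1 - \frac{2m}{r}\right) dv^2 + 2\, dv\, dr + r^2\, d\Omega^2
```

Unlike the Schwarzschild chart, no coefficient blows up at r = 2m; only the genuine curvature singularity at r = 0 remains, which is why this chart lets one avoid the "radial direction as time" language.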

There is a physicist who insists the singularity can be removed. I can't remember his name, and I was not sure about his claim. In a certain set of coordinates the singularity can be sent "to infinity," which is in a sense a removal.

From what can be understood, if you fall into a black hole there is no "bump" at the horizon. The firewall does not appear until late in the duration of a black hole, a time around 60 orders of magnitude longer than the age of any black hole existing now. So any deviation due to quantum monogamy is going to be tiny, very far into the IR, and probably of no physical concern. Such IR stuff is rather easily removed from QFTs as it is.

If we could do black hole experiments we might be able to do physics that involves entanglements of states in the exterior with the interior. Though I think such entanglements get swapped with entanglements with exterior gravitons or BMS translations. We might more realistically be able to look at this with black hole analogues in quantum optics and solid state physics. Jumping into a black hole is not feasible, nor is it at all desirable. However, if by some means an observer gets to SgrA* and enters that black hole I suspect they would not encounter some disastrous crash at the horizon.

"There is a physicist who insists the singularity can be removed."

Do you have in mind the change of coordinates proposed in this article in Modern Physics Letters A: https://www.worldscientific.com/doi/abs/10.1142/S0217732315500510

The change of coordinates that they propose can be checked independently of the issue of negative mass, and the computations seem correct to me (they do get rid of all singularities). It would be good to have some expert opinion on this, though.

What is real? To give Einstein some deserved credit, he knew that neither space nor time were real physical entities, so "spacetime" in his formulation is not a description of the real, physical universe but a verbal metaphor, since there were no words for the structure he imagined. To then expect it to describe the world precisely in all respects is to mistake an idealized metaphor for reality, no more deserving of our blind belief than Plato's perfect forms. The universe is rougher and more random than a mathematician's idealized geometry or equations. Only the math is smooth. The real world is rough and chaotic.

"The Rashomon effect is a term related to the notorious unreliability of eye witnesses. It describes a situation in which an event is given contradictory interpretations or descriptions by the individuals involved." (Wikipedia: Rashomon_effect)

Is there a movie about various physicists relating their own interpretations?

I suggest that the phenomenon of gravity is related to “quantum entanglement.”

I asked the following questions in an alternate format. See what you think...

When it comes to the question of what gravity is all about, is it possible that it has something to do with the superpositioning and entanglement of our quantum underpinning?

For instance, a planet’s gravitational status is based upon its overall mass which, logically (from the quantum perspective), is the sum-total of all of the waveforms of a planet’s contents and features - all blending together into one superpositioned wave.

And when a random asteroid, for example, crashes to a planet’s surface, the asteroid’s wavefunction...

(which up to that moment was basically autonomous in the vacuum of space)

...is now subject to becoming entangled (cohered?) with the planet’s greater wavefunction.

In other words, upon contact with a planet, the asteroid’s wavefunction seamlessly intertwines itself (becomes one) with the planet’s overall wavefunction, thus becoming superpositionally enmeshed with the planet’s phenomenal structures.

In which case, the occurrence of what we refer to as being the asteroid’s newly acquired “weight” is something that is proportional to the degree of the entanglement of its own unique waveform constituents with those of the rest of the planet.

And the point is that because the asteroid has a greater array of quantum attributes than that of a feather, for example, it is thus “heavier” than the feather due to a greater complexity of its entanglement with the “whole.”

And all that means is that as we attempt to move or lift the asteroid (or a bowling ball, or a freight train), we are, in essence, “tugging” on a vastly greater web of superpositionally entangled waves than those that comprise the feather...

...hence we therefore encounter a greater resistance to our effort.

Furthermore (and with the help of a rocket), if we were to send the asteroid back into space, it would simply be a situation of detangling (decohering?) its wavefunction from the greater wavefunction of the planet...

(with the degree of detangling still having something to do with distance, as per Newton’s law)

...thus restoring the autonomy of its wavefunction (and its prior weightlessness) in the vacuum.

Now I realize that what I am proposing is highly speculative, however...

...is it possible that the greater the volume and complexity of the entangled morass of quantum waves that comprise a planet’s overall wavefunction is what determines the strength of that which we call a planet’s gravity?

There are a couple of issues that were once the subject of intense debate but seem to have been forgotten or possibly just swept under the rug..

1) A covariant derivative cannot lead to a genuine (local) conservation law, for which you need ordinary derivatives. Only if a covariant derivative can be converted into an ordinary derivative can you make a conservation law. E.g. for the electric current one forms sqrt(-g) J^mu (g = determinant of the metric), and then the factor in front cancels the bad terms coming from the connection when taking the covariant divergence, and one is left with an ordinary divergence. You can do this for totally antisymmetric tensors (essentially differential forms, because there is a metric around) but not in general, and not for the energy-momentum tensor in particular. Thus the covariant divergence equation D_mu T^mu_nu = 0 does not lead to a law of energy-momentum conservation locally. The best one can do is the so-called pseudo-tensor of energy and momentum. This is actually, IMO, a show-stopping problem, and the ultimate reason why all attempts to apply standard Hamiltonian-based quantization to GR fail. All discussion of this critical issue seems to have ended in the 1960s.
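The sqrt(-g) trick, and where it fails for the energy-momentum tensor, can be written out in the standard textbook form. For a vector current,

```latex
\nabla_\mu J^\mu \;=\; \frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,J^\mu\right) = 0
\quad\Longrightarrow\quad
\partial_\mu\!\left(\sqrt{-g}\,J^\mu\right) = 0 ,
```

so the charge built from sqrt(-g) J^0 is genuinely conserved. For a rank-2 tensor, however,

```latex
\nabla_\mu T^{\mu\nu} \;=\; \frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,T^{\mu\nu}\right)
\;+\; \Gamma^{\nu}{}_{\mu\lambda}\, T^{\mu\lambda} \;=\; 0 ,
```

and the leftover connection term blocks the conversion into an ordinary divergence, which is the obstruction described above.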

1a) Because of the ambiguity of the definition of gravitational energy and momentum, there is no sure way to know if gravitational energy transport is even possible. This has great importance for the observability of gravitational waves.

2) The cosmological constant is not just a choice - it represents a truly arbitrary aspect of the system of equations. That is, there is not one general relativity theory, but a whole family of them with differing values of the CC. Unless you can pin down the CC to zero for theoretical reasons, you don't really have the simple theory that Einstein envisioned. The ultimate origin of this ambiguity is that the covariant derivative of the metric in Riemannian geometry vanishes. That means there is an arbitrary global scale of length. This again is a non-local aspect of the theory. Weyl created a new geometry which makes the length scale a dynamical element, and in it this issue can be resolved. But GR as such is stuck with it.
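The arbitrariness of the CC can be stated in one line: because both the Einstein tensor (by the Bianchi identities) and the metric are covariantly conserved,

```latex
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu},
\qquad
\nabla^\mu G_{\mu\nu} = 0, \quad \nabla^\mu g_{\mu\nu} = 0 ,
```

the field equations remain consistent with \nabla^\mu T_{\mu\nu} = 0 for *any* value of \Lambda, so the structure of the theory alone cannot fix it.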

3) GR is only well-tested in the very weakest regime, a test particle in the field of a large body - e.g. the solar system, gravitational redshift etc. It is practically impossible to get into a dense matter regime in which the full non-linear set could be tested over large macroscopic distances. Ironically, there is a case which provides a test of the next-higher regime in which the matter distribution is smeared out instead of centralized - a galactic disk! Fred Cooperstock showed that consistent application of GR to a thin disk of rotating matter, including the lowest order of non-linearity, reproduces the non-Keplerian rotation curves that are actually observed. If you throw out the non-linearity, you throw out the flattened curve. So don't do that :) (This is much like what happens when you throw out viscosity in fluid flow - you get what Feynman called "dry water", a fluid that does not have real-world behavior.)
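A toy illustration of the rotation-curve discrepancy mentioned in point 3 (this is NOT Cooperstock's GR disk calculation, just the Newtonian point-mass baseline; the enclosed mass and radii are made-up illustrative values):

```python
import math

# Keplerian rotation curve for a central point mass, the baseline that
# observed disk-galaxy curves fail to follow (they stay roughly flat).
G = 6.674e-11     # m^3 kg^-1 s^-2
M_galaxy = 2e41   # assumed enclosed mass, kg (~1e11 solar masses)
KPC = 3.086e19    # meters per kiloparsec

def v_keplerian(r):
    """Circular speed (m/s) if all mass were concentrated at the center."""
    return math.sqrt(G * M_galaxy / r)

radii_kpc = [5, 10, 20, 40]
speeds = [v_keplerian(r * KPC) / 1000 for r in radii_kpc]  # km/s
print(speeds)  # falls off like 1/sqrt(r); observed curves stay ~flat
```

The point of the comment is that the flat observed curves are usually attributed to dark matter, whereas Cooperstock's claim is that the GR non-linearity of a self-gravitating disk already accounts for the deviation from this 1/sqrt(r) baseline.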

With regard to the cosmological constant, the problem is basically that GR, a local theory, cannot be meaningfully deployed to cover a "universal" model. Such a model (the FLRW metric, for instance) is inherently non-local and invokes a "universal" frame, both of which are antithetical to the GR framework which only provides for the consistent applicability of physical laws between local frames.

The structural inconsistency of the resulting standard model of cosmology with the observed cosmos is entirely due to this early 20th century conceptual error. It does not represent a failure of GR itself, only of its misapplication to an erroneous qualitative model.

It is interesting that Zwicky's early intuition of gravitational viscosity (see On the Masses of Nebulae and of Clusters of Nebulae, 1937) having an effect on the rotation curves of galactic disks was set aside in favor of the Keplerian decline expectation curve. That was, of course, another serious analytical failure of 20th century science.

Thanks for the Cooperstock reference. What has been missing all along in galactic models is not some "dark matter", just a proper quantitative treatment of disk self-gravity.

bud rap wrote:

>Such a model (the FLRW metric, for instance) is inherently non-local and invokes a "universal" frame, both of which are antithetical to the GR framework which only provides for the consistent applicability of physical laws between local frames.

Well, I was going to blast into you for that comment, until I remembered that I once had a similar thought. I even thought about adding a "temporal field" to show the preferred direction of time (no doubt that can be done, but the resulting theory would have to work better than GR at predicting physical reality, and it would certainly not be GR).

Anyway, your and my youthful intuition is wrong. GR is not some vague sort of non-mathematical idea about local vs. non-local that rules out the FLRW spacetime.

GR is a mathematical theory. And, the FLRW spacetime is a solution to that mathematical theory. THE END

You just do not get to say that FLRW is non-local but GR is local and therefore FLRW violates GR. However you have defined "local" and "non-local" somehow you have gone astray: FLRW is a solution of GR and therefore it is obviously consistent with GR.

I know that non-STEM people think they can fight over definitions of words and thereby trump the math. But GR is (applied) math. Words cannot trump the math.

If you cannot accept that, you can never be any good at STEM. A lot of us who are good at STEM like it precisely for that reason: no verbal weaseling and manipulation can trump the math.

Words are weak, but math is strong.

(To be sure, it is possible to get "lost in math," as Sabine's book points out, usually because someone gives up connecting the math to the actual physical world. But words cannot trump the math.)

PD, I don't think we need to be lectured on our math skills. Well, I'm pretty happy with mine, thank you. I have no idea what this reply has to do with my points above, or bud rap's response. He's making a completely valid point.

>A lot of us who are good at STEM like it precisely for that reason: no verbal weaseling and manipulation can trump the math.

It appears you are good on the M end of STEM, but not so hot on S. As you demonstrate here verbal weaseling and manipulation can be used to support a preferred mathematical model.

I said nothing explicitly about the FLRW solution. I quite specifically referred to the FLRW metric. You do know the difference, right?

The FLRW solutions arise from the application of GR to the assumed universal FLRW metric. An assumption of universality is therefore built into the FLRW solution because it is, a priori, an assumed quality of the FLRW metric.

Relativity Theory was derived specifically in the context of non-universality. In Relativity Theory it is assumed that a universal frame does not exist - in a sense non-universality is what necessitates a relativistic theory.

It is because the FLRW solutions are based on a universal metric that the standard model of cosmology contradicts Relativity Theory and is, on its face, scientifically absurd, regardless of whether the model satisfies your appetite for mathematical consistency.

Now I'm going to give you the opportunity to demonstrate that mathematical weaseling and manipulation can be used to obscure the logical inconsistencies of a preferred mathematical model.

In the context of the standard model it is commonly stated that the universe is 13.8 billion years old. There are only two possibilities:

1. That is a statement of universal simultaneity (and therefore antithetical to GR).

2. It is a false statement.

Which is it Physicist Dave? And why?

bud rap wrote to me:

>The FLRW solutions arise from the application of GR to the assumed universal FLRW metric. An assumption of universality is therefore built into the FLRW solution because it is, a priori, an assumed quality of the FLRW metric.

Here is a simple test of whether anyone is physically literate: Is or is not the paragraph I just quoted meaningless gibberish?

bud rap asked me:

>In the context of the standard model it is commonly stated that the universe is 13.8 billion years old. There are only two possibilities:

>1. That is a statement of universal simultaneity (and therefore antithetical to GR).

>2. It is a false statement.

>Which is it Physicist Dave? And why?

If you happen to be in a frame of reference that is co-moving with most of the local matter around you (such as supergroups of galaxies or whatever) it is more or less true.

There are of course frames of reference in which it is not true.

Life always works that way in GR. And, for certain in the real universe (it's required by special relativity, even if GR turns out not to work that well.)

>If you happen to be in a frame of reference that is co-moving with most of the local matter around you (such as supergroups of galaxies or whatever) it is more or less true.

>There are of course frames of reference in which it is not true.

So, the claim, 'the universe is 13.8 billion years old', is "more or less true" and "not true" according to you. But, if the claim is not "universally" true then it is false, or more accurately in this case, meaningless, right Physicist Dave? Basic logic. But then of course there is mathematicist logic:

>Life always works that way in GR. And, for certain in the real universe (it's required by special relativity, even if GR turns out not to work that well.)

Well maybe it's not fair to call that logic of any sort. Not physics either. Mathematicism, maybe.

Let's try to clear up your confusion about my comment regarding the derivation of the FLRW solutions:

I admit that the terminology around the FLRW equations is a little bit murky. For instance, from the Wikipedia entry (https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker_metric):

>The Friedmann–Lemaître–Robertson–Walker (FLRW) metric is an exact solution of Einstein's field equations of general relativity...

This does indeed make it sound like the metric is a product of the solution. However, there is a subsection on Solutions:

>Einstein's field equations are not used in deriving the general form for the metric: it follows from the geometric properties of homogeneity and isotropy...

>This metric has an analytic solution to Einstein's field equations...

It is in that sense that the metric (with its assumption of universality) is input to the GR solutions that provide the basis for modern cosmology.

It is the validity of that universal assumption lying at the base of the standard model that I am challenging, by arguing that the assumption of a universal metric is antithetical to Relativity Theory and therefore is inappropriate as an input to a GR solution. I cite in support of that position the consequent empirical baselessness of the standard model's structural elements.

If you want to argue with those points, go right ahead. If you wish me to clarify a point, please ask. But if all you have is some faux-scientist posturing, covering a lazy argument from authority, don't bother.

bud rap wrote to me:

>So, the claim, 'the universe is 13.8 billion years old', is "more or less true" and "not true" according to you. But, if the claim is not "universally" true then it is false, or more accurately in this case, meaningless, right Physicist Dave? Basic logic. But then of course there is mathematicist logic:

Uhhhh.... by more or less true, I meant it's close to 13.8 billion years, but of course not exact.

As to your "universally true" etc., it just is what I said: all competent physicists and astrophysicists know this. Perhaps you read some pop-science book that used phrases like "universally true." Sorry: I can't help that.

By the way, one of the reasons I am still replying to you is that at one point in my youth, I was similarly confused. I know that the pop-science books are often confusing, but the actual truth about the real universe (and GR) is what I wrote above.

br also wrote to me:

>It is in that sense that the metric (with its assumption of universality) is input to the GR solutions that provide the basis for modern cosmology.

>It is the validity of that universal assumption lying at the base of the standard model that I am challenging, by arguing that the assumption of a universal metric is antithetical to Relativity Theory and therefore is inappropriate as an input to a GR solution.

I'm trying not to be condescending, but, look: you use terms such as "universal metric" that are not generally used in physics textbooks. For example, I do not find it used even a single time in MTW.

So, exactly what do you mean by "universal metric"? I'm sure you think you know, but I doubt you do. And, if you do have a definite meaning for "universal metric," why is that "antithetical to Relativity Theory" and what does that have to do with FLRW?

Please: not just another barrage of words and do not treat Wikipedia as authoritative. Show us what you are talking about in terms of discussions in standard texts: to be concrete -- MTW; Adler, Bazin, and Schiffer; or Sean Carroll.

I know you are quite sure that everyone uses the phrase "universal metric" all the time and knows exactly what you mean and that the standard textbooks must do so also. But, you are mistaken.

If you wish to critique FLRW, use the terminology and descriptions in the standard textbooks, and maybe knowledgeable people will understand what you are trying to say.

By the way, I took GR from Kip Thorne, the guy who won the Nobel for discovering gravitational waves. I am capable of discussing this intelligibly, but not if you insist on using terms in a way that almost no physicists do.

Ah yes, the faux-scientist posturing, the arguments from authority, the diversionary semantics, the feigned confusion about terminology, the anything-to-duck-the-arguments-raised squirming. Unfortunately, Dave, I've seen this act before. As scientific argumentation goes, it's a disgrace. There's so much shucking and jiving here it's hard to know where to begin.

The bottom-line charge is that implicit in the derivation of the FLRW equations there is an atavistic cultural concept which constitutes an axiom of the model: the cosmos as a unitary entity to which a universal metric with universal properties (isotropy and homogeneity) can be applied. Pretending that is not the case is simply that, pretense.

>...you use terms such as "universal metric" that are not generally used in physics textbooks. For example, I do not find it used even a single time in MTW.

You can't possibly tease out the meaning of the straightforward term "universal metric" because it wasn't in your college textbooks? You poor boy. Do you need smelling salts and a fainting couch in order to deal with a concept outside the comfort zone of your received wisdom? As a scientific argument, that's pathetic.

What you don't seem to grasp is that I am criticizing the content of your urtexts, as constituting a belief system. Citing them without mustering a defense against the critique being leveled, essentially means you don't have the capacity to understand an argument you disagree with. Again, pathetic.

This is to remind you to please not post links to personal websites. I do not have time to check whether the content of these websites is scientifically legit, and therefore will just not approve the comment, regardless of the content of the website.

I cannot edit comments, I can merely approve or not approve them, and so this means your entire comment will not appear.

Links to journals, the arxiv, major news pages and so on are ok. Basically anything that I can recognize at first look as a reliable source will pass.

Sabine,

Your "electrons have a mass and masses generate a gravitational pull by bending space-time" comment also concerns their charge/electric field - from which perspective the electron is an indivisible elementary charge and cannot go through two slits simultaneously. This GR/EM problem requires a dBB-like explanation: the particle (charge) goes through one slit, and only its coupled "pilot" wave travels through both. And this has been confirmed for photons - by literally measuring their average trajectories in the double-slit experiment: https://science.sciencemag.org/content/sci/332/6034/1170.full.pdf

For both problems we could respond that we have a superposition, e.g. of the electric field for going through one slit and through the second, and analogously for GR: a superposition of spacetimes for both scenarios.

But this is an unimaginable multiplication of entities, which is not needed - it can be avoided by just accepting wave-particle duality, with only the wave going through both slits.

As Jarek aptly pointed out, the way one would use quantum theory to predict the landings of particles with strong gravity fields is the same as how it is already used to predict the landings of particles with strong electric charges. In a nutshell, quantum theory applies equally well to any kind of particle with any kind of field, provided only that the field doesn't leave a permanent information trace anywhere before the particle lands on the screen. (For a detailed example, see my earlier post in this blog.) Notably, interference has been shown not just for electrons and photons, but for molecules as large as C48H26F24N8O8, which has a total of 114 atoms.[1]

However, Sabine's main point was a bit more subtle, and a lot more intriguing. You can see it better if you flip the question the other way around: Since GR is defined around the idea of localized masses and their associated gravity fields, it lacks any concept of "fuzzy" or wavelike distributions of those masses for exploring multiple paths. No wave function means no piloting effect, and thus no way to explain something like "interference" in GR without making some fairly drastic changes to its fundamental assumptions.

This may seem trivial, since the obvious answer is "well, just use quantum theory for that part of it," but that is exactly the problem: The two theories are not integrated. You must instead just turn one off and turn the other one on.

Since gravity is incredibly weak, this either-or approach works great for almost any conceivable situation. But what happens when gravity gets so fine-grained or so intense that it too enters the quantum domain? At that point you are in trouble, because gravity stubbornly refuses to fit well into the usual quantum mathematical frameworks. And even if it did, you would still be left with this weird problem of how to explain the conversion of the 100% geometric view of GR into the lots-of-particles view of QM, in a way that doesn't lose the value of either (or both). GR is so good at what it does via geometry that simply discarding that aspect of it without explanation will never pass muster as a "complete" theory.

Anyway, I hope this helps a bit. Sabine now has my full permission to excoriate me royally (er, not that I could stop her, this is her site :) if I botched my attempt to elaborate her concerns.

P.S. — When Jarek mentions "pilot waves", he is talking about a particular way of interpreting quantum experimental results, one in which the information issue is strongly separated from the particle issue. In the pilot wave approach the particle is always just a particle, but one that gets "advised" by an associated phenomenon called (for obvious reasons) the pilot wave. The earlier qualifiers still apply. That is, the pilot wave only exists while the particle has not been detected. Pilot waves are in many ways just a different way of slicing the experimental data, one that gets away from the (to many) discomforting idea that the particle has no single location as it moves through the slits. While pilot waves are not terribly popular these days, no lesser figures than Einstein and John Bell advocated them.

GR is not "defined around the idea of localized masses" - its stress-energy tensor is usually seen as effective: an average over huge volumes. We have no experimental evidence that it translates to the scale of single particles. We don't even really know the gravitational mass of the electron (see e.g. the Witteborn-Fairbank 1967 electron experiment).

In contrast, while I don't think we have any hints of "mass quantization", we have well-confirmed charge quantization - the Gauss law can only return integer multiples of 'e'. Fundamental charges are indivisible, and we know that they are well localized - no "blurring" of the electron has been observed, e.g. in electron-positron scattering.

And the electric field of the electron has an influence ~39 orders of magnitude stronger than its gravitational field - for example, the choice of one of two slits essentially affects the surrounding atoms through the moving electric charge, so they in a sense measure the "which-path information".

I have searched for online discussion about the relation between Landauer's Principle (physical law at last?) and the black hole information paradox, but other than a somewhat cryptic remark on Wikipedia I can't find anything.

Over one thousand physicists have worked hard on quantizing gravity for 50 years, with a few hundred dedicating their careers on the problem, with little success. That's one of the reasons I think gravity will not be quantized. There are lots of ways to understand a quantum - gravity interaction, my bet is on using Bohmian mechanics as a way to see which direction to go in: Example paper (not by me)

The double-slit experiment refers to a system's wave function, not to the particle or the double-slit screen separately: for a moment the particle and double-slit form a system described by a wave function with non-zero probabilities for each slit and non-zero probabilities at each point behind the screen (the interference pattern). The function is an instantiation of the particle-screen interaction; because of that, no particle needs to be in two places at once.

Why does everyone assume it's GR that has to be changed? And why does everyone assume Quantum Theory is the one true rock bottom reality? What if Quantum Theory is the thing which has to change? There are theories that our universe is the result of 'branes meeting and giving rise to the BB. What if one of those branes were a quantum brane and the other was a GR brane? What if what we're finding is just the way it is and has to be? Both theories have been tested to the nth degree, and try as we might to disprove either, they defeat all our attempts thus far to disprove them. Maybe the Universe is telling us something.

Westy asked:>Why does everyone assume it's GR that has to be changed? And why does everyone assume Quantum Theory is the one true rock bottom reality?

Very large numbers of physicists do not assume that QM is the final word: I don't, and I don't think Sabine or Weinberg do, either.

But, at a quantitative, experimental level, QM works better than any other scientific theory has ever worked.

So, even those of us who hope to somehow transcend QM just have to come to grips with QM. Our new, wonderful theory better look pretty much like QM in the areas where QM works really well or we are just wrong in terms of experiments.

There is another strange situation in GR where quantum effects might be relevant. When a star undergoes gravitational collapse, it is known (from the Tolman-Oppenheimer-Volkoff equation) that the pressure at its center becomes infinite *before* it reaches the Schwarzschild radius. As Sabine pointed out, infinities usually signal the breakdown of a theory, and this one occurs before the geometric singularity associated with a black hole. So what does theory have to say about what happens when pressure becomes infinite?
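For reference, the TOV equation in geometric units (G = c = 1), together with the central pressure of the standard uniform-density interior solution, reads

```latex
\frac{dp}{dr} = -\,\frac{(\rho + p)\,\bigl(m(r) + 4\pi r^3 p\bigr)}{r\,\bigl(r - 2m(r)\bigr)},
\qquad
p_c = \rho\,\frac{1 - \sqrt{1 - 2M/R}}{\,3\sqrt{1 - 2M/R} - 1\,} .
```

The denominator of p_c vanishes when R = 9M/4, that is, at 9/8 of the Schwarzschild radius, so the central pressure diverges before the star shrinks to r = 2M, which is the "pressure singularity" being discussed.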

You can find this in chapter 14 (on gravitational collapse) in the book by Adler, Bazin and Schiffer ("Introduction to General Relativity", equation 14.53, p. 473). OK, it's an old book, but has this been refuted since? In any case, there is curiously no mention of this "pressure singularity" in the Wikipedia page on gravitational collapse.

Pascal wrote:

>You can find this in chapter 14 (on gravitational collapse) in the book by Adler, Bazin and Schiffer

Interesting: they don't comment on it further. It seems that they didn't realize the (possible) importance of the "slight" deviation. However, the source has been quoted several times, e.g. in https://arxiv.org/ftp/arxiv/papers/0806/0806.1176.pdf

Thanks. I'll look it up. I'm currently interested in the nitty-gritty details of gravitational collapse, so now I have to go through some grungy calculations to see how this particular thing works. (My hope is that if I understand at the detailed physics level what is happening, kind of like how all of us grasp freshman mechanics, then I will see what is really going on in Hawking radiation, the so-called firewall, etc.)

Recall the pressure singularity occurs before formation of the event horizon. This could mean that the mathematical model that is used to deduce the formation of an event horizon breaks down, and there is perhaps no good reason to deduce that an event horizon forms at all.

I looked up the reference and redid that calculation from first principles (I mean really first principles!). Most of my derivation followed a different route in its intermediate steps: I did not just copy and work through the ABS algebra.

Thanks for writing a very simple explanation of why QM and GR are incompatible.

If you had a sufficiently sensitive device that could detect the difference in gravitational attraction of an electron depending on which slit it 'went through', wouldn't that be an observation of the electron as it went through the slits, and thus collapse the interference fringes?

I don't think saying the electron "passes through both slits" is a sufficiently precise English equivalent of saying you need to add the amplitudes for both paths, if you are looking at this from a path integral point of view, which you are. You are relying on an intuitive understanding of what "passes through" means which isn't supported by the actual math -- you cannot, of course, measure the presence of the electron in either slit without destroying the interference, so in what sense has the electron "passed through" if you cannot measure it actually doing so without causing it *not* to pass through both? These are very complex problems in quantum measurement theory, and I don't think you can sweep them under the rug to construct a reasonable critique of GR. The presence of singularities is a much more powerful and solid argument.

That said, I also think your headline is misleading. We know that *either* QM *or* GR is incomplete and/or flawed, but to prefer one over the other is an article of faith, not based on any sound physics. So maybe Einstein is a little wrong. Or maybe the Standard Model is a little wrong, and Einstein was right. We don't know, and we can't know, until we have a successor theory that resolves the contradictions.

As I recall, Bell speculated that the correct Schroedinger equation is nonlinear at some higher order, triggering wave function collapse dynamically. But then quantum mechanics would no longer be linear or unitary, would it? On the other hand, a dynamically collapsing wave function might avoid conflict with General Relativity?

New theories or revised old ones? Where do you see the boundary between both?

In 2004 Leonard Susskind said in an interview with a German magazine (Die ZEIT): The situation in physics is quite desperate. The best would be to forget everything that we have learned and go back to the state before Aristotle, and then start physics again from scratch.

A strong statement, indeed! I find it exaggerated. Maybe it is sufficient to go back to the state before Einstein and Heisenberg and start from that point again.

By default, new physics is understood as a union of general relativity and quantum theory. And that is, in my opinion, a dead end. Quantum gravity will not help anyone; you would just not be able to confirm it. The money for new experiments will be missing anyway. Therefore the old experiments should be reevaluated.

Honestly, I do not know of any experiment that would confirm the general theory of relativity. All are just adjustments. And these adjustments should be removed. Only then will we be able to see what reality really looks like.

A major cause of the GR-quantum conflict is mathematical extrapolation into domains, such as the Planck foam, that are not actually experimentally accessible. One feature I like about a bits-at-the-bottom interpretation of physics is that this entire class of existence-by-extrapolation phenomena simply ceases to exist.

As a specific example, in bits-at-the-bottom the quantum fabric of spacetime becomes identical to the Standard Model quantum mechanics of ordinary matter, including the everyday quantum fuzziness we see there. Spacetime at any given location only seems infinitely precise for the same reason that the Mandelbrot set at any given location seems infinitely detailed: You get the level of detail that you paid for by applying more energy.

It seems to me that the proper reference frame for an observer to assess what is happening near the event horizon is the reference frame of the external observer. The information at the event horizon and/or inside the black hole is not defined in spacetime and is therefore irrelevant. Without time, spacetime is not defined. Therefore the information contained at the event horizon is also undefined for the external observer.

Undoubtedly, what is missing from General Relativity is some connection to quantum theory. And it is unlikely that Einstein would have claimed otherwise.

Paul Dirac, when he took the "operator square root" of the flat spacetime metric tensor using the anti-commutator \eta^{\mu\nu} = \tfrac{1}{2}\{\gamma^\mu, \gamma^\nu\}, found a quantum level to nature that today we recognize correctly describes the properties of observed fermions. And we know how to generalize this to curved spacetime using tetrads.

Knowing this, might Dirac's quantum theory of the electron, when used to deconstruct a suitable curved spacetime metric tensor to obtain its \Gamma operators and tetrads and wavefunctions, provide at least part of a viable, mathematically workable path to developing a quantum theory of gravity?
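
Dirac's anticommutation relation is easy to verify explicitly. Here is a pure-Python sketch in the standard Dirac representation, with signature (+,-,-,-); the helper names are my own:

```python
# Verify {gamma^mu, gamma^nu} = 2 eta^{mu nu} I for the Dirac matrices,
# built from 2x2 Pauli blocks as nested lists of complex numbers.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

def block(A, B, C, D):
    """Assemble a 4x4 matrix from 2x2 blocks [[A, B], [C, D]]."""
    return [A[i] + B[i] for i in range(2)] + [C[i] + D[i] for i in range(2)]

def neg(M):
    return [[-x for x in row] for row in M]

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

gammas = [
    block(I2, Z2, Z2, neg(I2)),   # gamma^0
    block(Z2, sx, neg(sx), Z2),   # gamma^1
    block(Z2, sy, neg(sy), Z2),   # gamma^2
    block(Z2, sz, neg(sz), Z2),   # gamma^3
]
eta = [1, -1, -1, -1]  # flat metric, signature (+,-,-,-)

for mu in range(4):
    for nu in range(4):
        anti = add(matmul(gammas[mu], gammas[nu]),
                   matmul(gammas[nu], gammas[mu]))
        expected = 2 * eta[mu] if mu == nu else 0
        assert all(anti[i][j] == (expected if i == j else 0)
                   for i in range(4) for j in range(4))
print("anticommutators check out")
```

The curved-spacetime generalization mentioned above replaces eta with the metric g and the constant gammas with tetrad-dependent ones, but the flat-space check is the algebraic core.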

Also, because General Relativity does not include quantum mechanics, there is ample reason to doubt the physical reality of a big bang or a big crunch. Recall how quantum mechanics "saved" electrons from spiraling into a nucleus. At the very least, the uncertainty principle and / or the Pauli fermion exclusion principle should prevent the entire universe from ever physically being in such a highly compressed state as that of, e.g., the zero-radius singularity of the simple Friedmann model cycloid.

This is not to argue against cosmological expansion being a good explanation for cosmological redshift. It is just to say that quantum mechanics -- which is missing from GR -- would most certainly stand somewhere between our present universe and a big bang or crunch, just as it "saved" the electron.
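
The "saved electron" estimate is easy to reproduce: confining the electron to radius r costs kinetic energy of order hbar^2/(2 m r^2) by the uncertainty principle, while the Coulomb attraction gains -e^2/(4 pi eps0 r); minimizing the sum gives the Bohr radius rather than collapse to r = 0. A quick sketch with CODATA constants:

```python
# SI constants (CODATA values)
hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
pi   = 3.141592653589793

# E(r) ~ hbar^2/(2 m r^2) - e^2/(4 pi eps0 r); setting dE/dr = 0
# gives the radius where quantum pressure balances Coulomb attraction:
r_min = 4 * pi * eps0 * hbar**2 / (m_e * e**2)
print(f"r_min = {r_min:.3e} m")  # ~5.29e-11 m, the Bohr radius
```

Whether an analogous quantum back-reaction really intervenes before a cosmological singularity is, of course, exactly the open question the commenter is raising.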

Well, that quantum particles can "be" in two places at the same time is not a claim of quantum mechanics, at least in its Copenhagen version. I hope you mean this claim as a metaphor and not as a physical fact.

Three spacecraft A, B, and C move along nearly the same line with constant velocities. First, spacecraft B passes spacecraft A at a lower velocity. Then spacecraft C passes spacecraft A at a higher velocity than B. Later, spacecraft C has to pass spacecraft B (the encounter). At this moment, at that event ("Ereignis"), spacecraft B and C have to observe different distances to spacecraft A, because of the different length contractions caused by their different velocities (special relativity).

But this is impossible. At this event, the encounter of B and C, they have to have the same distance to A.

The problem is that it would require that I write 10,000 words of stuff to go into special relativity and to illustrate how that really works. I can't do that, and all I can suggest is that you take the time to learn the real stuff.

I encounter this often with younger students who think somehow they know physics. Every time I have taught elementary physics I give the suggestion that everyone abandon what they think about physics, clear the deck and start over. In teaching quantum mechanics it is again the same, though it has been a long time since I taught that. I see this in older adults as well, where after reading some popularizations of physics they end up with some wrong ideas. A brother of mine was a professor of molecular bio (he died 7 years ago) and I had to straighten him out on some of this after he read Hawking's A Brief History of Time.

Then there is the "shaving to a point" problem. I encounter people who will persist in argumentation long past what is necessary. This often happens with people who embrace various forms of mundane crack-pot ideas, from creationism to anti-relativity. There is a character who haunts blogs and on-line stuff, Valev I think his name is, who could force any physicist into a full-time job of trying to counter his arguments. I really do not want to go there either.

So my advice is to read the real literature on this. Special relativity is old stuff these days; it has been tested in many thousands of ways, and analysis with it fills volumes of journals. Taylor and Wheeler wrote a book back in the 1960s-70s on special relativity that is pretty good.

SRT says that two objects which are at the same time at the same location are at two different distances from a single point if the objects move at different speeds.

You can't wipe out this antinomy by pointing me to some articles. Mathematically something like this may work - but not in reality, with real things. In the real world, there is only one distance between two points at one moment.

And I have no idea what this fact has to do with creationism or anything else far from physics. You seem to be trying to delegitimize me with such insinuations.

Spacecraft B and C [when they coincide at different velocities] have to observe different distances to spacecraft A. But this is impossible.

It isn’t impossible. Let E0 be the event at which B and C coincide, and let E1 and E2 be the events of A that are simultaneous with E0 in terms of inertial coordinate systems S1 and S2 in which B and C (respectively) are at rest. The spatial distance between E0 and E1 in terms of S1 is the absolute interval between those two events, and the spatial distance between E0 and E2 in terms of S2 is the absolute interval between those two events. Those two intervals are not the same, so there is no reason they should have the same magnitudes. They represent the intervals between the event E0 (where B and C intersect) and two different events E1 and E2 of A.
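
The argument above can be checked numerically. A small sketch in units where c = 1 (the velocities 0.3 and 0.8 and the event coordinates are arbitrary illustrative choices):

```python
import math

c = 1.0  # work in units where c = 1

def boost(t, x, v):
    """Lorentz-boost event (t, x) into the frame moving at velocity v."""
    g = 1.0 / math.sqrt(1.0 - v * v / c**2)
    return g * (t - v * x / c**2), g * (x - v * t)

def distance_to_A(v, t0, x0):
    """Spatial distance from the coincidence event E0 = (t0, x0) to
    spacecraft A (at rest at x = 0) in the frame of an observer moving
    at velocity v, using that frame's own surface of simultaneity."""
    tE, xE = boost(t0, x0, v)   # E0 in the moving frame
    # A's worldline x = 0 boosts to x' = -v t'; evaluate it at t' = tE:
    xA = -v * tE
    return abs(xE - xA)

# B and C coincide at E0 = (t0, x0) while moving at different velocities:
t0, x0 = 10.0, 5.0
dB = distance_to_A(0.3, t0, x0)  # distance B assigns to A
dC = distance_to_A(0.8, t0, x0)  # distance C assigns to A
print(dB, dC)  # x0/gamma_B and x0/gamma_C: not equal
```

The two distances come out as x0/gamma for each velocity, because B and C are slicing spacetime along different simultaneity surfaces and so are measuring intervals to two different events (E1 and E2) on A's worldline.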

It isn't impossible in the math of SRT. But it is impossible in the real world.

Two points have to have the same distance to each other at the same time (the point in time is given by the encounter). And the distance from a first point to a second point is the same as from the second point to the first point (symmetry!).

With special relativity you have also to say, the distance AB is not the same as the distance BA, the distance AC is not the same as CA, and so on.

No, no, no! You can say that the objective distance between A and B is some length L when viewed AT REST. But then, viewed in motion, the length AB will contract and times will dilate. Experiments prove this a zillion times over, and there is nothing faulty in the logic. And following Lawrence, that is all I will say.
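
For what it's worth, the textbook experimental example is the cosmic-ray muon. A back-of-the-envelope sketch (the proper lifetime is the measured value; the speed 0.995c is a typical illustrative figure):

```python
import math

# Muons have a proper lifetime of ~2.2 microseconds. Without time
# dilation they would travel only ~660 m before decaying, yet muons
# produced ~10 km up in the atmosphere are routinely detected at sea
# level: in the lab frame their lifetime is stretched by gamma.
tau = 2.2e-6            # s, muon proper lifetime
v = 0.995 * 2.998e8     # m/s
gamma = 1.0 / math.sqrt(1.0 - 0.995**2)
print(v * tau)          # ~660 m: naive range, no time dilation
print(gamma * v * tau)  # ~6.6 km: range with time dilation
```

Equivalently, in the muon's own frame the atmosphere is length-contracted by the same factor gamma, so the two descriptions agree on what is observed.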

The distance from a first point to a second point is the same as from the second point to the first point (symmetry!).

Well, the interval from E0 to E1 is the same as the interval from E1 to E0. Likewise for the pairs of events (E0,E2) and (E1,E2). There are three events here. It’s a triangle.

Two points has to have the same distances to each other at the same time…

There are not just two events here, there are three. The events E1 and E2 are not at the same time. They are both on spaceship A, but separated by a time interval. Remember, events E0 and E1 are simultaneous in terms of the inertial coordinates in which B is at rest, and events E0 and E2 are simultaneous in terms of the inertial coordinates in which C is at rest.

With special relativity you have also to say, the distance AB is not the same as the distance BA…

No, in terms of any given system of inertial coordinates, the spatial distance from A to B is the same as the spatial distance from B to A. Your question involves two different systems of inertial coordinates, in which B and C respectively are at rest. You are asking about the spatial distance from the intersection event E0 to two different events (E1 and E2) on A in terms of those two different systems of inertial coordinates. You are claiming that the interval from E0 to E1 must be the same as the interval from E2 to E0, but that is obviously not true.

In SRT, the "distances" from A to B (when B encounters C) and B to A (when B encounters C) and A to C (when B encounters C) and C to A (when B encounters C) all have to be different. This is blatant nonsense.

In SRT, the "distances" from A to B (when B encounters C) and B to A (when B encounters C) and A to C (when B encounters C) and C to A (when B encounters C) all have to be different. This is blatant nonsense.

No, in terms of any given system of inertial coordinates, the spatial distance from A to B is obviously the same as the spatial distance from B to A. But that isn’t what you are asking about.

You are asking about the spatial distance between the events E0 and E1 in terms of inertial coordinates in which B is at rest, compared with the spatial distance between the events E0 and E2 in terms of inertial coordinates in which C is at rest. Those are two different intervals (connecting two different pairs of events), decomposed in terms of two different systems of inertial coordinates. Needless to say, these distances are not the same.

The problem with the statement that you correctly call “blatant nonsense” (and incorrectly attribute to special relativity) is your indiscriminate use of the word “when”. The event of A simultaneous with E0 in terms of inertial coordinates in which B is at rest is not the same as the event of A simultaneous with E0 in terms of inertial coordinates in which C is at rest.

"In terms of any given system of inertial coordinates, the spatial distance from A to B is obviously the same as the spatial distance from B to A.”

Wrong!

It isn’t wrong. In terms of any given system S of inertial coordinates x,t, any two material particles A and B on the x axis have unique x coordinates xA(t) and xB(t), respectively, for any given t coordinate. In terms of S, the unique spatial distance between those two objects at coordinate time t is |xA(t) – xB(t)|. Please note that it is not necessary for either A or B to be at rest in S.

Because SRT is assuming different durations and different length for differently moving observers.

You’re confusing material objects (e.g., observers) with coordinate systems. In popular descriptions of special relativity, authors often use the phrase “for an observer” as shorthand for “in terms of a system of inertial coordinates in which a particular object (e.g., observer) is at rest”. Thus, when you talk about lengths and durations “for two differently moving observers”, you are referring to two different systems of inertial coordinates, S1 in which A is at rest, and S2 in which B is at rest. A given event E0 of B is simultaneous with event E1 of A in terms of S1, and with event E2 of A in terms of S2. So, as explained before, you are referring to the spatial distances between two different pairs of events (in terms of two different systems of coordinates), and claiming that they must be equal. That is obviously not true.

In reality, a moment is the well-defined state of the whole system A-B…

That isn’t well-defined at all. Your thought process is circular, because you are defining “moment” as the state of A and B in a given moment. More accurately, a moment (or instant) is the locus of events at constant t, but this obviously depends on the system of coordinates. It’s common, unless otherwise stated, to refer to some system of inertial coordinates, which has well-defined operational meaning.

which underlies completely the law v =s/t.

That isn’t a law, it’s a definition. In terms of any given system of coordinates x,t, the velocity v of a particle is defined as dx/dt, which of course is dependent on the system of coordinates.

In experimental physics we know that usually "objective measurements" have to be corrected to achieve objective knowledge about the observed circumstances since the measurements are affected by the observation conditions.

So it is in the case of length measurement.

If several measurements of the same fact lead to different results, we have to correct them. We have to take correction factors into account.

The signal propagation delay is no obstacle to gaining objective, and therefore identical, information for all observers. If not just in time, then surely after a time delay. We can know everything objectively about the trajectories, times, and events concerning different observers by exchanging messages with time stamps and location information.

"Laplace's demon" leaves no room for a "relativity of simultaneity". And then all the other SRT dogmas fall too. There is no "length contraction" or "time dilation". These are all effects which depend on the conditions of the observation - not on the conditions of the observed object.

These arguments are easy to grasp and quite elementary, so nobody has to be a specialist to review "Special Relativity", which needs "length contraction", "time dilation", and "relativity of simultaneity" to seem sound.

It's not only an issue for experts. Anybody is invited to rethink modern physics.

Bear in mind that the GR community is often guilty of a certain amount of exaggeration over the statistical significance of their successes.

So, going through your examples, //Newton// actually described gravitational lightbending in Opticks (which then gives gravitational lensing), //Michell// predicted gravitational shifts in 1783, the unavoidable consequence of gravitational shifts is gravitational time dilation, Newtonian energy-losses scale up at cosmological scales to something that looks very like Hubble redshift, and Michell's piece also presented the r=2M gravitational horizon radius.

Admittedly, Michell's "dark stars" were not modern Wheeler black holes ... but since C18th dark stars had effective horizons rather than event horizons, they actually generated the classical counterpart of Hawking radiation (Thorne 1994), a trick that we still can't do with modern GR.

Einstein's 1916 theory is not a particularly good theory. The idea was brilliant, and the concept beautiful, but the implementation was fouled up. The result is a mess of contradictions.
