Thursday, October 18, 2018

First stars spell trouble for dark matter

In the beginning, reheating created a hot plasma of elementary particles. The plasma expanded, cooled, and emitted the cosmic background radiation. Then gravity made the plasma clump, and darkness was upon the face of the deep, whatever that means.

Cosmologists call it the “dark ages,” the period in the early universe when matter is already too cool to emit radiation, but not yet clumpy enough to ignite nuclear fusion. At this time the universe was filled almost exclusively with rather dilute hydrogen gas. It’s not until a few hundred million years after the Big Bang that the first stars light up, an epoch poetically called “cosmic dawn.”

We cannot directly measure light emitted from those first stars, but we can indirectly infer the stars’ presence by studying the cosmic microwave background. That’s because the early stars emit UV radiation which couples to the hydrogen gas, and for a while this coupling enables the gas to absorb light at a specific wavelength of about 21cm – the hyperfine transition of neutral hydrogen. This leaves a mark in the cosmic microwave background.

The wavelengths of light stretch with the expansion of the universe, so what was 21cm back then is now deep in the radio regime. That makes it difficult to find cosmological signals because other sources – both on earth and in our galaxy – can contaminate the data.
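To make the stretch concrete, here is a minimal sketch. The redshift z ≈ 17 is the commonly quoted value for the EDGES feature; the exact number used below is an assumption for illustration.

```python
# Illustrative numbers: the EDGES absorption feature sits near redshift
# z ~ 17, so the 21 cm line is stretched by a factor (1 + z).
C = 299_792_458.0          # speed of light, m/s
LAMBDA_REST = 0.211        # rest wavelength of the hydrogen hyperfine line, m

z = 17.2                   # approximate redshift of the EDGES trough (assumed)
lambda_obs = LAMBDA_REST * (1 + z)      # observed wavelength, m
nu_obs = C / lambda_obs                 # observed frequency, Hz

print(f"observed wavelength: {lambda_obs:.2f} m")       # a few meters
print(f"observed frequency:  {nu_obs / 1e6:.0f} MHz")   # deep in the radio band
```

This lands in the same band as terrestrial FM radio and galactic synchrotron emission, which is exactly why foreground contamination is such a worry.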

In February, the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) announced they had measured the absorption that stems from the first stars. They found it at the expected wavelength – a few meters – but stronger than the predictions said it should be.

Astrophysicists can make predictions for this absorption by using the concordance model for cosmology. This model has six free parameters – one of which is the amount of dark matter – and the physics of the first stars follows from this straightforwardly. Besides the cosmological dynamics, it’s only well-known thermodynamics and atomic physics. Compared to the large variety of today’s stars, the first stars were fairly simple. Or at least that’s what astrophysicists thought so far.

It took me a while to get around to reading the EDGES paper, and I’ve since tried to find out what, if anything, astrophysicists think about the mismatch with the predictions. The answer is: not much. Most of them think it’ll go away. Maybe a measurement error. I have even been told not to pay attention to the EDGES result because the paper has not been cited all that often. Seriously.

Well, as you can tell, I looked at it anyway. I’m not an astrophysicist and I can’t judge the experimental design of the EDGES collaboration. I can only say that I don’t see obvious flaws with their data analysis. The paper seems fine to me.

Besides the possibility of a measurement error, the theoretical explanations for the signal have so far focused on what type of dark matter could possibly make it work, as the commonly considered ones don’t do the trick.

Stacy McGaugh – the modified gravity dude – had the brilliant idea to see what the absorption signal from the first stars would look like if there was just no dark matter. Turns out this would fit remarkably well with the EDGES data. I say “remarkably well” because the parameters that enter his calculation are known from other measurements already, so no freedom to adjust them.

The reason why the absorption is stronger without dark matter isn’t hard to understand. The more matter there is in the universe, the faster the expansion decelerates. This means without dark matter, the period in which the gas can interact with the radiation is longer, allowing more absorption.
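A rough numerical sketch of this effect, naively removing the dark matter while holding everything else fixed – which, as discussed below, is not self-consistent – but it conveys the direction of the effect. All parameter values are illustrative.

```python
# Minimal sketch (flat universe, matter + Lambda only, illustrative values):
# with less matter, H(z) is smaller at early times, so the cosmic-dawn
# window between two redshifts lasts longer, allowing more absorption.
import math

H0 = 70.0 / 3.086e19          # Hubble constant, 70 km/s/Mpc in 1/s (assumed)

def elapsed_time(z_hi, z_lo, omega_m, steps=50_000):
    """Time elapsed between redshifts z_hi and z_lo: dt = dz / ((1+z) H(z))."""
    omega_l = 1.0 - omega_m   # flatness assumed
    total, dz = 0.0, (z_hi - z_lo) / steps
    for i in range(steps):
        z = z_lo + (i + 0.5) * dz
        hz = H0 * math.sqrt(omega_m * (1 + z) ** 3 + omega_l)
        total += dz / ((1 + z) * hz)
    return total

MYR = 3.156e13                 # seconds per megayear
with_dm = elapsed_time(20, 15, omega_m=0.31) / MYR     # baryons + dark matter
baryons = elapsed_time(20, 15, omega_m=0.05) / MYR     # baryons only

print(f"z=20..15 lasts {with_dm:.0f} Myr with dark matter")
print(f"z=20..15 lasts {baryons:.0f} Myr baryons-only")  # notably longer
```

The baryons-only window comes out roughly twice as long, which is the direction needed to strengthen the absorption.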

Now, I have recently developed a soft spot for modified gravity, but I am not terribly convinced of Stacy’s argument. It’s one thing to say that galaxies probe different physics than cosmology and thus a new type of force may kick in on galactic scales. It’s another thing to just throw out dark matter from the concordance model because that screws up the whole fit from which the other parameters stem to begin with. You have to self-consistently extract the whole set of parameters from the data – you need a different model entirely.

Indeed, to recover the benefits of dark matter, Stacy employs rather heavy neutrinos. The masses are on the upper end of what is still compatible with constraints. (That’s not counting the cosmological constraints, which are tighter, because these constraints assume the concordance model, and hence don’t apply for modified gravity.) The neutrinos don’t make a difference for the EDGES signal. Still, the dark-matter-less model does not account for the third acoustic peak of the cosmic microwave background. So you have to choose, get the EDGES absorption right or get the third acoustic peak right. Frankly I’d rather have both.

102 comments:

"and darkness was upon the face of the deep, whatever that means. ". it's from Genesis 1:2

"The earth was without form, and void; and darkness was on the face of the deep. And the Spirit of God was hovering over the face of the waters." "And the earth was without form, and void; and darkness was upon the face of the deep: and the Spirit of God moved upon the face of the waters."

My understanding is that modified gravity makes predictions that may be testable near places in the solar system where the Newtonian gravitational fields of the sun and a large planet would cancel out. Is that your understanding? If so, would it be worthwhile to send a probe past one of those points, perhaps as part of a journey to do planetary science in the outer solar system?

All this suggests that mere invariant "guess-masses" of conventional matter cannot explain all the dynamics. The energy contained in the gravitational field itself, including the energy carried by signals of gravitational change, must be taken into account so that the overall picture can be clarified. Do you see something like that?

Just to be clear: a modified gravity dude is a gravity dude who has been modified (how, I will leave to your imagination, as well as what a gravity dude is). A modified-gravity dude is a dude who works with modified gravity.

A high energy physicist has perhaps been smoking some weed. And so on.

Is Steven Weinberg an elementary particle physicist? Definitely not. But an elementary-particle physicist.

Hyphenations make things clear. Some people have a tendency toward hypercorrection, though, and hyphenate adverb-participle combinations such as "well known," where no hyphen is needed.

Sometimes it is really unclear what is meant. Is "massive star formation" the formation of massive stars, or very much star formation? If the former, then "massive-star" formation makes it clear. (I read this today---bonus points if you can guess where---and from the context it was clear---but only after having to backtrack and read it again---that "massive-star formation" was meant, though the text said "massive star formation".)

Rule of thumb (no, not "rule-of-thumb"): if two words are being used together in a combination to build a compound adjective (NOT an adverb modifying an adjective), then hyphenate them to make that clear.

It can get complicated. There is the strong-CP--violating angle. The strong CP violating angle is a strong CP angle which is violating. The strong CP-violating angle is a CP-violating angle which is strong. The only sensible construction is the first one, but it is clear only when hyphenated correctly. In other cases, more than one interpretation is possible. In very few cases, the meaning with and without the hyphen is the same.

At risk of "explaining" what you may also already know, the Hebrew word translated "darkness" is choshek and "deep" is tehom. Choshek can also mean "obscure."

Like many ancient Near Eastern cultures, the Hebrews believed the order of the created world had somehow been shaped out of an endless primordial chaos, and saw the ocean or sea as linked to that chaos. Especially when they were out of sight of land, with no idea what might be below them, they believed the deepest parts of the ocean to be remnants of this chaos, waiting to burst forth and consume the world. A seminary professor suggested the phrase was a poetic way to say that the unknown of the universe prior to God beginning creation was so turbulent and so disordered that it could not even be perceived.

But as it is a poetic formulation, the phrase can be interpreted in many other ways that may better suit the listener's ear.

My guess: "the deep" refers to deep waters, which the universe consisted of at that time in that cosmological model. Then the Earth was created, and a transparent shield, the "firmament" was placed around it to keep the water off (hence blue skies, the blue being water). Later, the firmament was opened to let some of the water in and create the Flood.

Well, as a model, it had one empirical observation which it explained: blue sky. (Two if you count rain, leaking through holes in the firmament opened by the Rain Maker.)

(Those are guesses based on going to Sunday School for about 12 years, against my will.) (Also Sunday Church Service, Sunday Evening Youth Service, Summer Daily Vacation Bible School, and Thursday afternoon Release Time Religious Education in High School.) (You would think I would know more about Christianity, but it was mostly rote Bible reading, questions not encouraged.)

There is a problem with saying the lack of dark matter explains this. The idea is that a universe expanding faster might account for the absorption. However, this leads to other problems. I wrote a derivation of the FLRW equation for the Hamiltonian H = E = 0

E = (ȧ/a)^2 − 8πGρ/3c^2 + k/a^2 = 0

on stack exchange ( https://physics.stackexchange.com/questions/257476/how-did-the-universe-shift-from-dark-matter-dominated-to-dark-energy-dominate/257542#257542 ) using Newtonian physics. This accounts for the FLRW equation above for k = 0. This is really a remarkable result. Anyway, the density has various components and these are often represented by Ω_m = .05, Ω_dm = .23, and Ω_de = .72. If we have data on H = ȧ/a ~ 72 km/s/Mpc and we know Ω_m = .05, then the missing dark matter component has to go somewhere for the total Hamiltonian constraint H = 0 to hold. It can't go into the dark energy portion, for that would really skew things. If it is gone then general relativity is violated, for remember that in ADM relativity NH = 0 is the Hamiltonian constraint.
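For what it's worth, the bookkeeping in the comment above can be sketched numerically, using the density fractions as quoted (flatness assumed):

```python
# With the quoted density fractions (illustrative values), the Hamiltonian
# constraint for k = 0 just says the fractions sum to one, and one can ask
# where matter domination gives way to dark-energy domination.
omega_m, omega_dm, omega_de = 0.05, 0.23, 0.72

# Flat-universe constraint: the fractions must close to unity.
assert abs(omega_m + omega_dm + omega_de - 1.0) < 1e-9

# Matter (baryonic + dark) dilutes as (1+z)^3, while the dark-energy
# density stays constant; equality marks the handover between the eras.
z_eq = (omega_de / (omega_m + omega_dm)) ** (1.0 / 3.0) - 1.0
print(f"matter / dark-energy equality at z ~ {z_eq:.2f}")
```

Dropping Ω_dm without putting the missing 0.23 somewhere breaks the closure assertion immediately, which is the commenter's point.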

Alex Small-- Such experiments have been proposed, but the analyses in most such papers have neglected to account for the "External Field Effect," https://www.astro.umd.edu/~ssm/mond/EFE.html

MOND is a nonlinear gravitational theory, and the Solar System is not isolated, but is subject to the gravitational effects of the rest of the galaxy. Since the expected galactic gravitational acceleration of the Solar System is well in excess of the MOND "critical acceleration" a0, and since the MOND response of a test-body is determined by the ratio of the "Newtonian" gravitational acceleration to the MOND characteristic acceleration, any test-body within the Solar System will obey unmodified Newtonian Physics to a good first approximation --- even at the point where the Earth's and the Sun's gravitational fields "cancel out" --- because even at the "cancelation point" the galactic gravitational field is still strong enough to put the test-body in the "Deep Newtonian" regime, solar-system-induced accelerations notwithstanding.

To do a proper "cancelation" experiment would most likely require going to the point where the Solar and Galactic gravitational fields cancel. Unless I've made an error, my BOTE puts the Solar/Galactic "cancelation point" at on the order of ~0.1 lightyears away from the Sun --- which is well out into the Oort Cloud, and much much farther away than any human-made spacecraft has yet reached.
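A rough numerical version of this back-of-the-envelope estimate, with illustrative values for the galactic acceleration at the Sun's position:

```python
# Rough check of the estimate above: the point where the Sun's Newtonian
# pull drops to the galactic centripetal acceleration felt by the Solar
# System. Galactic numbers are illustrative round figures.
import math

GM_SUN = 1.327e20          # G * M_sun, m^3/s^2
V_CIRC = 2.3e5             # solar circular speed around the galaxy, m/s (assumed)
R_GAL = 8.0 * 3.086e19     # galactocentric radius ~8 kpc, m (assumed)
LIGHTYEAR = 9.461e15       # m

g_gal = V_CIRC**2 / R_GAL              # galactic acceleration, ~2e-10 m/s^2
r_cancel = math.sqrt(GM_SUN / g_gal)   # radius where GM_sun/r^2 = g_gal

print(f"galactic acceleration: {g_gal:.1e} m/s^2")
print(f"cancelation point: {r_cancel / LIGHTYEAR:.2f} ly")   # ~0.1 ly
```

The result comes out near a tenth of a lightyear, consistent with the ~0.1 ly figure quoted above.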

"…and darkness was upon the face of the deep, whatever that means."

"And the earth was without form, and void; and darkness was upon the face of the deep: and the Spirit of God moved upon the face of the waters."

I know where the quote is from. I was saying it doesn't make sense to me.

But let’s follow the fundamentalists’ advice and take it literally. Join me on a beach at sunrise 5000 years ago, when my five-year-old says “Dad, what’s actually going on here”? I need an answer, but I don’t know fuck-all about space, planets, or why exactly I think I can rely on the sun coming up every morning, but I do know that there are mountains, deserts, oceans, and that the world is much nicer in the daytime than it is at night. So (and here we’re just hoping that Genesis was reasonably well-translated from Aramaic to Greek to Latin to English, or however it happened), I say “Kiddo, once upon a time it was always night, not even the moon and stars, and you couldn’t even see the water, and there wasn’t anything moving, and look, now we have sunrises and you can count on the tides every day, and the wind makes waves on the sea; that’s the story. Let’s go catch some breakfast.”

Not a fact about the world, but (as someone once said about Van Gogh paintings) a window into our brain. And about the god part, here I am in 2018 Philadelphia, 60 years an atheist, daydreaming about benevolent space aliens landing on Earth and fixing our mess. Raise your hand if you’ve done it too.

This is outside my field, and I confess I do not understand it, but I can't help noticing that an associated paper by Barkana (Nature 555, pages 71–74) seems to be saying that the result is predicted by dark matter, and the dark matter particle cannot be heavier than several proton masses. That leaves a lot of scope for dark matter, although it does knock out some postulates. Another point that seems puzzling (to me) is that it seems to depend on what the temperature of the gas was supposed to be. How sure are we that that model is likely to be correct, and how much does it depend on the earlier inflation being correct? Finally, the rise in temperature at the end – how plausible do you think it is that this could be due to hydrogen atoms coupling to form molecules? This gives off about 1/6 of the ionisation energy of a single atom per atom. One could argue this has to happen (and it might need dark matter to make it happen) because there is still the problem of forming stars – how do the first stars radiate away the heat due to gravitational collapse? One possible route might be H3+, which radiates infrared and is formed when hydrogen molecules get above 2000 K.
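The "1/6" figure in the comment above checks out with textbook numbers (H2 binding energy ~4.5 eV, shared between the two atoms):

```python
# Quick arithmetic behind the "1/6 of the ionisation energy" figure,
# using standard textbook values: forming H2 releases the molecular
# binding energy, shared between the two atoms.
E_BIND_H2 = 4.48     # H2 binding (dissociation) energy, eV
E_ION_H = 13.6       # hydrogen ionisation energy, eV

per_atom = E_BIND_H2 / 2.0
fraction = per_atom / E_ION_H
print(f"energy released per atom: {per_atom:.2f} eV")
print(f"fraction of ionisation energy: {fraction:.3f}")   # ~1/6
```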

“The more matter there is in the universe, the faster the expansion decelerates. This means without dark matter, the period in which the gas can interact with the radiation is longer, allowing more absorption.”

I don’t understand. The time from big bang to present is fixed around 13.8 billion years but distances between galaxies are increasing over time. Without dark matter, expansion is faster and distances longer. Intergalactic hydrogen gas clouds are more diffuse. There should be less absorption of UV light. Interstellar gas clouds are not affected by expansion rate since gravity within a galaxy is strong enough to resist space expansion.

It's the distance that light travels while the temperature is between two different values so that absorption can take place. The temperatures change with the expansion. The distances between galaxies (or stars) are not the relevant quantity.

Temperature is relevant for the period when hot plasma cooled to form atoms that emitted the CMB. But the absorption of UV by hydrogen gas occurred during the “cosmic dawn.” The gas had already cooled before the first stars were formed. Temperature is no longer relevant during this period.

Lawrence,

I agree. You still need dark matter for the missing mass. Neutrinos travel at light speed. They cannot form dark matter halos in galaxies. The orbital speed is much less than light speed.

If this were the only problem, but we also have the "core-cusp problem", the "missing satellites problem", the "extremely low scatter of the Baryonic Tully-Fisher relation problem", the "planes of satellite galaxies problem" and the "too big to fail problem". Will they all "go away" ? Hm ...

Mean free path of a photon is a function of the cross section of the absorbing atoms and the particle (atom) density. Shorter path means more absorption. Temperature is not in the equation. Temperature is relevant for plasma mixed with atoms. Lower temperature, more atoms form by recombination.

@Enrico, Neutrinos have mass. We know neutrinos oscillate between the various lepton types or families. This is because the mass matrix is not diagonal in the same basis as the Hamiltonian. Feynman writes about this with the K K-bar physics. As a result, since there is a mass splitting, at least two of the neutrino mass eigenstates must have non-zero mass.

@Bee, Indeed neutrinos could form dark matter. Their kinetic energy would be very small. For a mass of ~10 eV, moving in a halo at around 100 km/s, the kinetic energy is ~5×10^{-7} eV. That is very small and one might ponder what conditions would have frozen them into such a low energy gas. There is the misalignment mechanism with axions during inflation that theoretically froze them into near zero kinetic energy. So very small mass particles could form DM.

Supersymmetry appears potentially ruled out. The low mass DM models for WIMPs are not working. The recent ANITA anomalous events, and supporting observations from IceCube, may be signatures of a stau. This is at an extreme energy of 10^{9} GeV. This is in line with my criticism of low mass SUSY particles, for the large Ricci curvature during inflation would have broken SUSY “hard.” This means the neutralino phenomenology for DM is probably ruled out.

https://www.symmetrymagazine.org/article/five-mysteries-the-standard-model-cant-explain... "The Standard Model is a thing of beauty."

The standard model is a kludge curve fit omitting neutrino mass and "dark matter." It is a formalized restatement of what theory knew, purporting to predict outside its boundary conditions. It does not.

Bee said, "You can't rule out susy. You can only rule out certain mass/interaction regimes for certain particles."

That is sort of what I inferred. The low mass SUSY partners appear to not exist. The ANITA results appear to indicate a huge mass, ~ 10^{18} eV, for the supersymmetric partner of the tau lepton with mass ~ 105 MeV. I think it was I. I. Rabi who responded with "Who ordered that?" when the tau (then called meson) was found, and we now have again "Who ordered that?"

During inflation the Ricci R^{00} is at most 10 or so orders of magnitude below the Planck curvature 10^{66}cm^{-2}. SUSY is an exact symmetry for zero energy, and so SUSY would be completely broken at this energy. Then with reheating when this false de Sitter vacuum collapsed it is possible that SUSY remained highly broken. With some phase transitions if the temperature changes very abruptly or there are no nucleation sites such as dust, a material can exist in a phase typically not assigned to that temperature. So even though the cosmological constant is ~ 10^{-54}m^{-2} SUSY might have remained in a highly broken phase.

The small mass SUSY breaking is analogous to Zeeman splitting, which made the most sense to me. An atom in a very strong magnetic field behaves very differently and becomes a sort of quantum string of electrons with a nucleus. The breaking of SUSY might have been of this order, and it persisted even with the bubble nucleation to the false vacuum.

Lawrence, Isidor Rabi's famous comment "Who ordered that?" was actually in reference to the newly discovered 105 MeV muon, while addressing a fellow diner at a Manhattan restaurant in 1936. The tau lepton, discovered in 1975, has a mass of about 1776 MeV.

Lawrence, I meant to say that you accidentally mixed up the names of the two heavy leptons (muon and tau), with regard to the I. I. Rabi story "Who ordered that?". Having been interested in the weak interactions for many years, that anecdotal story was fresh on my mind.

@ David Schroeder: Yep, you are right. It was the muon and not the tauon.

@ Ajo: Verlinde's emergent gravity may have some role. There are two aspects of this. One is the entropic force of gravity, which really involves the change in configuration of a holographic screen. From the perspective of moving a holographic screen we can see how a change in entropy yields Newtonian gravity. The heat energy Q = E is, by equipartition,

E = ½NkT

where N = the number of oscillator modes on the holographic screen, N = A/ℓ_p^2, for A the area of the holographic screen A = 4πr^2 and ℓ_p = sqrt{Għ/c^3} the Planck length, so that ℓ_p^2 is a fundamental unit of area. We then employ the temperature of a Rindler wedge given by the Unruh effect, T = ħg/2πkc. Putting all of this together we get

E = (r^2/G)gc^2.

Then for the energy we just use E = Mc^2 and the acceleration is

g = GM/r^2.

This is one way of thinking about Verlinde's entropic gravity.
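The chain of substitutions above can be verified numerically; the script below just redoes the algebra with SI constants, using Earth's mass and radius as an illustrative test case:

```python
# Numerical check of the entropic-gravity substitutions above (pure
# arithmetic, no physics beyond what the comment states).
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
HBAR = 1.0546e-34    # J s
C = 2.998e8          # m/s
KB = 1.381e-23       # J/K

M = 5.97e24          # test mass: Earth, kg (illustrative choice)
r = 6.37e6           # test radius: Earth's surface, m
g = 9.8              # trial acceleration, m/s^2

lp2 = G * HBAR / C**3                   # Planck area
N = 4 * math.pi * r**2 / lp2            # oscillator modes on the screen
T = HBAR * g / (2 * math.pi * KB * C)   # Unruh temperature of the screen
E = 0.5 * N * KB * T                    # equipartition energy

# E should equal (r^2/G) g c^2, as stated above; setting E = M c^2 and
# solving for g then recovers Newton's law.
assert abs(E - r**2 * g * C**2 / G) / E < 1e-9
g_newton = G * M / r**2
print(f"g from E = Mc^2: {g_newton:.2f} m/s^2")   # close to 9.8
```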

Erik Verlinde carried this further with the idea that matter is an aspect of spacetime physics. The idea is that in anti-de Sitter spacetime, entanglements of CFT fields on the conformal boundary are equivalent to wormholes through the bulk. This is a form of Susskind's ER = EPR. I think this in general is an aspect of how the AdS_n = O(n,2)/O(n,1) embeds into a Hermitian manifold geometric description of entangled states. Anyway, in de Sitter spacetime this holographic AdS ~ CFT correspondence no longer holds, so the amount of entropy on the horizon no longer reflects entanglement. Hence if we are to say that quantum entropy is a measure of quantum bits and is constant, then this must be made up “elsewhere.” Hence there is then the occurrence of this quantum information in this massive field effect we call dark matter. It is an interesting idea, of which I am neither convinced nor certain it is wrong.

It does seem to correlate with problems with de Sitter spacetimes not being consistent with quantum gravity, and Vafa shows de Sitter is not consistent with the string theory landscape. The observable universe is approximately dS_4. Maybe dS_4 spacetimes are junction conditions on holographic screens in AdS_5 with positive energy, and reflect some sort of symmetry breaking. As indicated above, inflation should have highly broken SUSY, and so we may indeed be in Vafa's swampland.

Comment on MOND: MOND is really not a theory, but more of a phenomenological method. It is a way of working with what might be thought of as gravitational form factors. Newtonian gravitation for some distribution of mass density ρ(r′) gives an acceleration

g(r) = G ∫d^3r′ ρ(r′)/|r − r′|^2,

which has some equivalency with

g = GM/f(r),

for some function f(r) of the radial distance. That is in a minimal sense all that MOND may be telling us. The modification f(r) may be thought of as due to the mass-energy distribution, and knowing it gives information on galaxy structure.

Dr. H, this is not about your blog post. You don't have to moderate it through. But I can't resist sending it to you. I've been trying to understand Dr. Edward Witten's views on the state of string theory, and while looking at his papers I ran across this:

"At the same time, naturalness has been called into question because of developments on another front – the observation of the cosmic acceleration. If we apply the same reasoning that we applied to the Higgs mass parameter, then the measured vacuum energy [...] is highly unnatural – as far as we can see. This might be telling us that “naturalness” – as understood by particle theorists for the last 30 years – is not the right concept." - Edward Witten, 2008

Yes, that's right. The cosmological constant is not natural. It just isn't. We have known this for more than 20 years. (Or longer, though people didn't discuss it as long as they thought the CC was zero. Zero is also technically unnatural though. Witten must have believed it's negative. Why he only discussed its unnaturalness once it was clear that it's positive I don't know.)

That's why I am so frustrated. We know it doesn't work. We have known this for a long time. We knew this before the LHC even turned on. That's why I've been writing about this for 10+ years. I am not surprised the LHC hasn't found anything besides the Higgs. We had all the necessary information to make this inference already.

It doesn't make sense to me either. All I know is that if we have a question or concern important to us, and we ask God in faith, believing he'll answer, then he will answer in his own time and way. If it's important to us, it's important to God because he loves us each so dearly.

Thanks Lawrence for your elaborate reply. Unfortunately I'm not a physicist, so most of what you wrote is beyond my understanding. It is just that I think Verlinde's theory seems elegant, as it does not require particles we have never directly detected. It seems to explain the anomalies of rotation in galaxies, and also passed tests of gravitational lensing. Hence I wonder if it would also pass the observations in the EDGES project. My (simple) understanding of the original post is that these observations do not tie in with our current theories on dark matter, but that MOND (not a theory?) does. Verlinde's theory may therefore also fit, and provide additional support for this (emerging) theory and possibly disprove the theory on dark matter.

If naturalness is not a useful concept for guiding physics beyond the standard model, then let us also state the corollary - the Planck scale may not mean anything.

The naturalness argument is that "quantum gravity" becomes relevant at the Planck scale.

Without naturalness, we could have "quantum gravity" becoming relevant at a scale that is unnatural in Planck units. Moreover, this could go either way, "quantum gravity" could become relevant at an energy scale much lower than the Planck scale, or it could become relevant at an energy scale much higher than the Planck scale.

@Lawrence Crowell-- There is more to the MOND phenomenology than just a change in the asymptotic distance-dependence of the gravitational force law. The asymptotic MOND "force" on a test-particle depends on sqrt(M), not M, so Newtonian linear superposition of the gravitational fields produced by massive sources fails quite strongly in the "Deep MOND Limit" of "ultra-weak" accelerations that are much much less than the "MOND acceleration" $a_0.$ Moreover, the "External Field Effect" also implies a failure of the Weak Equivalence Principle at small accelerations. (Note that weak-field superposition and the Weak Equivalence Principle are both cherished tenets of most modern gravity theorists, and that abandoning either one of them, even in a particular limit, is a "Big Ask.")

OTOH, Milgrom has shown that the "Deep MOND Limit" follows from requiring "space-time scale-invariance of orbits" in the ultraweak-field ultraslow-motion dynamics, arXiv:0810.4065; that this scale-invariance requires that "Mass" can only appear in the combination $GMa_0,$ where $a_0$ is a universal constant with the dimensions of "acceleration"; and that any theory that reduces to MOND in the "Deep MOND Limit," in which all accelerations are much much less than $a_0,$ leads to a new form of the Virial Theorem for "Deep MOND Systems" that is nonlinear in mass. (It has long been known that the Virial Theorem is closely connected to scale transformations; see e.g. Landau & Lifschitz's Mechanics.)

Failure of linear superposition and the WEP in the "MOND Limit," and their replacement with "space-time scale-invariance of orbits," taken together with the observation that $a_0$ is on the order of the "Hubble acceleration" $cH$ (so that the "MOND Limit" tends to co-occur with the "Cosmological Limit"), would seem to imply that any theory that leads to MOND in the "Deep MOND Limit" must be "Nonlinear," "Scale Invariant," and in some sense probably "Machian" in that limit. Not many researchers other than Milgrom have studied such "scale invariant" models; about the only examples I can think of off the top of my head are Julian Barbour and his collaborators on "scale-covariant gravity" --- and so far as I know, none of them have made connections to MOND, since instead of studying the "ultraweak-field" and "cosmological" limits, they have been far more interested in trying to connect to strong-field GR on the one hand, and Quantum Gravity on the other. And while Milgrom has speculated on possible "Machian" aspects of MOND, to the best of my memory not even Milgrom has constructed an explicitly "Machian" model of MOND. (Again, Julian Barbour and collaborators have worked on "Machian" theories, but not on "MONDian" theories.)
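The sqrt(M) behaviour mentioned above can be illustrated with the deep-MOND relation v^4 = G M a_0; the galaxy mass below is an assumed, illustrative value:

```python
# Sketch of the deep-MOND regime: the effective acceleration is
# g = sqrt(g_Newton * a0), which yields a flat rotation curve with
# v^4 = G * M * a0. The galaxy mass here is an illustrative assumption.
G = 6.674e-11      # m^3 kg^-1 s^-2
A0 = 1.2e-10       # MOND acceleration scale, m/s^2
M_SUN = 1.989e30   # kg

M = 5e10 * M_SUN   # baryonic mass of a Milky-Way-like galaxy (assumed)

v_flat = (G * M * A0) ** 0.25      # asymptotic circular speed
print(f"flat rotation speed: {v_flat / 1e3:.0f} km/s")

# Doubling the mass raises v by only 2**(1/4) ~ 1.19, not the sqrt(2)
# that Newtonian scaling of the force would suggest -- the sqrt(M)
# nonlinearity the comment describes.
print(f"speed ratio for 2M vs M: {2 ** 0.25:.3f}")
```

This v^4 ∝ M scaling is just the baryonic Tully-Fisher relation, which is part of why the deep-MOND limit fits rotation-curve data so tightly.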

Bee and Unknown: We have known since 1990 there was something wrong with a lot of our physics. The standard model gives a large vacuum energy, much larger than known by the cosmological constant. Supersymmetry works in unbroken phase with zero vacuum energy or cosmological constant. String theory works only in zero or negative cosmological constant, where the negative case is the AdS background. Yet we know the observable world has positive cosmological constant. Now with the LHC all we have is the standard EW plus QCD model, and that most likely can't be fundamental.

When it comes to quantum gravity the real problem is not theory, it is experiment. Whether one is dealing with strings or LQG or other ideas, we simply have little way of accessing Planck scale energy. I propose to let nature do the heavy lifting for us. Black hole coalescence may carry signatures of quantum hair, and connections with quantum gravity, in gravitational waves. Without experimental data we can only expect to have confusions.

for your next blog post, could you explain why string theory has a problem with dark energy, Vafa's conjecture, possible problems with quintessence and scalar fields like the higgs and string theory, the status of getting deSitter space out of strings like KKLT?

could 2018 be the beginning of the end of string theory, or do you think string theory can work around these problems?

Your statement that we have no way of accessing the Planck energy has been often repeated, but it's wrong. The Planck energy is a small energy in every-day terms. I don't know why physicists frequently tend to forget that gravity is cumulative. It adds up for massive particles. That, indeed, is the very reason why it's hard to quantize.

A consequence of this is that we can test quantum gravity not by colliding particles at high energy but simply by creating more massive objects with quantum properties. There is no obstacle to this in principle - at least not that we currently know of - it's a matter of technological sophistication. This is Nobel-prize worthy research (and let me be clear I am not talking about my own research here), yet there is hardly anyone working on it and pretty much nobody knows anything about it. How come, I ask you? Best,
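The claim that the Planck energy is small in everyday terms is easy to check from E_p = sqrt(ħc^5/G):

```python
# The Planck energy in everyday units: E_p = sqrt(hbar * c^5 / G).
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
HBAR = 1.0546e-34   # J s
C = 2.998e8         # m/s

E_p = math.sqrt(HBAR * C**5 / G)
print(f"Planck energy: {E_p:.2e} J")        # ~2e9 J
print(f"  = {E_p / 1.602e-10:.2e} GeV")     # ~1.2e19 GeV
print(f"  = {E_p / 3.6e6:.0f} kWh")         # a few hundred kWh
```

About two gigajoules: enormous for a single collision, but roughly a tank of gasoline in everyday terms, which is the point being made above.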

But Bee, it is the unnatural numbers, and the "fine tuning" they subsequently necessitate, produced by perturbative quantization in the energy realm where it actually works well for most phenomena, that led to naturalness in the first place.

Second, why should we trust perturbative quantization of gravity at all? What observed phenomena justify that?

btw, what do you think is a stronger result empirically: dark matter as inferred from the third peak in the CMB, or no dark matter, baryons only, in the 21cm signal, which may be independently verified by the additional experiments you cite?

perhaps the "dark matter" as identified in the third peak of the CMB decayed by the time of the 21cm epoch

I don't know what you mean. You can perturbatively quantize GR, and you can extrapolate it to higher energies, and from that you can estimate at which energies the interactions have a similar strength as the other interactions in the standard model. That energy is the Planck energy. There are various easier ways to arrive at the Planck energy as the relevant energy scale but that, I believe, is the cleanest one.

In case you want to say that this assumes there are no other contributions that come in and ruin the extrapolation, that is certainly true.

A theory that is non-renormalizable isn't merely fine-tuned, it's useless once you pass a certain energy. This is not a criterion from naturalness, it's a criterion from "having a theory that actually allows you to calculate anything in the first place."

That's a very good question, alas one I can't just answer by gut feeling. What one would really need to do is to compare how well those models do in explaining all available data. I find it disturbing that many astrophysicists discard modified gravity simply because in some regimes dark matter works better. That's not sound procedure. I suspect that the parametrically most preferred model is currently one that has dark matter at some temperatures and modified gravity at others. If someone would just bother to make a global fit we could tell the presence of a phase transition right from the data. Alas, a two-phase solution is too ugly for astrophysicists to contemplate, so nobody does it.

But it's actually a problem that machine learning can solve, so I am quite hopeful that it'll happen in the near future.

i had in mind either quantum mechanical black holes or Wheeler's geons or neutrinos that explains the third peak of the CMB, but by the 21cm era, they have dissipated creating a baryon only universe, and galaxy curves are explained by MONDian gravity

Thing is, there's a lot of data from different observations and it's really hard to tell which model does the best job fitting them overall. The issue is that people who do parameter fits usually only look at cosmology, disregarding that it's difficult to make dark matter work on galactic scales. Otoh, the dark matter folks who do galaxy simulations don't compare their model to modified gravity. Then they complain that "modified gravity can't do cosmology". This doesn't make any sense. It's not a coherent argument.

I don't know how well Stacy's NoCDM would fare (I'm not a fan of heavy neutrinos & not excited about this) and I don't know anything about other ways to explain the 3rd peak. Even if I knew, however, my point is that there are better ways to figure out which model works best than polling people's opinions. This is research which could have been done decades ago but wasn't, and it's still not being done because too many people think it's just too ugly to even contemplate. Best,

@Bee, I suppose I should say we need energy density ρ ~ ℓ_p^{-4} ~ 10^{76} GeV^4. There are people working on quantum buckminsterfullerenes and mesoscale quantum “cats.” Maybe that will be scaled up to quantum dust motes or quantum fleas, the latter being about a Planck mass. This will not do much for quantum gravity, for the gravity fields of these quantized objects will be minuscule. These quantum objects have to be reduced in size so they are down to the Planck scale and form a quantum unit of a black hole. That is where things get really tough.

@gdp, I am aware that MOND is considered fundamental and that GMa_0 is thought of as the coupling constant, with a_0 some fundamental constant. However, the methodology is to adjust things to fit data, and as I see it this is really a sort of form-factor fitting. MOND as a result is continually tuned to fit galaxy structure. However, ΛCDM models galaxy clusters well and it also reproduces the CMB anisotropy curve in ways that MOND is not able to. ΛCDM admittedly does not address issues of galaxy structure.

I tend, though, to see ΛCDM and MOND as different methods. For that reason comparing or contrasting them is a bit uncertain. How this will turn out I am not certain. The existence of a very small and fundamental a_0 is odd to me, though maybe it is some S-dual of the Planck acceleration a_p ~ 10^{53} m/s^2.

@ Lawrence Crowell - Your very technical response to ajo caught my attention, particularly this sentence regarding Erik Verlinde's Emergent Gravity model: "The idea is that in the anti-de Sitter spacetime entanglements of CFT fields on the conformal boundary are equivalent to wormholes through the bulk."

In further detailing Erik Verlinde's model in that response, 5-dimensional anti-de Sitter space (AdS_5) is mentioned, so I immediately assumed that Emergent Gravity utilizes entanglement through a bulk space such as that incorporated in Randall-Sundrum (RS 1 and 2) models (though their "bulk" may be in de Sitter space, I'm not sure).

Pardon my naivete, as a layperson, but this seems to make sense with regard to entanglement experiments conducted in laboratories, and even at Tenerife, where gazillions of atoms are interposed between the entangled particles, be they atoms or photons. In Verlinde's model having the wormhole set up through a bulk space to connect distant entangled particles obviates the need for the wormhole to pass through our 4D space filled with gobs of matter. Since, at least according to the RS models, baryonic matter cannot enter the bulk, there would be none there to interfere with the wormhole.

No, that's not the case. You need Planckian energy-density if you want to reach the regime in which perturbative qg breaks down. You do not need Planckian energy-density to probe the perturbative regime. As I explained above, the only thing you need to do is to prevent quantum effects from decohering as you pile up masses. That's not simple, but there's no major impossibility standing in the way of doing this.

Bee wrote No, that's not the case. You need Planckian energy-density if you want to reach the regime in which perturbative qg breaks down.

Ok, maybe one does not need the Planck energy, or the Hagedorn energy/temperature, but one has to have energy at extreme density. I might be able to quantize a flea, which would have around 10^{15} atoms. This would be quantization on the large, but it would not inform me much about gravity. While a superposed particle at different positions in space, say at two slits, means spacetime curvature has this property as well, the field is virtually negligible. The gravitational potential energy of one flea on another that are touching is around V = -GM^2/r ~ -10^{-21} J, or about 10^{-3} eV. This is smaller than the Planck energy by 30 orders of magnitude. To get our quantized fleas to exhibit quantum gravitational effects they would need to be compressed by a factor of 10^{30}, which is close to the Planck scale.
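These orders of magnitude are easy to check numerically (a sketch; I assume a Planck-mass flea of about 2x10^{-8} kg and a separation of about 0.1 mm, since the comment quotes only rough orders):

```python
# Order-of-magnitude check: mutual gravitational potential energy of two
# Planck-mass "fleas" nearly touching. The mass and separation are assumed
# round numbers; the comment above gives only orders of magnitude.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_planck = 2.18e-8   # Planck mass, kg
r = 1e-4             # assumed separation, ~0.1 mm

V = -G * m_planck**2 / r    # potential energy in joules
V_eV = V / 1.602e-19        # the same, in electron volts
```

With these assumed numbers |V| comes out near 3x10^{-22} J, about 2x10^{-3} eV, consistent with the quoted orders of magnitude and some thirty orders below the Planck energy of ~10^{28} eV.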

The Randall-Sundrum model is similar to my suggestion that maybe spacetime with a positive vacuum or cosmological constant is a junction condition on a holographic screen surrounding a causal wedge in AdS_5. I prefer to tread lightly on some of the deeper string theory, such as D-branes, though there are theories of holographic D-branes by Moskovic and others.

In the AdS we have a correlation between the bulk gravitation and a conformal field on the boundary of one dimension lower. This was the coup that Maldacena pulled off 20 years ago. Then ER = EPR implies that entanglements between quantum states on the boundary are equivalent to wormholes through the bulk. What about de Sitter spacetimes? The problem is that the cosmological horizon is not a holographic screen. So there are no equivalent constructions for entanglements on a screen equal to wormholes. So according to Verlinde we have something else. This entanglement is with a field effect we call dark matter.

We see galaxies way out there with z > 1, which means on the Hubble frame these galaxies are beyond the cosmological event horizon at d = sqrt{3/Λ}. A galaxy with z = 1 is on the Hubble frame now crossing the horizon. So we have a sort of holographic result, but we do not see them Lorentz contracted to a black barrier. If we lived in an AdS spacetime we would see galaxies both blue- and red-shifted, coming towards us and leaving us near the "conformal boundary." So there is no analogue of entanglement on a holographic screen or horizon. As a result quantum matter at the boundary is entangled elsewhere, and this entanglement has a field effect in spacetime around matter that is entangled with fields on the horizon.

I think there is something to this, though I would like to see some particle equivalency to it. Einstein's field equation might be thought of as

UV gravity field = IR quantum fields

UV = ultraviolet and IR = infrared, where if we put gravity in a bulk and the quantum fields on the boundary this is AdS~CFT of Maldacena. We might also think of this as

nonlocality of bulk = locality of fields on boundary,

where quantum gravitation is nonlocal, but in the IR limit the quantum fields on the boundary are local. A junction condition on a holographic screen bounding a causal wedge in AdS would be induced by symmetry breaking that isolates causal wedges from the AdS bulk with closed timelike curves. In dS, maybe on the holographic screen, this would imply IR locality of quantum fields, but something odd with nonlocality of quantum gravity. Maybe this is what Verlinde is looking at.

@Lawrence Crowell-- In fact, there is essentially no "fitting" involved in MOND, since all admissible MOND "interpolation functions" must approach unity for $a >> a_0,$ and must approach the "identity function" (i.e. be linear and equal to their argument) for $a << a_0.$ Since the large majority of observed objects are either in the "Deep Newtonian Regime" (a >> a_0) or in the "Deep MOND regime" (a << a_0), the interpolating function is effectively either equal to 1 or equal to the "identity function," so there is nothing that can be "fitted" --- and hence MOND effectively has but a single adjustable parameter, $a_0,$ which currently appears to be a "Universal Constant." (By contrast, LCDM involves a LOT of "fitting", since in addition to the "Mass to Light Ratio" for each object, most CDM "Halo Functions" have at least one nonuniversal free parameter that must be individually estimated for each object. For example, the widely-used "NFW profile" has two free parameters, the "central density" and the "halo radius," and neither of them are "universal constants.")

It is only for the small minority of bodies that are in the "transitional zone" $~a_0/10 < a < ~10a_0$ that the detailed form of the MOND "interpolation function" matters, since the observable data supporting the "Radial Acceleration Discrepancy" range from more than two decades below a_0 to more than three decades above a_0 --- and as a number of authors have noted, even in the transition interval, the predictions of MOND are fairly insensitive to the detailed form of the "interpolation function," at least when plotted on a log-log scale. Thus, in practice there is no "fitting" involved in applications of MOND; one determines whether the object in question is in one of four qualitatively different regimes: Newtonian (a_0 << g_external << g_internal), External-Field Dominated Newtonian (g_internal << a_0 << g_external), External-Field Dominated Quasi-Newtonian (g_internal << g_external << a_0), or "Deep MOND" (g_external << g_internal << a_0), and then applies the appropriate asymptotic limit of the interpolation function --- none of which steps involve any "fitting," see https://www.astro.umd.edu/~ssm/mond/EFE.html
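The four regimes can be expressed as a tiny classifier (a sketch; the function, its name, and the sharp thresholds are my own illustration, since the physical transitions are gradual, and the a_0 value is an assumed round number):

```python
A0 = 1.2e-10  # MOND acceleration scale in m/s^2 (assumed value)

def mond_regime(g_internal, g_external, a0=A0):
    # Sharp-threshold caricature of the four asymptotic regimes above;
    # physically the transitions are gradual, not step functions.
    if g_internal > a0 and g_internal > g_external:
        return "Newtonian"
    if g_external > a0:
        return "EFE-dominated Newtonian"
    if g_external > g_internal:
        return "EFE-dominated quasi-Newtonian"
    return "deep MOND"
```

For example, a star well inside the solar neighborhood (g_internal >> a_0) classifies as Newtonian, while an isolated low-surface-brightness galaxy (both accelerations far below a_0, internal dominant) classifies as deep MOND.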

The other point is that in your earlier post, you assume that MOND is essentially equivalent to merely replacing the acceleration-law GM/r^2 with GM/f(r) for some f(r) that is to be "fitted" --- but this is not the case. For small accelerations and a small test-body, the asymptotic acceleration is proportional to sqrt(M), not to M, so that the system generates an "Intrinsically nonextensive dynamics" that (to somewhat abuse terminology) does not exhibit the "Cluster Decomposition Property" when it is in the "Deep MOND regime."

So MOND is neither "Just A Fitting Problem" nor is it just a "Simple replacement of the inverse-square law" with an alternate long-distance dependence; it is an asymptotically nonextensive form of dynamics that is qualitatively different from Newtonian dynamics with regard to both mass scaling and space-time scaling. (It is also qualitatively different from weak-field GR due to its asymptotic breaking of the Weak Equivalence Principle, which is one of the reasons why MOND has been so resistant to "relativization.")

Collision (or absorption) frequency is a proxy for probability. In the frequency equation, temperature is a substitute for the mean speed of the gas molecules. Temperature is proportional to frequency and probability: higher temperature means higher probability (more absorption). Suppose the "cosmic dawn" occurred between 1 and 2 billion years after the big bang. With dark matter, expansion is slower and the volume smaller than without DM. In adiabatic expansion, temperature drops as the volume grows (for a monatomic gas, T ∝ V^{-2/3}). Bigger volume, lower temperature. So with DM: smaller volume, higher temperature, and more absorption.

Lawrence,

Yes, neutrinos have mass but they still travel approximately (not exactly) at light speed, far too fast for galactic orbital speeds. Neutrinos cannot form dark-matter halos in galaxies.

Once again: What you say is simply wrong. I am now repeating this for the third time. Why do you keep insisting on making wrong statements? You do not "quantize a flea," whatever that's supposed to mean; you need to create massive objects with quantum properties, eg entangled states or coherent superpositions. The gravitational field associated with them also has quantum properties. That's weak-field gravity of course. Perturbative regime. If the object is massive enough, you can measure the quantum properties of the gravitational field. For this you neither need to reach Planckian energies nor any particular threshold mass, you merely need to be able to reach the current sensitivity limit for measuring gravitational fields (which is something like a nanogram at present but improving rapidly).

As I said above, too many people think that gravity is weak per se. That's incorrect. The coupling constant of gravity is dimensionful. It is weak *for the typical masses of elementary particles* but it gets stronger as you add up mass.

My point about fleas is that they have about a Planck mass. Gravitation scales as GM^2, and as you say, if we could prepare objects of very large mass in quantum states we could do quantum gravity. An extreme case might then be quantum planets. More moderate might be a Cavendish-type experiment with quantum masses. This runs into another problem: quantum coherence for large N is hard to maintain.

This approach to quantum gravitation would be a form of Schrödinger’s cat, where the cat is in a superposition of two states that have different spacetime configurations. This superposition of spacetime configurations would then be a quantization of the metric. The evolution of the field and source would obey the wave equation ∇^2h - (1/c^2)∂_t^2h = (8πG/c^4)ρ, with the reduced metric h an operator, so the wave equation is quantum mechanical. The right-hand side is pretty small.

People have been doing quantization-on-the-large experiments with large molecules and buckminsterfullerenes. The source below is a bit dated, but more current work is on about this order. This is a long, long way from doing quantum gravity with Schrödinger’s cats.

I’ve since tried to find out what, if anything, astrophysicists think about the mismatch with the predictions. The answer is: not much. Most of them think it’ll go away. Maybe a measurement error. I have even been told to not pay attention to the EDGES result because the paper has not been cited all that often.

Really?

A quick search with the NASA ADS shows that this paper has accrued 131 citations in less than a single year. This is astonishingly large, even for a Nature paper. (I did a quick check on citation counts for other Nature papers in astronomy and cosmology published within a month of this paper. There were three with citation counts in the low 90s; the other 25 or so all had citation counts less than 40, with the majority having only 10 or fewer.)

Having noted that, I would say that being skeptical of a single study from a single telescope (published in Nature, no less) is perfectly sensible. Lots of surprising or astonishing preliminary results turn out to be exaggerated or mistaken, after all.

I wrote about an experiment of this type here, and I think we are speaking of a decade or two, maybe, until we reach the required range. But really the exact time it will take is beside the point. The point is that there are no major obstacles in the way of doing this.

This paper did a re-analysis of the public data from EDGES and argued that the foreground modeling may have had some problems:

"If we use exactly their procedures then we find almost identical results, but the fits imply either non-physical properties for the ionosphere or unexpected structure in the spectrum of foreground emission (or both). Furthermore we find that making reasonable changes to the analysis process, e.g., altering the description of the foregrounds or changing the range of frequencies included in the analysis, gives markedly different results for the properties of the absorption profile. We can in fact get what appears to be a satisfactory fit to the data without any absorption feature if there is a periodic feature with an amplitude of ~0.05 K present in the data. We believe that this calls into question the interpretation of these data as an unambiguous detection of the cosmological 21-cm absorption signature."

I will concede that maybe the Aspelmeyer group is closer to looking into this than I thought. Come to think of it I remember reading about some nanogram membranes or vibrating drums in quantum superpositions.

There is, though, still a rub. If you are going to say spacetime enters into a superposition of metric configurations you now have to come up with a way of actually measuring this. As I see it a photon entering this configuration will split into two different paths and one would have an entanglement of the photon state with the quantum gravity field. The photon would have to be set to interfere with itself so that a statistical set of tests gives a quantum result. Think of the 2-slit experiment.

This experiment would only be a start. It would only tell us the metric, in this case in a linear weak field limit, exists in a superposed state. This would be progress, but far more would need to follow.

@gdp: However one frames it, the idea of MOND is to modify F = ma to something like

F = ma/(1 + a0/a)^x

where x is some number. For Newtonian gravity F = -GMm/r^2. If I let x = 1 for simplicity here with a0/a << 1 then

m(ω^2r - a0) ≈ GMm/r^2

and so

ω^2 = GM/r^3 + a0/r.

Now if I think of the RHS as GM/f(r) then it is not hard to see that f(r) = GMr^3/(GM + a0r^2). It is easy to see that for a0 = 0 this recovers Newtonian gravity, just as it recovers F = ma. It is a feature of the equivalence principle.

@Lawrence Crowell-- You are taking the wrong limit. The "Deep MOND Limit" is not a_0 --> 0, it is a << a_0. Thus in the "Deep MOND" limit of "Classic MOND" where the "interpolation function" approaches $\mu(x) = x$, the magnitude of the MOND acceleration of a small test-mass induced by an isolated compact source-mass approaches $m|a|^2/a_0 = GMm/r,$ which yields $|a| = sqrt(GMa_0/r).$

Things are more complicated in "Modern MOND," which has a field-theoretic basis rather than an "Action at a Distance" basis in order to ensure that energy and momentum will be conserved; however, for a compact source-mass and a negligible test mass, the "Deep MOND Limit" of even "Modern MOND" still yields $|a| = sqrt(GMa_0)/r,$ so that in the "Deep MOND Limit" the acceleration of a test-body near an isolated compact source-mass M will still depend on sqrt(M), not M.
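A quick numeric illustration of this sqrt(M) scaling (a sketch with assumed round numbers not taken from the comment): setting the deep-MOND acceleration sqrt(G*M*a_0)/r equal to the centripetal acceleration v^2/r gives v = (G*M*a_0)^{1/4}, independent of radius, which is the flat rotation curve and the baryonic Tully-Fisher scaling.

```python
# Flat rotation speed in the deep-MOND limit:
#   v**2 / r = sqrt(G*M*a0) / r   =>   v = (G*M*a0)**0.25, no r dependence.
G = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10           # MOND acceleration scale, m/s^2 (assumed value)
M = 1e11 * 1.989e30    # assumed baryonic mass, ~1e11 solar masses, in kg

v = (G * M * a0) ** 0.25               # asymptotic rotation speed, m/s
v_doubled_mass = (G * 2 * M * a0) ** 0.25   # grows only by 2**0.25, not sqrt(2)
```

With these round numbers v comes out near 200 km/s, and doubling the mass raises it only by a factor 2^{1/4} ~ 1.19, reflecting the sqrt(M) (rather than M) dependence of the underlying acceleration.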

Moreover, if one examines the MOND Virial theorem as derived in several of Milgrom's papers and recapitulated in the review arXiv:1311.2579, one finds that unlike Newtonian Mechanics and weak-field GR, in MOND the RMS velocity dispersion of a galaxy or star-cluster will depend on how its mass is subdivided, not just its total mass. Thus, "Deep MOND" dynamics is nonextensive.

A particular example demonstrating that MOND is not equivalent to simply modifying the distance-dependence of the force law is given by Eqn.23 of arXiv:1311.2579, which shows that for an isolated two-body system in which both masses are nonnegligible, in the "Deep MOND Limit" MOND dynamics is approximated by the "force-law" $(2/3)\sqrt(Ga_0)[(m_1 + m_2)^(3/2) - m_1^(3/2) - m_2^(3/2)]/r.$
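One can check numerically that this two-body expression reduces to the test-particle deep-MOND result m_2*sqrt(G*m_1*a_0)/r when m_2 << m_1 (a sketch; the masses, radius, and a_0 value are assumed round numbers, with r chosen large enough that all accelerations are well below a_0):

```python
from math import sqrt

G, a0 = 6.674e-11, 1.2e-10     # SI units; a0 is the MOND scale (assumed value)
m1, m2, r = 2e30, 2e24, 1e20   # assumed: solar-mass body, much lighter body, large r

def deep_mond_force(m1, m2, r):
    # Two-body "Deep MOND" force, Eq. 23 of arXiv:1311.2579,
    # nonlinear in both masses (no superposition principle).
    return (2.0 / 3.0) * sqrt(G * a0) * ((m1 + m2)**1.5 - m1**1.5 - m2**1.5) / r

F_two_body = deep_mond_force(m1, m2, r)
F_test = m2 * sqrt(G * m1 * a0) / r   # test-particle limit: |a| = sqrt(G*m1*a0)/r

rel_err = abs(F_two_body - F_test) / F_test   # small because m2/m1 = 1e-6
```

For m2/m1 = 10^{-6} the two expressions agree to a fraction of a percent, the residual being dominated by the m_2^{3/2} term, which is exactly the nonextensive piece that has no Newtonian analogue.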

Finally, the "External Field Effect" implies that the internal dynamics of a subsystem depends on the presence of nearby masses, i.e., superposition of weak gravitational fields no longer holds.

Finally, I must correct an earlier statement: The "External Field Effect" implies that the "Modified Gravity" form of MOND necessarily violates the Strong Equivalence Principle and the Einstein Equivalence Principle, not necessarily the Weak Equivalence Principle, because the dynamics within a small system "feels" the effects of external bodies. (MOND depends on the absolute accelerations of the test-bodies, not just their relative positions.) "Modified Gravity" MOND does not necessarily break the Weak Equivalence Principle, since "Universality of Free Fall" may still hold for small test-masses. However, the "Modified Inertia" form of MOND does violate even the WEP, since in "Modified Inertia" MOND, gravitational and inertial masses are no longer equivalent.

Gah-- Bad typos in the first paragraph; should read: The "Deep MOND Limit" is not a_0 --> 0, it is a << a_0. Thus in the "Deep MOND" limit of "Classic MOND" where the "interpolation function" approaches $\mu(x) = x$, the magnitude of the MOND acceleration of a small test-mass induced by an isolated compact source-mass approaches $m|a|^2/a_0 = GMm/r^2,$ which yields $|a| = sqrt(GMa_0)/r.$

@Lawrence Crowell-- In MOND, your observation is not viewed as a "Funny Consequence," it is viewed as a feature.

MOND is a nonextensive dynamics. MOND does not satisfy an analog of Gauss's law for mass in the "Deep MOND Limit." MOND does not have an analog of Birkhoff's Theorem in the "Deep MOND Limit." The "force" exerted by a mass distribution in MOND does depend on how the mass is radially distributed. The above are all part of how MOND eliminates most of the need for "Dark Matter" in extended systems.

Trying to solve the dark-matter mystery in isolation from the other conundrums of physics will most likely result in the addition of another `epicycle'. Current theory is already too complicated as it is (and, with the exception of isolated islands, also ugly). A young physicist must currently spend most of his creative years trying to make sense of it before attempting anything genuinely new. From personal experience, young physicists are discouraged by their professors from doing that, lest they fall behind in the `publication race'.

Unless active physicists start untangling the imbroglio left behind by previous generations, the task of restoring elegance and simplicity to physics will soon become superhuman (even for the likes of Copernicus and Einstein).

At the risk of being labeled a crank - I think I found a promising lead.

I am not sure how the term extensive is used here. The opposite of extensive is intensive, which in thermodynamics refers to quantities that are independent of scale, such as temperature.

As an exercise let me consider the motion of a particle in a region with a constant density ρ of matter. The gravitational force on a test mass m is then

F = -GM(<R)m/R^2 = -(4π/3)GρmR,

where M(<R) = (4π/3)ρR^3 is the mass enclosed within the orbital radius R (by the shell theorem, only the enclosed mass contributes). Well, this is interesting, for it means the force law is a Hooke's-law force for a spring. The centripetal force mω^2R gives

ω^2 = (4π/3)Gρ.

A set of test masses would then orbit at a constant angular velocity independent of the radial distance. This would be the motion of a body in a galaxy halo of dark matter with no other matter.
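This constant-ω claim is easy to cross-check numerically (a sketch; the halo density is an arbitrary assumed value, and the enclosed mass carries the shell-theorem factor of 1/3, so that ω^2 = (4π/3)Gρ):

```python
from math import pi, sqrt

G = 6.674e-11    # m^3 kg^-1 s^-2
rho = 1e-21      # assumed constant halo density, kg/m^3

def omega(R):
    # Circular orbit inside a uniform-density sphere: only the enclosed mass
    # M(<R) = (4*pi/3)*rho*R**3 pulls inward (shell theorem), so
    # m*omega**2*R = G*M_enc*m/R**2 gives omega**2 = (4*pi/3)*G*rho.
    M_enc = (4.0 * pi / 3.0) * rho * R**3
    return sqrt(G * M_enc / R**3)

w_inner, w_outer = omega(1e19), omega(1e20)   # same omega at very different radii
```

The R^3 in the enclosed mass cancels the R^3 in the denominator, so the angular velocity is the same at every radius, which is the "solid disk" rotation described above.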

This is one motivator for dark matter. Observed motions of stars in galaxies (and, in Zwicky's earlier work, of galaxies in clusters) lie in between a distribution predicted by Kepler's third law and harmonic-oscillator or "solid disk" motion. To model this is really complicated, and the Kepler law must take into account a matter distribution that varies with radius.

The complicated stuff that adjusts the mass distribution is then in MOND assumed to be in part due to a change in Newton's second law for tiny accelerations. For the constant distribution of matter I can take Gauss' law

∫da⋅F = -(4π)^2Gρm∫r^2 dr = -(4π)^2Gρmr^3/3.

That is odd, for with a galaxy one might expect it to increase with area. The toy-model MOND calculation I did is linear in radius, which would imply a 1/r force law; gdp wrote something like that above. Things start getting a bit complicated, with messy functions. This starts to look like a different sort of physics that roughens up some elementary physics.

IMHO, Bee’s caution in the blogpost, and PeterErwin’s two recent posts, deserve more attention. EDGES is interesting, but a) it’s just one observation, and b) there is more than one way to analyze the data wrt cosmologically early CDM.

I suggest that it’ll be several more years before the early CDM, cosmological (i.e. not ~galactic or even cluster scale), results are robust enough to be considered inconsistent with LCDM (or not).

@Lawrence Crowell-- Yes, in some versions of MOND the "gravitational force" can be approximated as asymptotically approaching "1/r", as I've previously noted. From the standpoint of "potential theory," the asymptotic MOND potential therefore behaves as "log(r)", which implies that at long range, the MOND "force" becomes "confining". [Certain phenomenological "potential models" of "quarkonia" (bound states of a heavy quark and a heavy antiquark) likewise assume an approximate logarithmic potential to model "confinement".]

A crucial point that you seem to be insisting on not getting, however, is that the approximate MOND "force" is nonlinear in mass, and therefore does not obey a "superposition principle." The MOND "force" is "non-additive" (AKA "nonextensive" in the sense of Tsallis's generalized statistical mechanics).

Therefore, your persistent attempts to misapply Gauss's law to MOND are doomed to failure, because the "Deep MOND Limit" of MOND does not, repeat, does not obey Gauss's law. Gauss's law is a statement that the "flux" of some field is "linear and conserved," and therefore the sum of the sources of "flux" may be determined by simply "summing up the flux lines" produced by those sources. Since the "Deep MOND Limit" of MOND does not "conserve flux" even approximately, nor do the fields of MOND sources obey a "linear superposition principle," there is no Gauss's law in MOND, and so trying to apply Gauss's law to MOND is nonsense.

For the MOND nonlinear-in-both-masses and "1/r" effective "force" that approximately holds in the "Deep MOND Limit" for the two-body problem and only for the two-body problem, see my previous post of 2018-Oct-24, 12:06 PM.

Now one is kind of stuck. The gravitational acceleration in MOND is still $g = -\nabla\phi$ (modulo a curl), so one can sometimes make arm-waving assumptions about "symmetry" in particular cases and try to infer something useful, but such assumptions are even less justifiable in MOND than they are in Newtonian gravity, because the weak-field limit of MOND is intrinsically nonlinear; moreover, the existence of the "External Field Effect" tells one that one can't naively ignore contributions from masses outside the boundary.

@ Lawrence Crowell Thank you for your very detailed response to my query. I've been traveling and visiting all week, and am finally back on my own computer, on which I can respond. It will take me some time to digest the response, as I'm still winding down from almost 700 miles of traveling in a number of vehicles.

What I am doing is to say that MOND does not obey Gauss' law in a standard way. This is to me a serious adulteration of physics. I am fairly confident that Newtonian mechanics is not adulterated quite this way.

It seems to me that MOND is then at best some phenomenology of Verlinde's picture of dark matter as an entropic-force effect of spacetime. This involves a connection between spacetime physics and quantum fields, or what might appear as such, and the connection between spacetime and quantum mechanics is where things are at. Entanglement on the large appears to build spacetime, supersymmetry is a relationship between spacetime and quantum physics, the moduli space of the N = 4 SUSY gauge theory bears similarities to the geometry of entanglements, and so forth. Verlinde's work seems to be some facet of this. I think there is some general relationship or equivalency between quantum mechanics and spacetime physics.

I would like to emphasize the point that a theory which would be better than LambdaCDM can be quite generic - all one needs is something which makes the evolution of the early universe closer to coasting, that is, closer to the linear a(tau) ~ tau. From McGaugh's paper:

"A low density universe devoid of non-baryonic dark matter is close to the coasting limit. This leads to a longer path length and correspondingly greater optical depth."

"Generically, we expect a universe devoid of non-baryonic dark matter (NoCDM) to be low density, and thus experience less deceleration at early times [31] than the conventional ΛCDM [32] universe. As a consequence, there is a greater path length to the surface of last scattering that leads to a stronger absorption signal."

That's good news for me, given that my own theory of gravity (arxiv:gr-qc/0205035) provides a term which also leads to an evolution more close to coasting. So, thank you very much for this post.

@Lawrence Crowell-- When you talk of "adulteration of physics," please note that you are making an "elegance" or "beauty" argument. Your host, Sabine Hossenfelder, has just written an entire book on why such reasoning is logically flawed.

You are judging MOND using a "metaphysical" criterion, not a physical, observational, or empirical criterion.

Lawrence, your response and discussion on the Verlinde model is rather complicated, and I confess I have a predilection for simple models of nature. On that note, I'm wondering what you think of the gravitational dipole approach to explaining Dark Matter, apparently first broached by Blanchet/Blanchet & Tiec, and later taken up in a revised model by Hajdukovic.

This approach may be something that you are not familiar with, so in a nutshell this is what I gleaned from reading Hajdukovic's mercifully short, 6-page paper "Is dark matter an illusion created by the gravitational polarization of the quantum vacuum?". Hajdukovic assumes that antimatter has the opposite gravitational charge to matter; e.g., it would fall up in a gravitational field. Now in QED an isolated electric charge is surrounded by virtual electric dipole pairs that attract each other, and the consequent "screening" results in the electric field diminishing as one moves away from the central electric charge.

In contrast, since the virtual gravitational dipole pairs feel repulsion, the reverse situation occurs and one has "anti-screening", resulting in an enhancement of the gravitational field as a test probe moves away from the central gravitational field of a galaxy, for example. Since virtual pions (from QCD vacuum polarization) have the most gravitational effect in the vacuum, he does his vacuum effect calculations from pion-antipion pairs, as a best approximation.

Now these gravitational-dipole models came out well before this recent observation of the UV radiation coupling to hydrogen gas being larger than expected, so I am not sure how they would fare with this new information. Another weakness of, at least, the Hajdukovic model is that it has no mechanism for explaining the power spectrum of the CMB. Nonetheless, I find the model appealing for its simplicity.

@Lawrence Crowell - Regarding Hajdukovic's model outlined above, which requires that anti-matter has negative gravitational mass for the theory to work, I just read a wonderfully written article by Ethan Siegel on this very question, titled: "Is Anti-Gravity Real? Science Is About To Find Out." He explains that the ALPHA experiment at CERN is getting ready to determine if anti-hydrogen falls up, or down, in a gravitational field.

But as I read the article, I had a disconcerting thought, much as I would wish anti-matter to fall up in a gravitational field. It dawned on me that for a planet-sized chunk of anti-matter to have negative gravitational mass, in the standard 2D embedding diagram showing the warp of spacetime, the fabric of spacetime would have to be an upward-rising bump instead of a downward-dipping bowl. If that's the case, then anti-matter would have to possess negative energy, which it can't, as the annihilation of an electron and positron produces two positive-energy photons. A negative-energy positron merging with a positive-energy electron would simply produce zero energy out. So anti-matter must fall down, just like regular matter.

Hopefully the experiment will be conducted soon, and we'll know for sure. The confirmation of anti-hydrogen falling down would mean Hajdukovic's model would need some other type of dipoles to explain Dark Matter. I'm aware that the quantum vacuum seethes with regions of negative energy paired with equal positive-energy regions for extremely short intervals. Perhaps he could use that phenomenon to make his model work - just a thought.

@David Schroeder- I cannot judge the quality of the following post in PhysicsToday: "Don't dismiss negative mass", but it seemed relevant in this discussion: https://physicstoday.scitation.org/do/10.1063/PT.6.3.20170524a/full/

@David Schroeder: The issue of negative mass has had a bit of chatter lately. There are two cases, one where the equivalence principle holds and antimatter has negative mass and the other where antimatter has negative gravitational mass, but positive inertial mass. To straighten this out consider Newton's second law with gravity for two masses M and m separated by a distance d,

F = ma = -GMm/d^2.

Remember that we would then have Ma' = GMm/d^2, so the two masses accelerate towards each other. Now let us write this for two masses m_1 and m_2 such that m_1 = |m_1|, it is positive valued, and m_2 = -|m_2|, it is negative valued for both inertial and gravitational mass. Newton's 2nd law with gravity is then

m_1a_1 = Gm_1m_2/d^2

m_2a_2 = -Gm_1m_2/d^2

where the sign difference on the right hand side reflects the fact the force is operating in the opposite direction on the two masses. Now easily compute their accelerations

a_1 = Gm_2/d^2 = -G|m_2|/d^2

a_2 = -Gm_1/d^2

We notice the two masses accelerate in the same direction! The positive mass accelerates away from the negative mass and the negative mass is attracted to the positive mass. We may think of the situation as follows: if you put a force on a negative mass it accelerates in the opposite direction, but the gravitational force on the RHS is opposite from standard gravity as well - two negatives canceling. For the positive mass the gravitational force is opposite, but the acceleration is not, which means the positive mass is repelled.
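A quick numeric sketch of the sign argument above, plugging in equal and opposite masses (my own toy numbers; the point is only the signs of the accelerations):

```python
# Toy check of the first case: m2 is negative in both its inertial and
# gravitational mass, following the same sign convention as the
# equations in the comment.
G = 6.674e-11  # m^3 kg^-1 s^-2
m1 = 5.0       # kg, positive mass
m2 = -5.0      # kg, negative inertial and gravitational mass
d = 1.0        # m, separation

# a_1 = G m_2 / d^2 and a_2 = -G m_1 / d^2, after the inertial mass
# cancels on each side of Newton's second law.
a1 = G * m2 / d**2
a2 = -G * m1 / d**2

# Both accelerations carry the same sign: the pair self-accelerates in
# one direction, the positive mass fleeing and the negative mass chasing.
print(a1, a2)
```

With equal magnitudes the two accelerations are identical, which is the runaway pair Paranjape's article describes.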

Now we have a bit of a problem in that an electron and positron combine to give two gamma ray photons with net energy E = 2mc^2. But if positrons have negative inertial mass then mc^2 - mc^2 = 0. So we can then consider the case where only gravitational mass is negative. This would mean F = ma = -GMm/d^2 gives us, for inertial mass m_2(i) positive and gravitational mass m_2(g) negative,

m_1a_1 = Gm_1m_2(g)/d^2

m_2(i)a_2 = -Gm_1m_2(g)/d^2

which with m_2(g) negative flips the sign of the accelerations. The mass and anti-mass repel. This is the test that the ALPHA experiment at CERN is setting up. I read the same “Starts with a Bang” article by Siegel the other day.
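The second case can be sketched the same way (again my own toy numbers, just to exhibit the signs):

```python
# Toy check of the second case: m2 has positive inertial mass but
# negative gravitational mass, as in the equations above.
G = 6.674e-11  # m^3 kg^-1 s^-2
m1 = 5.0       # kg, ordinary mass sitting at x = 0
m2_i = 5.0     # kg, inertial mass of the antimatter lump at x = +d
m2_g = -5.0    # kg, its gravitational mass (the assumption being tested)
d = 1.0        # m, separation

a1 = G * m2_g / d**2                  # m1 is pushed in -x, away from m2
a2 = -G * m1 * m2_g / (m2_i * d**2)   # m2 is pushed in +x, away from m1

# Opposite signs: genuine mutual repulsion, which is what ALPHA would
# register as anti-hydrogen "falling up".
print(a1, a2)
```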

This of course violates the equivalence principle. It most clearly violates the strong EP that says a mass falls on a geodesic independent of its composition. An antimatter particle has opposite charges or quantum numbers, which by the strong EP should have no bearing on its trajectory of fall. This is the biggest hurdle I see to the idea here laid down by Hajdukovic.

To have antigravity you need negative *gravitational* mass, not negative inertial mass. Negative gravitational mass necessarily implies that the "anti-gravitating" particle does not move on geodesics. Of course that violates the equivalence principle, but that's the whole point. You can't have anti-gravitating anti-matter; this is in contradiction with loads of results from quantum field theory, the Lamb shift to name the most obvious one. Please read the appendix of this paper.

I worked some Petrov theory with Plebanski, and he used to rail against bimetric gravity. Background dependency is also a bit of a snag with string theory. Nevertheless it would be curious to think of this in terms of a gravitational dyon, where your equations 36 and 37 are the Schwarzschild and Taub-NUT metrics with the a^μ_ε intertwining them.

@ajo, I read that article ("Don't Dismiss Negative Mass"), which you linked above, by Manu Paranjape, some days ago. I then scanned it again over the last few days, along with some of the comments - the first by Sabine, followed by M. Paranjape's response to her, and saw there was disagreement between them.

Now I only understand this stuff at a basic level, not the deep theoretical, equation-heavy comprehension that Sabine, Manu, and others, possess. However, I do find it appealing that negative-mass, should it exist in some form - real or vacuum fluctuations paired with equal positive-mass fluctuations, might be masquerading as Dark Matter, or even be implicated in the inflationary epoch of the early Universe, as suggested by Manu in his article.

But, as I perused Paranjape's article several times over the last few days, the account of the +5 kg object and -5 kg object chasing each other endlessly across the Universe seemed, on first blush, to be a violation of energy and momentum conservation; something for nothing. Then, immediately, I realized that would not be the case, as the net mass of the two objects would be zero, so the net momentum and kinetic energy stay zero as well. Then another thought arose: the energy configuration of this ensemble is generically the same as the Alcubierre warp. Even the direction of travel is the same. The important difference is that a positive mass (the spaceship) is interposed between the two opposite energy regions. In that situation energy would be required to propel the total assembly forward (of course, one might simply increase the negative-energy region to match the positive-energy region plus spacecraft, but surely something's wrong with that appreciation, as we then really would have something for nothing).

When Alcubierre came out with his metric, it was quickly understood that it contained inherent pathologies, which seemed to rule out its actual realization. No sooner had these papers dashed the hopes of Trekkies than along came counter-claims of 'cures' for these problems. I'm not sure where the warp-drive concept stands today as to viability, but one thing for sure is that negative-energy sources are key to its implementation – which leads to another thought. I’ve long assumed that the ‘natural habitat’ of negative-energy matter is a metric where length, width, height, and the direction of time are the reverse of our experience. In such a metric presumably all the laws of nature would be the same as ours to a sentient being composed also of negative-energy matter. That said, it seems logical to conclude that the difficulties associated with understanding how negative-energy matter interacts with our Universe stem from the fact that it is not in its ‘natural habitat’, but an inversion of it.

@ David Schroeder: Negative energy enters into physics in a number of ways. The anti-de Sitter (AdS) spacetime has negative curvature Λ < 0, which has pathologies such as closed timelike curves and so forth. Most physics is done on a conformal patch that avoids closed timelike curves. There is an interesting spacetime called the Taub-NUT space that is similar to the Schwarzschild black hole, but where the horizon occurs not for some minimum radius, but some minimum time. This horizon separates a chronology protected region for t > t_{critical} from a region with closed timelike curves for t < t_{critical}.

We might think of the de Sitter (dS) spacetime and the AdS as hyperboloids, where the dS is a single surface and the AdS is one of two disconnected regions. They are connected at I^±, and these are analogous to mass-gap conditions on Dirac momentum-energy cones. There is then a mass-gap that separates the two hyperboloids, and this may be a very firm way that the chronology condition is enforced on the dS. This might then prevent wormholes or things such as the Alcubierre warp drive.

This might put the kibosh on science fiction ideas about warp driving around the galaxy. In some sense this might actually be seen as a good thing. As Bee asks in her music video Outer Space, “Will you visit our place?” is something any reasonable extraterrestrial intelligence would view with horror. That is, unless they want us to come and convert their planet into a nightmare of fast-food shops and mass landfills. The inability for us to expand beyond a certain distance, maybe a few tens of light years at most, protects the rest of the universe from billions of garbage-making meat machines.

@ Lawrence Crowell: I must admit the exact meanings of de Sitter and anti-de Sitter spacetimes confuse me. It's early in the morning so I'm just thinking off the top of my head, not particularly deep thought. Plus I never seem to be able to sleep enough, due to certain types of traffic on a nearby highway, which is very distracting and prevents me from sustaining a higher level of mental concentration for prolonged stretches. Oddly, the massive tractor-trailers that come up the steep hill are wonderfully quiet and smooth sounding, perhaps regulated by the ICC. It's the explosive, violent roar of the oversized, jacked-up pickup trucks, with lake-pipes, that puts me into a dither, requiring time to settle down from a state of agitation. So, while I'm rarely physically tired, I'm always wanting to sleep more.

Now basic Minkowski spacetime is perfectly flat, so that rulers and clocks give the same measure at any point therein. In the gravitational well of our planet length is contracted more, and time is dilated more, at the surface than they are further away from our planet. The converse, presumably, would be true for a negative-mass Earth. I assume, therefore, that spacetime is curved "positively" in the vicinity of our actual planet, while in the vicinity of a negative-mass Earth spacetime would have a "negative" curvature.

But, at the top of page 307 in Peter Collier's book "A Most Incomprehensible Thing: Notes Towards a Very Gentle Introduction to the Mathematics of Relativity", in the section titled "The de Sitter model", it's described as a spacetime with no matter or radiation, but only Dark Energy. So that naturally makes me think that Dark Energy has the same effect as negative energy/matter. But, from what I've read, it's a form of positive vacuum energy.

I really need to start from the beginning of Collier's excellent book, as I had long ago intended to sort this all out.

Lawrence, a quick note, before I start digging into the Collier book. I'm glad to see that wormholes and Alcubierre warps are not definitely ruled out, being a fan of the original Star Trek series, and hoping that humanity won't be limited to a few tens of light years in exploration range, and will hopefully make advancements in recycling waste.

On the chronology protection issue, which I assume is what you mean with the mass-gap that separates the two hyperboloids, I know that was a concern of Stephen Hawking and Kip Thorne - the paradox of going back in time and killing a grandparent. Kip Thorne presented a scenario involving a wormhole and backwards-in-time travel in his book. Well here's my own fundamental thinking on any kind of time travel. Maybe this is wrong, but this is how I think about it: If you could freeze-frame the entire universe, then let each frame of the universe's 'film' increment one-by-one, this would define the forward direction of time in the universe, along with its physical/spatial evolution. Now, I know that clocks are ticking at different relative rates all over the universe; likewise relative length-scales are widely divergent, both of these as a function of relative motion between locales. So having a 'master clock' synchronized to start/stop the entire universe at will might be a bit problematic, not to mention the instantaneous correlations across billions of light-years needed to make it happen.

But, from this perspective, since the configuration of all the matter/energy in the universe has a unique value in each 'film frame', going back to the Triassic, or the birth of a grandparent, from the present moment, would appear to be impossible. Of course this doesn't affect scenarios like the 'twin paradox'. Also, there are some subtleties in Thorne's wormhole scenario, which at the moment I'm too tired to think about, not to mention that I lack the in-depth knowledge of GR which gave him a clear vision of what is going on in that thought experiment.

David, in a sense there is a master clock for the Universe. Wherever, whenever, you can always determine the age of the Universe, so long as you accept the big bang; you have a universal clock. Certainly not one of the more accurate ones, but that is more due to our inability to read it properly. You can also always measure your velocity with reference to the CMB.
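The CMB-frame remark can be made quantitative: the measured CMB dipole gives our velocity with respect to the CMB rest frame via the non-relativistic Doppler relation v ≈ c·ΔT/T. The numbers below are the published dipole amplitude and mean temperature:

```python
# Our velocity relative to the CMB rest frame from the dipole anisotropy.
c = 2.998e8          # m/s, speed of light
T_cmb = 2.725        # K, CMB mean temperature
dT_dipole = 3.36e-3  # K, measured dipole amplitude

v = c * dT_dipole / T_cmb  # non-relativistic Doppler: dT/T = v/c
print(f"{v / 1e3:.0f} km/s")  # roughly the 370 km/s usually quoted
```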

Oh my gosh, I feel so embarrassed. Before I had even pulled Peter Collier's book off the shelf some long-inactive brain cells flickered back to life and I remembered the 2D depictions of spatial curvature of a saddle and a sphere, I had read about numerous times, but years ago. When one doesn't think about these things for a long time it becomes buried in the deep subconscious, like the name of someone you knew years ago, but can't dredge up on a chance meeting of the person.

Going to bed last night, after earlier doing an invigorating, 90 minute bike ride in 38 F., Canadian-arctic air, I began reading Collier's wonderfully written book. Delightfully, the first chapters are a refresher course on the foundational mathematics needed to understand General Relativity. This is followed by a refresher on Newtonian Mechanics, Special Relativity, Introducing the Manifold, etc., etc. Nice organization!

See: Ptolemy and epicycles - how many rescuing devices does this model need? The allowed error in the current concepts that support the big bang model, and the many angles from which our standard model is coming under fire, should give us pause - we need to fix what is broken, instead of adding another mathematical fantasy, which now seems to substitute in physics for actual reality and measurement.

Then we have the multiverse, which makes a nice T-shirt or baseball cap, but is a ridiculous promotion of a WORLDVIEW-driven idea (to avoid the fact that our universe looks information-based and CONSTRUCTED, most likely because it is - choose your designer here: aliens with infinite regression, simulation theory with the same issues, or a MIND outside spacetime and matter. I will choose the last one, which is the most obvious and sane). A HUGE amount of evidence, nicely supported by QM, shows us that the brain is an organ like any other; it provides a function, namely computing and receiving, but the brain does not think - YOU and I think. We use our brains to think, using our minds - we should get used to it already, instead of constantly reaching for ideas that are the self-assisted suicide of physics.

@ David Schroeder: The first crack in the so-called grandfather paradox was offered by David Deutsch, based on probabilities. If you go back in time with the intention of killing your grandfather you have only a probability of doing so. If the dice roll against you there is no paradox. If you do succeed then either your probability for existing is zero, or with the many worlds you shift to another world where your probability is unity. Scott Aaronson https://arxiv.org/abs/0808.2669 made this more exact, with a chronology-protected subroutine that operates as an algorithm with qubits on a closed timelike curve. The closed timelike curves are then paths in a path integral that constructively and destructively interfere with themselves and with the chronology-protected subroutine of the algorithm in a way that produces an outcome. In this way bounded polynomial quantum algorithms compute NP-complete problems.
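Deutsch's probabilistic resolution has a compact form worth illustrating: the state ρ of the qubit on the closed timelike curve must satisfy the consistency condition ρ = UρU† (in the simplest circuit, where U is the gate acting on the loop). For the grandfather circuit, U = NOT, and the maximally mixed state is the self-consistent solution:

```python
# Minimal numeric illustration of Deutsch's fixed-point condition for
# the "grandfather" circuit: the qubit on the closed timelike curve is
# flipped (U = NOT), and a state is self-consistent iff rho = U rho U†.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)  # NOT gate
rho = np.eye(2, dtype=complex) / 2             # maximally mixed state

rho_out = X @ rho @ X.conj().T
print(np.allclose(rho_out, rho))  # I/2 is a fixed point: no paradox

# A definite history |0><0| is NOT a fixed point: the flip sends it
# to |1><1|, which is the paradox Deutsch's mixed state avoids.
rho_pure = np.diag([1.0, 0.0]).astype(complex)
print(np.allclose(X @ rho_pure @ X.conj().T, rho_pure))
```

The mixed fixed point is the "probability of doing so" in the comment: a 50/50 self-consistent history rather than a contradiction.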

The AdS_{n+1} spacetime is topologically S^1xR^n, where the circle is time. This means these anti-de Sitter spacetimes have oddball causality conditions, and chronology protection only holds on certain conformal patches. In other words, if you choose a small enough region bounded by null curves you can locally have causality. These null surfaces can be holographic screens; an energy junction with ΔT^{00} > 0 will define a de Sitter (dS) spacetime, one dimension lower, with causality. In this way certain NP problems, such as the sign problem, are effectively computed in this observable cosmos, where otherwise they would be intractable.

So this would be my take on the relationship between spacetimes with chronology protection and those with closed timelike curves. As I said, there may be a sort of mass-gap that separates the two types of spacetimes. Given the small tunneling amplitude over this mass-barrier, this would be a fairly stiff way that chronology protection is enforced. As such I suspect things such as wormholes and warp drives are exceedingly improbable.

In some ways it gives an answer to the so-called Fermi paradox about where intelligent life is. Intelligent life on other planets is plausible, but if the occurrence is rare enough, say only one per galaxy, then they never travel across these vast distances.

@ Lawrence Crowell: So it looks like the "grandfather paradox" is not really a paradox, when examined at a deeper theoretical level. That's good to know, as such scenarios would wreak havoc with the universe. But that evidently, in turn, more or less precludes the possibility of Alcubierre warps and wormholes. So, on the face of it, star-faring civilizations would be restricted to sub-light speed, and exploration of only their local neighborhoods in reasonable travel times. Disappointing, but the laws of nature are the ultimate arbiter of what is and isn't possible.

In recent years, I've drawn inspiration from Harold "Sonny" White's Alcubierre-warpdrive concepts. At one time he had an online paper "Warp Field Mechanics 101", which I printed out some time ago. I just checked, and it doesn't seem to be on the internet anymore.

The near-horizon geometry is AdS_2xS^2 for a black hole with charge or spin; without spin or charge this local region is a Rindler wedge. The metric for the AdS_2 is for a conformal patch, which has causality. The BTZ black hole (https://arxiv.org/abs/hep-th/9204099v3) in 3 dimensions is stable only in AdS_3, and there is a mass-gap condition between the hyperbolic spacetime with negative curvature and vacuum energy and flat spacetime. The black hole classically is a continuum of states, with this gap defining a sort of potential barrier between the Minkowski flat spacetime and AdS_3.

With quantum gravity in three dimensions (https://arxiv.org/abs/0706.3359) there is a discrete set of central charges c_L = c_R = 3ℓ/2G that for large N number of states define the Bekenstein bound. This means the BTZ condition is more generally a set of quantum states, or normal modes.

This means with respect to black holes there is only a local condition that appears to exist for negative vacuum (Boulware vacua) and this does not appear globally to permit closed timelike curves. The mass-gap I mentioned may play a role in this. I think it means wormholes, warp drives and related configurations are exceedingly improbable.

Indeed it seems unlikely anyone ends up warp driving around the galaxy or universe. Part of Fermi's original question came with the objection to any idea that aliens travel faster than light. The apparent answer to "Where are they?" is that they can't get here. Alien space visitors can only happen if they are on a very nearby star, which may be exceedingly improbable.

The video below, which is non-technical, carries this forward to ETI in general, the apparent absence so far of their signals, and the question of life in the universe. Even at sublight speeds a technological alien collective (IGUS = Information Gathering and Utilizing System) would span a galaxy in a few million years. Maybe we humans will do this with von Neumann probes or as cyborgs (cybernetic organisms) that travel in space. However, in the geological history of Earth there is no evidence of this planet being visited or impacted by such.

@ Lawrence Crowell, The link you provided to the lower-dimensional (2+1) BTZ black hole model is very interesting, thanks for providing it. I glanced at the paper, but am not yet at the level of comprehension of GR needed to digest it. Today I'm driving to a sister's Cape Cod condo, in a quiet neighborhood, to take care of her cats while she and her daughter are on vacation for two weeks. I'm bringing all the relevant books, and printed-out papers, with me. Luckily cats don't bark, so it should be a pleasant reading/learning interlude.

I watched the video on the depressingly absent evidence for electromagnetic signals from distant civilizations, or other signs of their presence. While there's nothing in the current framework of physics that permits superluminal, or instantaneous, information transfer, I do sometimes wonder about what actually underlies the non-local aspect of quantum correlations beyond the quantum formalism. It is quite remarkable that the distance between correlated entities, or the amount of intervening matter, apparently has no effect on this phenomenon. This kind of suggests that there might be a separate 'channel' whereby these connections are enacted. One thing to note is that correlated entities must first travel away from each other at light-speed, or less, so even if the inherent randomness of the signals could be overcome, information transfer would still be restricted to light-speed, unless an advanced civilization pre-positioned correlated entities all over the galaxy in the course of millions of years.

But let's suppose there is a way to access this back-channel - Star Trek's "subspace" communication system - and overcome QM's random nature. In that case advanced civilizations would abandon electromagnetic communication altogether, like old-fashioned smoke signals. If this occurs in a relatively short time (a couple of centuries) for a typical civilization, the expanding bubbles of EM communication attempts might have long ago passed through our solar system, while new early-civilization EM bubbles are still far in the future. So, just maybe, the aliens are chatting up a storm on the "subspace" party-line, and we're currently missing out. These are nice thoughts, but I suppose such a mechanism would violate causality, so perhaps such scenarios will always be confined to the realm of science-fiction.

As far as evidence for visitation over geological time-spans, I would think that any metallic objects would have rusted away over millennia or, in the course of hundreds of millions of years, been subducted into the crust with plate movement, even if they were resistant to corrosion. And in historical time-spans there are, of course, many reports of anomalous objects, most of which are undoubtedly mistaken identifications of conventional things. But, being a student of the UFO literature, I cannot help but notice that a defining characteristic of more credible reports is that the objects (if such they are) are observed to accelerate at phenomenal rates - starting/stopping on a dime, for example. Now, curiously, both Dark Energy and Dark Matter involve acceleration - the former expands the Universe, while the latter kicks in at a0. Conceivably, whatever is ultimately behind the Dark Sector might be technologically exploitable by advanced civilizations who have figured out how to ramp it up enormously in a compact space. Such civilizations might not want to reveal their technological acumen and possibly ugly countenances to less developed civilizations. Thus they might limit their activities, on discovering young technical civilizations, to covert surveillance, leaving the inhabitants to guess whether they are real or imaginary.
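The a0 mentioned above is the MOND acceleration scale, a0 ≈ 1.2e-10 m/s². A quick check of the often-noted numerical coincidence that links it to cosmology: a0 is within an order of magnitude of c·H0 (the exact ratio depends on which H0 one adopts; 70 km/s/Mpc is used below):

```python
# Compare the empirical MOND scale a0 with the cosmological
# acceleration scale c * H0.
c = 2.998e8            # m/s
H0 = 70e3 / 3.086e22   # 70 km/s/Mpc converted to 1/s
a0 = 1.2e-10           # m/s^2, empirical MOND acceleration scale

cH0 = c * H0
ratio = cH0 / a0
print(cH0, ratio)  # c*H0 is a few times a0 (commonly quoted as ~2*pi*a0)
```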

David wrote: "I do sometimes wonder about what actually underlies the non-local aspect of quantum correlations beyond the quantum formalism. It is quite remarkable that the distance between correlated entities, or the amount of intervening matter, apparently has no effect on this phenomena."

This is slightly off-topic, but what makes you think there is such transfer? My argument is that the rotating polariser experiment does NOT show a violation of Bell's inequalities, the reason being that fundamentally there are insufficient true variables in the observational set-up to put into the inequalities. Strictly speaking, any classical wave obeying the Malus law gives the same result. There are several reasons, but the simplest one is this: consider the Aspect experiment, which is the archetypal one. First, the correlation depends on the conservation of angular momentum; if this did not occur there would be no entanglement. Now, suppose we consider the three separate observations A+B-, B+C-, A+C-. These are supposed to be parts of the independent variables plus and minus A, B and C, giving 6 variables. Now B+C- is exactly the same as A+B- but rotated through 22.5 degrees. Um, how does rotating the experiment create two new variables, given that the conservation law states that such rotation does not alter the value of variables, assuming the background is rotationally invariant? Why bother? Why not simply move the experiment down the other end of the bench and save the trouble?
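One way to make this dispute concrete is to simulate it. Below is a toy deterministic local hidden-variable model of a polarization Bell test (my own minimal construction, not taken from Aspect's papers): each photon pair carries a shared polarization angle, and a detector at angle t outputs sign(cos(2(t - lam))). Any such model respects the CHSH bound S ≤ 2, while the quantum correlation E = cos(2(a - b)) reaches 2√2 at the standard 22.5-degree angle choices:

```python
# CHSH comparison: a toy local hidden-variable (LHV) model versus the
# quantum-mechanical correlation for polarization-entangled photons.
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0, np.pi, 400_000)  # shared hidden polarization per pair

def E_lhv(a, b):
    # Deterministic detection rule: +1/-1 from the sign of the Malus factor.
    A = np.sign(np.cos(2 * (a - lam)))
    B = np.sign(np.cos(2 * (b - lam)))
    return np.mean(A * B)

def E_qm(a, b):
    # Quantum correlation for the polarization singlet-like state.
    return np.cos(2 * (a - b))

# Standard CHSH settings, 22.5 degrees apart.
a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8

def S(E):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

print(S(E_lhv))  # close to 2, the classical bound
print(S(E_qm))   # 2*sqrt(2) ≈ 2.83, the Tsirelson value
```

The toy model reproduces perfect correlation at equal angles, yet still tops out at S = 2; the measured Aspect-type violations are of the 2√2 kind, which is the step a Malus-law classical wave has to account for.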