
Monday, March 26, 2018

Modified Gravity and the Radial Acceleration Relation, Again

Have I recently mentioned that I am now the proud owner of my personal modified gravity theory? I have called it “Covariant Emergent Gravity.” Though frankly I’m not sure what’s emergent about it; the word came down the family tree of theories from Erik Verlinde’s paper. Maybe I should have named it Gravity McGravace, which would be about equally descriptive.

It was an accident that I even wrote a paper about this. I was supposed to be working on something entirely different – an FQXi project on space-time defects – and thought that maybe Verlinde’s long-range entanglement might make for non-local links. It didn’t. But papers must be written, so I typed up my notes on how to blend Verlinde’s idea with good old general relativity.

Then I tried to forget about the whole thing. Because really there are enough models of modified gravity already. Also, I’m too fucking original to clean up somebody else’s math. Besides, every time I hear the name “Verlinde” it reminds me that I once confused Erik Verlinde with his brother Herman, even though I know perfectly well they’re identical twins. It’s a memory I’d rather leave buried in the depths of my prefrontal cortex.

The blue squares in this figure are the data points from the McGaugh et al paper. The data come from the rotation curves of 156 galaxies, spanning several orders of magnitude in acceleration. The horizontal axis (gB) shows the acceleration you would expect from the “normal” (baryonic) mass alone. The vertical axis (gtot) shows the actually observed (total) acceleration. The black dotted line is normal gravity without dark matter. The red curve is the prediction from my model, with the 1σ error in pink. For details, see the paper.

As the data show, the observed acceleration is higher than what normal gravity (the Newtonian limit of general relativity) predicts, especially at low accelerations. Physicists usually chalk this mismatch up to dark matter. But we have known for some decades that Milgrom’s Modified Newtonian Dynamics (MOND) does a better job explaining the regularity of this relation, in the sense that MOND requires less fumbling to fit the data.

However, while MOND does a good job explaining the observations, it has the unappealing property of requiring an “interpolation function”. This function is necessary to get a smooth transition from the regime in which gravity is modified (at low acceleration) to the normal gravity regime, which must be reproduced at high acceleration to fit observations in the solar system. In the literature one can find various choices for this interpolation function.
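For concreteness, here is how such an interpolation function enters. These specific functions are standard choices from the MOND literature, added here as examples; they are not discussed in the post. The total acceleration is written as

```latex
g_{\rm tot} \;=\; \nu\!\left(\frac{g_B}{a_0}\right) g_B ,
\qquad
\nu_{\rm simple}(y) = \frac{1}{2} + \sqrt{\frac{1}{4} + \frac{1}{y}} ,
\qquad
\nu_{\rm RAR}(y) = \frac{1}{1 - e^{-\sqrt{y}}} .
```

Both example functions reduce to ν → 1 for y ≫ 1 (the Newtonian regime) and to ν → 1/√y for y ≪ 1 (the deep-MOND regime); the interpolation function only determines how the transition between the two regimes happens.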

Besides the function, MOND also has a free constant, the acceleration scale at which the transition happens. At accelerations below this scale, MOND effects become relevant. It turns out this constant is, to good approximation, the square root of the cosmological constant. No one really knows why that is so, but a few people have put forward ideas for where this relation might come from. One of them is Erik Verlinde.
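A quick order-of-magnitude sketch of this coincidence. The numerical values below are the commonly quoted ones for the MOND scale and the cosmological constant, not numbers taken from the post, and the factor of a few in the ratio is exactly what matching arguments like Verlinde's aim to fix:

```python
import math

# Commonly quoted values (illustrative, not from the post):
a0 = 1.2e-10    # m/s^2, empirical MOND acceleration scale
Lam = 1.1e-52   # 1/m^2, observed cosmological constant
c = 2.998e8     # m/s, speed of light

# "Square root of the cosmological constant" turned into an
# acceleration: c^2 * sqrt(Lambda/3) has units of m/s^2.
a_Lambda = c**2 * math.sqrt(Lam / 3)

print(a_Lambda)       # roughly 5e-10 m/s^2
print(a_Lambda / a0)  # a ratio of a few, i.e. the same order as a0
```

So the empirical MOND scale and the acceleration built from Λ agree up to a factor of order one, which is the observation the post refers to.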

Verlinde extracts the value of this constant from the size of the cosmological horizon. Something about an insertion of mass into de-Sitter space changing the volume entropy and giving rise to a displacement vector that has something to do with the Newtonian potential. Between you and me, I think this is nonsense. But then, what do I know. Maybe Verlinde is the next Einstein and I’m just too dumb to understand his great revelations. And in any case, his argument fixes the free constant.

Then my student convinced me that if you buy what I wrote in last year’s paper, Covariant Emergent Gravity doesn’t need an interpolation function. Instead, it gives rise to one particular interpolation function. So we were left with a specific function and no free parameters.

If you have never worked in theory-development, you have no idea how hair-raisingly terrible a no-parameter model is. It either fits or it doesn’t. There’s no space for fudging here. It’s all or nothing, win or lose.

We plotted, we won. Or rather, Verlinde won. It’s our function with his parameter that you see plotted in the above figure. Fits straight onto the data.

I’m not sure what to make of this. The derivation is so ridiculously simple that kindergarten math will do it. I’m almost annoyed I didn’t have to spend some weeks cracking non-linear partial differential equations, because then at least I’d feel like I had done something. Now I feel like the proverbial blind chick that found a grain.

But well, as scientists like to say, more work is needed. We’re still scratching our heads over the gravitational lensing. Also the relation to Khoury et al’s superfluid approach has remained murky.

I breezed through your paper yesterday before going to bed. I have some questions/remarks.

1) You use spherical symmetry to derive your "\mu function", but only a small fraction of the galaxies in the SPARC catalog exhibit spherical symmetry. Can you estimate how different it would be in the case of axisymmetric ones?

2) I played this game of plotting Verlinde's function (sometimes called the Bekenstein function, since it is the one he uses in his TeVeS paper) against the McGaugh data a while ago. At first glance it looks surprisingly nice, but then I did it for a large bunch of other "interpolation functions", and for most of them I couldn't tell the difference.

3) I didn't fully understand everything about your original paper (1703.01415), but doesn't it imply a local modification of gravity? If so, how can you imagine, within this framework, shutting off post-Newtonian corrections to solar system dynamics? You mention that superfluidity might come to the rescue, but I'm unable to see how the two can be combined.

4) If solar system constraints apply, and if the "interpolation function" doesn't change drastically going from spherical to cylindrical symmetry, the Earth and Mars perihelion advances practically rule out the predicted function (e.g. astro-ph/0606197).
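The comparison raised in point 2) can be sketched numerically. The two standard interpolation functions below (the "simple" function and the exponential function used to fit the RAR) are my choice of examples, not necessarily the ones the commenter tried; over the two decades in acceleration probed by the rotation-curve data they differ by only a few percent:

```python
import math

# y = g_B / a0, the baryonic acceleration in units of the MOND scale.

def nu_rar(y):
    # Exponential interpolation function used to fit the RAR.
    return 1.0 / (1.0 - math.exp(-math.sqrt(y)))

def nu_simple(y):
    # The "simple" interpolation function from the MOND literature.
    return 0.5 + math.sqrt(0.25 + 1.0 / y)

# Scan two decades in acceleration, y from 0.1 to 10.
ys = [0.1 * 10 ** (i / 10) for i in range(21)]
max_rel_diff = max(abs(nu_rar(y) - nu_simple(y)) / nu_simple(y)
                   for y in ys)

print(max_rel_diff)  # stays below ~5 percent across the whole range
```

Differences this small are hard to resolve given the observational scatter, which is the commenter's point about many interpolation functions looking alike.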

The equation that we use is strictly speaking for spherical symmetry only. That is of course not a realistic situation. It gives you an overall relation for the dependence of the two types of acceleration, which is present in the data as a general scaling. But if you wanted to look in more detail at each galaxy, you'd have to model their "normal" matter profiles and then calculate the additional ("modified gravity") force from that profile. Given that we don't do that but use one equation for 150 galaxies, it's not surprising the actual data has some spread around it.

3) We don't know either. But Justin is of course right to point out that the approximation one uses to derive the equations breaks down when the gradient of the (gravitational) potential becomes too large. That in and of itself is not hard to see. The question of what would replace the equation, however, is more difficult.

4) Not sure what the symmetry has to do with that, but thanks for the reference!

Sabine, as you know, MOND's "acceleration threshold", below which the strength of gravity is assumed to be enhanced beyond that given by Einstein, seems to be ill-defined. Many if not most bodies (stars, planets…) comprising a galaxy experience local accelerations far in excess of that threshold as they orbit inside star systems, etc. Only if those accelerations are vector-averaged over local orbital periods do they fall below threshold. So if you are aware of a mathematical definition of the "Milgrom threshold" that seems to make sense, it would be helpful if you could identify it. Also, does your new theory respect local Lorentz invariance (unlike TeVeS, Bekenstein's attempt to make MOND into a covariant theory)? Are your stars stable (unlike in TeVeS)? Do gravitational waves travel at the speed of light (unlike in TeVeS)?

I would like to start with: the world hates you now, but the universe supports you. Keep going!

I was wondering, does accepting emergent gravity affect the analysis of the history of the universe, and the Big Bang? I think analysis in that direction may be valuable. I'm merely suggesting, I don't "need" the answer.

"There’s no space for fudging here. It’s all or nothing, win or lose." It is measurable in commercial desktop apparatus within an hour [1], given 1986 proof-of-concept spectra [2]. Baryogenesis, Milgrom acceleration, and the cosmological constant are all sourced.

Thanks. Is the size of your "grains" context-dependent? Let's consider an isolated "galaxy" consisting of only three [super?]massive black holes. Two of the three are in a mutual orbit at an acceleration just comfortably above the interpolated threshold, while the third is far enough away in an orbit about the first pair so that its acceleration is just comfortably below the interpolated threshold. All spatial scales here could be the size of galaxies. Does your effective theory somehow automatically define its "grain" just between the two scales here to avoid (my) confusion?

Coarse-graining in effective field theory isn't so much a dependence on actual size as on interaction energy or temperature, respectively. So, I don't know what would happen in the case you envision. You'd have to put a bosonic fluid that can condense in the black holes' potential and calculate what it does. I can't do that off the top of my head, sorry.

"I’m almost annoyed I didn’t have to spend some weeks cracking non-linear partial differential equations because then at least I’d feel like I did something. Now I feel like the proverbial blind chick that found a grain."

To sum up for the slow people like me:

You had a "MOND theory" to explain 'dark matter effects'(tm), and it sort of worked for galaxies, but it needed a tweak to make the effect go away at 'small scales' / 'high accelerations' like the size of the solar system (because measurements don't detect the effects in those conditions). The tweak was an interpolation function that zeroed out the effect at the scales where it needed to go away. This was OK, but some bright grad student made it better by providing a constant that limited the tweak to a single function, and it worked. But you're not really sure why.

YAY?

This feels very much like the orbital period relation for planetary systems, which may or may not just be a coincidence: https://en.wikipedia.org/wiki/Titius%E2%80%93Bode_law (if it is a coincidence after all, then with 68% hits from the Kepler data, wow, we have bad luck, or maybe we just searched for possible patterns for too long? Why not 95%? Why any hits at all? #problems)

But I don't understand the accuracy of the data-fit for this new MOND. Is the fit one part in a billion, or is it like one part in 5? If I have insulted you by comparing your new thing to an old thing of dubious accuracy, I am sorry; that is not the intention. I guess I want to know how good the results of this new thing are. Please advise.

Congratulations on eliminating an arbitrary choice from the body of work. I regard chance matches and 'arbitrary choices between equal alternatives' to be a bad smell in hypotheses.

I'd be interested to see whether the distribution of blue boxes could, with the right choice of statistic, be falling tightly ON the line, rather than being scattered around it, i.e. understand the compromises of the statistic used by the data, and improve upon it (likely with more dimensions to the measurement).

The Lagrangian of CEG is non-linear in the condensed phase. This is the regime that has effects very similar to MOND, and it should also have the external field effect. However, this limit is only appropriate if the potential is deep enough (but not too steep). If it's too shallow, the stuff will not entirely condense. And the non-condensed stuff doesn't have the non-linearity. The cross-over is somewhere at cluster scales.

Khoury et al deal with that by introducing a two-component fluid (as you usually do with superfluids) and then (numerically) calculating profiles for each of the components. In principle you should be able to deal with clusters that way. But neither they nor we have actually done that.

(Btw, note that Khoury et al don't refer to their model as modified gravity. I believe they think of it as a type of particle dark matter. I merely insist on the "modified gravity" because of the additional force.)

The fewer parameters, the harder to fit data. You may think of it as a virtue, but it makes life difficult.

Having said that, our model has a total of three parameters. One of them is fixed by the cosmological constant (I showed this in last year's paper) but doesn't play a role on galactic scales (hence it doesn't even appear in the new paper). That parameter is essentially the mass of the new field. Of the other two, only the ratio enters for the rotation curves, and that ratio is fixed by Verlinde's argument.

We are working on that... I don't presently think there is a problem with the gravitational lensing, but it might be another opportunity to look for a way to tell modified gravity from particle dark matter. (Paper will come in maybe a month or two.)

You entirely misunderstood that and I encourage you to read it again. CEG does *not* have an interpolation function. That's the whole point. MOND has. You need this interpolation function in MOND to fit the rotation curves. CEG doesn't need it.

The solar system limit has nothing to do with this - that's accelerations much higher. As I explained above, we know that the MOND-like limit doesn't apply in this case because the approximation one needs to derive the equation breaks down. We don't presently know what else is going on. Clearly not good, but well, one has to start somewhere.

Yes, of course I knew you said that semi-jokingly, but I wanted to know what exactly YOU thought your original work was. I mean something that nobody had thought about and that could have great impact. I have read many of your papers and I agree minimum length is important, but I'm not sure of its resolution.

Okay Bee, now that you are clearly in the MOG camp, prepare for the backreaction from the DM fundamentalists who will want to scathe your paper. They will first misrepresent your model and then review the adulterated form. For more information ask @DudeDarkmatter. Good luck with getting the work published!

[arXiv:1703.01415] "de-Sitter space is filled with a vector-field that couples to baryonic matter and, by dragging on it, creates an effect similar to dark matter" True (polar) vectors only, or can a pseudovector (axial vector) field add in?

Regarding gravitational lensing, many years ago when I knew very close to nothing about both relativities (I now know nothing + 1%) I would try and visualize the observed dynamics I read about. At some point I had come up with a general picture I could depict in my mind. To this day I am reasonably sure, wrong or right, the visualization is empirically sound with strong fundamental assumptions in depicting many dynamics like inseparable spacetime, time dilation, length contraction, and gravitational lensing; however as I learned more about GR I realized those dynamics were interpreted from SR and had no physical spacetime curvature in a gravitational field.

Only because, after years of increased knowledge (probably more than the 1% I alluded to), I am still rationally sure that approach would be empirically sound, with strong fundamental assumptions and ties to evidence. It’s why I bring it up as being worth considering for attempts at solving discrepancies that involve GR. You may have noticed I often make comments about human behavior’s influence on science. I make a sincere effort to be aware of, and keep, my intuition and feelings out of scientific reasoning, even though my knowledge is lacking and I understand what that means. I think the approach I saw is where strong evidence points. I would try to amateurishly describe it if there is any interest.

I definitely didn't say definitely ;) It seems plausible to me, but I am a theorist, and I don't know how well one can measure this. Let me just say I believe you can measure it. What I don't know though is how representative the results from the particle dark matter simulations are that we quote. The ones that we refer to in the paper (see last figure) show a substantial redshift, much larger than in our model (and also much larger than MOND - we didn't plot this because the curves look almost the same as in CEG). But how am I supposed to know what other simulations would predict? Maybe you can twiddle particle dark matter so that it does the exact same thing? I don't know.

Really we were writing this paper in the hope of stimulating others to make some predictions. Because how else are we supposed to make progress on this?

Do you plan to publish more papers with your grad student on this topic? Perhaps get in touch with Stacy McGaugh to coauthor papers with data? Some topics include CEG and the CMB, and CEG and weak gravitational lensing ;)

I think you are onto something with that paper; however, I think antigravity is 2000 times stronger (tentative). Maybe you might be able to clean up your math :) In that case the antiparticles might have that property, and that is why we don't see them; they might have been pushed to the horizon and might be responsible for the expansion. I did guess that you pulled a Dirac after reading a couple of pages; that is smart. But what do I know :)

> [1] further uses the field n which the normalized u and dimensionless.

seems to have typo(s). It wouldn't hurt to add an explicit definition of n in the next revision, and make your CEG paper more standalone from Verlinde's paper (which I can barely make any sense of at all). :-(

"If you have never worked in theory-development, you have no idea how hair-raisingly terrible a no-parameter model is. It either fits or it doesn’t."

But if you "win", the model is quite convincing. It seems that you and your grad student (and Erik Verlinde) are on to something.

I have always wondered why Verlinde's 2010 paper about emergent gravity and its follow-ups caused so much animosity. At the time, it seemed that theory was spinning in circles about the link between GR and QM. I found this emergent framework refreshing.

The same seems to be happening now with MOND vs Dark Matter. It is not that there have been many other advances to celebrate.


CORRECTION TO MY LAST SUBMITTED POST: Sabine, you state “Verlinde fixes this constant (‘L’) by the following argument, hereafter referred to as ‘Verlinde-matching.’ The additional force acting on baryonic matter is caused by the change in entanglement entropy induced by the presence of the matter. This change comes about because inserting a baryonic mass into an asymptotic de-Sitter space slightly shifts the de-Sitter horizon, thereby changing the volume inside the horizon. Verlinde then requires that the horizon-shift induced by the presence of baryonic matter is identical to the shift quantified by the new field, which leads to 1/L = (Λ/3)^1/2 in a universe with Ω_Λ = 1 and Ω_m = 0, and 1/L ≈ 1.05×(Λ/3)^1/2 in a universe with Ω_Λ = 0.7 and Ω_m = 0.3.” In a de-Sitter Universe (Ω_Λ = 1 and Ω_m = 0), L is simply the radius of the ‘Horizon’. In a Kottler ‘Universe’ with positive Λ (de-Sitter-Schwarzschild ‘Universe’, which includes a positive Λ and a [spherical] Mass, m), the ‘Horizon’ inverse radius (1/L) has a maximum of (Λ)^1/2 (when Ω_Λ = 1/3 and Ω_m = 2/3). One could presumably use the analytical solution for the Kottler metric to solve for 1/L when Ω_Λ = 0.7 and Ω_m = 0.3, to see if it generates the value of 1/L ≈ 1.05×(Λ/3)^1/2 that you indicate results from Verlinde’s modelling.
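For reference, the Kottler-related calculation suggested here would start from the Schwarzschild-de Sitter lapse function (standard textbook form, added for convenience, not quoted from the comment), whose positive roots give the black-hole and cosmological horizon radii:

```latex
f(r) \;=\; 1 \;-\; \frac{2Gm}{c^{2} r} \;-\; \frac{\Lambda r^{2}}{3},
\qquad \text{horizons at } f(r) = 0 .
```

For m → 0 this recovers the pure de-Sitter horizon at r = (3/Λ)^{1/2}, matching the 1/L = (Λ/3)^{1/2} limit quoted above.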

Since I am not "2FO", I'm trying to follow all the math in your papers (& Verlinde's), and distill a minimum set of physical assumptions that would amount to (an equivalent of) CEG.

Since you don't "sign up" to Verlinde's interpretation of it as the displacement vector field of an elastic medium, I guess one must then regard it as "just" a distinguished dimensionless vector field "U" on a background Riemannian manifold with ordinary metric h. Then there's an additional hypothesis that the physical metric is actually given by g := (h - UN), but since the effect of U is only noticeable at large distances, one needs a distance scale constant L, and we set U = u/L, where now u is dimensionful. Then the Lagrangian gets constructed using the "physical metric" g (leading to your L_int term), and the fact that there's only a limited number of reasonably simple possibilities for u's kinetic term.

I don't know why you say that g-un/L is the "physical metric". I am positively sure I never wrote anything like that. The metric is "g" and it remains "g". Replacing g with g-un/L in the matter Lagrangian induces an interaction between baryonic matter and the new field in much the same way that minimal coupling induces an interaction between fermions and gauge-bosons. That's really all there is to it. The thing is just that because the coupling has this form, you can then express the interaction with the vector field so that it *appears as if* caused by a contribution to the Newtonian potential which you then assign to dark matter.

That's why I called it the "impostor field". Pretends to be gravity, but isn't. Just happens to have a coupling that allows the above mentioned reinterpretation. Best,
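Schematically (my paraphrase of the coupling described in this exchange, with u the new vector field and n as in the paper), the substitution acts only in the matter action:

```latex
S_{\rm matter}\big[g_{\mu\nu},\,\psi\big]
\;\longrightarrow\;
S_{\rm matter}\!\Big[g_{\mu\nu} - \tfrac{1}{L}\, u_{\mu} n_{\nu},\;\psi\Big],
```

while the gravitational (Einstein-Hilbert) term keeps the unmodified metric g. This is why the extra force merely *appears* as a contribution to the Newtonian potential rather than being a change of the metric itself.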

Quite possible you saw those discussions here. As I keep pointing out, particles are fields and fields are particles. Sure, the impostor field has particles to come with it. It's hard to condense particles if you don't have any.

Sabine, indeed, we don’t live in a de-Sitter Universe - or any mathematical model, for that matter. It’s just that, in this case, you state that your modelling uses Verlinde’s result - which you note builds on de-Sitter modelling by recognizing that the de-Sitter-modelled Universe’s volume decreases when mass is embedded therein. My suggestion related to your comment in this regard. I’ll give the Kottler-related calculation a try for fun to see how it compares to Verlinde’s estimate.

"If you have never worked in theory-development, you have no idea how hair-raisingly terrible a no-parameter model is. It either fits or it doesn’t. There’s no space for fudging here. It’s all or nothing, win or lose."

A zero free parameter, zero adjustable interpolation function theory is the Holy Grail of physics. You didn't just win. You won with a Hail Mary pass in overtime without even trying.

Rob van Son:
> I was wondering whether the imposter field also has particles attached to it?
> And what kind of particles that would be?

It's a vector field, massive (if I understand Sabine's paper correctly). Therefore, the associated "particles" would naively be presumed to be massive and spin-1. But (imho) things are not so simple, because the photon field is massless, spin-1, and it couples to the metric *very* differently (iiuc) via the ordinary Einstein-Maxwell equations.

andrew:
> A zero free parameter, zero adjustable interpolation function theory
> is the Holy Grail of physics. You [Sabine] didn't just win.
> You won with a Hail Mary pass in overtime without even trying.

It's way too early to be spouting such hyperbole. Sabine's theory has an unmotivated vector field. (I say "unmotivated" because she doesn't subscribe to the Verlinde interpretation of it as an elastic displacement vector field.) Certainly it's interesting, but istm that the theory is more like: "well, dark matter could perhaps be modeled using this mysterious vector field, which couples to the metric in a rather strange way".

Then there's also all the other astronomical + cosmological phenomena that are so far unaccounted for by this model.

In 1703.01415v3, at the bottom of p6, you mention that (your) differential eq(11) was proposed long ago in Bekenstein+Milgrom. Is that a typo? Your eq(11) is not a differential eqn, but it matches up with B+M's eq(8). Otoh, your eq(10) is a differential eqn which seems to match up with B+M's eq(3) or eq(4). Is that right?

Indeed, this seems to refer to the wrong equation, it should be (10). In any case, the equation is the same but as the new paper makes clear, the field is a different one, that being the relevant distinction between the two theories.

Regarding the reply to Andrew. The vector field is a spin-1 field, but not a gauge field. It has a normal (minimal) coupling to the metric, so I don't know what you are referring to. What's unusual is the coupling between the field and the baryons.

Hi Bee, congrats on the CEG vs. RAR correspondence! Since I am a proponent of DM (meaning dark matter, not Deutsche Mark), I am going to make some dirty remarks. I know it, and I apologize for it in advance (I guess that is the most you can expect from someone who considers FZ a saint).

I would like to ask you how the fit looks when a random interpolation function is used instead of your derived one. The border parts should probably be mostly fixed, since the interpolation is 0 and 1, respectively, there. The main thing is the connecting bow. And it seems a bit as if the "g_{tot} = g_B" curve were sort of obfuscating the bowing part, with the CEG curve actually going somewhat above the data cloud there. I would say that if the bowing does not change much for different interpolation functions, then the argument from the derived interpolation (even if it is not an interpolation in the CEG language) is not a strong one.

Sorry again; I like your writings even though I (silently) disagree with you frequently. I take your blog as a sanitizer, keeping me from getting enclosed in an asocial bubble of like-minded followers of FZ, let him be blessed anyway.

In your Twitter log on the right side of the page (another one of the great things about this blog for those of us who aren't on Twitter) there is a link to an article about a "galaxy without dark matter", that is a low-density (in bright matter) galaxy which doesn't need any dark matter addition to balance its apparent rotation. No doubt this has already occurred to you, but I wonder if the low density would predict the same rotational behavior in your CEG model?

I have no idea how one makes a fit for a random function. I would expect that a random function with probability one fits infinitely badly. If you make the assumption of it being a 2nd-order polynomial with some initial and end point, you are stuffing your assumptions into that very definition. We haven't assumed specific end points, nor any specifics about the function. And in any case, if you want to do it better, please feel free to go ahead.

Is it possible to test MOND/MOG/CEG directly by launching a satellite to the point between the Earth and Sun where their gravitational forces compensate each other? It is not a stationary point, due to the Moon, Jupiter, the Galaxy, etc., but never mind. Estimates give that the (Newtonian) gravitational acceleration is less than 10^-11 m/s^2 in a region several meters in size. Let us imagine that we can use the technique developed for LISA for the acceleration measurement. But my question is: would MOND or CEG manifest itself in this region? Or does it coincide with GR everywhere in the Solar system, with the difference appearing only at the borders of galaxies? What does your theory predict?

"The vector field is a spin 1 field, but not a gauge field." Assuming your model is someday demonstrated to be the correct solution to the DM/DE conundrum, or at least a close approximation, would this be the first vector field, that is not a gauge field, in the Standard Model (SM)?

I confess I don't fully comprehend what a "gauge field" is, beyond the simplified explanations given in pop-science books/articles. But I was under the impression that all the fields in the SM were gauge fields, including in some sense the gravitational field, with only the Higgs field being a scalar field. But I may be wrong in that appreciation.

Dear Dr. Hossenfelder, I wanted to ask you (as a non-specialist) how your emergent gravity theory is going to explain away the apparently wildly variable dark matter/regular matter ratios in the known galaxies. This is probably best exemplified by the extreme cases NGC 1052-DF2 and Dragonfly 44. Do you think that gravitational lensing under your modified gravity has a chance of explaining what's going on, without invoking some kind of dark matter?

You have it backwards. The challenge isn't to explain individual cases. You can always do that by choosing suitable initial conditions. The challenge is to explain the regularities. Modified gravity does that, particle dark matter doesn't.

Even so, my understanding is that your model does not have much fudge room in the form of free parameters. So let's say one takes a well-studied galaxy with "99.9% dark matter" and another one with "0% dark matter", and in between them 10%, 20%, etc., and then plots them onto your graph - what will the fit look like? In other words, when you are comparing how well your theory matches the observed regularities, does it fit better within some representative subset of examples that have a certain dark matter ratio according to the competing WIMP models, or does it fit across any such apparent ratio? I am fully aware your model does not have any dark matter - doing away with dark matter completely would be great - but could it also be that some dark matter makes it easier to explain what is seen in the microwave background?

If all you give me is a single galaxy, I can pull parameters out of my head like a magician and fit you pretty much everything. Not that I would spend the time trying, because there's nothing to be learned from this.

The assumption that goes into the above plot is that the system is static and spherically symmetric. Of course no galaxy is actually static and spherically symmetric. The only thing you can do with that is capture the general correlation that survives in the data despite this. You capture the regularity.

You (and many others who have asked the same question in the past few days) seem to have a big misunderstanding about modified gravity. You think that somehow secretly gravity is the same as general relativity, so if you know what the visible matter does that's all you need to tell what gravity does. But the whole point of modifying gravity is that it's not general relativity! If gravity is modified you have additional degrees of freedom, you have additional fields, meaning you get more solutions and you need more initial conditions.

This means that modified gravity has a past-dependence and a large variety of out-of-equilibrium solutions, just like particle dark matter has.

As I said above, the challenge isn't to get the variety. The challenge is to get the regularity.

I can give you a galaxy with 0% "dark matter" by simply setting the additional degrees of freedom to zero, done. I am very sure there are vacuum solutions that correspond to 100% dark matter, but I haven't looked at those (it's actually an interesting question, I might look into it). I can also do anything in between if you wish, but what's the point in that?

You don't learn anything from that. What you *can* learn something from are statistics. What are the typical galaxies that you should get assuming structure formation works this or that way? What's the mean, what's the variance? Are there any correlations in this data? These are questions that can be used to tell one explanation from the other. Best,
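To make the point about statistics concrete, here is a minimal sketch (my illustration, not anything from the paper) of the kind of residual statistics one would compute on radial-acceleration data. It uses synthetic points scattered around the empirical fitting function of the McGaugh et al paper, g_obs = g_bar / (1 - exp(-sqrt(g_bar/g†))) with g† ≈ 1.2×10⁻¹⁰ m/s²; the scatter value is roughly the observed one, and everything else is made up for illustration:

```python
import numpy as np

# Acceleration scale fitted in McGaugh, Lelli & Schombert (2016), in m/s^2.
G_DAGGER = 1.2e-10

def rar(g_bar):
    """Empirical radial acceleration relation (MLS 2016 fitting function)."""
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

rng = np.random.default_rng(0)
log_gbar = rng.uniform(-12.0, -9.0, size=1000)   # log10 of baryonic acceleration
g_bar = 10.0 ** log_gbar
scatter_dex = 0.11                               # roughly the observed scatter
g_obs = rar(g_bar) * 10.0 ** rng.normal(0.0, scatter_dex, size=g_bar.size)

# Residuals in dex: mean, variance, and correlations of these are the
# statistics that can discriminate between explanations.
residuals = np.log10(g_obs) - np.log10(rar(g_bar))
print(f"mean residual: {residuals.mean():+.3f} dex")
print(f"std  residual: {residuals.std():.3f} dex")
# Correlation of residuals with g_bar: near zero if the relation
# captures the regularity in the data.
print(f"corr(residual, log g_bar): {np.corrcoef(residuals, log_gbar)[0, 1]:+.3f}")
```

On real data one would of course also ask whether the residuals correlate with other galaxy properties (size, surface brightness, environment), which is exactly the kind of question that separates a tight regularity from a coincidence of fits.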

MOND, Verlinde and suchlike have been kindly put in the trash bin in the paper "How Zwicky already ruled out modified gravity theories without dark matter", arXiv:1610.01543. The point is that in galaxy clusters the MOND regime lies so far out that MOND does not solve anything. Without any doubt this applies also to the other McGravity models; a few are discussed in that paper.

Sabine, your theory-model is extraordinarily complex and sophisticated, and I wouldn't even pretend to say I understand it, except in the most general sense. That said, I'm wondering if someone like Natalie Wolchover might be working on a popular exposition of it, that is more accessible and comprehensible to the lay public?

One thing I'm curious about is whether your model utilizes a condensation phase of the vector-field to explain the transition from Newtonian dynamics in the inner regions of galaxies to the flat acceleration regime, beyond a certain distance? Since your model dispenses with particle dark matter, my first thought was that this couldn't be the case, since actual particle matter would be required, as in laboratory experiments with optically trapped cesium atoms at very low temps. But I wasn't completely sure about that, as maybe it's possible for a field to undergo condensation.

The Lagrangian of this model is quite similar to what Justin Khoury and collaborators use. I wrote about this here. They're not the same because we have a vector field and we also have a somewhat different coupling to baryons. But I believe that the estimates about critical temperature and coherence length should be largely the same. That's why I write in the paper that we suggest this as an interpretation.

The transition to Newtonian dynamics in the inner regions of galaxies doesn't have anything to do with the condensation itself but with the properties of the condensate. It's that the description of the stuff as a condensate ceases to make sense if the gradient of the potential has non-negligible changes on distances of the coherence length. So it's not at a certain distance, it's at a certain relation between gradient and coherence length. Roughly speaking you could say it's in the vicinity of large masses.

(Of course the real situation is much more complicated because it also depends on the geometry and whether the system has had time to reach equilibrium.)
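In symbols, the criterion just described can be paraphrased as follows (my paraphrase, assuming a coherence length ξ and Newtonian potential Φ; this is not an equation from the paper):

```latex
% Condensate description valid where the acceleration field
% \vec g = \nabla\Phi varies little over one coherence length \xi:
\xi \, \bigl|\nabla \vec g\,\bigr| \ll \bigl|\vec g\,\bigr| ,
\qquad \vec g \equiv \nabla \Phi .
% Near large masses the left-hand side is no longer small, the
% condensate picture breaks down, and the Newtonian regime is recovered.
```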

Sabine, for whatever it is worth (probably not much to you, given your 'Verlinde matching'), assuming that the parameter L measures the (reduced) radius of the de Sitter horizon when a mass M is embedded in a de Sitter universe, the Kottler metric (quantitatively 'a spherical mass embedded in a de Sitter universe') predicts a numerical parameter of 1.195 in the L-related equation you quoted in your paper (and I copied in a post above), which is different from the value (1.05) used in your 'Verlinde matching' (both analyses assume Ω_Λ=0.7 and Ω_M=0.3). As I said, it probably means little to the structure of your CEG modelling, but may relate to Verlinde's analysis.

So if the recent discovery of diffuse galaxies without, or almost without, DM is confirmed, how should we understand that within Verlinde's and SH's theories? As far as I have understood, one needs at least 3 domains: the large-scale regime in which the vector field is not condensed, the galactic domain in which it is condensed (to produce MOND phenomenology), and the small scale (large gravity gradients) in which it needs to be superfluid to decouple from matter and give back GR.

Now what about those diffuse galaxies without DM: are we in the normal, condensed, superfluid, or some additional phase we need to consider? Quite puzzling!

Regarding the superfluid approach (which looks interesting), they state that it is particle-based DM, and that it merely mimics the MOND phenomenology. No sociologist succeeded in making them hide that they deal with particles. Maybe you could try to state it explicitly too (and not just leave your readers to guess it from the field-particle correspondence). If you start stating it explicitly, maybe it will lead you into dealing with the expected properties of the respective particles (as the superfluid people do).

I don't see the purpose in making unnecessary statements. If an assumption isn't needed to arrive at a conclusion I prefer not to make it. The Lagrangian I'm using may or may not be a superfluid condensate of some particles. I don't know. I think it's plausible is all I am saying. But at least for what's in the present paper it's not necessary to know anything about the underlying theory.

It has been more than a year since it was suggested in the paper arxiv.org/abs/1610.06183 that we could look at the redshift-dependence of the galactic rotation curves to tell apart modified gravity from dark matter. You also repeat the same suggestion in this paper.

But I'm wondering: Why has nobody looked at it so far? It sounds like a low effort & high gain task for someone who is already familiar with computer simulations. (I would do it myself immediately if I felt like I could).

I know, this test will not rule out modified gravity or dark matter, whatever the result is. But I guess it will put some valuable constraints on the models. Moreover, the redshift-dependence of galactic rotation curves sounds to me like one of the first things we need to know when we are investigating the mechanism behind the galactic rotation curves.

I'm just wondering whether you've noticed Milgrom's update to his Scholarpedia MOND article (http://scholarpedia.org/article/The_MOND_paradigm_of_modified_dynamcs) in which he says that your MOG theory is in fact a rediscovery of the relativistic Einstein-Aether theory of Zlosnik et al (arXiv:0607411). Do you agree with Milgrom's assessment?

No, it is not. This is patently obvious because I do not use the constraint in Eq (5), and in turn the Zlosnik et al approach does not have the coupling that I have. They get their coupling from using Eq (5).

The reason I don't have that constraint is, as I have said several times, that my model is a covariant generalization of Verlinde's model, no more and no less. Verlinde does not use a constraint on what he calls the "displacement vector," and I can see no reason why it should be normalized to 1.

Now, you may want to argue that you like the Zlosnik model better for whatever reason, and we can discuss that, but no, it is not a covariant realization of Verlinde's idea.
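For readers unfamiliar with Einstein-Aether theory: the constraint at issue is presumably the standard unit-norm condition on the aether vector, usually enforced with a Lagrange multiplier (a sketch in standard conventions; Eq (5) of Zlosnik et al is not reproduced here, so this is my assumption about what it contains):

```latex
% Lagrange-multiplier term fixing the aether vector A^\mu to unit
% timelike norm, in the (-,+,+,+) signature convention:
\mathcal{L}_{\lambda} = \lambda \left( g_{\mu\nu} A^{\mu} A^{\nu} + 1 \right)
\;\;\Longrightarrow\;\;
g_{\mu\nu} A^{\mu} A^{\nu} = -1 .
```

Dropping this constraint, as described above, leaves the norm of the vector field as a dynamical quantity, which is what distinguishes the covariant version of Verlinde's model from the Einstein-Aether setup.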