
Tuesday, January 03, 2017

The Bullet Cluster as Evidence against Dark Matter

Once upon a time, at the far end of the universe, two galaxy clusters collided. Their head-on encounter tore apart the galaxies and left behind two reconfigured heaps of stars and gas, separating again and moving apart from each other, destiny unknown.

Four billion years later, a curious group of water-based humanoid life-forms tries to make sense of the galaxies’ collision. They point their telescopes at the clusters’ relics and admire their odd shape. They call it the “Bullet Cluster.”

In the image of the Bullet Cluster below, you see three types of data overlaid. First, there are the stars and galaxies in the optical regime. (Can you spot the two foreground objects?) Then there are the regions colored red which show the distribution of hot gas, inferred from X-ray measurements. And the blue-colored regions show the space-time curvature, inferred from gravitational lensing which deforms the shape of galaxies behind the cluster.

The Bullet Cluster comes to play an important role in the humanoids’ understanding of the universe. Already a generation earlier, they had noticed that their explanation for the gravitational pull of matter did not match observations. The outer stars of many galaxies, they saw, moved faster than expected, meaning that the gravitational pull was stronger than what their theories could account for. Galaxies which combined in clusters, too, were moving too fast, indicating more pull than expected. The humanoids concluded that their theory, according to which gravity was due to space-time curvature, had to be modified.

Some of them, however, argued it wasn’t gravity they had gotten wrong. They thought there was instead an additional type of unseen matter, “dark matter,” interacting so weakly it wouldn’t have any consequences besides the additional gravitational pull. They even tried to catch the elusive particles, but without success. Experiment after experiment reported null results. Decades passed. And yet, they claimed, the dark matter particles might just be even more weakly interacting. They built larger experiments to catch them.

Dark matter was a convenient invention. It could be distributed in just the right amounts wherever necessary and that way the data of every galaxy and galaxy cluster could be custom-fit. But while dark matter worked well to fit the data, it failed to explain how regular the modification of the gravitational pull seemed to be. On the other hand, a modification of gravity was difficult to work with, especially for handling the dynamics of the early universe, which was much easier to explain with particle dark matter.

To move on, the curious scientists had to tell apart their two hypotheses: Modified gravity or particle dark matter? They needed an observation able to rule out one of these ideas, a smoking gun signal – the Bullet Cluster.

The theory of particle dark matter had become known as the “concordance model” (also: ΛCDM). It heavily relied on computer simulations which were optimized so as to match the observed structures in the universe. From these simulations, the scientists could tell the frequency by which galaxy clusters should collide and the typical relative speed at which that should happen.

From the X-ray observations, the scientists inferred that the collision of the galaxies in the Bullet Cluster must have taken place at approximately 3000 km/s. But such high collision speeds almost never occurred in the computer simulations based on particle dark matter. The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded: that we see such a collision is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity.

It might sound like a story from a parallel universe – but it’s true. The Bullet Cluster isn’t the incontrovertible evidence for particle dark matter that you have been told it is. It’s possible to explain the Bullet Cluster with models of modified gravity. And it’s difficult to explain it with particle dark matter.

How come we so rarely read about the difficulties the Bullet Cluster poses for particle dark matter? It’s because the pop sci media likes nothing better than a simple explanation that comes with an image that has “scientific consensus” written all over it. Isn’t it obvious the visible stuff is separated from the center of the gravitational pull?

But modifying gravity works by introducing additional fields that are coupled to gravity. There’s no reason that, in a dynamical system, these fields have to be focused at the same place where the normal matter is. Indeed, one would expect that modified gravity too should have a path dependence that leads to such a delocalization as is observed in this, and other, cluster collisions. And never mind that when they pointed at the image of the Bullet Cluster nobody told you how rarely such an event occurs in models with particle dark matter.

No, the real challenge for modified gravity isn’t the Bullet Cluster. The real challenge is to get the early universe right, to explain the particle abundances and the temperature fluctuations in the cosmic microwave background. The Bullet Cluster is merely a red-blue herring that circulates on social media as a shut-up argument. It’s a simple explanation. But simple explanations are almost always wrong.

The idea of modifying gravity is that there's no particle dark matter. I mean, strictly speaking it could be both, but if you have particle dark matter anyway, you don't need modified gravity, so it's kinda pointless to combine them. So, you modify gravity instead of adding particle dark matter.

Then the question is: what's the difference? Well, if you add particle dark matter, you add quantum fields to the standard model of particle physics. If you modify gravity, on the other hand, you add classical fields to general relativity. The main difference (besides the one being quantized and the other not) is the way that the additional fields couple. What is relevant here, however, is only that there is a priori no reason for the focus of the additional fields in modified gravity to be located where the 'normal' matter is.

I say 'a priori' because to figure out where it is you have to solve the dynamical equations. Which I haven't done. On that matter I can merely tell you that people working on modified gravity claim they can fit the Bullet Cluster without too many problems. I haven't looked into this too deeply and can't say much about it. Hence, all I am saying here is that, at least as far as the theoretical structure is concerned, the focus of gravity can be offset from the normal-matter distribution in modified gravity too. Best,

"I'd love to hear more about how different the early universe could be under MOND compared to LCDM. Is it possible we have something as basic as the age of the universe wrong?"

Doing early-universe calculations with MOND is a hard problem. However, even most MOND supporters concede that dark matter (not necessarily particle dark matter; it could be primordial black holes, though many mass ranges have been ruled out) works better in this context. (The asymmetry comes about because most "conventional" astrophysicists don't admit that MOND works better in the "MOND regime".)

Age of the universe wrong? No. First, there is non-cosmological evidence for the same age. Second, it doesn't look like MOND would predict something different here.

Can you explain to a layman: given that projects like Gaia are discovering lots of unknown stars and not-shiny objects, isn't it possible that we have underestimated the amount of ordinary matter in the galaxy? Perhaps we have not looked carefully enough yet.

How would Verlinde's entropic "emergent" gravity be expected to perform under the circumstances? He himself does not provide a solution for dynamic situations like the Bullet Cluster and the early universe. At least the idea provides theoretical motivation and derives both Einstein's field equations and MOND from first principles.

Can modified gravity explain the gravitational lensing? If not, then it would still point towards dark matter, however unlikely the collision might have been. Is it not?

Sean Carroll indulged in a nice FB hangout recently where he argued in favour of dark matter against modified gravity. He presented a wide variety of arguments, for instance the one involving CMB oscillations. The video is available on his blog.

I really don't understand why people keep talking about MOND. MOND is a non-relativistic limit. You don't expect it to work in general. It's an approximation. Complaining that MOND doesn't work is beating a dead horse. If you want to know if modified gravity is a promising explanation, you really have to look at a relativistic theory. As to who says what, you find the relevant references here, see section 8.3. Best,

That's an old idea which isn't entirely dead but it's extremely disfavored. The reason is that if there were sufficiently many such objects to make up all the dark matter (we know how much it has to be in total) these would cause frequent gravitational lensing which hasn't been seen. The situation is somewhat murky however for objects that are significantly lighter than typical solar masses because in this case the lensing wouldn't be strong enough. This stuff is called 'macro dark matter' and I wrote about this here. The problem in this case is to understand how they would form and what they would be made of. Best,

Last time I looked Verlinde didn't derive the field equations. Besides this, I've run into some trouble trying to reproduce what's in the paper and I'm hence somewhat confused about what he does or doesn't show. Sorry for being vague, I presently really don't know a good answer. Best,

Yes, the 2nd acoustic peak is the somewhat more enlightened argument. Yes, for all I know modified gravity can reproduce the lensing. As I mentioned above, for a quick summary and literature references, see here, section 8.3.

My personal inclination towards the concepts of "dark matter" and "dark energy" is that they are "parametrizations of our ignorance", rather than something referring to some genuine physical substances. But without having these fields as areas of expertise, I might of course be completely misguided, as well as appearing somewhat arrogant. In both cases, I apologize. But as long as all experiments, ever more sensitive, searching for dark matter particles turn up empty-handed, I have at least some empirical backup for this point of view.

I wonder if we are missing something in connection with angular momentum, including spin: Einstein's theory of general relativity, GR, is based on (pseudo-)Riemannian spacetime, where curvature is generally present but torsion is absent. The teleparallel equivalent of GR is based on Weitzenböck spacetime, where torsion is generally present but curvature is absent. Although locally equivalent, as they have the same Euler-Lagrange equations, as far as I recall, these theories are, I guess, topologically non-equivalent; or am I wrong?!

Perhaps more importantly, though, the Einstein-Cartan theory, which is still a viable theory, uses spacetimes in which both curvature and torsion are present, with curvature coupling to matter and torsion coupling to spin. But, as far as I recall from some articles by Trautman that I read some years ago, the torsion field arising from spin does not propagate, because it is algebraically, i.e. non-differentially, related to the spin density.

Coming now to my point: it has always puzzled me why, of all the properties of particles - their mass, electric charge, isospin, color, and spin - it seems that only spin is not the source of some propagating field. This, in connection with the rotation curves of galaxies - the origin of the "dark matter" concept - being readily related to angular momentum, makes me wonder if we are missing some coupling of angular momentum to some (propagating part of a) gravitational field. What it should be, though, I am completely ignorant of.

This is an eye-opening post, and I really enjoyed the playful way Bee introduces this controversial subject matter to us. If Bee hadn't researched the facts about the exceedingly low probability of explaining this cluster's dynamics from Lambda-CDM, I (and many others) likely would never have become aware of this important detail. Despite my battling a bad case of the flu, which fogs the mind, this provides much fodder to chew on and think about in the coming days.

"Can you explain to a layman: given that projects like Gaia are discovering lots of unknown stars and not-shiny objects, isn't it possible that we have underestimated the amount of ordinary matter in the galaxy? Perhaps we have not looked carefully enough yet."

There are various ways to estimate the mass of the Milky Way. Of course we can't just add up the currently visible matter and expect to account for all of the mass. The fact that this doesn't work is the motivation for dark matter in the galaxy. For various reasons, it is extremely unlikely that new objects found by Gaia would make up a significant amount of this dark matter.

But as long as all experiments, ever more sensitive, searching for dark matter particles turn up empty handed, I have at least some empirical backup for this point of view.

Would you have argued the same about the neutrino between postulation and detection? Or the Higgs?

Absence of evidence is not evidence of absence, unless you have searched the entire parameter space and found nothing. Of course we find something in the last place we look, because after that we stop looking.

My personal inclination towards the concepts of "dark matter" and "dark energy" is that they are "parametrizations of our ignorance", rather than something refering to some genuine physical substances.

There are several reasons to believe that the cosmological constant is "real":

1. It was introduced a hundred years ago, before it was known that the universe is expanding, much less accelerating; it's not a fudge factor, nor even an additional parameter invented to fit the data.

2. The value derived from astronomical observations, together with other astronomical observations which suggest a flat universe, results in a value for the density parameter which is confirmed by several other independent observations.

3. It would be an incredible case of fine-tuning if something which is fundamentally different from the cosmological constant just happened to behave exactly like it. If it walks like a duck, and quacks like a duck, then it is a duck.

4. There is not a shred of evidence that "dark energy" is anything but the cosmological constant, i.e. the equation of state is w = -1, and both this and the value are constant in time.

There is a reason that it is known as the "concordance" model; any alternative explanation for one phenomenon (say, accelerated expansion) has to fit all other observations (say, cluster abundance as a function of redshift, the CMB power spectrum, the age of the universe) with the same value.

As soon as you start adding extra fields that couple to the gravitational field, you're no longer modifying gravity. You're doing something that is much closer to what dark matter is doing, with the only difference being that you're invoking an additional particle-free field rather than a field that does have particles. Which is frankly just weird because any field should be quantizable and thus any field should have something that looks like a particle, in principle. Your proposal would basically be to throw quantum mechanics out in order to explain the bullet cluster, which is frankly a lot more problematic than simply adding an extra field to the already existing ones.

I have always been a Dark Matter skeptic and am now moving towards being a full-on critic. Like many of the crackpot explanations we have for currently unexplained phenomena (see cosmic inflation and many-worlds theories), D.M. always seemed like a crutch, a stab in the dark without any logical connection to existing, accepted knowledge (like the standard model or quantum mechanics). To me these crackpot shots are a cry of ignorance. The modified gravity theories do challenge many well-established beliefs (like Einstein's gravitational theories), but his work, groundbreaking as it was, is based on math and thought experiments, not observational facts. It is a framework way to look at gravity (as a warping of the space-time fabric), and much of that quality has been demonstrated, but it in no way solves or resolves the interaction of all forces in the universe on large or small scales or combined. As the poster above me stated, a cosmological constant, that is, an energy source inherent in space-time itself, has more logic and continuity with current knowledge than some cock-eyed dark matter idea which is based on nothing and is being attributed to every cosmic "Bigfoot" from supersymmetry to WIMPs to leftover anti-matter. What a joke; but as the article suggests, even the enlightened thinkers we imagine theoretical physicists to be can get stuck on a comfortable crutch. The facts, emerging evidence, and logic will eventually rebuke them.

I'm sure that you've read Lisa Randall's New York Times article (4-1-2017) about Vera Rubin and dark matter. Yes, Rubin probably should have won a Nobel Prize, but Randall talks as if dark matter is an established scientific fact.

I cannot judge on the proper discrimination of MG from DM, but to me, and I guess most not-so-sophisticated readers, a theory invoking new fields, which in turn create 'their' gravity, would be regarded as some (perhaps very strange) matter in the way of DM rather than a modification of gravity (understood as a property of gravitational interaction). I have some problems understanding why / how such a field should better 'explain how regular the modification of the gravitational pull seemed to be' if it has its own inertia and dynamics, creating structures differing from the normal matter, as in the Bullet Cluster. But maybe this shows how little is known about DM/MG and how far from 'normal' categories the solution might be.

There are around 10 million superclusters in the observable universe and each supercluster has around 10 clusters (ref Wikipedia). So overall around 10^8 clusters. With 10^-5 as the probability to find a Bullet-like cluster in the dark matter scenario (I have approximated 6.4×10^-6 as 10^-5), we would find around 1000 Bullet-like clusters in the observable universe. So it may not be that unlikely that we see one.

On the other hand, supposing the modified gravity scenario yields a much higher probability, as argued in this article, say around 10^-2 for example, there would be 10^6 Bullet-like clusters. Too many. We would have seen many such clusters by now, not just one.

So the fact that we see only one such cluster, and not many, is in fact in favor of dark matter rather than modified gravity. Or maybe I am missing something.
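The commenter's estimate can be checked with a few lines of arithmetic. The counts and probabilities below are the round numbers quoted in the comment above, not values from any actual cluster catalog:

```python
# Back-of-envelope check of the estimate above, using the round
# numbers from the comment (not from a real survey).
n_superclusters = 1e7        # ~10 million superclusters (per the comment)
clusters_each = 10           # ~10 clusters per supercluster
n_clusters = n_superclusters * clusters_each   # ~1e8 clusters total

p_dm = 6.4e-6                # quoted LambdaCDM probability per cluster
p_mg = 1e-2                  # hypothetical modified-gravity probability

print(n_clusters * p_dm)     # expected Bullet-like systems under LambdaCDM
print(n_clusters * p_mg)     # expected under the assumed modified-gravity rate
```

With the un-rounded 6.4×10^-6 the LambdaCDM expectation comes out near 640 rather than 1000, but the commenter's point stands either way: rarity per cluster has to be weighed against the enormous number of clusters in the observable universe.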

I understand why you interpret a low probability for this kind of galaxy collision to be observed as evidence against dark matter and in favor of MOND. Is there any reason to believe MOND would predict such a collision is more likely? Unless there is a difference in what the two theories predict there, then I don't think it should weigh in favor of one or the other.

Both solutions to the observations seem to have huge problems, which your article points out. Thus it may be that both the solutions talked about in your article could be wrong.

In short, something is wrong. While the James Webb Space Telescope is a while away, I would think that more resolution will go a long way to determine what is happening and thus waiting for better observations may be required. Of course there are other observations coming before the JWST is active. Astronomy used to be a dark corner of physics, but now we have astronomy finding 95% of the universe was simply not predicted at all by physics. It is physics that is lagging now!

Well, as I already said above, please don't blame me for the terminology. I am merely pointing out that what is called 'modified gravity' in the context of explaining dark matter works by introducing additional fields. I explain this exactly because people tend to think it's very different from particle dark matter, but philosophically it isn't. The main difference is, besides the field being in an unquantized limit, the way the fields couple. That different coupling however makes all the difference for the difficulty of dealing with the theory.

Particle dark matter couples as usual matter. General Relativity works exactly the same that it always does. In modified gravity the fields have a different coupling to gravity. It's a coupling that to any particle physicist tends to look 'awkward' or 'contrived' and I've heard many similar aesthetic misgivings that I can't take seriously. But that's the way it is.

Having said that, in some effective limit whatever causes the effect pretty much has to look like either one or the other. The only alternative is arguing that effective field theory breaks down. Which is possible, but is a very difficult argument to make. (In a nutshell that's the reason I'm skeptical Verlinde's approach really tells us something new.)

Actually, most calculations - not counting modified gravity - use a Keplerian profile, and there lies the root of the problem: rotation curves of galaxies and clusters seem to require some extra matter, the dark kind.

The work of Fred Cooperstock et al. uses a purely general-relativistic profile, without any modifications, and galaxy rotation curves and clusters match what general relativity calculates.

https://arxiv.org/pdf/astro-ph/0610370.pdf

There is no need for modified gravity or dark matter, just a purely general relativistic approach.

I've spent a day reading papers on modified gravity explaining the Bullet Cluster, and so far they all look unconvincing to me. The Bullet Cluster provides a pretty clear view of _something_ being separated from the visible matter but still exhibiting gravitational effects. There's also the lesser-known Train Wreck Cluster, which shows similar effects.

I don't particularly care if it's particulate matter or something else, but it doesn't look like simple modified gravity.

Cosmology is surely not my field, but I have worked with climate models, which are usually simpler than simulating galaxies with general relativity taken into account. And I know perfectly well that it's way too easy to bias models when you need to make quantitative predictions from multiple runs.

@Phillip Helbig: Would you have argued the same about the neutrino between postulation and detection? Or the Higgs?

Point taken.

There are several reasons to believe that the cosmological constant is "real" [...]

I guess you derive your arguments from within the paradigm set by the general theory of relativity. Being no expert in the field, I have no reason to doubt the validity of such reasoning and such arguments. What I am airing is rather the idea that there is something fundamentally missing in the GR paradigm, and even more so, in fact, in the geometrical formulation itself of gravity. As I flatly admit, I have no real idea of what any such alternative might be, except that it needs to be, of course, relativistically invariant. Concerning the acceleration of the universe: What would happen to that conclusion if the GR paradigm, and thus the FLRW models of the universe, is incomplete? Are the distances to the supernovas, and thus also the inferred conclusion for the need for a positive cosmological constant and "dark energy", not dependent on that very paradigm?

I'm grateful, Bee, for your revelation (to me at least) of this fundamental weakness of cluster collisions as DM evidence...

"Can you explain to a layman: given that projects like Gaia are discovering lots of unknown stars and not-shiny objects, isn't it possible that we have underestimated the amount of ordinary matter in the galaxy? Perhaps we have not looked carefully enough yet."

As I understand it, we know with high confidence from Big Bang Nucleosynthesis (BBN) that there just aren't nearly enough baryons in the universe to serve as the Dark Matter, even if they could somehow hide in, e.g., MACHOs. BBN relies only on the Standard Model and some nuclear physics, i.e. laboratory data, so I guess it is probably the most secure aspect of cosmology. Here we may refer to an example of what you meant, Bee, in your previous post: the ultimate authority worthy of trust, the Particle Data Group. In their latest Review of Particle Physics (RPP), they include a concise review of BBN, updated in 2015, from which I hope I may quote (I hope legibly):

"The rates of these reactions depend on the density of baryons (strictly speaking, nucleons), which is usually expressed normalized to the relic blackbody photon density as η ≡ n_b/n_γ… All the light-element abundances can be explained with η₁₀ ≡ η×10¹⁰ in the range 5.8–6.6 (95% CL). With n_γ fixed by the present CMB temperature 2.7255 K, this can be stated as the allowed range for the baryon mass density today, ρ_b = (3.9–4.6)×10⁻³¹ g/cm³, or as the baryonic fraction of the critical density, Ω_b = ρ_b/ρ_crit ≃ η₁₀/(274 h²) = (0.021–0.024)/h², where h ≡ H₀/(100 km s⁻¹ Mpc⁻¹) is the present Hubble parameter."

A more detailed BBN review is available from Cyburt et al. (2015) arXiv:1505.01076, who say, e.g., "Big bang cosmology can be said to have gone full circle. The prediction of the CMB was made in the context of the development of BBN and of what became Big Bang Cosmology. Now, the CMB is providing the precision necessary to make accurate prediction of the light element abundances in SBBN… The agreement between the theoretical predictions and the abundance D/H is stunning. Recent developments in the determination of D/H has produced unparalleled accuracy."

Risking excess verbosity here, let me just add that I like the proposal of Clesse & García-Bellido (2016) for an alternative to particulate DM to be Primordial Black Holes (PBHs) that start too small to spoil the CMB, and then combine with one another to be too big to be detected by micro-lensing. (They don't count as baryonic.) Since I saw Paul Steinhardt's devastating critique of Inflation (that he helped to invent), I am skeptical that we need to worry about whether inflation can make PBHs. We probably have no idea what happened then. I hope, Bee, that you might be interested enough in this topic to post (again?) about it at some point.
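As a quick sanity check on the RPP passage quoted above, the conversion from the quoted η₁₀ range to the baryon density parameter is just a division by 274:

```python
# Reproducing the arithmetic in the quoted RPP passage:
# Omega_b * h^2 ≈ eta_10 / 274, over the 95% CL range of eta_10.
eta10_lo, eta10_hi = 5.8, 6.6

omega_b_h2_lo = eta10_lo / 274
omega_b_h2_hi = eta10_hi / 274

print(round(omega_b_h2_lo, 3), round(omega_b_h2_hi, 3))  # 0.021 0.024
```

This recovers the quoted Ω_b = (0.021–0.024)/h², far below the total matter density inferred from dynamics, which is the commenter's point about baryons being unable to supply the dark matter.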

However, a few years later some inventive humanoids had optimized the dark-matter-based computer simulations and arrived at a more optimistic estimate of a probability of 4.6×10⁻⁴ for seeing something like the Bullet Cluster. Briefly later they revised the probability again to 6.4×10⁻⁶.

The first paper mentioned is https://arxiv.org/abs/1410.7438 while the second is https://arxiv.org/abs/1405.6679, so the temporal ordering is incorrect.

Also, I think it is fair to say that the general conclusion expressed here is not the consensus.

Note that the 1410.7438 paper mentioned above actually concludes that "...hence the Bullet Cluster does not present a challenge to the ΛCDM model."

Also relevant, at least to me, are

https://arxiv.org/abs/1406.6703 "We conclude that the observed properties of the Bullet Cluster are completely consistent with Lambda-CDM."

and

https://arxiv.org/abs/1412.7719 "We estimate that the number of dark matter halo pairs as or more extreme than 1E0657-56 in mass, separation and relative velocity is 1.3 (+2.0/−0.6) up to redshift z=0.3. However requiring the halos to have collided and passed through each other as is observed decreases this number to only 0.1"

I don't think that a 10% probability represents that much of a challenge to LCDM.
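The 10% figure follows from Poisson statistics: if the expected number of Bullet-like events up to z=0.3 is 0.1, the chance of observing at least one is 1 − e^(−0.1). A minimal sketch:

```python
import math

# Probability of seeing at least one event when the expected
# (Poisson mean) number of events is 0.1, per the 1412.7719 estimate.
expected = 0.1
p_at_least_one = 1 - math.exp(-expected)

print(round(p_at_least_one, 3))  # 0.095
```

So "only 0.1 expected events" translates into roughly a one-in-ten chance of seeing one, which is uncomfortable for LCDM but far from a clear falsification.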

"What would happen to that conclusion if the GR-paradigm, and thus the FLRW models of the universe, is incomplete? Are the distances to the supernovas, and thus also the inferred conclusion for the need for a positive cosmological constant and "dark energy", not dependent on that very paradigm? "

Two points. First, science works by explaining things with a hypothesis, or theory, which can be rejected if it conflicts with observations (provided, of course, the observations are right). If not, confidence in it is increased, but it can never be proved in a mathematical sense. In cosmology, there is not a single observation to suggest that GR is wrong or even just incomplete. (No, the fact that we don't know what dark matter is is not a shortcoming of GR. GR says no more about baryons than it says about dark matter.) Any alternative has to at least explain all the observations which GR explains. It should also make testable predictions which are different from GR and/or have something else going for it (simpler, derived from something else, etc). So, the burden of proof is on those who doubt GR. George Efstathiou, setting the bar low, said at a conference on alternative cosmological models a couple of years ago (so much for the idea that alternative models are not seriously debated) that he would offer a job to anyone with an alternative theory which did nothing more than explain all current data as well as the standard model does. I don't think he's hired anyone yet as a result of this challenge. Second, some cosmological tests provide results which are valid in a class of models much larger than GR. George Ellis and collaborators have done much work in this area, i.e. on which conclusions depend on which assumptions.

Even if GR is wrong or incomplete, which it probably is at some level, this does not necessarily imply that everything based on it is wrong. Newtonian gravity is now known to be only an approximation, but I will still die if I jump out of a 14th-storey window.

"And never mind that when they pointed at the image of the Bullet Cluster nobody told you how rarely such an event occurs in models with particle dark matter."

The collisional statistics of the Bullet Cluster might very well amount to evidence against the existence of Dark Matter. But there is absolutely no possibility of any influence whatsoever emanating from a development like this, into the pre-existing context in which the Bullet Cluster is strong evidence for the existence of Dark Matter.

This is because the original case for Dark Matter's existence is not a dynamical calculus at all, but simply the fact that baryonic matter is displaced from the centre of gravity, as observed via gravitational lensing. As between Dark Matter and Modified Gravity, the fact that displacement exists is taken as a categorical confirmation of the existence of Dark Matter.

It's irrelevant how rare the collision actually is, or that such collisions should not happen if DM exists. These scenarios are abstract, and each stands alone. Besides those two, a theoretically infinite number of other variant contexts inhabit the space of possibilities, as measures nobody has thought of yet, though that may change in the future.

So I guess this piece was more toward the tongue-in-cheek and the polemical than the logical.

"Additional fields that couple to gravity that could focus in a different place than normal matter in a dynamical system" are the exact words I could use to describe particle dark matter. On the communication side, I don't find the remark on the scientific consensus very useful. You can replace "scientific consensus" in your sentence with "peer review" (which is essentially how scientific consensus is produced) and you see my issue. Disseminating the scientific consensus to the general public is a pretty good strategy, even if not flawless. And by and large I don't think the problem is that the public focuses too much on scientific consensus, don't you think?

Aristotle used pure reason to rigorously deduce that women had fewer teeth than men. The Church of Rome placed the Earth at the center of the Sol system, supported by inerrant scholarship. Heliocentrism used "perfect" circles requiring unending parameterization, as did epicycles. Mercury's perihelion precession was desperately attributed to anything except General Relativity, even by Bob Dicke, who was nobody's fool.

Physics cannot reason its way into quantum gravitation. A defective assumption has off-diagonal terms that cannot be made to vanish by elegant acres of LaTeX and MathJax. Anomalies and violations between theory and observation are instead diagnostics.

Perform experiments that amplify the anomalies. Drag physics out of its Aristotelian hole. Matter exists in excess of antimatter. That is where physics factually begins.

I'm sorry, but you're presenting an entirely one-sided opinion, to the point of making false claims.

" it failed to explain how regular the modification of the gravitational pull seemed to be"

Which is outright untrue. The paper you are quoting is an observational paper. They do not consider a Cold Dark Matter model or consult simulations, and they do not conclude that dark matter fails to explain what they have found; you are misrepresenting the paper. Not only that, but there are now several papers showing it's consistent with dark matter simulations and is a natural outcome of galaxy formation (e.g. Ludlow et al. 2016, Navarro et al. 2016). It's bad enough that you ignore any papers which don't fit your narrative, but the paper simply doesn't say what you're claiming it does.

Again, when you reference the challenge of the Bullet Cluster, you completely ignore the fact that other authors have found the infall velocity to be consistent with simulations (e.g. Thompson et al. 2015 and the references within). You don't even acknowledge that the claims you are quoting are controversial; it's "true," you claim. How hypocritical is it to rant about "consensus" when you just cherry-pick papers as you please? You state that dark matter is not consistent with the Bullet Cluster and that modified gravity can definitely explain it, despite the fact that no modified gravity theory, even 10 years later, can explain the lensing without dark matter.

Personally I find science to be a more satisfying pursuit when I consider all the arguments, not just the ones which support my prejudices and when I read what authors actually say, instead of what I would prefer. Any argument can be made to seem bulletproof simply by ignoring inconvenient facts.

Yes, the whole purpose of this post was to make a one-sided claim, as one-sided as the claims that the Bullet Cluster is evidence for particle dark matter. Infuriating if someone cherry-picks their evidence, isn't it?

I am sorry for not having referred to the two papers you mentioned; I confused one with another that I did refer to, and the other one I seem to have missed.

Having said that, what I learn is that there are a lot of smart people who know how to optimize numerical simulations to fit pretty much any data. That by itself isn't the problem. It's all well and fine to develop a model that fits data and is predictive, if what one wants to do is fit data. If one wants to distinguish one explanatory hypothesis from another, it's poor practice.

Or maybe let me ask the question differently: How many people working on LambdaCDM are trying to "improve" the numerical models so that the Bullet Cluster becomes a less likely event? The answer is probably: nobody. This means that the papers you quote are mainly demonstrating confirmation bias. If you'd want to correctly gauge the statistical significance, you'd have to take into account all possible numerical "improvements" that they could have done but that didn't work and that therefore were never reported.

[...] science works by explaining things with a hypothesis, or theory, which can be rejected if it conflicts with observations [...]. If not, confidence in it is increased, but it can never be proved in a mathematical sense.

[...] this does not necessarily imply that everything based on it is wrong. Newtonian gravity is now known to be only an approximation, but I will still die if I jump out of the 14th-storey window.

Perhaps then you should mark this article as satire because it's clear from the comments that many don't appreciate how false your representation of the issues is. If your intention was to combat a popular myth then it's entirely counterproductive to simply start another one. Further muddying the waters helps no one. Intentionally misleading people to make a point is pretty low.

I don't think it's fair to hang this on the popular science media! Modified gravity has gotten a fair amount of coverage. Goodness knows the media are deeply flawed, but blaming them—as is routine these days—deflects attention from the real culprits.

It's not a satire, and I have very clearly explained in my post that I am telling a story that could have been told in another universe. But if you don't understand what I'm explaining that is certainly my fault. (*That* was satire.)

"It's not a satire, and I have very clearly explained in my post that I am telling a story that could have been told in another universe."

Would seem to conflict with:

"It might sound like a story from a parallel universe – but it’s true. "

Furthermore it's clear from several comments here that people have misinterpreted your "story" as factual and objective. Have you made any attempt to clear that confusion up? No. Clearly you weren't very clear on this.

I understand the point you're trying to make but this is not the way to do it. Fighting a misconception by starting a few new ones is a pointless exercise.

> How come we so rarely read about the difficulties the Bullet Cluster poses for particle dark matter? It’s because the pop sci media

I think it can be argued, rather successfully, that the people who have a framework for understanding these issues - i.e. scientists - have done a poor job of explaining the issues to people who don't have a framework for understanding them - i.e. everyone else. The public doesn't have the knowledge to understand the difficulties the Bullet Cluster poses for particle dark matter, which hardly matters because they don't *care*, which is largely the result of years of hearing scientists bitch about pop-sci voodoo instead of explaining what is actually happening.

The LHC/Higgs thing is a perfect recent example. All of that fanfare, but with the exception of science popularizers like Brian Cox and Sean Carroll, nobody explained to the public why all of the scientists were so excited about what amounted to a spreadsheet and a few graphs. Most people don't have the statistics background to understand what 5-sigma means... people kept asking "did they discover it, or not?" and scientists kept talking about confidence intervals and P-values - talking about them, but not *explaining* them. The average non-scientist has no idea what the difference is between a probability that what we saw was not chance and a probability that they actually found the Higgs. There was an important and hugely visible teaching moment there, and science just dropped the ball.

And if the public doesn't understand anything the big kids are saying, the only thing pop-sci can report on are dumbed-down things that any idiot can understand.

Dark matter should be a HUGE embarrassment, in that respect. Telling the public "we know everything, we're just not sure where 90% of the universe is" just tells the layman that scientists are crazy. You hear people claiming that "science is a religion, and scientists are its priests"? Those people are interpreting what gets filtered down to them, often by people who are upset that the capital-T Truth of dark matter is not as obvious to the layman as it is to people with degree-level knowledge of physics. Any other field would have to admit that what they think is going on is nowhere near complete; in physics, we call theories that only explain 4% of the data "a triumph".

If scientists aren't going to say something the public can understand, pop-sci commenters with imperfect knowledge will. And the public will continue to doubt that scientists are doing anything useful.

Actually it seems you still did not understand what I wrote. The story I tell is true. It's just incomplete, similarly incomplete as the story that the Bullet Cluster is evidence for dark matter, as I explained in my post. And besides you, everybody understood that.

I agree with what you are saying. I am, however, not sure that any scientist is obligated to explain what he or she is doing to the general public. Accountants don't have to explain in detail what they do for a living; why should scientists?

What Dr. Hossenfelder is doing here, on a largely voluntary basis, is very much appreciated by most of us, but I don't believe it is her obligation to make sure everyone understands everything she says. After all, she has undoubtedly invested a great deal of her time and energy to achieve her professional expertise. I think it is presumptuous to assume it can all be explained so that all non-experts can understand it.

I appreciate your support, but I don't think Benny's comment was about me specifically - I didn't write about the Higgs discovery at all (I don't normally write about topics that are all over the blogs anyway). Leaving that aside, the only thing I seem to recall about the press coverage was that it constantly explained what 5 sigma means, so I don't quite get that particular criticism. But maybe I read very different outlets from Benny. In any case, by and large I agree that scientists could do more and do better to explain their research. But as long as they're not getting paid for it that isn't going to happen. Simple as this. I could tell you long stories about how many institutions don't get this and expect scientists to do public outreach for free, but let me make long stories short and just say it doesn't work. Best,

Modified Gravity (MOG) has been used successfully to explain the rotation curves of galaxies, the motion of galaxy clusters, the Bullet Cluster, and cosmological observations without the use of dark matter or Einstein's cosmological constant. We review the main theoretical ideas and applications of the theory to astrophysical and cosmological data.

"Accountants don't have to explain in detail what they do for a living; why should scientists?"

" In any case, by and large I agree that scientists could do more and do better to explain their research. But as long as they're not getting paid for it that isn't going to happen."

As a scientist, let me point out that most scientists working on fundamental research are "getting paid for it", by the public taxpayers funding their research, including salaries in many cases. (Most accountants are not publicly funded.) Hence we are obliged to try to justify its public cost by explaining the significance of our work to the development of technology, and to promote a sense of awe about what humans are learning about the natural world. Modern culture seems to need all the help it can get to divert itself from its focus on the trivial.

As Henry said, Bee is making a precious contribution to such outreach (although perhaps for good reasons other than those above).

What a nice bone you threw to the dogs, Sabine! I must say I laughed a lot reading the above posts. Because, in truth, nobody really knows anything. Everybody, on the other hand, has a pet theory. Epicycles they are, methinks. (Yoda)

I am also a physicist, retired from Los Alamos National Laboratory. And, yes, most (but not all) of my research at LANL had been funded by the DOE and, therefore, would be considered publicly funded. But it was never implied or directed by the DOE that we had an obligation to explain, let alone justify our research expenditures to the general public.

The leadership at LANL, of course, needed to convince all of the funding agencies, including the DOE, that the research at LANL was being conducted for the benefit of our country and the welfare of our people, but there was no explicit expectation (nor any need) to convince the general public that our research was justified. That, presumably, was the responsibility of the funding agencies themselves, if their funds were derived from the taxpayers.

I am not saying that it is not a good idea to inform those of the general public who have an interest in understanding what scientific research can learn about the laws of our universe. I am saying that such an understanding has its limits among the general public and it should only be made available by the researchers at their own discretion.

Henry, thanks for clarifying your perspective on outreach. Yes, we understand that nuclear and "high energy" research in the US has had a half-century run of relatively generous funding, in large part because of the conflation (and some fortunate confusion?) in the minds of congress and the American public of such research with defence of the nation, especially with the nuclear deterrent. I'm a bit surprised that you may be unaware that the situation elsewhere in the world is quite different. Certain political parties in some countries have little understanding of the importance of maintaining a strong academic community in science, and consider research as only a tool for technical development for industry. In such environments, basic research funding is more vulnerable if the public is unaware of the real benefits. I think that many scientists outside the US consider outreach as one tool for survival.

But I fear that Bee may not approve how our discussion has wandered far from the topic of her original post!

The presentation here completely misses the point of the 2nd paper linked to on the probability of the cluster in ΛCDM. As others pointed out, the probability must be multiplied by the rate, giving an order-one probability of the Bullet Cluster being observed. If one bothers reading the 2nd paper's abstract, it concludes, "Our findings suggest that ΛCDM straightforwardly produces massive, high relative velocity halo pairs analogous to Bullet Cluster progenitors, and hence the Bullet Cluster does not present a challenge to the ΛCDM model." Therefore, this blog post should be removed or corrected.

It would help me if you could please let me know which paper you are referring to. This is the second paper I referred to and I can't find the quote you have above.

But since you are here, let me ask you a simple question. How many computer simulations have been made which did not increase the probability of a Bullet-Cluster like event and how many of these negative results did get published? How many people would publish these results? How many journals would publish them? What's the merit of fiddling with numerical simulations after it turns out they don't fit the data until they do? This is an obviously biased method of procedure and I can't see that your community is doing anything to account for this bias. Best,

"Click-baity" title, Ms H. Your actual point seems to be that the Bullet Cluster data doesn't exclude certain descriptions of modified gravity. Although, based on what I skimmed from the comments, those interpretations still require dark particles, so...? This seems like a rare post where your emotions led you beyond the limits of your expertise.

Bee [love the moniker] Bit late to the party, but as an engineer always in awe of [not really fathoming all its math....] fundamental/particle/you-name-it physics, I continue to be educated by your posts/explanations. Here I thot [as many apparently...] that the bullet cluster was proof positive of dark matter. Matter, as in matter matter? What is matter matter? A simple shift of viewpoint and my whole world turns upside down. I guess it is the same for scientists, in some sense. I only perceive via my senses... there are lots of others. Keeps bringing me back to the wacko AGWers. How can that be settled science when I read fascinating stuff like this? Keep up the good explanations.... on to your next post.... John

For an explanation of how modified gravity can fit colliding cluster data see, for example, this paper by Moffat:

"The galaxy cluster system Abell 1689 has been well studied and yields good lensing and X-ray gas data. Modified gravity (MOG) is applied to the cluster Abell 1689 and the acceleration data is well fitted without assuming dark matter. Newtonian dynamics and Modified Newtonian dynamics (MOND) are shown not to fit the acceleration data, while a dark matter model based on the Navarro-Frenk-White (NFW) mass profile is shown not to fit the acceleration data below ~ 200 kpc."

Deur's discussion of the same issue in this 2009 paper explains it this way:

"Dark matter was first hypothesized to reconcile the motions of galaxies inside clusters with the observed luminous masses of those clusters. Estimating the non-Abelian effects in galaxy clusters with our technique is difficult: 1) the force outside the galaxy is suppressed since the binding of the galaxy components increases (this will be discussed further at the end of the Letter), but 2) the non-Abelian effects on the remaining outside field could balance this if the remaining outside field is strong enough. Since clusters are made mostly of elliptical galaxies for which the approximate sphericity suppresses the non-Abelian effects inside them, we ignore the first effect. We assume furthermore that the intergalactic gas is distributed homogeneously enough so that non-Abelian effects cancel (i.e. the gas does not influence our computation). Finally, we restrict the calculation to the interaction of two galaxies, assuming that others do not affect them. With these three assumptions, we can apply our calculations. Taking 1 Mpc as the distance between the two galaxies and M = 40×10^9 M⊙ as the luminous mass of the two galaxies, we obtain b = −0.012 in lattice units. . . . Gaseous mass in a cluster is typically 7 times larger than the total galaxy mass. Assuming that half of the cluster galaxies are spirals or flat ellipticals for which the non-Abelian effects on the remaining field are neglected, we obtain for the cluster a ratio (M′/M)_cluster = 18.0, that is our model of cluster is composed of 94% dark mass, to be compared with the observed 80-95%.

Non-Abelian effects emerge in asymmetric mass distributions. This makes our mechanism naturally compatible with the Bullet cluster observation [15] (presented as a direct proof of dark matter existence since it is difficult to interpret in terms of modified gravity): Large non-Abelian effects should not be present in the center of the cluster collision where the intergalactic gas of the two clusters resides if the gas is homogeneous and does not show large asymmetric distributions. However, the large non-Abelian effects discussed in the preceding paragraph still accompany the galaxy systems."

Trust me, if I was interested in doing click-bait I could do better than writing about modified gravity, out of all things. You are incorrect to think that modified gravity still requires particle dark matter to explain the Bullet Cluster and I don't know what makes you think so.

I also don't know why you think that has something to do with my "emotions." I am merely telling you that the evidence isn't as clear as many bloggers and science journalists try to make the reader believe. I am pretty agnostic on whether it's particle dark matter or modified gravity. Some days I prefer the one, some days the other. What bothers me is that the discussion isn't unbiased. Best,

I disagree with Andy that because a scientist is taxpayer-funded they are obliged to do public outreach. This isn't so. They're hired for a certain job, and if that job description doesn't contain public outreach then they are under no obligation to do it. What I am telling you is that this is presently the case.

I don't think every scientist should do public outreach, but I think that there has to be a certain fraction of them, and these people should get adequately paid for it. And yeah, I am saying this of course because I don't get paid for it. The only way you presently can get paid for public outreach is by doing your institute's PR, and that's the source where all the overhyped university press releases come from, so one can't say that this is very beneficial.

Anyway, my point was merely to say that if you want more scientists to communicate their work to the public, there needs to go money into it. And that isn't happening. Best,

"I disagree with Andy that because a scientist is taxpayer-funded they are obliged to do public outreach."

In fact, I agree with you. I'm sorry that I may have misled with what I intended as a minor note: "including salaries in many cases". I have not heard of research scientists or faculty members being required or expected to do outreach, unless they have a "service position" with that in the job description. I do not propose to change that.

I can understand that you may experience the cost of theoretical research to be dominated by salaries, but the much larger funding of experimental particle physics is dominated by the costs of national laboratories, major facilities, experimental hardware and instrumentation. At least some of this funding is less secure than salaries of most academic faculty, and more vulnerable to politics. While such funding for work by scientists at American national laboratories is often allocated by the labs themselves from their own budgets, in other countries the funding of national labs may not support equipment even if it's to be used at the labs specifically for individual experiments — that is allocated by a separate federal agency directly to individual projects whether or not they involve national laboratories. The scientists proposing the experiment have to personally defend the funding request to a federal agency, and I have seen them explicitly mention their outreach efforts at such a time. (Their training of students and postdocs in preparation for careers in either academia or industry is considered much more important, and in some countries must be explicitly defended. Federal funding agencies may even consider this training to be more important to the nation than the experimental results, and I would understand that point of view, even though I may feel differently. I hope this isn't controversial!)

My only point (which is only an observation rather than a policy proposal) is that many experimental scientists and national laboratories sense the importance of outreach to defend their program funding, even if they don't feel obliged to do it personally. Those scientists that chose to do so typically report it on their annual performance review, and are recognized for it. Most scientists anyway typically do more than is explicitly stated on their job description. (I never did outreach except on annual Open House, when it was required.:-/)

I'm sorry to distract you on a topic that is actually near the bottom of my "list" of scientific interests.

Sabine, a very important point here is that we are not talking about particle physics or other laboratory physics. And we are not talking about clean, linear CMB physics. Instead, we are talking about nonlinear, messy, hard astrophysics. Unfortunately it is simply very hard to determine the predictions of LCDM when it comes to nonlinear scales (relevant to galaxy clusters) and in particular when baryonic feedback becomes important. So yes, various approximations, simplifications, assumptions, and parameterizations are necessarily made.

In principle, LCDM is completely predictive since all the relevant physics is specified. In practice, it will be many years before we can decide these sorts of questions. Check back on the field in a couple of decades...

Your blog is awesome! This post on the Bullet Cluster is enlightening for me. Do you know if any modified gravity model tries to address this observation? I mean this sentence: "Indeed, one would expect that modified gravity too should have a path dependence that leads to such a delocalization as is observed in this, and other, cluster collisions".

... some inventive humanoids had optimized the dark-matter based computer simulations and arrived at a more optimistic estimate of a probability of 4.6×10^-4 for seeing something like the Bullet Cluster. Briefly later they revised the probability again to 6.4×10^-6.

Either way, the Bullet Cluster remained a stunningly unlikely event to happen in the theory of particle dark matter.

I get the strong impression you're misinterpreting that probability (whether 4.6 x 10^-4 or 6.4 x 10^-6) as, somehow, the probability for finding a single BC analog in the entire visible universe.

It's not.

It's the probability for an individual cluster pair which has individual-component masses > 10^14 solar masses and separation < 10 Mpc to also have a pre-merger velocity > 3000 km/s (the current best estimate for the BC).

To know how many BC analogs to expect in the visible universe, you have to multiply that probability by the number of halo pairs matching the mass and separation prerequisites in the simulation (or the visible universe). As an example, Thompson et al. (2015; arXiv:1410.7438), the source of the "4.6 x 10^-4" probability, found about 300 BC analogs in their largest-volume simulation (which is still smaller than the volume of the visible universe, of course).

So, not "a stunningly unlikely event to happen in the theory of particle dark matter." (Which is why Thompson et al.'s Abstract concludes with "Our findings suggest that LambdaCDM straightforwardly produces massive, high relative velocity halo pairs analogous to Bullet Cluster progenitors, and hence the Bullet Cluster does not present a challenge to the LambdaCDM model.")

Why, if I want to know the probability of seeing the event, would I multiply it with the number expected in the universe, given that we don't observe most of what is out there? That's how I interpreted the number. It seems that I got that wrong, but I would appreciate it if you could clarify the relevance of the number you refer to. What do I learn from knowing there are so-and-so many clusters somewhere?

I think it is clear that the low number is not the probability that we would observe one Bullet Cluster. Thus, multiplying the probability per "event" by the number of "events" makes sense. However, in one of the papers they state the probability of finding something like the Bullet Cluster as one such system per 10 billion cubic megaparsecs. So this is one in a cube a bit more than 2 gigaparsecs on a side. This is comoving proper distance. 2 gigaparsecs is a little less than the distance to a redshift of 0.6, so probably within the reach of cluster surveys which could find things like the Bullet Cluster.

So, it is about in line with expectations that we observe one bullet cluster. To what extent this observation influenced the simulations leading to the estimate is a separate question.
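As a sanity check on the figure quoted above, the side of a cube with a volume of 10 billion cubic megaparsecs can be computed directly; this is just arithmetic on the number density stated in the paper, sketched in Python:

```python
# One Bullet-like system per 10^10 Mpc^3 (the number density quoted above):
# the equivalent cube has side (10^10)^(1/3) Mpc.
side_mpc = 1e10 ** (1.0 / 3.0)
side_gpc = side_mpc / 1000.0

print(f"cube side: {side_mpc:.0f} Mpc = {side_gpc:.2f} Gpc")
```

This reproduces the "a bit more than 2 gigaparsecs on a side" figure.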

Statistics with one object is a bit complicated. The Poisson error of 1 is 1. However, as Neta Bahcall once remarked (regarding the observation of a high-redshift cluster, a challenge to CDM in the Einstein-de Sitter universe, which many still believed in at the time), if one pink elephant walked into the room, would any theoretician say "The Poisson error of 1 is 1, so whether or not there is one elephant here doesn't matter"? Probably not.

Well, I was thinking it's the probability that a Bullet-Cluster-like event has a relative velocity in the range that's observed. How many such events have we observed in total (disregarding the velocity)? I thought it's a handful or so. Hence, I was thinking, the two probabilities are pretty much the same, give or take a factor accounting for the handful. Not sure what to say about the probability of getting these clusters to begin with. Or what the relative velocity of the other cluster collisions is. In any case, you're right that getting hung up on one event is never a good idea stat-wise. I still fail to see though how it counts as evidence for LambdaCDM that it's possible to adapt the model to fit observations. If a similar number of numerical simulations were done with modified gravity, I'm sure the same 'success' could be scored in that area. Best,

Why, if I want to know the probability of seeing the event, would I multiply it with the number expected in the universe

Multiplying the probability of seeing event X in a single object of class A by the total number of objects in class A gives you the expected number. If the expected number is 1000 and you see 940, not a problem; if you only see 1 or 2, that's a problem. Similarly, if the expected number is 0.0001 and you see 1 or 2, that's a problem. (The number I quoted is the number of BC analogs found in the largest-volume simulation of Thompson et al., which had a volume within an order of magnitude of the observable universe out to a redshift roughly similar to that of the BC.)

Since an exhaustive search for BC analogs hasn't been done yet, we don't know whether the true number in the visible universe (out to some specific redshift) is 5, 10, or a hundred; we *do* know it's at least one.

One reason it's not simple to say "what's the probability of seeing a BC analog at all" is that it depends on, among other things, the search volume. The papers looking at this have, more recently, tended to include predicted number densities for BC analogs for just this reason, since that can be compared with the results of a comprehensive search. For example, Thompson et al. 2015 say in their abstract, "... we find the comoving number density of potential Bullet-like candidates to be of the order of 10^-10 Mpc^-3." The comoving volume (assuming a concordance LCDM cosmology) out to a redshift of 0.3 (approximately the redshift of the BC) is about 7 Gpc^3, so you would expect of order one BC analog (~ 0.7) for their number density. So it's not at all surprising that we've found one so far.
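The multiply-by-the-number-of-pairs logic in this exchange can be sketched in a few lines of Python. The per-pair probability is the 4.6 x 10^-4 figure quoted earlier in the thread; the number of qualifying halo pairs is a purely illustrative assumption, not a value taken from any of the papers:

```python
import math

# Per-pair probability that a massive, close halo pair also has a
# pre-merger relative velocity > 3000 km/s (figure quoted in this thread).
p_per_pair = 4.6e-4

# Number of halo pairs meeting the mass/separation cuts in the search
# volume. Purely illustrative assumption, not a measured or simulated value.
n_pairs = 100_000

# Expected number of Bullet-Cluster analogs in that volume.
expected = p_per_pair * n_pairs

# Probability of finding at least one analog, assuming Poisson-distributed
# counts: P(>=1) = 1 - P(0) = 1 - exp(-expected).
p_at_least_one = 1.0 - math.exp(-expected)

print(f"expected analogs: {expected:.1f}")
print(f"P(>=1 analog):    {p_at_least_one:.3f}")
```

The point is that a tiny per-pair probability says little on its own; multiplied over the many pairs in a large survey volume it can still give an order-one or larger expected number of analogs.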

I would agree that the particular type of probability these papers have been using is a slightly strange way to approach the problem, at least for people outside their sub-discipline. I think it's one of those cases where the first paper or papers adopted a particular approach for idiosyncratic reasons, and the subsequent papers use the same approach for purposes of comparison. The first paper in this context was probably Hayashi & White (2006). (I think their logic might have been: our estimates of the cluster mass are reasonably good; our estimates of the initial velocity are less good, so let's look at things in terms of a probability distribution for different initial velocities, given the known cluster mass. Then, when better measurements and dynamical modeling give you new and better estimates of the initial velocity, it's easier to read off the probability for the new initial velocity value. That's just a guess, though.)

Thanks for the explanation, but I think you misunderstood my question. See Phillip's comment above, who understood it. I don't know why the number in the observable universe is relevant. What's relevant seems to me the number in the actually observed universe. Or else I might misunderstand what you mean by 'observable.' I take it to mean anything that's in causal contact.

You say nobody has adapted the model to fit observations. Are you telling me there's only one numerical code that you guys all agree is undoubtedly the one that follows from LambdaCDM, and that this one numerical code hasn't changed since you learned of the Bullet Cluster? Best,

I don't know why the number in the observable universe is relevant. What's relevant seems to me the number in the actually observed universe.

In order to predict "the number in the actually observed universe" (for comparison to the actual observations), you would have to simulate the actual observing process to date -- which parts of the sky have been observed and which haven't, to what depth, which clusters have accurate mass measurements, which have velocity measurements for different subcomponents, which are actually BC analogs, etc., etc. This would be insanely difficult, because the actual observations and observational analyses are extremely heterogeneous. And then your predictions would become increasingly useless as new and better observations were made.

Instead, "the number in the observable universe" is useful as a bound on what could be observed. Right now, the observed value is "at least one". If the observed value increases due to new and better observations, you can still compare that with the prediction. (E.g., if you predict between 0.1 and 5 in the observable universe, then right now there's no problem with LCDM. If future observations turn up another 100 BC analogs, then you can immediately see that there is a problem.)

Statistics with one object is a bit complicated. The Poisson error of 1 is 1.

Well, the Poisson "error" of 1 is the Poisson distribution for a rate of 1, which excludes any value <= 0. You're assuming the Gaussian approximation for Poisson statistics, which really breaks down for small numbers. (But, yes, statistics involving just one object are tricky.)
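The exact-versus-Gaussian point can be sketched numerically. The following Python snippet is a toy illustration of my own (not taken from any of the papers discussed), comparing the two treatments for a single observed object:

```python
import math

def poisson_pmf(k, lam):
    """Exact Poisson probability of observing k events at true rate lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Gaussian approximation for an observed count of N = 1:
# N = 1 +/- sqrt(1), so the "1-sigma" interval is [0, 2] -- it runs
# straight into the unphysical boundary at zero.
gauss_low = 1 - math.sqrt(1)
gauss_high = 1 + math.sqrt(1)

# The exact Poisson likelihood, by contrast, is well behaved for any
# true rate lam > 0: even a rate of 0.1 produces exactly one observed
# object about 9% of the time.
p_one = poisson_pmf(1, 0.1)   # ~0.0905
```

This is why the Gaussian shortcut is misleading for N = 1: it assigns meaning to "N - 1 sigma = 0", while the exact distribution never excludes small positive rates that easily.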

You say nobody has adapted the model to fit observations. Are you telling me there's only one numerical code that you guys all agree is undoubtedly the one that follows from LambdaCDM, and that this one numerical code hasn't changed since you learned of the Bullet Cluster?

There's no such thing as a numerical code which "follows from LambdaCDM", and you wouldn't necessarily want just one code in any case; instead, you'd want several different codes from different groups, so that problems with an individual code could be more easily detected.

The most commonly used code for these simulations is probably the one called GADGET. GADGET is not a "LambdaCDM code" -- it's a general N-body code allowing for an expanding universe (Friedman-Lemaitre model) with options for hydrodynamical simulations using smooth-particle-hydrodynamics (the papers we're discussing don't use the latter option, because they're doing DM-only simulations). You can set up the initial conditions, expansion, etc., using a LCDM model, but you can also set up a model with no cosmological constant, or no DM, or warm DM, or whatever; people have also modified it to include self-interacting DM. It can also be used (typically with a mixture of DM, stars, and SPH gas particles) for localized galaxy simulations not involving any particular cosmology.
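For readers unfamiliar with what such an N-body code actually does, here is a deliberately minimal direct-summation sketch in Python (softened gravity plus a leapfrog kick-drift-kick step). This is only schematic: GADGET itself uses tree and particle-mesh force evaluation, adaptive timesteps, cosmological expansion, optional SPH, and much more.

```python
import math

G = 1.0          # gravitational constant in code units
SOFTENING = 0.1  # force softening, to avoid singular close encounters

def accelerations(pos, mass):
    """Direct-summation softened gravitational accelerations (O(N^2))."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(c * c for c in d) + SOFTENING**2
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += G * mass[j] * d[k] * inv_r3
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step; mutates and returns pos, vel."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]  # half kick
            pos[i][k] += dt * vel[i][k]        # full drift
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]  # half kick
    return pos, vel
```

Because the pairwise forces are equal and opposite, even this toy version conserves total momentum to machine precision -- one of the basic sanity checks applied to the real codes.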

There are other numerical codes, which are equally general in terms of not being tied to LCDM or even to cosmological simulations; they are extensively tested to ensure they do physically sensible things (preserve energy and angular momentum in the right contexts, produce appropriate standard shocks when using gas, etc.). The idea that these codes would be modified to account (how?) for a single object like the Bullet Cluster is beyond ludicrous.

The paper by Lee & Komatsu (2010), which claimed a strong discrepancy between LCDM and the BC, used GADGET and a standard WMAP-based LCDM cosmology (Omega_m = 0.25, Omega_lambda = 0.75, H_0 = 70, n_s = 0.95, sigma_8 = 0.8). Thompson & Nagamine (2012) also used GADGET and a very similar cosmology (Omega_m = 0.26, Omega_lambda = 0.74, H_0 = 72, n_s = 1.0, sigma_8 = 0.8) and also claimed a strong discrepancy. Thompson, Dave, & Nagamine (2015) also used GADGET and a similar cosmology, updated based on Planck results (Omega_m = 0.3, Omega_lambda = 0.7, H_0 = 70, n_s = 0.96, sigma_8 = 0.8). But they found their results to be consistent with the existence of the BC -- not because they tweaked the simulation (they didn't), but because they used what they argued was a better and more sophisticated analysis of the results of the simulation: a better halo-finding algorithm, and a better characterization of the non-Gaussian nature of the high-velocity tail of the cluster-cluster velocity function.

(Note that Thompson & Nagamine 2012 and Thompson, Dave, & Nagamine 2015 are basically the same authors, publishing in the same journal. I mention this as a corrective to your groundless speculations about people and journals systematically suppressing papers suggesting problems with LCDM. Of course, the fact that you're vaguely aware of papers claiming that the Bullet Cluster contradicts LCDM shows that this probably isn't happening.)

Thanks again. I don't understand why predicting what we observe requires modeling the observation. If we observe N cluster collisions, can't we say how many of these are expected to have masses of m_1/2 and a relative velocity of x? That's what I was asking for. It doesn't require predicting N to begin with. Either way, the total number of such events in the observable universe is arguably entirely useless.

"I mention this as a corrective to your groundless speculations about people and journals systematically suppressing papers suggesting problems with LCDM"

You mention a paper by, as you say, 'basically the same authors' that used 'what they argued was a better and more sophisticated analysis' to arrive at a result more favorable to LambdaCDM -- to argue against my concern? Leaving aside that, as we've just seen, statistics of one are never good to rely on, that's exactly what I'm concerned about. And sorry for having conflated the code with the result analysis.

I am not saying btw that anybody does this deliberately. But let me ask you the same question I already asked above. Where are the people trying to improve the numerical simulations for matter forming structures in modified gravity? What would you do with an 'improved analysis' for LambdaCDM that leads to a worse data fit? What are you doing to guard yourself against confirmation bias?

Thanks again. I don't understand why predicting what we observe requires modeling the observation. If we observe N cluster collisions, can't we say how many of these are expected to have masses of m_1/2 and a relative velocity of x? That's what I was asking for. It doesn't require predicting N to begin with. Either way, the total number of such events in the observable universe is arguably entirely useless.

This is pretty basic observational astronomy and cosmology (or, really, observational science in general). We don't have anything like a complete, unbiased census of clusters, let alone colliding clusters, let alone BC analogs. Parts of the sky have been surveyed well enough to get a detailed census of *clusters*, but not which ones of these are collisions; other parts of the sky have not been, or have been observed to different depths. There are also obvious issues of projection: the Bullet Cluster was easy to identify because the collision was in the plane of the sky, but collisions closer to the line of sight will be harder to identify. And so on.

So if you want to compare your model to the observations, you either have to figure out how the procedure that produced a given catalog of clusters would transform your simulation into an equivalent catalog -- that's an example of what I meant by "modeling the observations" -- or else present your results in such a way (e.g., number density of objects) that the observers can use their knowledge of how their survey/catalog/whatever was done to transform their results into something they can compare with your results. (E.g., "OK, we estimate that the limits on our survey mean that we undercount the true density of such objects by a factor of five, so we scale our results up by that factor in order to compare them with the published simulation results.")

And you can still make simpler comparisons using predicted numbers. If your model yields an expectation number of ~ 10^-6 such objects in the observable universe, and the observed number is "at least one", then you know there is potentially a problem with your model.
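Assuming Poisson counting statistics for the predicted number of such objects (my assumption, for illustration), the "potentially a problem" comparison can be made quantitative in a couple of lines:

```python
import math

def prob_at_least_one(expected):
    """P(N >= 1) for a Poisson process with the given expected count."""
    return 1.0 - math.exp(-expected)

# If the model predicts ~1e-6 BC-like pairs in the observable universe,
# observing even one is roughly a one-in-a-million outcome:
p_rare = prob_at_least_one(1e-6)

# Whereas a predicted count of order unity makes a single observed
# object entirely unremarkable:
p_common = prob_at_least_one(1.0)   # ~0.63
```

So the bound "at least one observed" discriminates sharply between an expectation of 10^-6 and an expectation of order one, without any modeling of the survey itself.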

More specifically: we don't have anything like an accurate, complete catalog of cluster collisions with the kind of velocity data you seem to imagine exists. In fact, the relative pre-collision velocity of ~ 3000 km/s for the Bullet Cluster -- which is what the simulations are using as a reference -- is not something that was directly observed; it was deduced using detailed dynamical models, compared to a complex suite of multi-wavelength data. People are very gradually producing similar models for other colliding-cluster examples, but it's a complicated process, and not something you need to do for the gravitating-mass-versus-baryonic-matter comparison that's more immediately interesting. So while there are other BC "analogs" in the sense of side-on, post-collision clusters with dissociated gravitating and baryonic mass concentrations, most of these don't (yet) have estimates for the pre-collision relative velocity.

(The last sentence of your paragraph is, to me, quite bizarre. Would you say that knowing the total number of cases of different types of diseases in the world today is "arguably entirely useless"?)

You mention a paper by, as you say 'basically the same authors' that used 'what they argued was a better and more sophisticated analysis' to arrive at a result more favorable to LambdaCDM to argue against my concern?

Yes, because your "concern" was that papers like Thompson et al. (2015) had "adapted the model to fit observations". There's a distinction between the model, and the analysis of the model to understand what's going on and to derive results which can (hopefully) be compared with observations.

One of the ways that science progresses is to develop and use better methods of analysis, such as -- in the case of Thompson et al. -- not assuming that everything has a pure Gaussian shape, or using velocity as well as positional data to identify halos in cosmological simulations. (The relevance of the latter is that the older, position-only halo identification technique could frequently mis-identify two about-to-merge halos as just a single halo.) You can, if you like, adopt a slightly paranoid assumption that they must have tried many different methods and only used the ones that gave better agreement, but there's no evidence of that. (They did try two different approaches to non-pure-Gaussian fits of the relative-velocity values from the simulation, and found no significant difference.)
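To see why the Gaussian assumption matters so much for the high-velocity tail, here is a toy comparison of my own (not the specific fits Thompson et al. performed) between a Gaussian and a two-sided exponential (Laplace) distribution with the same standard deviation:

```python
import math

def gaussian_sf(x_sigma):
    """P(X > x) for a standard Gaussian, with x in units of sigma."""
    return 0.5 * math.erfc(x_sigma / math.sqrt(2))

def laplace_sf(x_sigma):
    """P(X > x) for a Laplace (two-sided exponential) distribution
    with the same standard deviation, i.e. scale b = sigma/sqrt(2)."""
    return 0.5 * math.exp(-x_sigma * math.sqrt(2))

# Four standard deviations out -- the regime of extreme pairwise
# velocities -- the exponential tail carries far more probability
# than the Gaussian tail (a factor of several tens):
ratio = laplace_sf(4.0) / gaussian_sf(4.0)
```

The point is generic: any estimate of how rare an extreme relative velocity is depends enormously on the assumed shape of the tail, which is why characterizing the non-Gaussianity of the simulated velocity function changes the conclusion.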

So, no, there's no evidence that they tweaked the analysis of their model, either, just to "arrive at a result more favorable to LCDM".

Also, if you really want to suggest that they must have tweaked and fudged their analysis to get a more favorable result, then how do you explain the fact that 'basically the same authors', a mere three years earlier, published a result distinctly unfavorable to LCDM? Were they bribed or blackmailed sometime between 2012 and 2015? Did the All-Seeing LCDM Conspiracy get to them? (I apologize for being a bit snarky, but I feel this is starting to wander within distant hailing distance of slightly conspiratorial arguments.)

What would you do with an 'improved analysis' for LambdaCDM that leads to a worse data fit?

That's exactly what Lee & Komatsu (2010) and Thompson & Nagamine (2012) were. Hayashi & White (2006) claimed that the BC was consistent with LCDM. The 2010 and 2012 papers were improved simulations and analyses with respect to Hayashi & White (and also using more recent estimates of the cluster mass and initial relative velocity from the modeling of the collision), and they argued for worse agreement between the BC and LCDM. So the answer is: they get published.

Well, I admit it isn't entirely useless knowledge. At least you know there shouldn't be more. But look, to use your example: You want to know how many diseases you'll be exposed to in your lifetime. A doctor tells you there are 20 billion diseases somewhere in the universe. The only thing you learn from that is at least you won't contract more than 20 billion. Besides that, as I said, it's pretty much useless knowledge.

In any case, I don't really see why there should be an observational bias that is velocity dependent, but if you say so, I think you know this better. All I can conclude from that then is that, well, you don't know whether or not it's likely to observe the BC, so I'm not sure what I've learned from that exchange except that the probability I've quoted isn't particularly meaningful.

Regarding your accusations that I construct conspiracies. First, yes, what I've said is exactly that there will be many people using different analyses and models that give worse agreement, which they then discard. That's exactly why enlightened collaborations choose analysis methods before they unblind data. Your attempt to "prove" that scientists in your field are unbiased using the example of a single paper is ridiculous and I'll not even comment on this. I find it remarkable, and it only adds to my concern, that you're simply denying this might possibly be the case.

How many computer simulations have been made which did not increase the probability of a Bullet-Cluster like event and how many of these negative results did get published? How many people would publish these results? How many journals would publish them? What's the merit of fiddling with numerical simulations after it turns out they don't fit the data until they do? This is an obviously biased method of procedure and I can't see that your community is doing anything to account for this bias.

Let me explain why I think this line of argument is dubious (at best). A "file-drawer effect", which is essentially what you are alleging, would actually work in the opposite direction to what you are suggesting. The problem that fields like psychology are having -- and that people accuse high-profile general journals like Nature, Science, and Proceedings of the National Academy of Sciences of promoting -- is this: dramatic, exciting results which are surprising or which contradict prevailing paradigms are more likely to be submitted and more likely to be published, because they are more "interesting" and more newsworthy. Studies which merely reproduce accepted past results or which otherwise confirm paradigms are correspondingly less likely to be submitted or published. In cosmology, this would create a bias against results which merely support and confirm LCDM, since that's the dominant paradigm.

Now, from my experience the main astronomical journals do not suffer from a strong "new-and-surprising-things-only" bias the way some journals in psychology (and the aforementioned "tabloids") apparently do. Nonetheless, even a very weak file-drawer effect would still bias things slightly in favor of submitting and publishing results which cast doubt on LCDM cosmology.

(Not to mention the fact that running a cosmological simulation and performing a careful analysis can take months, which is an extra incentive to publish the results regardless of whether your results contradict or confirm LCDM -- otherwise your productivity suffers.)

You are alleging a bias against generating, submitting, or accepting work that disfavors LCDM cosmology, a claim for which you have no evidence. I have tried to point out that the actual publication record shows that people are not dissuaded from publishing such work -- in fact, some authors are perfectly comfortable with publishing papers arguing both sides. (In addition to Thompson & Nagamine, the Komatsu of Lee & Komatsu 2010 is Eiichiro Komatsu, who is the lead author on several of the WMAP results papers, which support or assume LCDM. E.g., Komatsu et al. 2014: "The WMAP observations (sometimes in combination with other astrophysical probes) convincingly show the existence of non-baryonic dark matter, the cosmic neutrino background, the flatness of the spatial geometry of the universe, a deviation from a scale-invariant spectrum of initial scalar fluctuations, and that the current universe is undergoing an accelerated expansion.")

In the particular context of the Bullet Cluster, the relevant papers are probably the following (if you want the full references, you can search for them on ADS):

From the Introduction: "Cosmological simulations and other calculations of the formation of a large-scale structure in the modified Newtonian dynamics (MOND) of Milgrom (1983) have a substantial literature, despite the absence of an agreed upon cosmological model." (They go on to give examples.)

One of the things they discuss is that cosmological simulations with MOND (or TeVeS) systematically fail to reproduce observed large-scale structure unless they also include some form of dark matter (the authors like sterile neutrinos, which as far as I know have not actually been detected...?) and maybe also allow the MOND acceleration parameter to vary with redshift. They are straightforward about admitting that their simulations work on some scales and fail on others (e.g., they produce far too many really massive clusters). So MOND cosmological simulations engage in at least as much (I'd argue more) "tweaking the model to fit the observations" as LCDM simulations, while still failing to agree as well as LCDM does with the observations. And they don't seem to have gotten to the point of trying to produce actual galaxies after all the large-scale structure formation, at which point they'll have to deal with the same "subgrid physics" issues that LCDM folks do.

(There has been some potentially interesting, if rather preliminary, work on how galaxy evolution might differ in MOND: e.g., the idea that without a massive DM halo, all galaxy disks -- not just massive ones -- would be gravitationally unstable, and thus more likely to form spiral arms and bars than in the CDM case. Without dynamical friction from DM particles in halos, there would be fewer galaxy mergers, and thus more isolated, undisturbed disks. And so forth. But it's probably still too early to tell how useful these will be.)

You continue to accuse me of opinions I do not hold which you then attempt to tear down. It's a classic straw-man argument. Look, I wrote this very post to point out that there *are* papers arguing that observations contradict LambdaCDM. Why do you think you have to tell me that? What I'm saying is that this evidence is widely ignored and quickly discussed away and I've seen this so often that I cannot believe any more that this is a rational, scientific discourse. Just look at the comments here with people saying they didn't know about this!

Regarding the probability, as I said above, I was assuming that the total number of 'similar events' to be multiplied with is of the order one. In that case the probability I quoted is similar to the number of expected observations (I'm a theorist, I don't believe in factors of order one). I still don't know what the right number to multiply by is. You basically seem to say nobody knows. Still you quote the paper as confirming LambdaCDM. I can't make sense of that. Either nobody knows or it confirms, but not both.

I know the paper you quote above according to which MOND supposedly requires dark matter. As I've said many times elsewhere, it's beating a dead horse. MOND is an approximation. We already know it's not the correct theory. It's not even covariant.

(There are various reasons why the situation in your field is different from other disciplines in which controversial results are a good thing. To begin with none of the super-high-impact journals publishes papers in that area anyway. But it's not a line of argument I think is fruitful.)

Leaving aside our differences for a while, you seem to be a smart guy and you know your stuff. What I am trying to tell you is a really basic point: Human analysis is by default biased unless deliberate measures are taken to prevent that bias. There aren't any such measures in your field, hence I must assume the analysis is biased. If you claim that isn't so, you're just denying loads of published literature on cognitive biases.

As I said above, that's the very reason why experimental collaborations have strict procedures for their data analysis (and in that, physics is ahead of many other disciplines). I understand that this method for various reasons doesn't work in your field, primarily because you're still developing the ways to analyze the data to begin with. But I am sure there are other aids that could be used and I'd be happy to hear your thoughts on that. If you are interested in having this exchange, please contact me by email, it's hossi[at]fias.uni-frankfurt.de

"You are alleging a bias against generating, submitting, or accepting work that disfavors LCDM cosmology, a claim for which you have no evidence."

Yes, maybe, probably not. Let's take them in reverse order.

I don't think that there is a bias against non-LCDM results in journals, i.e. accepting. That someone with a non-permanent position waits until the permanent job arrives before publishing non-LCDM stuff, i.e. submitting, is a real possibility. This certainly happens occasionally, but most people who work on such things submit them, at least eventually. As to generating, there is definitely a bias regarding funding. One could argue that there is much more work to be done (both because not as much has been done, but also because e.g. N-body simulations in MOND are much more difficult), so if anything there should be more funding, but there is less. Also, even if one has funding, will good people apply for such jobs? There is a very real danger that someone who has worked outside the mainstream will have fewer job opportunities. Yes, I know some MOND people who have really good jobs, but the question is how many more good people could have worked on MOND if there hadn't been the fear (which is, I think, founded) that it could be career suicide? Even if there is no ill will, merely the fact that there isn't as much funding, for whatever reasons, means that there are fewer opportunities, so fewer people enter the field, etc.

"Also, check, e.g., the discussion of the BC in this paper, page 5. That's exactly the kind of 'argument' I've heard countless times, repeated by literally thousands of people who work on particle dark matter."

Actually, this is pretty tame. The BC is evidence that dark matter and baryons behave differently. They do. The whole idea of MOND is to explain everything with baryons. (Personally, I think that there could be baryons and something like MOND, but I am in a minority here.) There is no mention of particle dark matter on page 5. What people do sometimes overlook is that dark matter doesn't have to be WIMPs. It could be self-interacting, there could be macro-dark-matter, some mass ranges for primordial black holes, etc, are still possible. The point is that all of these behave differently from baryons. The rest of page 5, about structure formation, is a point where even MOND enthusiasts concede that LCDM works better.

Quantitative analysis is something different. In any case, no MOND person said: "if two clusters collide, it will look like this", while this is a pretty generic interpretation in LCDM.

Since Kevork Abazajian never replied, what he referred to was the paper https://arxiv.org/abs/1410.7438 which concludes "that ΛCDM straightforwardly produces massive, high relative velocity halo pairs analogous to Bullet Cluster progenitors, and hence the Bullet Cluster does not present a challenge to the ΛCDM model."
