May I ask, t marvell, how familiar would you say you are with the two "Nobel Prize" papers, the data the two teams used, the JLA and the data it made public, the SNe Ia data made public between 1998 and 2011 (or 2014, or 2016), and so on?

It's fine, I think, to criticize the Nobel Committee(s), but such criticism should be based on all the relevant facts, not some drive-by one-liner (in this case a highly misleading one).

Like Phillip Helbig, I too hope to get the time to listen carefully to the video; I've started, but already have over a page of notes that need following up on ...

Perhaps a reason to distrust videos, when discussing observational cosmology? Here's my partial transcription of an early, key part (lots of snipping):

SH: "In 2011 the Nobel was awarded for the discovery of DE, based on the supernova data. Reanalyzing this data ..."

SS: "The data you referred to was made public only in 2014. In a way that enabled other people to analyze it."

The two papers, containing observations and analyses which led the authors to conclude "dark energy", are (per the Nobel website) S. Perlmutter et al., “Measurement of Ω and Λ from 42 high-redshift supernovae”, Astrophys. J., 517, 565-586, (1999), and A.G. Riess et al., “Observational evidence from supernovae for an accelerating universe and a cosmological constant”, Astron. J., 116, 1009-1038, (1998).

It's true that it is not immediately obvious how easily the data used in the former could be extracted from the paper itself (and those it cites) and analyzed independently (personally I think it could be, but it would be quite challenging).

However, Sarkar's statement - if it refers at least in part to Riess+ (1998), which is far from certain! - is flat-out wrong.

Jean Tate - right, I am a social scientist, not a physicist, but some issues are the same in all fields. In this case, if Sarkar is correct, an important issue is the need to do what, in my field, is called "robustness checks". I gather that the extent of a local effect has long been problematic. In that case, the original paper should have redone the analysis with various sizes of the local effect. I don't have the ability to tell whether they did that or not.

Phillip Helbig (4:41 AM, March 02, 2020): "In the meantime, a summary would be nice."

The summary is that the cosmologists are cowboys using an out-of-date, mathematically simplified standard model based on some historical, now unjustifiable assumptions. And the supernova data were insufficient to make statistically strong conclusions, and the conclusions contained assumptions which aren't justified. So dodgy model and dodgy data. Dark energy is pure speculation. Oh, what a surprise.

I seem to remember telling you that Brian Schmidt was so dim that any of his output should be treated with a pinch of salt. Q.E.D.

The guy in the video was on a team which was the first to check the data. This, *after* the Nobel Prizes had been handed out. Unbelievable. What a shambles. I wouldn't trust Brian Schmidt to point out the moon correctly in the night sky.

This situation reflects a great uncertainty due to the massive increase in the database. The original measurements involved a small set of SNe Ia, and now the sky surveys are far more extensive and the analysis of data far more complex. It is the case that the original measurements, which were backed up by additional work, had nowhere near the extensive amount of data required for 5-sigma. It is not quite clear to me how that standard has to apply to all of science.

In the US primary election there are pollsters claiming statistical experts say Bernie Sanders will win, and others saying Biden will win. So, between 538 and MSN there is this range of "statistical expertise" that appears almost no different from opinion. One problem I see, from arguments over n-sigma to p-values and p-hacking, is that science is getting mired in statistics, with what appears to be a growing uncertainty as a result.

As is so often the case, the actual history is messy. For example, pace Steven Evans, the teams which did the work that led to the Nobel gong did everything "by the book" ... of that time. And the two papers cited by the Nobel Committee are based on far too few datapoints to have permitted the kind of analyses Sarkar describes in the video. Ditto, largely, subsequent work up to the time of the Nobel Committee's deliberations (say 1999 to 2010).

Observational cosmology has had dramatic growth in the past decade or two, with (for example) publicly available data going from ~dozens of 1a SNe to thousands (and it'll grow to ~millions in the next decade or two). In one way, this "local" dataset is in its infancy; compare that with the CMB dataset, where the only significant frontier - post Planck - is in polarization (though, indirectly, the SKA may turn up a surprise or two).

Steven Evans, re "The guy in the video was on a team which was the first to check the data." As a scientist, you might want to consider checking your facts. It seems that you are not a regular reader of astronomy/observational cosmology papers or reviews (are you?); if so, be prepared for your fact checking to take some time.

"had nowhere near the extensive amount of data required for 5-sigma. It is not quite clear to me how that standard has to apply to all of science."

With the question of the expansion of the universe, say, redshift has been confirmed in radiation from *all* the millions (?) of objects that have been checked (except maybe a few local objects where gravitation is strong locally). The precision of the measurements is such that redshift can be confidently confirmed, and the only explanation for the redshift observed from every object checked is expansion of the universe. This is a direct measurement confirmed to be true in all the millions of cases checked.

For dark energy, in contrast, the amount of data is not up to the standards of other areas of physics, and the conclusion of accelerating expansion is not based on direct measurement but on hanging a complex and known-to-be-imperfect model on these insufficient data. Also, methodologically, the supernova data hasn't undergone much critical analysis by other teams, according to the video. Additionally, this purported extra energy, determined to exist by hanging a dodgy model on dodgy data, is invisible.

It is laughable to conclude from this that there is dark energy, don't you agree?

I'm not a scientist. But my fact is good. From the video:

"So that was our paper in 2016 and in fact it was rather a surprise to us that this was actually the very first time that somebody had in fact looked at the data and the analysis in detail"

"the actual history is messy."

Yes, but the overview is that a dodgy model is being tweaked based on insufficient data, and the grand conclusion is that the expansion of the universe is being accelerated by invisible energy. Sure.

"I wasn't asking for a summary of your opinion, which you've inundated us enough with here."

That's right. My claim that there is zero evidence for universal fine-tuning is just an opinion, because you can provide evidence, can't you? Except you haven't yet, and you run away intellectually terrified every time you are asked for evidence. Do you want to take up my challenge this time??

And why don't you explain why the "evidence" for dark energy is convincing, because all I can see is a dodgy model being fitted to insufficient data?

And if you can't explain it, then my summary is good and your complaint is invalid.

Re "I'm not a scientist." and "Yes, but the overview is that a dodgy model is being tweaked based on insufficient data and the grand conclusion is that the expansion of the universe is being accelerated by invisible energy.":

Towards the end of (today's) comments is "Mohamed 10:46 AM, March 03, 2020" (I don't know if the timestamp will be different for you). I would urge you to read it, and to ask questions about the parts you do not understand. To turn up the contrast: spouting ignorant nonsense contributes little to this discussion.

Re "But my fact is good. From the video" [...].

Indeed. May I ask, what efforts did you make to try to establish the accuracy of that snippet from the video?

As I have mentioned, several times, I have yet to independently dig up all the papers which are referred to in the video. And read them. Carefully. Until I do - and I encourage every curious reader to do likewise - I regard statements like those in the video as "unconfirmed".

In this regard, I consider my approach as very much consistent with the spirit of your blog, and as explicitly shown in several blogposts.

For the record, I find this part very hard to accept (the bit in bold): "and in fact it was rather a surprise to us that this was actually *the very first time* that somebody had in fact looked at the data and the analysis in detail."

Sarkar+'s "analysis in detail" may have been "the first", for that particular analysis (and that particular detail), but I tend to doubt it. Rather, it's more likely that this was "the first" detailed analysis of this kind that was published in a peer-reviewed paper.

Re Lawrence Crowell "It is the case that the original measurements, which were backed up by additional work, had nowhere near the extensive amount of data required for 5-sigma. It is not quite clear to me how that standard has to apply to all of science."

And several others who have commented on the sigmas (this includes Sarkar, in the interview).

Of the widely publicized observational cosmology results, does any reader know, with a high degree of confidence, which pass "five sigma" muster?

It may be laudable, and long past due, to require this of results from observational cosmology (and astronomy in general) ... but if we start to throw stones at "dark energy" based on this criterion, shouldn't we also apply it to the Hubble relationship? the acoustic peaks in the CMB? Big Bang Nucleosynthesis?

Subir Sarkar's paper shows that dark energy cannot be inferred from the supernova data. And remind me what other evidence there is of dark energy. Mmmmm....

Brian Schmidt wrote a pile of nonsense as a preface to Luke Barnes' "physics" book which presented Barnes' insane delusions as plausible facts of nature; Phillip Helbig in his review of the book failed to point out it was a mixture of evidence-free fantasies and the delusions of a madman. It is no surprise that they are now being shown to be wrong about dark energy, too.

Brian Schmidt thinks what is in a dodgy model on a computer is reality, and Luke Barnes thinks the crazy delusions in his head are reality.

@Evans: I think news of the death of dark energy is premature. Sarkar has been doing some analysis of data that is a bit orthogonal to what the rest of the community is doing. However, this does not mean it is right.

The de Sitter and FLRW metrics naturally embed a cosmological constant. It is not hard to see that dark energy is some aspect of the quantum vacuum. The zero-point energy is a very natural source for the cosmological constant. At least from a theoretical perspective there is no mystery about the existence of dark energy. The real problem is how does the quantum vacuum result in this very tiny vacuum energy, the so called 123 orders of magnitude problem. That is where things get strange, and is even stranger because supersymmetry appears nowhere evident at low energy, or E < 10TeV.
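For readers wondering where the "123 orders of magnitude" figure comes from: it is roughly the gap between a naive Planck-scale estimate of the vacuum energy density and the observed dark energy density. A back-of-the-envelope check (a sketch only; it assumes H0 ≈ 70 km/s/Mpc, Omega_Lambda ≈ 0.7, and a Planck-scale cutoff, the crudest possible estimate):

```python
import math

# Physical constants (SI units, rounded)
hbar = 1.0546e-34   # J s
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s

# Planck-scale ("naive QFT cutoff") vacuum energy density
E_planck = math.sqrt(hbar * c**5 / G)            # Planck energy, J
l_planck = math.sqrt(hbar * G / c**3)            # Planck length, m
rho_planck = E_planck / l_planck**3              # J / m^3

# Observed dark energy density: Omega_Lambda times critical density
H0 = 70e3 / 3.086e22                             # Hubble constant, 1/s
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)  # J / m^3
rho_lambda = 0.7 * rho_crit

orders = math.log10(rho_planck / rho_lambda)
print(round(orders))  # prints 123: the famous mismatch
```

Whether one quotes 120 or 123 depends on the assumed cutoff and cosmological parameters, which is why different sources give slightly different numbers.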

This dipole effect extends out 100 or a few 100 Mpc, which is really a small deviation over a distance of ~ 3000Mpc. It also is not entirely commensurate with SDSS data. So the death of dark energy is far from certain at this point.

@Tate: I am not sure about what the statistical state of observational results are. To be honest 20 years ago I could read astronomical papers with some degree of understanding, and of late I have found them to be frustrating. The same occurs with particle physics I must confess. I really find solid state and quantum optics experimental papers preferable.

If I were to work on issues of big data management I would try to pursue the use of neural networks and learning systems to get data to converge to some model system, or diverge away from such. I question whether standard statistical methods are just leading to nests of quibbles and uncertainty.

Back to this, by Steven Evans:

"I'm not a scientist. But my fact is good. From the video: 'So that was our paper in 2016 and in fact it was rather a surprise to us that this was actually the very first time that somebody had in fact looked at the data and the analysis in detail'"

In a comment I have just posted (waaay down, awaiting moderation), I looked at what seem to me to be the relevant publications. I found that Sarkar's comment, on its face, is wrong.

However, if he meant (but did not say) something like "the first time anyone had done the kind of analysis, in detail, which we presented in our 2016 paper" he could be right. I'm still doubtful ... but I was not at all the relevant meetings, workshops, symposia, colloquia, etc from ~2014 to 2016, so I have no specific, concrete counter evidence.

"I think news of the death of dark energy is premature."

You have this the wrong way around, again. It was the announcement that dark energy effects had been observed which was premature. There never was, and now even less so, enough evidence to claim the expansion of the universe is accelerating as an empirical fact.

"Sarkar has been doing some analysis of data that is a bit orthogonal to what the rest of the community is doing."

Sarkar's work appears to show two things: (1) dark energy cannot be inferred from the supernova data, which was the key evidence in the claim of the existence of dark energy; (2) the standards of research in some areas of cosmology appear to be a tad slipshod.

"However, this does not mean it is right."

The original research team are welcome to argue the criticisms away. So far they appear to have adopted the head-in-the-sand approach, though.

"It is not hard to see that dark energy is some aspect of the quantum vacuum." and "The real problem is how does the quantum vacuum result in this very tiny vacuum energy, the so called 123 orders of magnitude problem."

Right, so you have no evidence of dark energy and don't have an even close-to-working theory. That's fine. It's current research and it's difficult to get observational data. But let's not announce that dark energy exists when we have no idea whether that is true or not.

"That is where things get strange, and is even stranger because supersymmetry appears nowhere evident at low energy, or E < 10TeV."

Conclusion: there is zero evidence of supersymmetry.

"This dipole effect extends out 100 or a few 100 Mpc, which is really a small deviation over a distance of ~ 3000Mpc."

So, 1 in 10, and then there weren't that many supernova data to begin with...

"It also is not entirely commensurate with SDSS data."

Which apparently has observed 4,607 "confirmed or likely" Type Ia supernovae. It's hardly the kind of data size where "not entirely commensurate" is a strong argument.

"So the death of dark energy is far from certain at this point."

Dark energy has never been alive. It's a possible physical phenomenon (like inflation), but sufficient evidence has never been presented. Tellingly, your comment is supposed to be defending the dark energy hypothesis, but you have been unable to present any positive evidence at all.

"I wasn't asking for a summary of your opinion, which you've inundated us enough with here."

No response again. So you claim what I wrote was "opinion" and yet again fail to provide any evidence of your claim. You have no interest whatsoever in the evidence or truth, do you? Fine-tuning - no evidence. Multiverse - no evidence. And now Dark Energy - insufficient evidence.

In a previous thread, I pointed out that Brian Schmidt's judgement was shot which you disputed, but sure enough here we are discussing Schmidt's sparse data and dodgy analysis exposed by Subir Sarkar. Quelle surprise!

Just in case it is not clear, my criticism is directed much more at the over-the-top exaggerations by Evans than at Sarkar's work. In other words, as I've said before but apparently have to say again, even if his work is completely correct (which I admit is possible), then almost none of the overhyped conclusions follow.

As to why Evans brings up fine-tuning in this thread, I don't know, but bringing in irrelevant things when one has no good arguments is not uncommon. He has criticized me here for not having already published a refereed-journal paper backing up my claims about fine-tuning, but by his own admission he is not a scientist himself and has never written such a paper. It takes time. Suffice it to say that it is in the works and I will duly announce it after it has been published. In the meantime, raising this point again is nothing but more evidence of bringing irrelevant topics into a discussion.

Another part of my criticism is that Mohamed spends time criticizing what I think is my rather balanced opinion because it slightly differs from his own, but doesn't criticize at all the over-the-top exaggeration by Evans, presumably because they agree in some points. In other words, if Mohamed and others involved in this work would distance themselves from the wild claims of Evans, and some claims in the popular media which are not much better, it would probably convince more people that their work is worth looking at. If one's claim is that a more careful analysis needs to be done, this is not supported by slash-and-burn allies who essentially question all of modern science.

"That's right. My claim that there is zero evidence for universal fine-tuning is just an opinion, because you can provide evidence, can't you? Except you haven't yet, and you run away intellectually terrified every time you are asked for evidence. Do you want to take up my challenge this time??"

OK, here is the evidence. Since 212 pages won't fit into the comment box, I direct you to a review paper by Fred Adams, published in a respected refereed journal. I will respond to further remarks on fine-tuning from Evans only after he has demonstrated, in a refereed-journal paper, at least one problem in the review by Adams. (Since most people interested in that topic are probably reading in this thread, I don't think that it is necessary to point this out again the next time Evans launches some ill-motivated attack against me.)

"by slash-and-burn allies who essentially question all of modern science."

Incorrect again. I, along with many others, question that there is any evidence at all for strings, universal fine-tuning, MWI and the multiverse; and I question whether there is sufficient evidence to claim that dark energy, dark matter and inflation are empirical facts. (This is based completely on other people's work; obviously I'm not claiming any original work.)

"a review paper by Fred Adams"

Here is the first sentence of the abridged abstract:

"Both fundamental constants that describe the laws of physics and cosmological parameters that determine the cosmic properties must fall within a range of values in order for the universe to develop astrophysical structures and ultimately support life."

There is no suggestion here that the universe could be any other way than it is. There may be a physical reason that the universal constants and laws have to be not just within the posited ranges, but exactly the values they are. This has not been ruled out by the paper. In which case we wouldn't call it fine-tuning.

One logical step to defeat Fred Adams' 212 pages. It's like shooting fish in a barrel.

Also, he is talking about known forms of astrophysical structures and life. He has no idea whether there are other very different possibilities.

Et cetera, et cetera.

You have failed to present any evidence of fine-tuning once again. Oh dear :(

The overhyped conclusion was the claim that dark energy had been discovered in the first place. The evidence for dark energy was unconvincing (model-dependent claims, limited data and unjustified assumptions) even before the Sarkar paper.

@LawrenceCrowell: "@Tate: I am not sure about what the statistical state of observational results are. To be honest 20 years ago I could read astronomical papers with some degree of understanding, and of late I have found them to be frustrating."

If you can obtain it easily, and have some time to spare, I'd be interested in your take on "High-redshift radio galaxies and divergence from the CMB dipole", Colin, Jacques; Mohayaee, Roya; Rameez, Mohamed; Sarkar, Subir, 2017MNRAS.471.1045C. It is, I think, one of the papers mentioned in the video. I'll likely post some comments on it here later.

"We find marginal (i.e. <~3σ) evidence for the widely accepted claim that the expansion of the universe is presently accelerating."

Marginal, and therefore insufficient, evidence to make the acceleration claim. Universal acceleration exists only in models of the universe, and now one of those models has been shown to be questionable. Nobody knows if the expansion of the universe is actually accelerating or not, because the technology doesn't currently exist to check it.
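For reference, the sigma thresholds being argued about translate into Gaussian tail probabilities as follows (a standard conversion, nothing specific to the supernova analysis):

```python
import math

def two_sided_p(n_sigma):
    """Two-sided Gaussian tail probability for an n-sigma result."""
    return math.erfc(n_sigma / math.sqrt(2))

p3 = two_sided_p(3)  # ~0.0027: "marginal", about a 1-in-370 fluke
p5 = two_sided_p(5)  # ~5.7e-7: the particle physics "discovery" bar
```

The factor of a few thousand between those two probabilities is why "less than three sigma" and "discovery" are treated so differently.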

The last line of the abstract of the paper by Fred Adams reads: "Finally, for universes with significantly different parameters, new types of astrophysical processes can generate energy and support habitability."

OK, the coronavirus is giving me some time. (The plague led to the Principia, so who knows what will come of the current pandemic.)

Around 25:40, he makes it sound like no-one had suggested acceleration before the two supernova teams. The current concordance model, and the implied acceleration, had been in the air for several years before that.

I think what he says about confirmation bias is good. (Related to this, it is not good that some journals won't publish results which merely confirm other results; this is the opposite of confirmation bias. Unfortunately, the two don't cancel.) Later on, he points out that too many lambda measurements are within the WMAP 1-sigma error bar. (I remember Richard Battye making a similar comment about measurements of w: the individual error bars are much larger than the error bar of the average of the measurements.) However, this could be due to not publishing results which don't fit expectations, and not to new physics or whatever.

He dismisses some results as "inferences", which is correct. For example, the CMB together with measured Omega=0.3 implies Lambda via the sum rule. But at about 33:15 he discusses an experiment which will allegedly directly measure negative pressure. Nope, I don't think so, not in the way most people use the word "direct". Of course, almost all "measurements" are really "inferences"; the question is whether some are more direct than others. But selectively calling measurements which one does not agree with "inferences" is not good style.

Also, at about 35:40, he suggests that the current standard model, at least the homogeneous and isotropic aspects of FRW models, is a result of the fact that such simplifying assumptions were made decades ago, mainly because they simplify calculations. True, but irrelevant, as there is a huge amount of observational evidence that the large-scale Universe is well described by an FRW model. So the fact that FRW models were initially chosen because of their mathematical simplicity is a red herring.

At about 40:15, in a parenthetical remark, he notes that the Hubble constant is not constant. In general, it is not constant in time, that is true, but he makes it sound like the name is some kind of misnomer. It is a constant in the sense that a is a constant in y = ax + b. That the Hubble constant can change in time is true, but the idea that this somehow means that the term "Hubble constant" is wrong is not.

For what it's worth, I suspect that his take on the Hubble "tension" (at the end of the video) is probably correct. There, he's saying essentially that there is nothing mysterious going on, just sloppy work, but that has produced fewer headlines than the implication that "there is no evidence for dark energy" which some read into this work.

I followed the video as best as possible. There are references to techniques I lack detailed knowledge of. I am not able to comment too deeply on these.

The claimed result is the universe has this net dipole moment that extends beyond 100Mpc. This sounds as if he has found something similar to the “axis of evil” that was popular 10 or 15 years ago. It is tempting to say there is some great attractor or Mother of All Black Holes, MOABH, but this would tend to imply red shifting along an axis and blue shifting around the plane normal to this axis. Why there would be a redshift in a certain direction with a blue shift in the antipodal direction is mysterious. This is not expected from a MOABH. Sarkar talks about baryon acoustic oscillations, BAOs, and this would tend to be a more normal explanation. This would occur though on a scale that is not well reflected in the anisotropy of the CMB.

At the end Sarkar says the response is disappointing. I think this reflects how the rest of the astrophysics community has responded less than affirmatively. It is not possible for me to comment completely on these assessments. The dipole claim Sarkar advances might mean some inconsistency between the CMB anisotropy and what BAO would predict. The Sloan Digital Sky Survey (SDSS) found anisotropies in galaxy distributions that did conform to BAO estimates. Hence if Sarkar is right, there is then some issue with how his statistical analysis appears to deviate from SDSS data.

There is another issue that I don’t see being raised. It might come down to Mark Twain’s quip, “There are lies, damned lies and then there are statistics.” I notice of late that much of science is becoming mired in kibitzing and quibbles over statistics. Statistics is an area of mathematics that has two main schools of thought that fundamentally disagree with each other. This state of affairs does not afflict number theory or algebraic geometry and other areas. Physics may not just be getting lost in math, or only lost in math, as it may be getting lost in statistics.

The history of trying to match the observed CMB dipole (which is very firmly established) with the observed distribution of mass in the "local" universe is a long one, with many papers reporting inconsistent results, and lots of systematics, some understood at the time and some only later. For example, Sarkar refers to the 2MASS survey, whose results have been used in several of these studies (including the one he himself mentions, though I have yet to track down the paper).

It's not really surprising that statistics is increasingly mentioned, and argued over, in astronomy, cosmology (of the observational kind), etc; in Hubble's day who cared whether you were using Bayes or not, you had data from only a handful of objects (Cepheids, galaxies, etc)!

It appears we are getting buried in data. As I see this the case against dark energy does not appear convincing. This dipole moment problem, largely on a 100Mpc scale, does not appear to rule out dark energy.

With the Hubble constant around 70 km/sec per megaparsec, 100 megaparsecs correspond to a redshift of roughly 7000/300000 = 0.023. Not very much, though I suppose structure on even this scale is a problem for cosmology.

Another way of looking at it is if the observable universe is 28 gigaparsecs but has structure on the scale of 100 megaparsecs, that is less than how much Mars deviates from being a featureless sphere. (Olympus Mons is 22 kilometers high and the radius of Mars is 3400 kilometers.)

That is, maybe structure on the scale of 100 Mpc is a problem for cosmological theories, but in terms of departure from homogeneity and isotropy, the universe is still smoother than Mars.
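Both back-of-the-envelope numbers above are easy to verify (taking H0 = 70 km/s/Mpc and the quoted figures for Mars and the observable universe):

```python
# Redshift corresponding to 100 Mpc of recession at H0 = 70 km/s/Mpc
H0 = 70.0          # km/s per Mpc
c = 299792.458     # km/s
z = H0 * 100 / c   # ~0.023, as stated

# Fractional "lumpiness": 100 Mpc structure in a ~28 Gpc observable
# universe, vs. Olympus Mons (22 km) on Mars (radius ~3400 km)
universe_bump = 100 / 28000.0   # ~0.0036
mars_bump = 22 / 3400.0         # ~0.0065, so Mars is indeed lumpier
```

So on these figures the universe's 100 Mpc structure is, fractionally, a bit under half the size of Mars's largest bump.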

The Dark Matter and Dark Energy topic always generates interesting comments. No exception here, which is why I enjoy reading your blog. For me, this topic always gets me thinking about Edwin Abbott Abbott’s book Flatland, first published in 1884. The story of a society living in a two dimensional space. I always wonder about what their physics would be like. You have to presume that they have some form of gravity. And since we live in a 3D world I can imagine that the gravity we know of envelopes the 2D world they live in. Since this 3D gravity interaction has to be affecting their matter in all possible 2D directions and it is coming from someplace they are unsure about I wonder if they would call this 3D gravity “Dark Energy?” Similarly, any sufficiently massive 3D particle interacting with their 2D world might look like “Dark Matter.”

Not a mathematical treatise I understand, but an interesting way to think about things during a quiet time. And, I guess you might be thinking that I am a fan of Theodor Kaluza’s original theory, I am.

Almost raw YouTube transcript:

Hello everybody, today I'm visiting the University of Oxford, where I'm talking to Subir Sarkar, who's professor for theoretical physics, and I've told you about him before because he wrote this paper about the supernova data that may not provide the evidence for dark energy that we thought it was. So maybe we can start with that. So in 2011 a Nobel Prize was awarded for the discovery of dark energy based on the supernova data, and you and your group have been reanalyzing this data and think that this claim does not hold up. Maybe you can briefly summarize what you found.

Well, this now goes back over 20 years, so let me try to recap the situation. Well, first of all, the data that you referred to was made public only in 2014, that is to say, in a way that enabled other people to analyze it. And the first thing that we did was to notice that, in extracting cosmological information from the data, the statistical analysis that was being done was not what one would call principled, in the sense that it was assuming the model that they were actually meant to be testing. So there is an error bar added to each data point which is adjusted until you get a good fit to the assumed standard model of cosmology. This is perfectly okay if you want to estimate parameter values, but it is not really right for model selection.

So our first step was to use a principled statistical method. This is called the maximum likelihood estimator; it's industry standard. And using this (in fact this work was done essentially by my then master's student at Copenhagen, Jeppe Nielsen, and my other co-author was Alberto Guffanti, who is actually somebody who works on parton distribution functions, but he's a statistics expert) we established that in fact the evidence for acceleration in the data was marginal. It was far short of the 5 sigma that you'd expect for a discovery of fundamental importance; it was in fact less than three standard deviations. So that was our paper in 2016, and in fact it was rather a surprise to us that this was actually the very first time that somebody had in fact looked at the data and the analysis in detail. Partly this is because of what I said earlier, that the data had simply not been available in undoctored form. This is the important thing, and we are very grateful to the so-called Joint Light-curve Analysis (JLA) collaboration, which essentially included every supernova expert in the world, including the Nobel laureates. And they made their data public, and I think that is a very healthy thing to do, because it means other people can independently reexamine your analysis. And to that end we simply wanted to emphasize that the evidence is not as strong as it should be. But this created a bit of an impact, and [work has been] going on trying to look at the data more closely. And this analysis we had done had been using exactly the same cosmological model as had been adopted by the previous authors, in other words something which is assumed to be isotropic on the sky. We had wanted to do this in order to enable comparison with the previous results. That was the 2016 paper. So subsequently we had also been doing other work. I was involved with another group of astronomers, this is Roya [Mohayaee] and Jacques Colin, who were at the Institute of Astrophysics in Paris, and Mohamed Rameez, who was then a postdoc with me.

ctd. 2.

SS: And we had been looking at something which is not directly related to the supernovae but bears on the underlying assumption of isotropy. When you look at the sky, it is not isotropic: the cosmic microwave background, which is the information you get from the deepest point in the universe, has a very pronounced dipole anisotropy, and this has been known since shortly after the discovery of the CMB. Now, this is attributed to the fact that we are actually not in the so-called cosmic rest frame - we are moving locally, due to something pulling us. This is not unusual: if you look at nearby galaxies, for example, Andromeda is actually falling towards us, not going away from us in the Hubble expansion. However, these motions are meant to be local, so if you average over a large enough scale they should disappear, and you should then converge to the frame in which the universe looks isotropic. The original standard model of cosmology, which assumed exact isotropy and homogeneity, has been, if you like, improved to take into account that the present-day universe is inhomogeneous - but the statement is that this is only on small scales. If I average over scales bigger than 100 megaparsecs - and a megaparsec is roughly the distance to Andromeda, actually 0.8 megaparsecs - so over a scale like a hundred times bigger than the typical inter-galaxy separation, then I should arrive at that idealized framework which is the usually assumed theory. So that's an assumption that people made. However, we wanted to test this by looking at the anisotropy in similarly distant astronomical sources. There are radio sources at high redshift which have been catalogued from the Very Large Array in New Mexico, and it is known that if you move with respect to a uniform isotropic distribution of sources, the first effect you should see is called aberration - just like the aberration of starlight, which was in fact first discovered by Bradley, who was a professor here. You also have Doppler boosting of the frequencies of the light. So if you are looking at a distribution of sources whose number counts depend on the flux, then you have the law for that - all this was computed long ago - and by simply counting how many sources there are in one hemisphere versus the opposite hemisphere, you can work out how fast you're moving with respect to them. When we did this exercise, we found that the velocity was indeed in the same direction as the CMB dipole, but it was four times larger. Now, actually, we are not the first to find this: a radio astronomer called Singal had already said this some years earlier, but he had not been taken seriously, because he was looking at a catalogue made from one point on the Earth - so it was not full-sky - and there was a concern that very nearby objects can give you an artificial dipole on the sky which does not reflect the effects I mentioned earlier.
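The hemisphere-counting argument rests on the standard Ellis & Baldwin (1984) expectation for the kinematic dipole in source counts, D = [2 + x(1 + α)]·v/c, for integral counts N(>S) ∝ S^(-x) and spectral index α. A sketch, with illustrative (commonly assumed, not measured here) values of x and α:

```python
# Ellis & Baldwin (1984) kinematic dipole expectation for radio source counts:
#   D = [2 + x * (1 + alpha)] * v / c
# x is the slope of the integral counts N(>S) ~ S^-x, alpha the spectral index.
# x = 1.0 and alpha = 0.75 are illustrative textbook values, not fitted numbers.
C_KM_S = 299_792.458  # speed of light in km/s

def kinematic_dipole(v_km_s, x=1.0, alpha=0.75):
    """Expected counts dipole amplitude for an observer moving at v_km_s."""
    return (2 + x * (1 + alpha)) * v_km_s / C_KM_S

def velocity_from_dipole(D, x=1.0, alpha=0.75):
    """Invert the relation: velocity implied by an observed dipole amplitude D."""
    return D * C_KM_S / (2 + x * (1 + alpha))

D_cmb = kinematic_dipole(370.0)   # expectation if the counts share the CMB dipole velocity
```

A measured dipole four times D_cmb, fed through `velocity_from_dipole`, implies a velocity four times the CMB-inferred 370 km/s - which is the discrepancy being described.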

ctd. 3.

SH: So when you talk about these radio sources being at high redshift, just what do you mean by high?

SS: I mean something more than one. The truth is that most of their redshifts have not been measured - they're just point light sources on the sky - so they are taken to be at high redshift. But we in fact did something novel which had not been done earlier: we cross-correlated that catalogue of radio sources with a catalogue of objects measured in the so-called 2MASS survey. These are mainly nearby, infrared-emitting galaxies, and anything that was in common between the two catalogues we threw out. We therefore ensured that the objects we are looking at are actually distant, and we could remove the possibility that, just by chance, there happens to be a radio source quite close to us, which would then give us an artificial so-called clustering dipole - we are interested in the kinematic dipole. So we did this analysis, we obviously satisfied the referees that we had done it properly, we removed the Galactic plane and other such possible sources of contamination, and we confirmed Singal's result. Now, we had in our catalogue something like eight hundred thousand galaxies, because we had to throw quite a few out in order to make sure that the catalogue was clean; but we had also combined it with a catalogue made from Australia, to cover the missing part of the sky, and in total it came to a bit less than a million. I say that because if we then calculate, by Monte Carlo techniques, what the odds are of finding a dipole as high as we found purely by chance, by fluctuations - well, I could express that in different ways, but in terms of sigmas, which is something most physicists understand, it's actually less than three sigma. So in other words, although it's a very striking result, it's not conclusive, for the same reason I gave earlier about the supernova data. We need ten or a hundred times that number of sources to do this job properly. The good news is that this will happen, because the Square Kilometre Array, which is currently under construction, will be able to give us a much, much bigger catalogue of radio sources, and indeed this is one of the tasks they will carry out.

SH: But even in the slimmed-down version, I read, they have some issues?

SS: That's true, there are issues - in particular Germany has, I believe, dropped out of the Square Kilometre Array. But Dominik Schwarz in Bielefeld, who is involved with the SKA and who is very aware of this result - he had also independently looked into this radio source dipole - has been pressing for this to be a flagship project for the SKA, because it's a simple thing to do and it's very model-independent. Of course, in practice you have to allow for all kinds of biases and so on, which we do, but it's easy to understand intuitively what is going on.

SH: So what would it mean if the velocity that you get from that kind of measurement is four times larger than the other one?

SS: It would mean that the CMB dipole is not kinematic in origin.
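The Monte Carlo estimate he describes - how often an isotropic sky would produce a dipole as large as observed, purely by fluctuation - can be sketched as follows. The source count, trial count, and trial dipole amplitude here are illustrative toy numbers, far smaller than the real catalogues:

```python
# Monte Carlo of the "chance dipole": for n_sources placed isotropically on
# the sky, how large a dipole amplitude do random fluctuations produce?
import numpy as np

rng = np.random.default_rng(42)

def random_dipole_amplitudes(n_sources, n_trials):
    # Isotropic unit vectors: 3D Gaussian draws, normalised to the sphere.
    v = rng.normal(size=(n_trials, n_sources, 3))
    v /= np.linalg.norm(v, axis=2, keepdims=True)
    # Simple dipole estimator: (3/N) * |sum of unit vectors|.
    return 3 * np.linalg.norm(v.sum(axis=1), axis=1) / n_sources

amps = random_dipole_amplitudes(n_sources=2000, n_trials=500)
# Fraction of isotropic skies whose chance dipole exceeds a trial value of 0.1:
p_chance = float(np.mean(amps >= 0.1))
```

With realistic catalogue sizes (hundreds of thousands of sources) the chance dipole shrinks as 1/sqrt(N), which is exactly why a ten- or hundred-fold bigger SKA catalogue would turn a sub-3-sigma hint into a decisive measurement.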

ctd. 4.

SH: Uh-huh, right. So in other words, all we see is a dipole; the interpretation is based on a model.

SS: And if the CMB dipole is not entirely kinematical in origin, then it affects the usual analysis of cosmological data, which always assumes that by making a special-relativistic transformation we can boost to the frame in which the CMB is isotropic - and therefore the universe is isotropic, and we can use the same equations of Friedmann-Lemaître-Robertson-Walker cosmology which were written down nearly 100 years ago. So this is rather important. Now, when we found this effect, we also, in the process, established something essentially by doing tomography. What we did was to take a catalogue of galaxies which had their redshifts measured and which harbour supernovae - so actually we are grateful to the supernova discovery for having provided the resources necessary to get more data, because that's important. People keep talking of an avalanche of data, but if you actually start looking closely at it, there isn't really as much data as one would like; cosmology's data is nothing in comparison to particle physics. And the other difference is that in particle physics, if you don't have sufficient data, you will soon have it - you don't have to do calculations of the look-elsewhere effect and so on, you just get more data, and then you know whether that little nascent peak is actually the Higgs or not. In cosmology it takes much longer to get the necessary data. Anyway, to come back to what I was saying: we had established, by doing tomography of the local Hubble flow, that there is actually a dipole in the supernovae themselves. This was in an earlier catalogue called Union 2, which had fewer supernova host galaxies in it, but that was enough to establish that there is in fact a dipole in the supernova distribution - again in the same direction, though much more uncertain: within about 30 degrees of the CMB dipole direction. What we could establish, however, was that this flow - or whatever was causing the dipole - extended much beyond a hundred megaparsecs; it in fact went out to the so-called Shapley supercluster, which is at something like 260 megaparsecs. And subsequently another group, the Nearby Supernova Factory, who are professional astronomers - in fact led by Saul Perlmutter - did the same analysis on their data, and they showed that the flow in fact extended possibly even deeper.

SH: So, to tie this back together with what you said earlier: it means that this assumption that we're converging to the cosmological rest frame at something like a hundred megaparsecs is just wrong?

SS: Yes, it's definitely beyond 100 megaparsecs. So this is all rather puzzling, but what it means is that we are in this peculiar flow, this non-Hubble flow, and that it extends out much further. Now, this is a very tricky measurement, because in order to do it you have to have measures of distance independent of the redshift, and measuring distances is the hardest thing in cosmology - in astronomy in general. The way this has been done is by using other empirical properties of galaxies. There is the Tully-Fisher relation, which is an empirical correlation between the speed with which spiral galaxies rotate and their luminosity.

ctd. 5.

SS: There is similarly something for elliptical galaxies, called the fundamental plane, and using these techniques you can measure distances, but not very precisely - perhaps to about 10-15 percent; that's about the level of accuracy. But recently a survey done from Australia, the so-called Six-degree Field Galaxy Redshift Survey, has measured the peculiar velocities of 11,000 galaxies, which is really the biggest sample to date. Now, I'm not sure the authors would claim that this is the last word, because it is still a small patch of the sky, and you really need to do a bigger sky survey - to get more data - to confirm this conclusion. But what they are showing is that our initial assessment, that the flow extends deeper than expected, is in fact correct. Their error bars are a lot smaller, and clearly discrepant with the usual expectation - which, in the standard model of cosmology, is that structure grows, just through gravity, from the small fluctuations that we see imprinted on the cosmic microwave background. So you can compute it using linear perturbation theory, and the expectation then is that you do have structures today which have gone nonlinear - you see galaxies and clusters and superclusters and so on - but when you average over scales bigger than 100 megaparsecs, you should be able to recover the underlying simple model. That is the basic presumption. Now, all these things I'm talking about make one question these presumptions. We don't really know why we are flowing at these speeds, or why the flow extends that deep. These are issues that we can talk about separately, but they immediately impact on how the data is actually treated, or processed, in order to draw cosmological conclusions. And here we find that the supernova host galaxies also have peculiar velocities: of the ones in the JLA catalogue - there are seven hundred and forty of them - about three-quarters are within this bulk flow.

SH: So, to briefly summarize: you re-analyzed the supernova data without this assumption of convergence to the cosmological average at 100 megaparsecs?

SS: That's right. We decided to go back to the analysis that had actually been done in the first papers, which was simply looking at what is measured in our frame - the heliocentric frame. Obviously you make corrections for the fact that the Earth is going round the Sun and all that, but it's the heliocentric frame. And we undid the corrections that had been made for peculiar velocities, because we found them to have been done using rather out-of-date flow models, and also in an inconsistent manner - in fact unphysically, and that's a technical term as I use it here: if I tell you that the covariance matrix for the peculiar velocities had large negative terms in the off-diagonal elements, that makes no sense whatsoever. It has to be physical - it has to be positive definite. So essentially we dug down into what had been done, and found that we were not satisfied that it had been done in an appropriate manner.
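The "unphysical covariance" complaint has a mechanical check behind it: a covariance matrix must be symmetric and positive semi-definite (individual off-diagonal entries can legitimately be negative, but no eigenvalue may be). A minimal version of such a check, on toy matrices:

```python
# Sanity check one might apply to a published peculiar-velocity covariance:
# symmetry plus non-negative eigenvalues (positive semi-definiteness).
import numpy as np

def is_positive_semidefinite(C, tol=1e-10):
    C = np.asarray(C, dtype=float)
    if not np.allclose(C, C.T):          # a covariance must be symmetric
        return False
    return np.linalg.eigvalsh(C).min() >= -tol

good = np.array([[1.0, 0.3],
                 [0.3, 1.0]])            # valid: eigenvalues 0.7 and 1.3
bad = np.array([[1.0, -1.5],
                [-1.5, 1.0]])            # invalid: eigenvalue -0.5 < 0
```

A matrix like `bad` could assign a negative variance to some linear combination of velocities, which is what "unphysical" means here.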

ctd. 6.

SH: Okay, so you got the data and actually had to undo the corrections that had already been done.

SS: Precisely. So we put ourselves in the position of: what if we had this big data set, but we wind back twenty years and do the same analysis, examining whether there is acceleration in the expansion rate - but now dropping the assumption that it's isotropic on the sky? In other words, we drop the assumption that the directions of the supernovae don't matter; we take the directions into account. So when we looked for a direction dependence in the inferred acceleration, we found, to our great surprise, that it is almost entirely a dipole: the universe is accelerating locally in one direction and decelerating in the opposite direction, and this direction is pretty close to the CMB dipole - it's within twenty-three degrees. And I want to emphasize that I should not use the word "universe" here. We are talking about what we actually observe and what we infer from that. This does not mean that the universe has an axis; it means that the sample of supernovae observed so far, if analyzed without any presumptions, shows this dipole. What one makes of this is another matter that I'm not commenting on here. I am making this point simply because a lot of people immediately get very concerned - it's kind of hard to imagine how the universe could have some axis, how one could have a directionality in the metric. We are certainly not talking about that; we are simply talking about what we observe. It's very much empirical. When we found this, we also, of course, pointed out that the monopole in the acceleration - that is to say, the isotropic component - was consistent with being zero: it was there at only 1.4 sigma. Now, the interesting thing is that if you were to ascribe this acceleration to a cosmological constant, to this vacuum energy, it would definitely have to be isotropic. So in other words, the evidence for an isotropic acceleration is essentially non-existent. And it is this that has made a major impact, not just on cosmology but also on fundamental physics. For example, you are aware that this whole recent controversy about the swampland in string theory is based on whether we have a cosmological constant in the background or not - and that is predicated on the acceptance, by the string theory community, that the supernovae have shown that there is an isotropic component of the acceleration.

SH: Yes - well, more specifically, that the constant is actually positive, as opposed to being negative, which is what string theorists would be happy with.

SS: So actually, as I tell my string theory friends: since they have in any case to uplift to a positive cosmological constant, they could just as well uplift to zero - because string theory has not addressed the cosmological constant problem per se, which we can come to later. And I want to emphasize that interpreting this as due to a cosmological constant is simply based on measuring observables, such as the luminosity distance - or, in other contexts, the angular diameter distance - and then interpreting them as due to vacuum energy or whatever. This is not what is written in the sky; these are all interpretations. So we are trying to make a clear distinction between what is measured and the interpretation.
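The monopole-plus-dipole decomposition of the inferred acceleration can be illustrated with a linear least-squares toy fit of q(n̂) = q_m + q_d·cos θ over random sky directions. The sample size matches JLA's 740, but the q values and noise level below are invented purely for illustration, not taken from the published fit:

```python
# Toy monopole + dipole fit: q(n) = q_m + q_d * cos(theta), where theta is
# the angle between a supernova's direction n and a fixed dipole axis.
import numpy as np

rng = np.random.default_rng(7)
n = 740                                   # same size as the JLA sample
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # random sky directions
axis = np.array([0.0, 0.0, 1.0])          # assumed dipole axis (toy)
cos_t = dirs @ axis

q_m_true, q_d_true = -0.01, -8.0          # hypothetical: tiny monopole, big dipole
q_obs = q_m_true + q_d_true * cos_t + rng.normal(0.0, 2.0, n)

# Linear least squares for [q_m, q_d]: design matrix with a constant column
# (monopole) and a cos(theta) column (dipole).
A = np.column_stack([np.ones(n), cos_t])
(q_m_hat, q_d_hat), *_ = np.linalg.lstsq(A, q_obs, rcond=None)
```

The point of the decomposition is exactly the one made in the interview: the dipole component can be recovered at high significance while the monopole - the part a cosmological constant would have to produce - stays consistent with zero.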

ctd. 7.

SH: So the result of your analysis is basically that we don't need dark energy to explain the observations that come from the supernovae. What about all the other evidence for dark energy that people are so proud of?

SS: "You need that" is the first thing to say. When we published this paper, for example, Adam Riess, who is one of the Nobel laureates, criticized us publicly, saying there is a whole body of evidence. To that I have several things to say. First of all, I think it's a cultural issue. As you're well aware, the standard model of particle physics is confirmed by a very large number of pieces of evidence - there is the Gfitter plot, which shows at least 42 different data points, and they all agree perfectly with the standard model. Nevertheless, most of the community is engaged in an attempt to find some way beyond the standard model, by finding one piece of data that doesn't fit - hence the interest in, for example, the anomalous magnetic moment of the muon, or the anomalies that have been revealed in B decays, and so forth. Because that's the only way we know we are going to make progress, right? We are not going to be satisfied with just having a model that fits all the data, because that does not allow any progress. However, in the cosmology community I find that, for whatever reason, that is the status quo: they do not like the standard model to be questioned. That's the first thing. The second point is that all this confirmation of the standard model, which was initially triggered by this discovery of the acceleration, is actually somewhat superficial, because most of it is done assuming the standard model of cosmology. To give an explicit example: there is a phenomenon called baryon acoustic oscillations. These are the analog of the peaks that you see in the cosmic microwave background anisotropy, and they were actually remarked on long ago - in fact, when I first heard about them they were called Sakharov oscillations, and they would have been very pronounced, because at the time they didn't know about dark matter: in a purely baryonic universe these features would have been very prominent. In today's universe they should be extremely subtle. The statement is that if I pick a galaxy and draw a shell around it at a distance of order 150 megaparsecs, there should be a one percent overdensity of galaxies in that shell compared to the average. So you're looking for a one percent effect, and this we can look for in the so-called two-point correlation function, or in the power spectrum, and so on. Now, it is easy to figure out that in order to measure such an effect at the five sigma I talked about earlier, you need to have several million galaxy redshifts, measured precisely. Precisely means spectroscopically. There is a cheaper way to measure redshifts, called photometric redshifts, where you just look at the galaxies through different bands - but if you don't have enough of those bands, and to date we don't, the measurement of the redshift has large uncertainties.
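The counting argument - a one percent excess in shell counts against Poisson noise - can be made explicit. This back-of-envelope ignores cosmic variance, clustering noise, and survey geometry, all of which only raise the requirement, which is why the realistic answer is "several million" rather than the bare Poisson number:

```python
# Back-of-envelope: a fractional excess f detected against Poisson noise
# 1/sqrt(N) reaches S sigma when f * sqrt(N) >= S, i.e. N >= (S / f)^2.
import math

def counts_needed(fractional_excess, n_sigma):
    """Minimum Poisson counts to detect a fractional excess at n_sigma."""
    return math.ceil((n_sigma / fractional_excess) ** 2)

n_5sigma = counts_needed(0.01, 5)   # shell counts needed for 5 sigma on a 1% effect
n_3sigma = counts_needed(0.01, 3)   # and for 3 sigma
```

For a 1% effect, 5 sigma already demands a quarter of a million counts in the shell under this idealized noise model; realistic noise pushes the total survey size into the millions of spectroscopic redshifts.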

ctd. 8.

SS: So that's not good enough. You might ask: do we have the necessary samples? And the answer is no. The first claim of the baryon acoustic oscillation peak was made using a sample of something like 40,000 so-called luminous red galaxies. How, you might ask, could you detect such a thing when you don't have the numbers? The answer is what I told you earlier: you use the ΛCDM template. So you're actually answering a different question. You're not asking "is there a peak somewhere, and what are the odds of a peak arising from random fluctuations?" - you're asking "is there a peak at the expected position for the standard cosmology?" And in fact there have been analyses where no peak was found, even with more data than the first one. If that sort of thing happened in particle physics - if you see an initial little bump at 125 GeV, then take three times more data and its significance has dropped - you would say it was a fluctuation. But cosmologists know this, and they acknowledge that in any given sample there is only a 10 percent chance of seeing the peak. So you have a rather uncomfortable situation where, of course, you only publish something if you see the peak, not if you don't. So there is a possibility of confirmation bias - and mind you, I'm not trying to criticize any particular analysis. All I'd say is that the same data that is considered to be confirmation of the standard model using baryon acoustic oscillations is also consistent with models in which there is no acceleration, and this has been shown by Alain Blanchard and his collaborators. So that tells you about the statistical power of the data.

SH: That would be if you only look at the baryon acoustic oscillations. But then usually the argument is that if you also take into account the supernova data, it speaks very strongly for ΛCDM.

SS: Indeed - the baryon acoustic oscillations alone won't do it, correct. So it's a matter of what's called concordance, of combining the datasets, and in particular the cosmic microwave background also comes in there, because the position of the first acoustic peak provides a good measure of the global spatial curvature, which is close to zero. Therefore you infer that there must be something to make up unity, because you're using the so-called cosmic sum rule, in which matter plus curvature plus the cosmological constant add up to one. So if you plot all these measures on a two-dimensional plot of the cosmological constant versus the matter density, they pick out the standard cosmological model. Now, I have two comments about this. The cosmic microwave background fluctuations by themselves are insensitive to dark energy, because they decoupled at a time when the universe was a thousand times smaller, and therefore the cosmological constant was less important relative to matter by a factor of a thousand cubed. The way you infer dark energy from the cosmic microwave background is primarily by using this sum rule; therefore the conclusion is only as valid as the sum rule, and the sum rule, I would remind you, was constructed for the simplest possible cosmological model, which has only these three components.
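The "cosmic sum rule" referred to here is just Ω_m + Ω_k + Ω_Λ = 1 for an FLRW universe with those three components. As a sketch of the point being made, the inferred Ω_Λ shifts one-for-one with anything the three-term budget omits (the `omega_other` parameter below is a hypothetical placeholder, not a component of any standard analysis):

```python
# Cosmic sum rule for the simplest FLRW model:
#   Omega_m + Omega_k + Omega_Lambda = 1
# Inferring Omega_Lambda from CMB curvature this way is only as good as the
# assumption that these three are the *only* components.
def omega_lambda_from_sum_rule(omega_m, omega_k=0.0, omega_other=0.0):
    # omega_other stands in for any hypothetical component the sum rule omits
    return 1.0 - omega_m - omega_k - omega_other

ol_standard = omega_lambda_from_sum_rule(0.3)           # ~0.7: the usual inference
ol_extra = omega_lambda_from_sum_rule(0.3, 0.0, 0.2)    # ~0.5 if something else contributes
```

This is the sense in which the CMB "evidence for Λ" is an inference through the sum rule rather than a direct measurement: flat geometry plus a matter census fixes a remainder, and the remainder is labeled Λ only within the simplest model.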

ctd. 9.

SS: So whereas I certainly agree that within that simplified model there appears to be concordance between different measures, my remark is that the concordance might be somewhat manufactured - because, first of all, there is some confirmation bias, as I mentioned earlier, and one knows that many other techniques which by themselves don't have the statistical power to give you a good measurement are nevertheless consistent with the standard model. It comes to mind that Rupert Croft, who is a cosmologist, did an analysis with a colleague of all the measurements of Λ that had been reported in the literature between the discovery in 1998 and something like 2011. If I remember correctly, they said that of the 28 measurements that had been reported, only two were outside the 1 sigma bound of the WMAP value - WMAP was supposed to have made the most precise measurement (it was actually an inference). But by chance you would expect about a third of these to be outside the 1 sigma bound, and in fact only two were. They said that this highlights the importance of what would be called blind analysis in cosmology, and I think many cosmologists agree that one should avoid any possibility of confirmation bias by doing blind analyses. That has not been the case to date, but the good news is that there is data on the horizon - for example the recently commissioned Dark Energy Spectroscopic Instrument, which will measure millions of redshifts using optical fibers and so on. And this is also important because there are direct tests of Λ, or dark energy, which will actually measure whether it has the negative pressure that is its characteristic. What that means is that the negative pressure stops, or slows down, the formation of structure - it fights gravity. Now, normally the cosmic microwave background fluctuations are unaffected by what happens after the radiation decouples, but there is an exception, called the late integrated Sachs-Wolfe effect: as the photons propagate towards us they pass through inhomogeneities - both overdensities and underdensities - and to linear order the effects cancel out. However, if Λ has come to dominate the universe - and that would have been recently, within a redshift of 1 - then for photons crossing the structures forming at that time, the redshift and the blueshift as they fall in and climb out don't cancel, because in the interim the gravitational potential has changed. This should reflect itself in a correlation between large-scale structure and the cosmic microwave background. That is the late ISW effect, and if you can detect it, that would be a very interesting direct test of Λ - or of something else that can do this, like curvature. Now, there was an interesting paper by the cosmologist Niayesh Afshordi many years ago, and he pointed out that this is a very interesting test, but you need 10 million redshifts to see it at 5 sigma. I've already told you that we have less than a tenth of that number. Yet there are any number of papers that claim to have seen this effect - though the significances of the detections are all below three sigma; sometimes they add together different attempts and combine the covariances in a rather risky manner, I would say, to try to get it above 4 sigma.
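The Croft result quoted above - 28 reported Λ measurements with only two outside WMAP's 1 sigma band, where roughly a third would be expected outside by chance - is easy to quantify with a binomial tail. The 0.317 below is the Gaussian probability of falling outside ±1 sigma; 28 and 2 are the figures quoted in the interview, treated here as independent measurements, which real literature values are not:

```python
# How surprising is it that only 2 of 28 measurements fall outside the
# 1-sigma band, if each independently had a ~31.7% chance of doing so?
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

p_outside = 0.317            # Gaussian probability of |deviation| > 1 sigma
expected = 28 * p_outside    # ~8.9 measurements expected outside the band
p_two_or_fewer = binom_cdf(2, 28, p_outside)
```

Under these (idealized) independence assumptions, seeing two or fewer outliers is a well-under-one-percent outcome, which is the quantitative basis for suspecting confirmation bias and arguing for blind analyses.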

ctd. 10.

SH: But really, what we need is more data.

SS: Yeah, that's right.

SH: I remember seeing reports about some of these papers.

SS: So the bottom line, in my opinion, is that this concordance is partly manufactured and, more to the point, rests on the underlying assumption of a cosmological model that had its roots a century ago and which is very difficult to modify - because ultimately, as you know, exact solutions of Einstein's equations are very hard to find except in highly symmetric situations. This model was constructed when there was no data, and it is quite understandable that Einstein and others assumed a maximally symmetric situation to formulate it. It's very hard to deviate from that, because then the mathematics gets very complicated, and it becomes harder to confront any deviations from the model with the data - you can only do so in simple situations. For example, there is the Lemaître-Tolman-Bondi model, which drops the assumption of homogeneity but preserves isotropy - it just allows for a radial variation - and that in itself is enough to get rid of the evidence for acceleration from supernovae. So if you ask me, could we be in a large void - and there is astronomical evidence for that - then this could all be true. But then one would be unable to give you a worked-out model for how the fluctuations grew, from decoupling to the structures that we see today, if indeed we are locally in an LTB universe. The great advantage of the standard model is precisely that by making all points in the universe equivalent - and therefore equally special, in some sense - it simplifies the mathematics incredibly. I recall meeting a well-known mathematical cosmologist, Krasiński, who has written a book on inhomogeneous and anisotropic cosmologies, and I told him I'm so glad I don't have to read his book, because it was really very hard going. But in fact, I'm afraid, one might need to do that.

SH: Mm-hmm. So you also told me that this reanalysis that you did of the supernova data impacts this discussion about the tension in the Hubble rate.

SS: Well, that's right - this is getting into the nitty-gritty. I said that the data was made public by the Joint Light-curve Analysis group, and they did the community a big service. Subsequently, other data has been released, but not in the same detail. There is something called the Pantheon compilation, which added to the JLA catalogue another set of supernovae found in the so-called Pan-STARRS survey done from Hawaii, plus some others, and that increased the number to above a thousand. Then there are other catalogues - from the Dark Energy Survey, from another survey done by people from Harvard, and so on. But most of these are not provided in the detail that we require: as I mentioned earlier, we needed to undo the effects of the peculiar velocity corrections that had been made, and these are not provided separately for those catalogues, so we cannot do the exercise. And I do find it a bit unfair that we are then criticized, as Adam Riess did, for not using, quote, "the latest data" - which makes an impression on those who are not expert in the subject.

ctd. 11.

SS: We would be very happy to use the latest data if it were made available. So this leads to a situation where there are different catalogues - and when you look at the existing catalogues, my collaborator Rameez found that there are discrepancies between, for example, the JLA and the Pantheon catalogues: for lots of supernovae, the redshifts in the Pantheon catalogue differ from those in the JLA catalogue at a level far bigger than the quoted uncertainty of the measurement.

SH: Yeah, I looked at the paper - I was quite shocked, I have to say.

SS: And of course, in the particle physics community we are used to the idea that if you make some data public, you are responsible for that data, and you entertain any queries that people might have about using it. However, Rameez reports that he has been unsuccessful in getting satisfactory answers as to why this data is discrepant. Now, the Hubble tension that you referred to is based on the claim that it is possible to measure the Hubble constant - as it is called, although it is of course not a constant - locally, to pretty high precision: initially it was 3 percent, now the claim is that you can get to almost 1 percent. However, even when the first local measurement of the Hubble constant was done as a major project - it was called the Hubble Key Project, one of the flagship programs of the Hubble Space Telescope, and Wendy Freedman led the collaboration - they had only about, I think, 50 to 60 measurements of objects, on the basis of which they determined the Hubble constant within about 30 megaparsecs of us, i.e. relatively locally. This was looked at later by McClure and Dyer, I think, and they pointed out that if you looked across the sky, there was something of order a seven percent variation in the Hubble constant in different directions. Now, in a sense this would not be unexpected if you really are living in an inhomogeneous universe, where the rate of expansion reflects the local matter density. The point I'm simply making is that if you want to claim a one percent measurement, you have to explain how you account for those seven percent directional variations, and how you correct for them. Perhaps part of it can be corrected using models of peculiar flows and so forth, but you need to do it to that precision.

SH: I'm trying to calculate in my head how the seven percent relates to the observed tension with the Planck measurement - would it just make it go away?

SS: It would be within the uncertainty, absolutely. So our point is simply that it is premature to claim a Hubble tension, because there are still systematic uncertainties in the supernova data. What has now happened is that those local distance-ladder measurements have been extended using the supernova data - but depending on which supernova catalogue is used, you get a different answer for the inferred Hubble parameter. And what was pointed out is that the inferred Hubble parameter jumps by enough to remove this so-called tension, which had even been called a crisis for cosmology.
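The back-of-the-envelope comparison being done in Hossenfelder's head can be written out. The H0 values below are commonly quoted round numbers, used here only to set the scale, not results from this interview:

```python
# Compare a ~7% directional variation in locally measured H0 with the
# quoted local-vs-CMB discrepancy. Values are illustrative round numbers.
h0_local = 73.0   # km/s/Mpc, typical local distance-ladder value
h0_cmb = 67.4     # km/s/Mpc, typical CMB-inferred value

gap = h0_local - h0_cmb                          # ~5.6 km/s/Mpc
tension_fraction = gap / h0_cmb                  # ~8% relative discrepancy
directional_scatter = 0.07 * h0_local            # ~5.1 km/s/Mpc of sky variation
```

On these numbers, a seven percent directional variation is about the same size as the entire local-vs-CMB gap, which is the arithmetic behind "it would be within the uncertainty."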

ctd.12. So this to me reflects a rather disturbing fact, which is that the analysis of the data is rather specialized. Very few people know how to handle these catalogs, to go into the details of how the error corrections have been done, the covariances and so forth. It's largely a black box unless you spend time doing this. And somewhat to my surprise (it is understandable, of course) the particle physics community, which has a great interest in the outcome, is nevertheless not really able to assess the evidence to this degree of detail. But I am a bit surprised that even in the astronomical community there has been far less, in my opinion, critical assessment of the procedures being used than there should have been. And the fact is that these discrepancies are on record; we have not had any satisfactory response, and the situation continues. SH: One gets the impression that what could have happened there is that there was some rather basic mistake in some of the data analysis, and then people built stuff on top of it, right? SS: It could well be. I mean, one would rather hear it from the people who actually put the data there, because they are the people who really know. But certainly there are some trivial errors: on another webpage where the Pantheon catalog is listed, we have pointed out that the columns which show the redshifts in the heliocentric frame and the ones in the CMB frame are actually identical. That was clearly some kind of a typo; it has not been corrected, and over a year later it's still there. And our concern really is that meanwhile a lot of people pick up these catalogs uncritically and use them to publish papers which then say, well, the universe is exactly isotropic, or whatever, not realizing that the datasets that they're using do need closer attention. So a lot of misinformation is being propagated, and this is not very healthy. SH: So it's a big mystery that, I guess, will be with us a little longer. I think that's a good place to wrap up, so I would like to thank you very much for your time and for this interesting conversation, and thank you everybody for watching. See you next week.

If I may say so, the transcript is good evidence for why we do science via paper (and databases, etc), not video.

I have it on my TBD list to track down all the papers mentioned (at least those directly so), and in the process will surely be making a lot of edits (e.g. "our hydrometer" is actually "Andromeda", a.k.a M31, the other large spiral galaxy in the Local Group). Won't get done today and likely not by tomorrow evening either.

Subir Sarkar and others have made very interesting new measurements - the comments here should first of all discuss that and what it could mean for cosmology. If no dark energy fights gravity, this could really change cosmology. Many scientists are too critical of new measurements and new theories - at the same time, little progress has been made in the last decades.

It's important to be very clear what Sarkar and his team have done. As Sabine says in the video intro, he's a Professor of Theoretical Physics at the University of Oxford. Although it's not always 100% clear in the video, he did not claim to have made "new measurements"; hardly surprising, because neither he nor his team did so.

What he/they did do is take data from publicly available sources and (re-)analyze it. I haven't dug into this much yet, but it seems that he/they didn't even work from the original raw data, but took a dataset that had already heavily processed the original raw data and played with it on a computer (using different models).

In science, generally, new knowledge often comes from looking at theories and the data behind them. In this case the data from the Nobel Prize winning paper are "unfiltered" and, as I read it, the questions Subir Sarkar and others raise are: Is there enough data? Can the data be interpreted otherwise? And also the very important question: Were the Nobel Prize winners too much influenced by the "fact" that dark energy exists?

The best evidence for dark energy is probably the age of the Universe. If the cosmological constant does not exist, either the ages of the oldest objects in the Universe have been overestimated by 2 or 3 billion years, or the density parameter Omega must be much less than 0.1 rather than 0.3, or the Hubble constant must be less than 60, or some combination of these, or the Universe is not well described by a Friedmann model. Extraordinary claims demand extraordinary evidence. Even assuming that the supernova data indicate that there is no dark energy (which is different than claiming that they don't rule out a positive cosmological constant), any alternative theory has to explain why all of the other evidence is wrong.
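To make the age argument concrete, here is a small sketch (my own illustration, not taken from any of the papers under discussion; H0 = 70 km/s/Mpc is just an assumed round value) comparing the Hubble time 1/H0 with the closed-form ages of a flat LambdaCDM model and a matter-only Einstein-de Sitter model:

```python
import math

def hubble_time_gyr(h0):
    """Hubble time 1/H0 in Gyr, for H0 given in km/s/Mpc."""
    km_per_mpc = 3.0857e19   # kilometres per megaparsec
    s_per_gyr = 3.156e16     # seconds per gigayear
    return km_per_mpc / h0 / s_per_gyr

def age_flat_lcdm_gyr(h0, omega_m):
    """Closed-form age of a flat LambdaCDM universe (Omega_Lambda = 1 - Omega_m)."""
    omega_l = 1.0 - omega_m
    return (2.0 / 3.0) * hubble_time_gyr(h0) / math.sqrt(omega_l) \
        * math.asinh(math.sqrt(omega_l / omega_m))

h0 = 70.0  # assumed value, km/s/Mpc
print(f"Hubble time 1/H0:          {hubble_time_gyr(h0):.1f} Gyr")            # -> 14.0 Gyr
print(f"flat LCDM (Omega_m = 0.3): {age_flat_lcdm_gyr(h0, 0.3):.1f} Gyr")     # -> 13.5 Gyr
print(f"Einstein-de Sitter:        {(2.0 / 3.0) * hubble_time_gyr(h0):.1f} Gyr")  # -> 9.3 Gyr
```

For reference, an open Omega = 0.3 model without a cosmological constant comes out at roughly 11 Gyr, which is the 2 or 3 billion year shortfall relative to LambdaCDM mentioned above.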

No one is talking about "alternative theories". He is simply pointing out that the data analysis does not support the claim that was made based on it. I find it stunning, to say the least, that you basically reject the result of a study just because you do not like the conclusions. This is not how science works.

I notice that from the beginning of the discussion about this analysis (in Sabine's previous posts), you have been criticizing our work from a status quo POV. Consequently I'd like to point out the following:

a) Your above argument is based on the presumption that the simplest, maximally symmetric solution to the Einstein Field Equations (FLRW) continues to hold in the real late-time Universe, where all of matter has collapsed into structures. The vast majority of General Relativists disagree. See, e.g., this exciting new paper: https://arxiv.org/abs/2002.10831 . You may find Table 1 very illuminating.

b) It is an indisputable fact as of today that the local Universe out to at least 300 Mpc has a directionally coherent flow. How exactly is this described by a Friedmann model? In our paper, we point out that we are motivated by the covariant predictions of Tsagas https://arxiv.org/abs/1507.04266

c) That the real Universe cannot be described exactly by a Friedmann model is not an extraordinary claim. All you need to do is look at the sky and see that matter and energy are not a smooth isotropic homogeneous fluid.

d) As Prof Sarkar mentions in the above video, the way the coherent flow of the local Universe has been handled in SN1a data is unphysical. This is because the SN1a observables have been corrected both for the motion of the heliocentric frame w.r.t. the CMB rest frame and for the motion of the SNe w.r.t. the same frame. However, we have never seen the scale at which the latter converges to zero. And since the flow models used for these corrections provide definitive evidence for a residual flow, what these corrections do is induce an arbitrary discontinuity within the data, of about 0.07 mag in the case of the JLA data (see Figure 2 of arXiv:1808.04597 and the related discussion). Note that the evidence for acceleration itself is just a ~0.15 mag dimming of high-redshift SNe in comparison with the low-redshift ones. Do you think this is the correct way to analyze data?
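To put those magnitudes in context, here is a rough back-of-the-envelope sketch (my own calculation under textbook FRW assumptions, not a reproduction of the JLA analysis; the empty Milne model stands in for a non-accelerating universe, whereas the 1998 papers compared against matter-only models) of the distance-modulus difference between flat LambdaCDM and an empty universe at z = 0.5:

```python
import math

def E(z, om, ol):
    """Dimensionless Hubble rate H(z)/H0 for a (possibly curved) FRW model."""
    ok = 1.0 - om - ol
    return math.sqrt(om * (1 + z)**3 + ok * (1 + z)**2 + ol)

def comoving_dist(z, om, ol, n=1000):
    """Comoving distance in units of c/H0, by simple trapezoid integration."""
    h = z / n
    s = 0.5 * (1.0 / E(0.0, om, ol) + 1.0 / E(z, om, ol))
    s += sum(1.0 / E(i * h, om, ol) for i in range(1, n))
    return s * h

def delta_mu(z):
    """Distance-modulus difference: flat LCDM (Omega_m=0.3) minus empty Milne model."""
    d_lcdm = (1 + z) * comoving_dist(z, 0.3, 0.7)  # flat model: d_L = (1+z) * D_C
    d_milne = 0.5 * ((1 + z)**2 - 1)               # exact empty-universe d_L in c/H0
    return 5 * math.log10(d_lcdm / d_milne)

print(f"Delta mu at z = 0.5: {delta_mu(0.5):.2f} mag")
```

This comes out at roughly 0.12 mag, the same order as the ~0.15 mag dimming cited above; the point is simply that a ~0.07 mag artefact would not be small compared to the signal.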

General relativists like Christos Tsagas, and McClure and Dyer (https://arxiv.org/abs/astro-ph/0703556), have always argued that peculiar velocities, in any covariant sense, should be thought of as differences in the expansion rate of the Universe.

I would be grateful if you could take time out to read and introduce some nuance into your discussion.

@Phillip, "The best evidence for dark energy is probably the age of the Universe."

I'm afraid that's wrong. In universes that are neither accelerating nor decelerating (coasting), the age of the universe is EXACTLY equal to the inverse of Hubble's constant, 1/H0. Using current values of H0, this turns out to be very close to 14 Gyr, just like in LCDM. This is actually quite an extraordinary coincidence, if you think about it. It means that, for such coasting cosmologies, the age of the universe is determined by a single parameter, H0 (and a measurable one!), while for LCDM you need at least 3 (H0, Omega_L and Omega_M), two of which cannot be observed directly. So one might ask, which model actually demands extraordinary evidence? A very recent review of coasting cosmologies was published here: https://doi.org/10.1007/s10509-019-3720-z

"I notice that from the beginning of the discussion about this analysis (in Sabine's previous posts), you have been criticizing our work from a status quo POV."

I haven't criticized the work per se and I hope that it hasn't come across that way. My intention was to point out that the cartoon version of science often reported in the popular media, i.e. "fascinating discovery means all textbooks will have to be rewritten" is usually not the way it works. As I've said before, my main point is that the concordance model rests on many pillars (hence the name) and even if one of them is completely destroyed by your work, then that does not imply that the concordance model is doomed. First, it needs to be shown (not necessarily by you) why essentially all other cosmological tests point to the concordance model. Also, "absence of evidence is not evidence of absence" applies here as well: even if the supernova data do not strongly imply the concordance model, that is a different claim from saying that they rule it out entirely.

"a) Your above argument is based on the presumption that the simplest, maximally symmetric solution to the Einstein Field Equations (FLRW), continue to hold in the real late time Universe where all of matter has collapsed into structures. The vast majority of General Relativists disagree."

No-one claims that. The claim is that the large-scale Universe is well described by a Friedmann model (it is clear that my living room is not). I could (and have elsewhere) cite several papers which back up that claim. If that is the case, then one can use tests such as the age of the Universe to constrain the cosmological parameters. As I'm sure you know, there is a genuine debate in the community as to the importance of backreaction effects on the large-scale properties of the Universe, with many famous names on both sides. I think that it is fair to say that the jury is still out.

"b) It is an indisputable fact as of today that the local Universe out to at least 300 Mpc has a directionally coherent flow. How exactly is this described by a Friedmann model?"

Again, that is a red herring; no-one claims that.

"c) That the real Universe cannot be described exactly by a Friedmann model is not an extraordinary claim. All you need to do is look at the sky and see that matter and energy are not a smooth isotropic homogeneous fluid."

Sure. Again, no-one seriously claims that. The claim is that the large-scale Universe is well enough described by a Friedmann model that high-redshift cosmological tests and global properties of the Universe (such as its age) can be used as reliable cosmological tests.

"d) As Prof Sarkar mentions in the above video, the way the coherent flow of the local Universe has been handled in SN1a data is unphysical. This is because the SN1a observables have been corrected both for the motion of the heliocentric frame w.r.t. the CMB rest frame and for the motion of the SNe w.r.t. the same frame. However, we have never seen the scale at which the latter converges to zero. And since the flow models used for these corrections provide definitive evidence for a residual flow, what these corrections do is induce an arbitrary discontinuity within the data, of about 0.07 mag in the case of the JLA data (see Figure 2 of arXiv:1808.04597 and the related discussion). Note that the evidence for acceleration itself is just a ~0.15 mag dimming of high-redshift SNe in comparison with the low-redshift ones. Do you think this is the correct way to analyze data?"

Again, my point is that even if your analysis is 100 per cent correct it does not automatically follow that the large-scale Universe is not well described by a Friedmann model. That's all I've been trying to claim here (as an antidote to the sensationalistic popular press).

"General relativists like Christos , and McClure and Dyer (https://arxiv.org/abs/astro-ph/0703556 ) have always argued that peculiar velocities in any covariant sense should be thought of as differences in the expansion rate of the Universe."

I don't see how that is relevant here. As one can see from the "expanding space" debate, there can be different ways of looking at things; in the end, that doesn't matter as long as one gets the correct results.

"I would be grateful if you could take time out to read and introduce some nuance into your discussion."

It should be clear that already my discussion is more nuanced than that of some other commentators here (which is independent of who agrees with whom). Again, I haven't criticized your actual work, but merely the jumping to conclusions (not necessarily by you).

"No one is talking about "alternative theories". He is simply pointing out that the data analysis does not support the claim that was made based on it. I find it stunning, to say the least, that you basically reject the result of a study just because you do not like the conclusions. This is not how science works."

As I pointed out in my reply to Mohamed, I don't take a stance on the analysis at all. My goal is to point out that even if it is completely correct then that does not prove that the concordance model is completely wrong, dark energy doesn't exist, etc.

I've always found it strange when people prefer some values of some parameters over others. The Universe is what it is and it is the business of observational cosmology to find them out. I think that one sees the opposite effect, both in many comments here and in popular-science articles: people are convinced that Sarkar's work must be correct because they like the implication some draw from it (i.e. "I always knew that dark energy was bullshit"). It cuts both ways. Of course, unexpected results get more press coverage. Historically, however, most unexpected results have turned out to be wrong.

"I'm afraid that's wrong. In universes that are neither accelerating nor decelerating (coasting), the age of the universe is EXACTLY equal to the inverse of Hubble's constant, 1/H0. Using current values of H0, this turns out to be very close to 14 Gy, just like in LCDM. This is actually quite an extraordinary coincidence, if you think about it. It means that, for such coasting cosmologies, the age of the universe is determined by a single parameter H0 (and a measurable one!), while for LCDM you need at least 3 (H0, Omega_L and Omega_M), two of which cannot be observed directly."

True, but irrelevant, for several reasons. 1. The age of the Universe is the same as in LambdaCDM NOW, but not always. That does appear to be some sort of coincidence, but it is not exact. Check out https://arxiv.org/abs/1001.4795 and https://arxiv.org/abs/1607.00002. 2. A coasting cosmology (otherwise known as the Milne model) has no matter in it (otherwise it would decelerate), so is not a good model for our Universe. 3. One can also not observe the Hubble constant directly, whatever that means. 4. Since everyone agrees that there is a significant amount of matter in our Universe, and there is no debate that Omega is about 0.3, then everything I said still follows; it is irrelevant if some other cosmological model which has the same age but is otherwise nothing like our Universe is possible. 5. There is much more evidence than just the age of the Universe, and coasting cosmologies are definitely ruled out. To sum up, it is an approximate coincidence that the concordance model has (NOW, and only now) an age of about a Hubble time, but this is no more profound than the coincidence that the angular sizes of the Sun and the Moon are the same, or that the brightest stars, planets, and meteors are roughly the same brightness.

In that case you are aware that your arguments (The best evidence for dark energy is probably the age of the Universe .... and following) apply only to an exact Friedmann model and not an averaged Friedmann model.

It would be nice if you started off with arguments that reflect your level of knowledge. Otherwise it looks like you have an agenda to mislead.

"In that case you are aware that your arguments (The best evidence for dark energy is probably the age of the Universe .... and following) apply only to an exact Friedmann model and not an averaged Friedmann model."

Whether an averaged Friedmann model has the same age is not immediately clear. Even the concept of averaging is not well defined. But there is good evidence that the large-scale Universe is well described by a Friedmann model and would have very close to the same age as an ideal one. If you believe that the Universe deviates substantially from an ideal Friedmann model on large scales, then you have to explain why this approximation "just happens" to work so well, leading to the concordance model. At the 2015 conference "Beyond LambdaCDM" in Oslo, George Efstathiou said that if anyone had an alternative model which---setting the bar really low---did nothing more than explain all current data as well as the concordance model, then he would give them a job in Cambridge. I don't think that he has hired anyone as a result.

I have no qualms at all with pointing out mistakes in other people's papers. In fact, about a quarter of my own papers point out mistakes in other papers. But if one piece of a puzzle doesn't seem to fit, that doesn't automatically imply that the whole edifice comes tumbling down. It might, as when Kelvin referred to two small, dark clouds, but usually it doesn't. I agree that the magnitude--redshift relation for type Ia supernovae doesn't provide overwhelming evidence for cosmic acceleration or (a weaker claim) even a positive cosmological constant. But it is a much stronger claim to say that, properly analyzed, they rule it out (as at least some popular descriptions seem to imply), and an even stronger one that the Universe is radically different from FRW on large scales.

"Define "observing". Define "observing directly". Suggest an observation which, even if not realistic to perform, would be a better indication of the existence of dark energy."

Observe and plot the redshift of every galaxy in the universe precisely against time. An upward slope in each graph indicates acceleration. Rather than: make lots of assumptions about various parameters in a model which is only partially hung on observations, and conclude that because acceleration is apparent in this clearly imperfect, mathematically simplified model, acceleration is there in reality. And rather than: observe supernovae inside a few thousand out of 100 billion galaxies and, again based on model assumptions, conclude that the universal expansion is accelerating.

"That aside, of course by "best evidence" I meant "best evidence so far"."

"Best evidence" is probably not the best phrasing in the case where the purported evidence is model-dependent, observations sparse, analysis of the observations faulty, and the research standards lax. "Possible hint by the standards of a GCSE physics practical" would be better.

"people are convinced that Sarkar's work must be correct because they like the implication some draw from it"

The original research team are free to refute Sarkar's analysis.

"i.e. "I always knew that dark energy was bullshit""

There's insufficient evidence for it, that's the point. It could be a real phenomenon, obviously. It's not dark energy per se that is BS, it is the claims of people like you that there is good evidence for it. There isn't. Same with fine-tuning, strings, the multiverse, Many-Worlds, inflation, and the eminent Dr. Luke Barnes' theory that the supernatural being that raped a poor Palestinian woman and conceived a demi-god child in the Bronze Age "created" the universe, expounded in the wonderful **physics** book "A Fortunate Universe" (wonderful according to your review, at least). We don't want to upset the apple cart by telling the truth, do we now?

"that does not prove that the concordance model is completely wrong, dark energy doesn't exist, etc."

This is the wrong way round from you, as usual. The starting point is not assuming that dark energy is there, cos it's invisible. The question is whether there is sufficient evidence for the dark energy claim, and clearly the model-dependent claims and sparse, faultily analysed data put forward are not convincing evidence.

Yet again, you think if something exists in a model then it exists in reality. When do you hope to correct this fundamental misunderstanding? And when will you re-write your review of "A Fortunate Universe" to reflect that there is zero evidence of fine-tuning, zero evidence of a multiverse, and adults who believe in fairy tales need to seek psychiatric counselling?

A non-isotropic cosmology is not "the whole edifice tumbling down", it's GR, one of the best-confirmed theories of all times.

"If you believe that the Universe deviates substantially from an ideal Friedmann model on large scales, then you have to explain why this approximation "just happens" to work so well, leading to the concordance model."

The whole point of this discussion is that it's not working well. How about if those who conjecture that LCDM is a justified average actually try to prove their assumption?

"But if one piece of a puzzle doesn't seem to fit, that doesn't automatically imply that the whole edifice comes tumbling down."

You are providing a good demonstration for what Popper called "immunizing stratagems".

"A non-isotropic cosmology is not "the whole edifice tumbling down", it's GR, one of the best-confirmed theories of all times."

The question is not whether there is local anisotropy, or even whether this significantly affects the supernova results or some other cosmological tests, but rather whether it is so extreme that the large-scale Universe is not well described by a Friedmann model. Until some extraordinary evidence convinces me otherwise, I'll side with Green and Wald (yes, that Wald) on this one. (I am aware that there is a backreaction debate and that the jury is perhaps still out, as I've mentioned before. That, however, is a very civil debate without hyperbole. I've met many of the people on both sides and they are all nice, even though I might not agree with them on all points (scientific or otherwise).)

"If you believe that the Universe deviates substantially from an ideal Friedmann model on large scales, then you have to explain why this approximation "just happens" to work so well, leading to the concordance model."

"The whole point of this discussion is that it's not working well."

It is perhaps not working on small scales. No-one has presented any evidence that it doesn't work on large scales, which is my only claim.

"How about if those who conjecture that LCDM is a justified average actually try to prove their assumption?"

There is even a Wikipedia article on it. OK, it's more complicated, and doesn't fit into a comment box, but George Ellis and others have been thinking for literally decades about the averaging problem and what we can and cannot conclude from given observations. Again, I'm referring to the large-scale Universe here, so I still claim that the age of the Universe is good evidence ("the best so far") for a cosmological constant.

"But if one piece of a puzzle doesn't seem to fit, that doesn't automatically imply that the whole edifice comes tumbling down."

"You are providing a good demonstration for what Popper called "immunizing stratagems"."

As life in the time of corona shows, sometimes immunizations would be quite helpful. :-)

I am familiar with this debate. In fact, I just recently wrote about it. If you do not know what the averaged equations look like, how do you know that the observations are evidence for dark energy, and that the supposed dark energy is not a contribution created by the correct averaging procedure? This is a rhetorical question. The answer is you don't know.

"Rather than - make lots of assumptions about various parameters in a model which is only partially hung on observations, and conclude that because acceleration is apparent in this clearly imperfect, mathematically simplified model, that acceleration is there in reality; and rather than - observing supernovae inside a few thousand out of 100 billion galaxies and again based on model assumptions conclude the universal expansion is accelerating."

Errm, the interpretation of the redshift drift is also model-dependent. It does not "directly" measure acceleration.
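A minimal sketch of that model dependence (my own illustration, assuming an FRW background throughout; H0 and the model parameters are just assumed values): the Sandage-Loeb drift dz/dt = (1+z)H0 - H(z) has a different sign at z = 1 depending on which expansion history is assumed, so even the sign of the observed drift is interpreted through a model:

```python
import math

H0_PER_YR = 7.2e-11  # H0 = 70 km/s/Mpc expressed per year (illustrative value)

def drift_per_year(z, hz_over_h0):
    """Sandage-Loeb redshift drift dz/dt_obs = (1+z)*H0 - H(z), per year of observer time."""
    return ((1 + z) - hz_over_h0) * H0_PER_YR

z = 1.0
models = {
    "flat LCDM (Omega_m = 0.3)": math.sqrt(0.3 * (1 + z)**3 + 0.7),  # H(z)/H0
    "coasting (Milne)":          (1 + z),
    "Einstein-de Sitter":        (1 + z)**1.5,
}
for name, e in models.items():
    # positive drift: accelerating; zero: coasting; negative: decelerating
    print(f"{name}: dz/dt = {drift_per_year(z, e):+.1e} per year")
```

Note the magnitude: of order 1e-11 per year, which is why the drift has never actually been measured.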

"I am familiar with this debate. In fact, I just recently wrote about it. If you do not know what the averaged equations look like, how do you know that the observations are evidence for dark energy, and that the supposed dark energy is not a contribution created by the correct averaging procedure? This is a rhetorical question. The answer is you don't know."

The problem is not that I don't know. The problem is with people hyping the Sarkar result as if it overthrows modern cosmology in a manner similar to the Copernican revolution, which is simply not the case (but might make for an effective tabloid headline).

With regard to backreaction (i.e. inhomogeneities so large that they affect the metric and/or expansion history of the Universe), while it is difficult to disprove, an argument against it is that there is no reason it would just so happen that it results in observations which are completely interpretable within the context of 1920s cosmology. That would be like saying that fossils are not evidence for evolution because the Devil created them to look like they indicate evolution in order to fool us. That is an appropriate comparison because more and more the debate about the Sarkar result (which, whatever its implications, is, in the grand scheme of things, a relatively minor technical point) reminds me of creationists who seize on a debate between Gould and Dawkins or whatever as if merely the fact that they don't agree somehow disproves evolution.

“A non-isotropic cosmology is not "the whole edifice tumbling down", it's GR, one of the best-confirmed theories of all times.”

Is GR really so well confirmed? Or is there a biased choice of samples, as Sarkar has said? I think one should postpone such statements until the problems of dark matter and dark energy are solved and the cause of cosmological inflation is found. And we should keep in mind that the problem of dark energy is immediately solved if we admit that the speed of light was slightly greater some billions of years in the past.

On the other hand, what evidence do we have that c was constant all the time? I rather see counter-evidence in the results of Riess and Perlmutter.

"The problem is with people hyping the Sarkar result as if it overthrows modern cosmology"

And who are those "people", if I may ask? For all I can see everyone is doing a good job ignoring a result that disagrees with what they believe to be true. This is hugely worrisome and very unscientific.

"That would be like saying that fossils are not evidence for evolution because the Devil created them to look like they indicate evolution in order to fool us."

This is an entirely inappropriate comparison. A CC term is THE most generic correction to the field equations. It would be odd if there was no such term in the averaged equations. The question is merely how large it is.

"And we should keep in mind that the problem of dark energy is immediately solved if we admit that the speed of light was slightly greater in some past billions of years."

We can't admit something until we know that it is true, and we don't. People have looked at this, but there is a reason why it has not become a mainstream idea.

"On the other hand, which evidence do we have that c was constant all the time? I see more a counter-evidence in the results of Riess and Perlmutter."

If you think that those results prove the inconstancy of the speed of light, then please write it up and collect your Nobel Prize. (No, wait, that won't work, because the Nobel Committee is part of the dark-energy conspiracy.)

"Errm, the interpretation of the redshift drift is also model-dependent. It does not "directly" measure acceleration."

I'm guessing if you observed the redshift over time from every galaxy in the universe precisely enough, forget about any cosmological models, you would observe acceleration if it were there. If objects are accelerating, their redshift will increase with time. (Just like the CMB across the universe is observed directly and mapped in detail independent of any models.)

"And who are those "people", if I may ask? For all I can see everyone is doing a good job ignoring a result that disagrees with what they believe to be true. This is hugely worrisome and very unscientific."

Many here in the comments. Many in the popular and semi-popular press. I don't see anyone ignoring it. If someone doesn't immediately agree with it, that is not ignoring it, it is reserving judgement---what Sarkar has criticized others for not doing. If anything, I see the opposite effect: those who have not been comfortable all along with dark energy jump on this uncritically and say that someone has finally found the truth, whereas the more-conventional people are reserving judgement. Usually, reserving judgement is better.

"A CC term is THE most generic correction to the field equations. It would be odd if there was no such term in the averaged equations. The question is merely how large it is."

"I'm guessing if you observed the redshift over time from every galaxy in the universe precisely enough, forget about any cosmological models, you would observe acceleration if it were there. If objects are accelerating, their redshift will increase with time. (Just like the CMB across the universe is observed directly and mapped in detail independent of any models.)"

You can indeed observe the CMB independent of any model. Extracting information about the Universe requires a theoretical framework. You can indeed observe changes in the redshifts of galaxies over time. Extracting information about the Universe requires a theoretical framework. You seem to think that a redshift increasing with time demonstrates, in a model-independent way, some sort of acceleration. How, exactly, is that supposed to work?

Are you perhaps thinking of redshift as an indicator of velocity and redshift increasing with time as indicating velocity increasing with time and hence acceleration? If so, then you are making a very basic mistake in cosmology.

Even if you have some other theory of the relationship of the change in redshift with time and acceleration, you have to have some theory. It is not model-independent.

"This is an entirely inappropriate comparison. A CC term is THE most generic correction to the field equations. It would be odd if there was no such term in the averaged equations. The question is merely how large it is."

Just to be clear, the comparison involves the reactions, not the viability of the hypotheses.

"Many here in the comments. Many in the popular and semi-popular press. I don't see anyone ignoring it."

You are evading the question. I do not know where the "many" are that you are talking about. Please provide evidence. I don't see anyone "hyping" in this comment section and I have not seen any popular media article "hyping" this case either. And that you "don't see" anyone ignoring it, even though no one has paid attention, is probably because you don't want to see it.

There has to my knowledge been one paper reacting to Subir's criticism. That paper was itself wrong as Subir and his group have explained very quickly, and frankly I think it's obviously wrong. The only other reaction I have seen is dismissive comments, like yours.

"You are evading the question. I do not know where the "many" are that you are talking about. Please provide evidence. I don't see anyone "hyping" in this comment section and I have not seen any popular media article "hyping" this case either. And that you "don't see" anyone ignoring it, even though no one has paid attention, is probably because you don't want to see it."

It should be obvious that many commentators here---I'll mention Steve Evans by name, but there are others (though a bit more polite)---have read at most the title of Sarkar's paper and are convinced that it is right. That is confirmation bias. If you don't notice this in the comments, and don't notice that Evans is a troll and I am not, then you have my deepest sympathy.

Note: I am not saying that everything in the more-balanced reports is correct, nor that everything in the more-hyped ones is wrong, but merely stating that tabloid-style headlines are not helpful in the discussion.

So, let us look at the supposed evidence for "hype" in the popular media that you refer to. The first five articles that you cite are from 2018, 2016, 2004, 2016, and 2016, and the last two are from 2007 and 2018. They were all written before the paper we are talking about even appeared. They are not about the work we are talking about. That you bring them up makes me think you do not even know what we are talking about.

This leaves one article, which is the Big Think piece. I agree that the headline is very misleading. However, the article does not overstate the case and in fact leaves the last word to Riess. I fail to see why you think it is hype.

In summary, you failed to provide the evidence I asked for.

"It should be obvious that many commentators here---I'll mention Steve Evans by name, but there are others (though a bit more polite)---have read at most the title of Sarkar's paper and are convinced that it is right. That is confirmation bias."

I am very well aware that most of the commenters here do not know what they are talking about. But this is not the question I asked. I asked what makes you think someone is "hyping" the result.

"Are you perhaps thinking of redshift as an indicator of velocity and redshift increasing with time as indicating velocity increasing with time and hence acceleration?"

Yes. I think both those statements are true.

"If so, then you are making a very basic mistake in cosmology."

Highly possible.

"are convinced that it is right."

I have nowhere written that. You are misrepresenting me for the umpteenth time. The paper presents questions about the supernova data analysis and lax standards which should be addressed by the original team. As I have written many times, even before the Sarkar paper the presented evidence for dark energy was unconvincing, because it is dependent on a model known to be imperfect, on sparse data, and now on questionable data analysis and lax research standards. As I stated in a previous thread, this is no surprise to me given Brian Schmidt's support of a book in which a madman presents fairy tales as physics, and given the unbelievably basic mistakes about physics and logic he wrote in the preface. And your position on dark energy is no surprise given your positive review of the book of fairy tales, and your repeated stance in the comments of this blog of supporting claimed physical phenomena which have little or absolutely no evidence.

Regardless of the Sarkar paper, you have not been able to present good evidence of dark energy, nor, again, of universal fine-tuning. Yet you will go ahead with your paper. Why? Nobody has any idea whether the universe has to be exactly how it has been observed to be or not, so the question of fine-tuning is a non-starter. It's willful self-delusion on your part.

Phillip Helbig:>>> "Or is there a biased choice of samples as Sarkar has said it."

“The confirmation of GR has nothing to do with Sarkar's work.”

The samples presently not used are, as said, the cases of dark matter, dark energy, and inflation. These are explainable if we follow Lorentz's approach to relativity. But this route is not taken into account, as Einstein must not be questioned.

>>> "And we should keep in mind that the problem of dark energy is immediately solved if we admit that the speed of light was slightly greater in some past billions of years."

“We can't admit something until we know that it is true, and we don't. People have looked at this, but there is a reason why it has not become a mainstream idea.”

So, what then is the evidence that c was stable over a long time? On the other hand, the results of Riess and Perlmutter are usable, within normal research practice, as evidence for a decreasing c. It is a simple explanation, there is no other one, and it is in no conflict with other observations. And what does it mean to state the constancy of c? Present cosmology assumes that space expands, which means that it permanently changes. What does a constancy of c mean in a changing space? To which space is the speed c then related?

I had the occasion to ask Saul Perlmutter whether a decreasing c could be an explanation for his results. He answered 'no', as it would be in conflict with the "fading of a supernova". He did not say that it is a fundamentally wrong approach. Now I have tried to find out what the influence of this fading is. No result. Perhaps someone here has an explanation.

"Usually, reserving judgement is better."

Unbelievable. You think fine-tuning is true based on zero evidence. It is the most egregious example of failing to reserve judgement. If the cosmologists showed by direct measurement that all the galaxies in the universe are accelerating away from us, then people would believe the universe is accelerating. They have not provided anywhere close to this level of evidence. Some theorists are talking about models as if they are reality.

Following on from your 14-years article with this excellent video illustrates the value of the combined presentation. That there are problems with the measurements used to support "dark energy" is not surprising. The measurements supporting the Hubble constant are constantly being scrutinized by groups questioning the assumptions and corrections applied to the data. I cherish the opportunity to hear and/or read the words of scientists that are not otherwise available to me. Your own videos are fascinating, and more valuable when followed by your written words. Thank you for this and other presentations.

Yes, of course. There are an infinite number of models. But what is the evidence for them? It seems rather drastic to invoke radically different physics just to explain an approximate coincidence. Even Dirac failed miserably at that.

The March 2020 issue of Scientific American has an interesting article relevant to this blogpost/video interview. It's titled "A Cosmic Crisis", and one of the points it makes is that estimates of H0 (the Hubble constant) are discrepant, and no one really knows why. I do not know if anyone can read this online without a subscription.

Here's a key passage:

"In 1970 one-time Hubble protégé Allan R. Sandage published a highly influential essay in Physics Today that in effect established the new science's research program for decades to come: "Cosmology: A Search for Two Numbers". One number, Sandage said, was the current rate of the expansion of the universe - the Hubble constant. The second number was the rate at which that expansion was slowing down - the deceleration parameter."

Nice to see someone adopting Lakatos' view of the nature of science ("research program")!

Yes, there's some technical subtlety/difference, but "dark energy" and "the deceleration parameter" are the same thing.

"In 1970 one-time Hubble protégé Allan R. Sandage published a highly influential essay in Physics Today that in effect established the new science's research program for decades to come: "Cosmology: A Search for Two Numbers". One number, Sandage said, was the current rate of the expansion of the universe - the Hubble constant. The second number was the rate at which that expansion was slowing down - the deceleration parameter."

There were two numbers because Sandage assumed that there is no cosmological constant. He stuck to this belief, mistakenly believing that it was implied by GUTs, for the rest of his life. With no cosmological constant, the value of the deceleration parameter, q, is simply half the density parameter, q = Omega/2. Historically, q was used because it appears in the first non-linear term in a series expansion for distance as a function of redshift. This is not relevant today, for two reasons. First, one can calculate the distance exactly, without needing a series expansion. Second, redshifts are high enough that series expansions are not useful anymore. (Of course, even if one needs higher-order terms in the series expansion, the value of q itself is still well defined and its sign indicates whether the universe is accelerating.) More generally, q = Omega/2 - lambda.
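To make the relation concrete, here is a trivial numerical check of q = Omega/2 - lambda; the concordance values Omega_m = 0.3, Omega_Lambda = 0.7 are assumed for illustration only:

```python
# Deceleration parameter for flat FLRW models, per the relation above:
# q0 = Omega_m / 2 - Omega_Lambda. A negative q0 means accelerated expansion.

def q0(omega_m, omega_lambda):
    return omega_m / 2.0 - omega_lambda

# Sandage's no-Lambda case: q0 is simply half the density parameter.
assert q0(1.0, 0.0) == 0.5  # Einstein-de Sitter: decelerating

# Concordance values give a negative q0, i.e. acceleration:
print(q0(0.3, 0.7))  # about -0.55
```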

Sandage was in the low-Hubble-constant camp in the old debate on its value. The debate then was between 50 and 100. Sandage actually wrote a paper claiming that it is 42 (really!). Serious proposals by famous cosmologists were made claiming that H is as low as 30. Interestingly, that debate generated essentially no claims that the discrepancy could be caused by new physics (probably because there was no redshift dependence and because the discrepancy was so large that that didn't seem viable).

I thought Professor Sarkar was very rational and unbiased in pointing out the discrepancies he and his team observed, and in discussing what was needed going forward. He didn't jump to unwarranted conclusions and made a good case for needing more data and taking a closer look at some of the analysis and conclusions made for some of the existing data. Well done Sabine.

From Steven Evans' rough transcript (which I think is accurate, bar some punctuation; thanks again Steven): "So our first step was to use a principled statistical method. This is called the maximum likelihood estimator".

I've written about something meta related to this, in quite a few comments here and in other blogposts. One way to characterize it is "anachronistic criticism".

It is obviously absurd to criticize Pheidippides for not using his mobile phone rather than running the marathon; likewise to criticize Copernicus or Kepler for not using MLE to address heliocentric models.

It's not at all clear from the video itself - and I haven't yet dug up the 2016 paper referred to - but I doubt that Sarkar (or anyone else) honestly criticizes Riess et al. or Perlmutter et al. for not using MLE in their 1998 and 1999 papers (respectively). Rather he's referring, in the video, to analyses of a much larger dataset, I think.

But let's hear it from any reader familiar with the use of MLE applied to datasets from astronomical observations taken with different equipment setups (at different times), with only ~30-40 data points, in the late 1990s. For any such reader: is it reasonable to criticize Riess+ and Perlmutter+ for not using MLE then? To what extent is this an anachronistic criticism?
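For what it's worth, here is a toy sketch (with made-up numbers; emphatically not the teams' actual pipeline) of what an MLE fit of the deceleration parameter to a few dozen standard-candle points involves. With equal Gaussian errors, maximizing the likelihood is equivalent to least squares, so scipy's curve_fit suffices:

```python
# Toy sketch of a maximum-likelihood fit of the deceleration parameter q0 to a
# small (~40-point) standard-candle sample -- NOT the 1998/1999 analyses, just
# an illustration. With equal Gaussian errors the MLE reduces to least squares.
import numpy as np
from scipy.optimize import curve_fit

def mu_model(z, offset, q0):
    """Distance modulus from the low-z expansion d_L ~ (cz/H0)[1 + (1-q0)z/2].
    The constant 5*log10(c/H0 / 10pc) is absorbed into 'offset'."""
    return offset + 5.0 * np.log10(z * (1.0 + (1.0 - q0) * z / 2.0))

# Hypothetical noise-free data generated from the model itself
# (the offset 43.0 and q0 = -0.55 are made up for this sketch):
z = np.linspace(0.05, 0.8, 40)
mu_obs = mu_model(z, 43.0, -0.55)

(offset_fit, q0_fit), _ = curve_fit(mu_model, z, mu_obs, p0=[40.0, 0.0])
print(q0_fit)  # should recover the input value -0.55
```

On noise-free synthetic data the fit simply recovers the input q0; with realistic scatter the point is that a principled estimator, not ad-hoc fitting, defines the error bars.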

This is a great blog post, thanks. A new paper claims that dark energy is robust. Its title is "No Evidence for Type Ia Supernova Luminosity Evolution: Evidence for Dark Energy is Robust" and it can be found at https://arxiv.org/abs/2002.12382. As I am not a specialist in this area, it would be interesting to see comments from others on this.

In a nutshell: a recent paper ("Early-type Host Galaxies of Type Ia Supernovae. Evidence for Luminosity Evolution in Supernova Cosmology" - The Astrophysical Journal, Volume 889, 1) claimed to have found a correlation between SN Ia luminosity and the age of the galaxy in which the supernova occurred. The further away a galaxy is, the younger you see it: so if the age of the stellar population has a systematic effect on the actual brightness of the supernova, and you take the perceived brightness to be indicative of its distance, your estimates for the distance will be too high, with the discrepancy getting bigger the further you look back in time. The case for an accelerated expansion of the universe rests on an observed discrepancy between the measured redshift of the supernova and the distance inferred from its perceived brightness: the so-called Hubble residual. It, too, grows with distance. The claim of the paper is that if you take the luminosity evolution of the supernovae into account, most of the Hubble residual can be accounted for: the expansion of the universe is not accelerating.
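The lever arm of such a luminosity bias is easy to quantify: because the distance modulus is logarithmic, a systematic magnitude offset dm maps to a fractional distance error of 10^(dm/5) - 1. The 0.1 mag figure below is illustrative, not a number taken from the paper:

```python
# Distance modulus: mu = 5*log10(d / 10pc), so a systematic magnitude bias dm
# translates into a fractional error in the inferred distance of 10**(dm/5) - 1.

def distance_bias(dm_mag):
    return 10.0 ** (dm_mag / 5.0) - 1.0

# An illustrative 0.1 mag dimming mistaken for distance inflates d by ~4.7%:
print(distance_bias(0.1))  # ~ 0.047
```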

Please note: I am not an astrophysicist. I have read the paper you linked (thank you!) and it does seem to contain some valid criticism of the concept of "luminosity evolution". Since the paper is freely available, let me quote: "The inclusion of standard error sources, clearly present in SN Ia residuals, reduces the significance of the dependence [between age and luminosity] to < 2σ ...", and removing one (!) SN, the one "with the oldest host and a poorly sampled light curve, SN2003ic, reduces the significance to 1.5σ ...". Well, I don't know about you, but a blatant refutation looks different to me. To conclude from this criticism that the "evidence for dark energy is robust" is a bit daredevil at best.

Just in case anyone is interested in this side-track of the discussion: I have finally managed to get hold of Kang et al.'s article in ApJ, and the conclusion that the age of the white dwarf (or rather: its companion) might have an impact on the brightness of the resulting SN is simply plausible. Taking the age of the stellar population around the SN into account would, after all, only be one more correction factor (along with, e.g., the mass of the galaxy, the shape of the light curve, etc.) to take into consideration. Doing this would account for "most of" the Hubble residual (so the paper claims).

The article linked by Louis Wilbur challenges the significance of this relation between age and luminosity: "We find the residual dependence of host age (after all standardization typically employed for cosmological measurements) to be 0.0011±0.0018 mag/Gyr (0.6σ) for 254 SNe Ia from the Pantheon sample, consistent with no trend and strongly ruling out the large but low significance trend claimed from the passive hosts." The article is publicly available and provides valuable insight into the data selection process and statistics applied.

However, I was still somewhat unsatisfied (as my last post indicates), since a physically plausible argument (that a young white dwarf can only have a young companion star to feed on, and therefore ingests only lighter elements before it explodes, which affects the brightness of the explosion) was countered by a purely statistical one. I admit that if you can't argue against the physics (i.e. provide a physical reason why this effect should be insignificant), it is totally legitimate to show that the effect is empirically negligible. However, I always feel more comfortable with an explanation of why this is so, rather than someone merely claiming that it is so. Intrigued by the question "how sound is the assumption that SN Ia supernovae all have the same brightness?", I found this article: https://arxiv.org/abs/2003.01721

Anisotropy in cosmic expansion or not, I find the case for an accelerated expansion of the universe, whether directional or isotropic, exceedingly unconvincing. Until, of course, further notice ;-)

Thank you JeanTate! And no, I wasn't aware of these discussions, otherwise I wouldn't have posted my comments, as they add exactly nothing to them. So I am a bit embarrassed now, since "read before you write" (or "listen before you talk") is a motto I normally adhere to. I only discovered Sabine's blog long after finishing "Lost in Math"; in fact, I googled her to see what else she has published. I am simply flabbergasted ;-)

So before I shut up and dig in: let me break with my principles once again to thank you for pointing out this article to me. I had not appreciated how utterly fascinating WD binaries can be; so instead of jumping to any conclusions about thermonuclear SNe and their fitness as "standard candles", I'll lean back and study the intriguing lives and feeding habits of white dwarfs.

Here, I think, is "So that was our paper in 2016 and in fact it was rather a surprise to us that this was actually the very first time that somebody had in fact looked at the data and the analysis in detail": "Marginal evidence for cosmic acceleration from Type Ia supernovae", Nielsen, J. T.; Guffanti, A.; Sarkar, S., 2016NatSR...635596N

And here's the "in fact did something novel which had not been done earlier; we cross-correlated that catalog of radio sources [NVSS] with a catalogue of objects measured in the so-called 2ma survey [2MASS Redshift Survey]" one (I think): "High-redshift radio galaxies and divergence from the CMB dipole", Colin, Jacques; Mohayaee, Roya; Rameez, Mohamed; Sarkar, Subir, 2017MNRAS.471.1045C

And "A radio astronomer called single [Singal] had already said this some years earlier" refers, I think, to: "Large Peculiar Motion of the Solar System from the Dipole Anisotropy in Sky Brightness due to Distant Radio Sources", Singal, Ashok K., 2011ApJ...742L..23S

Re "This is the important thing and we are very grateful to the so-called joint likelihood analysis collaboration": From Nielsen+ (2016), "We focus on the Joint Lightcurve Analysis (JLA) catalogue [11]. (All data used are available on http://supernovae.in2p3.fr/sdss_snls_jla/ReadMe.html — we use the covmat_v6.)"; the link works (for me anyway), and [11] is "Improved cosmological constraints from a joint analysis of the SDSS-II and SNLS supernova samples", Betoule et al., Astron. Astrophys. 568, A22 (2014).

Re "which essentially included every supernova expert in the world including the Nobel laureates": The Nobel laureates are Saul Perlmutter, Adam G. Riess, and Brian P. Schmidt. The first two are co-authors of the Betoule+ (2014) paper. Of the co-authors of the two benchmark papers cited by the Nobel Committee (Riess+ 1998, and Perlmutter+ 1999), the following are Betoule+ (2014) co-authors: R. S. Ellis, S. Fabbro, A. V. Filippenko, A. Goobar, C. J. Hogan, I. M. Hook, S. Jha, C. Lidman, R. Pain, N. Walton. The other two circles on the Venn diagram - co-authors of a benchmark Nobel paper but not in JLA, and in JLA but not a co-author of a benchmark Nobel paper - have populations considerably larger than ten.

Re "So that was our paper in 2016 and in fact it was rather a surprise to us that this was actually the very first time that somebody had in fact looked at the data and the analysis in detail.": Betoule+ (2014) has been cited many hundreds of times. Almost 200 of those citations are dated 2014 and 2015, so likely known to J. T. Nielsen at the time of writing Nielsen+ (2016). Among those ~200 papers are quite a few which could be characterized as looking at the data and doing detailed analysis; for example, Mosher+ (2014), "Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models".

Just as interesting as a positive cosmological constant is the possibility that the dipole flow extends much further out which might be shown with more data. I think generally speaking the mental image people have of the expansion of the universe is that it is equal in all directions, but what if it is measurably lopsided?

Could you check section 3 of the peer-reviewed paper "Evidence for anisotropy of cosmic acceleration"? https://arxiv.org/pdf/1808.04597.pdf

The authors apply the following methodology: the luminosity distance d is written as a Taylor series of the redshift z. Then, the coefficients of that series are assumed to depend on z. And based on that d(z) model, they perform data-fittings.

But the coefficients of a Taylor-series expansion are by definition constant; they cannot depend upon the variable. The coefficients cannot depend on z. In my opinion, this model is not correct...

What do you think?

I have also mentioned that point in a comment in the older relevant post:http://backreaction.blogspot.com/2019/11/dark-energy-might-not-exist-after-all.html?showComment=1577185682709#c3367580940491392832

"Could you check section 3 of the peer-reviewed paper "Evidence for anisotropy of cosmic acceleration"? https://arxiv.org/pdf/1808.04597.pdf

The authors apply the following methodology: the luminosity distance d is written as a Taylor series of the redshift z. Then, the coefficients of that series are assumed to depend on z. And based on that d(z) model, they perform data-fittings.

But the coefficients of a Taylor-series expansion are by definition constant; they cannot depend upon the variable. The coefficients cannot depend on z. In my opinion, this model is not correct...

What do you think?"

Don't worry about it. The idea is to use a Taylor expansion because it is model-independent (i.e., as they state, it doesn't assume anything about the contents of the Universe). However, that expansion is based on an ideal case. They then "perturb" it to allow the q term to depend on z, since the whole point of their project is to look for deviations from the ideal case. If it helps, put "Taylor expansion" in scare quotes when it is redshift-dependent.

The use of a Taylor expansion is extremely common; the (vague? well-founded? YMMV) hope is that only the first or second terms are big enough to have any experimental or observational consequences (i.e. can be tested). It's not even motivated by beauty, simply convenience.

In this particular paper I feel it's a weakness (convenience, no matter how nicely dressed up, is ultimately just laziness), particularly as there's been so much said about "sigmas" and "principled" techniques.
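To put rough numbers on the "only the first or second terms" hope, here is a quick comparison (assumed concordance parameters, nothing taken from the paper itself) of the exact flat-LCDM luminosity distance with the second-order expansion discussed here:

```python
# Compare the exact flat-LCDM luminosity distance with the second-order Taylor
# expansion d_L ~ (cz/H0)[1 + (1-q0)z/2], in units of the Hubble distance c/H0.
# The parameters Omega_m = 0.3, Omega_Lambda = 0.7 are assumed for illustration.
import numpy as np
from scipy.integrate import quad

OM, OL = 0.3, 0.7
Q0 = OM / 2.0 - OL  # deceleration parameter, here -0.55

def d_L_exact(z):
    E = lambda zp: np.sqrt(OM * (1.0 + zp) ** 3 + OL)
    comoving, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1.0 + z) * comoving

def d_L_taylor(z):
    return z * (1.0 + (1.0 - Q0) * z / 2.0)

for z in (0.1, 0.5, 1.0):
    err = d_L_taylor(z) / d_L_exact(z) - 1.0
    print(f"z={z}: Taylor expansion off by {100 * err:.1f}%")
```

The expansion is sub-percent at z ~ 0.1 but degrades badly by z ~ 1, which is why the low-order series is only trusted at low redshift.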

Using the Taylor series is fine. But if one expands f(x) = a0 + a1*x + a2*x^2 + … then the coefficients a0, a1, a2, … do not depend on x.

To check for directional dependence of the coefficients of the Taylor series is also a good idea. But they should still remain independent of the redshift z.

Btw, it would be more generic (or even more correct) to consider that all the coefficients of the series have directional dependence, i.e. also H0. And M. Visser already provides a fourth-order term; why not consider that term as well?

Sloppiness, as you say? Or simply “cooking” the model until it gives the desired result?

If I may start with an observation regarding the previous discussion about videos vs. the written word: in this particular post there were 11 comments before Steven E. added the video transcription. After his transcription the comment count went to 70. It seems to me that the written word is better for complex subjects. Granted, this could only be considered a single example, but even a single example can speak volumes in some cases.

Regarding the video, I think that it is important to keep in mind that Dr. Sarkar is not saying that dark energy does not exist; rather, he is questioning the strength of the evidence. One of the things I got from this interview is that he was only questioning a single study, which is a single piece of evidence. For me it is difficult to put a lot of confidence into questioning a general topic based on a review of one piece of evidence.

Another thing that came up for me was the amount of reliance that statistical evaluations place on prior statistical evaluations. From reading the interview I get the impression that there are layers of statistical evaluations involved here. I understand that there is no other way to move forward; however, there is always some amount of “wiggle-room” in every statistical evaluation. So, the more layers of evaluations something is built upon, the more that “wiggle-room” must also grow.

The one thing I really found interesting was Dr. Sarkar’s discussion about finding potential bi-directional motion “locally” along a dipole anisotropy. This is very interesting and I think it deserves a deeper look. However, this could also be a statistical artifact associated with the layers of statistical evaluations.

A couple of additional things I would like to know: what does Dr. Sarkar believe the weak spot of his paper is, and what is the next step or follow-up based upon what he has discovered? My one negative critique: I do not think it was necessary for him to have the discussion about “confirmation bias.” I think he was overly critical in his assessment of astrophysicists in this area. Additionally, going down the “confirmation bias” road gives the impression that one is trying to prop up one's position by dragging down the counter-position, and in this case it was not necessary.

"Regarding the video, I think that it is important to keep in mind that Dr. Sarkar is not saying that Dark Energy does not exist; rather he is questioning the strength of the evidence. One of the things I got from this interview, he was only questioning a single study which is a single piece of evidence. For me it is difficult to put a lot of confidence into questioning a general topic based on a review of one piece of evidence."

Indeed. But it is necessary to criticize the over-the-top "rewrite the textbooks" claims in the popular media, both by those who disagree with Sarkar's analysis and by those who agree. One shouldn't commit the the-enemy-of-my-enemy-is-my-friend fallacy.

"My one negative critique: I do not think it was necessary for him to have the discussion about “confirmation bias.” I think he was overly critical in his assessment of astrophysicists in this area. Additionally, going down the “confirmation bias” road gives the impression that one is trying to prop up one's position by dragging down the counter-position, and in this case it was not necessary."

Indeed. Anyone who is even remotely familiar with the history of this topic is aware of the time when it was difficult to publish a paper even considering a positive cosmological constant. Both supernova teams were completely surprised by their results, were exceedingly sceptical, and so on. The idea that people somehow wanted to measure cosmic acceleration is absurd. (Interestingly, an early paper by the Supernova Cosmology Project claimed support for the then-standard model, i.e. high density with no cosmological constant. We now know that that result was due to an outlier. To be fair, the error bars were very large, so statistically it is OK. The only case of confirmation bias in this story that I can think of is that they probably wouldn't have published this early paper as quickly if an outlier had indicated an unexpected result. But when it became clear that it was an outlier, this probably led to a more careful assessment in subsequent work.)

@Phillip Helbig: There are other coasting models than the old Milne universe. See the review paper I mentioned, which cites a dozen of them. Among others: Melia's Rh=ct model, which features a dark energy component with a different equation of state than LCDM (https://arxiv.org/abs/1609.08576), and Chardin's "Dirac-Milne" universe, where antimatter has negative active gravitational mass and is present in equal amounts as matter, so that the universe is on average gravitationally empty (https://arxiv.org/abs/0903.2446).

Finally, that t_0 ≈ t_H = 1/H0 is a trivial coincidence is really just an opinion. It is a coincidence if you ASSUME the validity of LCDM, but this is just the kind of reasoning that Sarkar denounces in this interview.
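For what it's worth, the closeness of the coincidence can be checked directly: in flat LCDM the age of the universe is t0 = (1/H0) × ∫ dz / ((1+z) E(z)). A quick numerical check, with the usual concordance values assumed purely for illustration:

```python
# Age of a flat LCDM universe in units of the Hubble time 1/H0:
# t0 * H0 = integral_0^inf dz / ((1+z) * E(z)), with E(z) = sqrt(Om(1+z)^3 + OL).
# Omega_m = 0.3, Omega_Lambda = 0.7 are assumed concordance-style values.
import numpy as np
from scipy.integrate import quad

def age_in_hubble_times(om, ol):
    E = lambda z: np.sqrt(om * (1.0 + z) ** 3 + ol)
    val, _ = quad(lambda z: 1.0 / ((1.0 + z) * E(z)), 0.0, np.inf)
    return val

print(age_in_hubble_times(0.3, 0.7))  # ~ 0.96: t0 sits within a few percent of 1/H0
```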

Awesome to hear that stripping away the dark matter "corrections" reveals a stream of galaxies reaching out past 100 million parsecs in both directions! As Einstein said, "It is the theory which decides what we can observe." Thanks Sabine!

While I wouldn't characterize it as "a stream of galaxies reaching out past 100 million parsecs in both directions", the large scale anisotropy and streaming motions of the "local" universe (out to at least 200 Mpc) have been discussed in the literature well before the Nielsen+ (2016) paper which seems to be the main focus here.

To take just one example out of hundreds (or more): "Seeking the Local Convergence Depth: The Abell Cluster Dipole Flow to 200 h-1 Mpc" (Dale+ 1999)

«The idea that people somehow wanted to measure cosmic acceleration is absurd.»

Is it that absurd? The Lambda-CDM model was put forward well before the announcement of the results from the supernova teams, was it not? Doesn't it entail the acceleration of the expansion of the universe? To put it another way: how likely is it that Lambda-CDM with Lambda = 0.65 (I think that was the value put forward), plus other data for baryonic and dark matter etc. available in the mid-90s, yields a prediction of no acceleration? (These are genuine questions on my part, not merely rhetorical ones.)

In its modern form, the concordance model goes back to 1995 and papers by Ostriker & Steinhardt and Krauss. But it is fair to say that it was a hypothesis which could best explain all the known data, rather than being forced by observations. As such, it was prescient.

But the important thing is, it was far, far, far from being accepted as the "standard model" of the time. It was clear that the standard model of the time (the Einstein-de Sitter universe with cold dark matter) had problems, but this was merely one of many suggestions, and not the most popular, to improve it.

So, yes, there was a prediction. (Of course it was clear that the concordance model suggested in 1995 implies acceleration.) But other ways to correct the then-standard model made other predictions. And the people in the two supernova groups themselves definitely didn't start out with some confirmation bias to find acceleration. In fact, as is clear from their writings, they expected to measure deceleration. They have said so many times. Of course, not everyone is privy to such conversations, but Bob Kirshner wrote it up in his book The Extravagant Universe: Exploding Stars, Dark Energy, and the Accelerating Cosmos, which I highly recommend. (Of course, scientists are human, and Kirshner tells the story from the point of view of his team and himself.)

The research was not meant to detect the cosmological constant or acceleration. The thought was the universal expansion velocity would slow due to mutual gravitation. This was considered almost inevitable. The interest was in knowing if the universe would recollapse or slowly coast to some constant or zero velocity. The acceleration was an unexpected result.

Quite often great results occur with a shout of eureka, but more a puzzled "Well that is odd."

" The thought was the universal expansion velocity would slow due to mutual gravitation. This was considered almost inevitable. The interest was in knowing if the universe would recollapse or slowly coast to some constant or zero velocity. The acceleration was an unexpected result."

Indeed. Dig the title of a paper from the High-z Supernova Search Team from November 1998(!): "The High-Z Supernova Search: Measuring Cosmic Deceleration and Global Curvature of the Universe Using Type IA Supernovae" (emphasis added).

The question is "Expected by whom"? Krauss, Steinhardt, and Ostriker expected it; most of those doing the corresponding observations didn't, according to their own recollections.

Check out my copious comments and Sabine's polite reply on the post linked to above: "It's just my impression from talking to people, listening to talks, etc., that pre-1995 we 'didn't know' of the CC." Perhaps the fact that I was at an observatory in 1995 caused me to have a different impression.

The cosmological constant never went away, even though Einstein called it his greatest blunder. The presence of a Λ > 0 in FLRW or de Sitter spacetime meant there was an outwards geodesic flow. Einstein originally put this in to prevent gravitational implosion of the universe.

I think though the idea of a cosmological constant in the physical universe was a minority perspective. Inflationary cosmology reintroduced this, with Λ huge. The standard expectation was that it was zero on the physical vacuum.

OK, thanks, Phillip. I just wanted to confirm that some eminent theoreticians did, in fact, more or less vocally, predict the acceleration. I won't venture into interpreting the chronology as I am not sufficiently privy with the history.

Personally, although I know little about the subject, I quite like the idea that Einstein's theory has to be modified at cosmological distances. I think Moffat's theory of Modified Gravity (MOG) is interesting; I've just recently read his book, though it will take a couple more readings before I've fully understood what he's saying.

I also noticed, in a paper on black hole entropy in string theory, that they had higher-curvature corrections to Einstein's theory.

Modern astronomy, from ~1950 onward, has a largely unrecognized data validity problem, as I call it.

To start, here are two quotes from Steven Evans' transcript^2

"We would be very happy to use the latest data if it is made available. [...] then when you look at the existing catalogues, my collaborator Ramiz found that there are discrepancies between, for example, the JLA and the Pantheon catalogues: lots of supernovae redshifts in the Pantheon catalogue are different from those in the JLA catalogue to a level which is far bigger than the stated uncertainty of the measurement"

"Yeah I looked at the paper; I was quite shocked I have to say and of course in the particle physics community we are used to the idea that if you make some data public you are responsible for the data and you entertain questions and any queries that people might have about using that."
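For concreteness, here's a toy sketch of the kind of cross-catalogue check being described: compare redshifts for objects common to two catalogues, and flag any pair that differs by more than the stated uncertainties allow. All names, redshifts, and uncertainties below are invented for illustration; this is not the real JLA or Pantheon data.

```python
# Toy catalogues: name -> (heliocentric redshift, stated uncertainty).
# Values are made up; they are NOT real JLA or Pantheon entries.
jla = {
    "SN1997x": (0.4520, 0.0010),
    "SN1998y": (0.2100, 0.0010),
    "SN1999z": (0.7300, 0.0010),
}
pantheon = {
    "SN1997x": (0.4521, 0.0010),  # consistent with the other catalogue
    "SN1998y": (0.2180, 0.0010),  # differs far beyond the stated errors
    "SN1999z": (0.7301, 0.0010),
}

def flag_discrepancies(cat_a, cat_b, n_sigma=3.0):
    """Return names whose redshifts differ by more than n_sigma
    times the combined stated uncertainty."""
    flagged = []
    for name in sorted(set(cat_a) & set(cat_b)):
        za, sa = cat_a[name]
        zb, sb = cat_b[name]
        combined = (sa**2 + sb**2) ** 0.5
        if abs(za - zb) > n_sigma * combined:
            flagged.append(name)
    return flagged

print(flag_discrepancies(jla, pantheon))  # → ['SN1998y']
```

The real comparison is of course messier (cross-matching by position, heliocentric vs. CMB-frame redshifts, etc.), but the principle is just this.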

In my experience, which is almost entirely with extra-galactic astronomy, you need to spend a lot of time checking an online dataset, or (for older stuff) Tables in papers, before you can start using the data therein ... columns may be transposed, rows may be duplicated, units may be mis-stated or inconsistently applied, to mention just some of the obvious problems. And of course, some datasets are perfect, and some owners very responsive to queries (but some are also long since dead); and so on.

Perhaps even worse: sometimes an online dataset will be edited/amended/etc. (perhaps two transposed columns were put back in their correct order), but there's no (public) notice of what was done, why, and when. Tough if you've downloaded the original dataset.
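To illustrate the kind of basic pre-use checks I mean, here's a minimal Python sketch on an invented toy table (names, columns, and values are all hypothetical); real checks would of course be far more extensive:

```python
# Toy table of (name, redshift, apparent magnitude) rows, with two
# deliberately planted problems of the kind described above.
rows = [
    ("SN1997x", 0.452, 23.1),
    ("SN1998y", 0.218, 21.7),
    ("SN1998y", 0.218, 21.7),   # accidental duplicate row
    ("SN1999z", 73.0, 24.0),    # redshift looks like a unit/column error
]

def check_table(rows):
    problems = []
    # Duplicate rows: often a sign of a botched merge or append.
    seen = set()
    for r in rows:
        if r in seen:
            problems.append(f"duplicate row: {r}")
        seen.add(r)
    # Range checks: a "redshift" of 73 in a SN Ia survey is almost
    # certainly a transposed column or lost decimal point.
    for name, z, mag in rows:
        if not (0.0 <= z <= 3.0):
            problems.append(f"{name}: redshift {z} out of plausible range")
        if not (10.0 <= mag <= 30.0):
            problems.append(f"{name}: magnitude {mag} out of plausible range")
    return problems

for p in check_table(rows):
    print(p)
```

Nothing clever, just the sort of sanity pass that catches the transposed-column and duplicated-row problems before they propagate into an analysis.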

There was even a paper on this fairly recently (but which I can't find just now); a seasoned astronomer was basically castigating the owners of a public dataset for not doing basic quality checks, and newbie astronomers for being so naive as to not check the data before using it (my paraphrase).

^1 "In this case, if Sarkar is correct, an important issue is the need to do what, in my field, is called "robustness checks.""

^2 The "ctd. 11." one. I tried to correct the quotes by listening to the video, but I gave up. The corrections I've applied are informed guesses.

I suspect that just about every variable in a binary system that leads to a Type Ia supernova has some influence on the luminosity of the final cataclysm, and contributes in some small way to the error bars.

Somewhere in the various posts on this subject it was mentioned that the difference in magnitude that leads to the conclusion of an accelerating universe is only about 0.15 magnitudes. That doesn't seem very large, considering all the corrections that have to be made.
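For what it's worth, the arithmetic behind that 0.15 magnitude: the magnitude scale is logarithmic, with 5 magnitudes corresponding to a factor of 100 in flux, so a 0.15 mag deficit means the high-z supernovae appeared roughly 13% fainter than expected.

```python
# Convert a magnitude difference to a flux ratio: 5 mag = factor 100,
# so flux_ratio = 10 ** (delta_m / 2.5).
delta_m = 0.15
flux_ratio = 10 ** (delta_m / 2.5)

print(round(flux_ratio, 3))                    # → 1.148
print(round(100 * (1 - 1 / flux_ratio), 1))    # → 12.9 (% fainter)
```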

So at best we have a hint that there might be a positive cosmological constant, and a conclusion that more data is needed. Hopefully, in some university basement somewhere some theoretical cosmologist is grinding away on a much bigger data set.

The light from a supernova may pass through regions of the host galaxy a long way from the site of the BANG that are thick with dust ("thick" being understood appropriately). More than one region perhaps. The dust will dim the supernova (from our perspective), but how? Dust - of this kind - is not grey (more UV is absorbed than IR, for example); is the relative attenuation universal? Can SALT2 faithfully capture all variation due to dust (out to z ~2 anyway)?

Imagine my dismay this am at reading that the issue of fine tuning has weaseled its way back into this blog. I nearly choked on my muffin and coffee squirted out my nose.

A big part of the problem seems to be that we are stuck with the term "fine" tuning with its implication that a supreme being stirred up a universe and poured it out across the cosmos.

After all, if the universe were really fine tuned we would be inundated with extra terrestrial blogs. It would be like social media on a universal scale. Heaven forbid!

No, much better to all agree that at best the universe is very weakly tuned, to randomly permit the occasional occurrence of somewhat intelligent life. No one wants a deity who botched up the universe, and there would then be no more religious implications.

"A big part of the problem seems to be that we are stuck with the term "fine" tuning with its implication that a supreme being stirred up a universe and poured it out across the cosmos."--Steve Bullfox

The more reasonable implication is that our universe is part of a multiverse, with a vast range of properties, so of course we find ourselves in the member of that multiverse in which our kind of life is possible.

It is the difference between a lottery winner thinking, "The odds of my winning were so small that there must be a god or controlling entity which wanted me to win, due to my wonderful uniqueness, and so made it happen", or instead thinking, "Well, there were a huge number of players, so the odds of somebody winning were good, and I happened to be that one." The second explanation seems much more reasonable to me.

(A useful heuristic in such matters is Mario's Sharp Rock, a modified form of Occam's Razor which says, among competing hypotheses which fit the data equally well, always choose the most humbling one.)

I am fairly certain that Dr. Helbig and some other scientists who refer to fine-tuning do so in philosophical (not scientific) advocacy of the multiverse explanation rather than the supreme being explanation.

Why speculate at all? Well, some of us feel a little easier to have an explanation for "Why am I here?" which makes some sense to us, even if it remains unprovable; one less thing to wonder about as we lie in bed at night. Also, it is sometimes useful to have another philosophical possibility to counter creationists with.

"Well, there were a huge number of players, so the odds of somebody winning were good, and I happened to be that one." The second explanation seems much more reasonable to me.

The second one is the true reason. You only get regular winners in a national lottery if millions play regularly i.e. if the nation is thick. Which turns out to be true for all nations that have a national lottery so far. Give people 12 years of education and they still fancy their chances with odds of 14 million-to-one.
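The arithmetic here is easy to check. Assuming single-ticket odds of 14 million to one and (purely for illustration) 14 million tickets sold per draw:

```python
# Any one ticket almost never wins, but with millions of tickets sold
# the chance that *somebody* wins a given draw is high.
p = 1 / 14_000_000          # probability a single ticket wins
tickets = 14_000_000        # tickets sold per draw (illustrative)

p_nobody = (1 - p) ** tickets
p_somebody = 1 - p_nobody

print(round(p_somebody, 3))  # → 0.632, i.e. roughly 1 - 1/e
```

Which is the lottery-winner asymmetry in a nutshell: your odds are ~7 × 10⁻⁸, somebody's odds are ~2 in 3.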

I've started to go through Colin+ (2017) "High-redshift radio galaxies and divergence from the CMB dipole", and find myself quite perplexed. I hope a knowledgeable reader can help.

Take these snippets (my bold):

"The NVSS is a catalogue of radio sources at 1.4 GHz which ..."

"SUMSS contains 211,050 radio sources and ..."

"we randomly select galaxies from the SUMSS catalogue ..."

Is this just a typo which was not caught by anyone?

I don't think so, however, as this apparent confusion can be found in several other places, e.g. "randomly selecting a direction and counting the number of galaxies in that hemisphere ..."

Why is this important?

The physical origins of the radio sources (as found in NVSS and SUMSS) are two (caveats apply): star formation and AGNs. AGNs produce back-to-back jets, which may terminate in inter-galactic space up to an Mpc or so from the AGN; we see these as radio "lobes" (the AGN itself is called a radio "core"). So, depending on the sensitivity and resolution of the radio observatory, the angle the jets make with the plane of the sky, and the distance of the AGN from us, radio produced by an AGN may appear as one, two, or three distinct radio sources (or more ... AGN can switch off and on; they can "precess"; and so on). Not such a big deal for radio produced in star-forming regions: such regions are nearly always well within the optical boundaries of the host galaxy.

Then there's Doppler boosting. Which can wait for a different comment. As can several other physical effects which are not even mentioned in this paper.

Caveat: as the published paper is behind a paywall (I refuse to use paywalled sources), I'm using the "v3" arXiv version, which has the following Comment: "matches published version".

From the raw transcript (ctd. 11; I'm not going to even try to edit this):

"okay initially it was 3% now the claim is you can get to what almost 1% however even when the first Hubble parameter local Hubble constant measurement was done as a major project it was called the Hubble key project using the Hubble Space Telescope This was one of the flagship missions of the Hubble Space Telescope and Wendy Friedman led this collaboration and they had only about I think 50 to 60 measurements of objects on the basis of fees that determined the Hubble constant within about 30 MegaPath"

One of the Hubble Space Telescope Key Projects was "to measure the Hubble constant". Freedman+ (2001) "Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant" (The Astrophysical Journal, Volume 553, Issue 1, pp. 47-72) is likely what is being referred to.

There's a lot in this paper, as you'd expect. I'll mention just three things:

- Table 3 (Revised Cepheid Distances to Galaxies) has ~30 rows, with the number of Cepheids per galaxy ranging from 4 to 94.
- Table 1 (Numbers of Cepheid Calibrators for Secondary Methods) lists five such: the Tully-Fisher relation, Type Ia supernovae, surface brightness fluctuations, the fundamental plane, and Type II supernovae.
- "The calibration of Type Ia supernovae was part of the original Key Project proposal, but time for this aspect of the program was awarded to a team led by Allan Sandage."

Note: the video has discussion of Ia supernovae (obviously) and brief mentions of Tully-Fisher and the Fundamental plane; neither Type II supernovae nor surface brightness fluctuations are mentioned (even indirectly). Other distance methods, such as cosmic masers and gravitational lenses, are not mentioned in Freedman+ (2001) nor in the video.

A few days ago, I wrote: "I have it on my TBD list to track down all the papers mentioned (at least those directly so)"

I'm going to have to take that back. And part of the reason why also points to why it's hard for outsiders - even those with good, relevant PhDs and a track record of good to excellent research - to quickly and easily come to grips with material covered in this video.

Take Freedman+ (2001), one of the papers directly mentioned. The work reported in this paper comes up in the video, and very soon after: "and this was looked at later by McClure and Dyer I think."

How quickly do you, dear reader, think you can track down this paper? Freedman+ (2001) has 2772 citations, per ADS. That same source gives 1389 entries for author "McClure", and 2646 for "Dyer". Yes, your database search skills may get you to a paper or three quickly, ones with McClure and Dyer as authors, but how can you be sure this is what is being referred to? (That's a rhetorical question.)

And once you've found that paper (or one/s very likely to be that), how quickly can you make a robust judgement of what's in it, and how sound the conclusions etc are?

Sarkar and his collaborators are, clearly, very smart and hard working. As are the members of the JLA collaboration, including those who got the Nobel gong.

Cosmology is one of the fields you have dipped your toes into, albeit mostly from the theory side. If you have a continuing interest in experimental tests of cosmological models - which means, in practice, mostly observational ones - may I give some advice?

Look very hard for unstated assumptions about the physical nature of the "cosmological" objects being reported, e.g. Ia SNe or "radio galaxies". Also ask whether the datasets used are easily and freely available, and whether queries about them are answered promptly.

This will be particularly important once data from the SKA and Vera Rubin Observatory (a.k.a. LSST) starts to roll in.

Oh, and I think the work of Sarkar et al., per this video, is riddled with methodological flaws, making all their conclusions "unconfirmed" (at best).

Taking a broad view, ideas like this have been explored for some time now (hundreds of papers), and are sorta/kinda discussed in the video. Opinions will differ, of course, but I think serious work on this general idea will continue for some time/years yet.

As I recall, Singal found the Solar System moving at two quite different speeds (and, in one case, in nearly the opposite direction), with both findings in conflict with the CMB data.

From his paper - "Large peculiar motion of the solar system from the dipole anisotropy in sky brightness due to distant radio sources":

"Our results give a direction of the velocity vector in agreement with the Cosmic Microwave Background Radiation (CMBR) value, but the magnitude (∼1600±400 km/s) is ∼4 times the CMBR value (369±1 km/s) at a statistically significant (∼3σ) level."

Then in his paper - "Peculiar motion of the Solar system derived from a dipole anisotropy in the redshift distribution of distant quasars":

"The magnitude of the peculiar velocity thus determined turns out to be 2350 ± 280 km s−1, not only much larger than 370 km s−1 determined from the dipole anisotropy in the cosmic microwave background radiation (CMBR), but also nearly in an opposite direction."
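For readers wondering how a dipole amplitude in source counts gets turned into a velocity: the standard tool is the Ellis & Baldwin (1984) kinematic dipole relation, D = [2 + x(1 + α)] (v/c), for sources with integral counts N(>S) ∝ S⁻ˣ and spectral index α. A sketch, with illustrative parameter values (not Singal's actual numbers):

```python
# Invert the Ellis & Baldwin (1984) kinematic dipole relation,
#   D = [2 + x * (1 + alpha)] * (v / c),
# to estimate the observer's velocity from an observed dipole
# amplitude D. The defaults x = 1.0, alpha = 0.75 are typical
# radio-survey values, chosen here purely for illustration.
C_KM_S = 299_792.458  # speed of light, km/s

def velocity_from_dipole(D, x=1.0, alpha=0.75):
    """Velocity (km/s) implied by a counts dipole of amplitude D."""
    return D * C_KM_S / (2 + x * (1 + alpha))

# e.g. a dipole amplitude of 0.005 would imply roughly 400 km/s:
print(round(velocity_from_dipole(0.005)))  # → 400
```

The point being that a modest mis-estimate of D (or of x and α, which are survey-dependent) translates directly into a very different inferred velocity.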
