Image credit: Super-Kamiokande, the most successful neutrino detector of all time.

“These neutrino observations are so exciting and significant that I think we’re about to see the birth of an entirely new branch of astronomy: neutrino astronomy.” –John Bahcall

You’ve been around here long enough to know about the Big Bang. The vast majority of galaxies are speeding away from us, but more than that, the farther away they are from us, the faster they appear to be receding.

Image credit: ESA/Hubble, NASA and H. Ebeling.

But it’s more than that; when you look at a distant galaxy, because the speed of light is finite, you’re actually looking at it in the distant past. Since all the galaxies are expanding away from one another, and galaxies that are farther away are expanding away at a faster rate, this led to the idea that the Universe was smaller, denser, and also hotter in the past.

Image credit: James N. Imamura of U. of Oregon.

Going backwards in time, because the Universe was hotter, it was once so hot that neutral atoms couldn’t even form: everything was a sea of ionized plasma, filled with nuclei, electrons, and radiation. (When the Universe cooled to form neutral atoms, that’s where the cosmic microwave background comes from.) Going even further back, you can imagine a Universe so hot that even the atomic nuclei can’t hold together against the intense bath of radiation; a high-enough energy photon will blast them apart into free protons and neutrons.

But going even farther back than that, we can find a time where the radiation in the Universe was so hot that all the particles that exist, along with their antiparticles, would be spontaneously created in particle-antiparticle pairs.

Image credit: James Schombert of the University of Oregon.

This includes all the quark/antiquark pairs, all the lepton/antilepton pairs, all the gluons and photons and the weak bosons, and any heretofore undiscovered particles that might exist at even higher energies than we currently understand. Back when the entire observable Universe — now nearly 100 billion light-years in diameter — was compressed into a space smaller than a single light-year across, these particle/antiparticle pairs all existed in great abundance, spontaneously creating and annihilating in (approximately) equilibrium.

Image credit: me. Yes, on rare occasion, I can make an image!

But — as you can clearly see — that state doesn’t last for very long. As the Universe expands and cools, it becomes harder and harder to make new particle-antiparticle pairs, while the existing ones will continue to annihilate away into photons, or particles of light. Eventually, the chance of annihilating — dependent on their cross-section — will drop to such a low value that whatever exists at that time will be effectively “frozen in,” and as long as that particle is stable against decay, it will continue to exist to the present day.

We know of three such particles (and their antiparticles) that do this: the neutrinos!

Coming in three flavors to match the three types of lepton — electron, muon and tau — these are the lowest-mass particles known to actually have a non-zero mass. Even the upper limit on the mass of the heaviest neutrino is more than 4 million times smaller than the mass of the electron, the next-lightest massive particle.

Image credit: Hitoshi Murayama of http://hitoshi.berkeley.edu/.

And yet, neutrinos have an energy-dependent cross-section that becomes extremely small at low energies. By the time the Universe is about a single second old, the neutrinos and anti-neutrinos stop interacting with the rest of the matter and radiation, and simply continue to lose energy and cool with the expansion of the Universe. You may remember that this is the same thing that photons do once neutral atoms are formed, and that’s where the cosmic microwave background comes from.

Image credit: COBE / FIRAS, retrieved from Fermilab.

Only, neutrinos are slightly different than photons. Even though they have the tiniest masses of anything we know, because we know where they come from (and what the Universe was like when they stopped interacting), we know they don’t do exactly the same thing. The cosmic microwave background (CMB) of photons has an energy spectrum like the one above, with a peak at a temperature of 2.725 Kelvin.

The cosmic neutrino background should have a slightly lower temperature, at 1.96 Kelvin. (The neutrinos decoupled before the electron/positron pairs annihilated away; that annihilation dumped extra energy into the photons, which is why the CMB is slightly hotter.) But remember the important difference: unlike photons, neutrinos have a mass!

That mass, tiny though it may be, is still large compared to the thermal energy left over from the early Universe. Depending on their mass (remember, there’s still some uncertainty), they’re moving at no more than a few thousand km/s today, and probably at just a few hundred km/s.
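As a back-of-the-envelope check, both the 1.96 K figure and those speeds fall out of a few lines of Python. The two masses used below (0.05 and 0.23 eV) are illustrative assumptions, not measured values:

```python
# Relic-neutrino temperature and present-day speeds, back of the envelope.
# 1) e+/e- annihilation heated the photons (not the already-decoupled
#    neutrinos) by (11/4)^(1/3), so T_nu = (4/11)^(1/3) * T_cmb.
T_cmb = 2.725                                   # kelvin
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_cmb      # ~1.95 K

# 2) Mean momentum of the redshifted thermal distribution: <p> ~ 3.15 k_B T_nu.
#    Nonrelativistic speed is then v ~ (p / m) * c.
k_B = 8.617e-5       # Boltzmann constant, eV/K
c_km_s = 2.998e5     # speed of light, km/s
p_mean = 3.15 * k_B * T_nu                      # mean momentum, eV/c

speeds = {m: (p_mean / m) * c_km_s for m in (0.05, 0.23)}   # masses in eV
print(f"T_nu = {T_nu:.2f} K")
for m, v in speeds.items():
    print(f"m = {m} eV -> v ~ {v:.0f} km/s")
```

The lighter the neutrino, the faster it still moves today, which is why the quoted range runs from a few hundred to a few thousand km/s.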

This is a really, really interesting number.

Image credit: Jenkins et al. 1998, Astrophysical Journal, 499, 20-40.

The mass and energy of these neutrinos tell us that they’ve fallen into the large- and small-scale structures in the Universe, including our own galaxy. They make up a small percentage of the dark matter — about 0.3% of it — but cannot be all of it. (There’s about as much mass in neutrinos as there is in the stars currently burning through their fuel today. Not a lot, but still interesting!)
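That small percentage can be roughly reproduced from the standard cosmology relation Omega_nu * h^2 = (sum of neutrino masses) / (93.14 eV). The summed mass, Hubble parameter, and dark-matter density below are assumptions for illustration, so treat the output as a sketch:

```python
# Neutrino share of the dark matter from Omega_nu * h^2 = sum(m_nu) / 93.14 eV.
h = 0.7            # assumed Hubble parameter, in units of 100 km/s/Mpc
sum_m_nu = 0.06    # assumed summed neutrino mass, eV (near the oscillation minimum)
omega_dm = 0.26    # assumed total dark-matter density parameter

omega_nu = sum_m_nu / 93.14 / h ** 2
fraction = omega_nu / omega_dm
print(f"Omega_nu = {omega_nu:.2e}, about {100 * fraction:.1f}% of the dark matter")
```

With these inputs you get roughly half a percent; the exact fraction moves around with the assumed masses, which is why the figure in the text is quoted loosely.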

But what’s maybe most amazing about these neutrinos is that we have no idea how to experimentally detect them!

Image credit: Ben Still of http://pprc.qmul.ac.uk/~still/.

We can detect neutrinos, but only neutrinos with about a billion times the energy of these cosmic relics. Because of how quickly the cross-section falls off with energy, we have essentially no hope of detecting something with such a small signature. This is one of the last great untested predictions of the Big Bang, but one we’re unlikely to verify anytime soon. Despite the fact that there are hundreds of these neutrinos and antineutrinos per cubic centimeter, and despite the fact that they’re zipping around at (at least) hundreds of kilometers per second, the only interaction they can conceivably have with normal matter is via a nuclear recoil.

And a nucleus, compared to a neutrino, is large, to put it mildly. Detecting one of these recoils is more difficult than detecting the recoil of a supremely-heavily-loaded semi-truck when it collides with… a paramecium.

But there is one interesting thing we’ve learned about these neutrinos. You see, we’ve known for a long time that neutrinos are all left-handed, which is to say that their spin always opposes their momentum: their helicity is -½. Anti-neutrinos, on the other hand, are all right-handed; their spin always points in the same direction as their momentum, for a helicity of +½. Every other half-integer-spin particle we know of comes in both helicities, ±½, whether it’s matter or antimatter.

But not neutrinos. This has fueled speculation that neutrinos might actually be their own antiparticles, making them a special type of particle known as a Majorana fermion. But there’s a special type of decay (neutrinoless double beta decay) that should happen if they are; so far, no dice on that decay, and because of that, the window on neutrinos being Majorana particles is closing.

Image credit: the GERDA experiment at the University of Tübingen.

So there you have it: there are some 10^90 neutrinos and anti-neutrinos left over from the Big Bang, making them the second most abundant particle in the Universe (after photons). There are more than a billion ancient neutrinos out there for each proton in the Universe. And yet, all of these relic neutrinos — making up the cosmic neutrino background (or CNB) — are completely undetectable to us. Not in principle, just in practice, as we don’t know how to make experiments sensitive enough (or even close) to search for them. Come up with a way to detect them, and the Nobel Prize in physics will surely be yours!
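As a sanity check on that headline number, multiply the standard relic number density (roughly 56 per cubic centimeter for each of the six neutrino and antineutrino species, i.e. ~336 per cm^3, matching the “hundreds per cubic centimeter” above) by the comoving volume of the observable Universe:

```python
import math

# Count of relic neutrinos in the observable Universe, order of magnitude only.
n_per_cm3 = 336           # ~56 per cm^3 for each of 6 neutrino/antineutrino species
r_ly = 46.5e9             # comoving radius of the observable Universe, light-years
cm_per_ly = 9.461e17      # centimetres in a light-year

r_cm = r_ly * cm_per_ly
total = n_per_cm3 * (4.0 / 3.0) * math.pi * r_cm ** 3
print(f"~10^{math.log10(total):.0f} relic neutrinos")
```

This lands at ~10^89, within an order of magnitude of the quoted 10^90; the exact exponent shifts with the density and radius you adopt.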

Until then, enjoy some other neutrino fun facts, and marvel at what’s perhaps the last great unverified prediction of the Big Bang: a relic background of cosmic neutrinos!

@Zippy: that’s a very good question, with a variety of different answers depending on your preferred viewpoint.

Maybe you want to think about the particles in a pseudo-classical way, where they “actually have an identity” which changes as they oscillate. In that case, yes, the particles should change their speed along with their mass, but their momentum (p, where p^2 = E^2 - m^2, or p = mv in the nonrelativistic limit) will remain the same.

But that’s not really appropriate for quantum particles. While they are travelling, the neutrinos simply are not in a well-defined flavor state. Rather, they are a superposition of the three mass eigenstates ν1, ν2, and ν3, each of which is a linear combination of νe, νμ and ντ.

When a neutrino interacts (and only when it interacts) the coefficients of those linear combinations tell you the probability that what you’ll see is electron-type, muon-type, or tau-type. And that probability oscillates with distance from where and when the neutrino was produced, because of the mixing implicit in the superposition.
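For anyone who wants to see that oscillation explicitly, here is a minimal two-flavor sketch. The function name and all the parameter values are illustrative, not a fit to real data:

```python
import math

def p_appear(dm2_eV2, sin2_2theta, L_km, E_GeV):
    # Two-flavor appearance probability:
    # P = sin^2(2*theta) * sin^2(1.27 * dm2 [eV^2] * L [km] / E [GeV])
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative numbers: a 1 GeV beam, a 700 km baseline (the CERN-to-Gran-Sasso
# scale mentioned elsewhere in the thread), and an atmospheric-scale splitting.
print(p_appear(2.5e-3, 0.9, 700.0, 1.0))
```

Vary L or E and the probability oscillates, exactly as described: the flavor you measure depends on how far the neutrino has travelled.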

@dWj: I’m a bit confused by your question. _Before_ the neutrinos became free-streaming, they were in thermal equilibrium. That’s why they have a relic temperature. Once they dropped out of thermal equilibrium (became free-streaming), they maintained the black-body energy distribution characteristic of the temperature at that time.

The subsequent cosmic expansion while the neutrinos travelled unimpeded “stretched” (red-shifted) that black-body distribution so that today we would (if we could!) see it as being 1.96 K.

It would be great if you can talk about this proposed alternative model to expansion of the universe – a model where redshift occurs due to everything gaining mass over time:
I understand it is just a theoretical model, but very interested in hearing your take on it.

Just a quick question about your size vs. age graph for the universe. I see nothing on that graph indicating inflation. Would that have occurred at a time before that shown on the graph, or were you just ignoring inflation for simplicity? Thanks in advance.

Sorry to bother you. I think I’ve answered my own question. The size at the left end of the graph is roughly 10^-6 light years, which is some 9 billion meters. The time is ~10^-20 years, or roughly 10^-13 seconds. Obviously this would be after inflation had already occurred.

@Timothy C: You are quite correct. However, astronomers cannot measure those two parameters directly. Instead, what is done in practice is to measure the redshift of a type Ia supernova, and convert that to a pseudo-“distance” using a constant expansion rate.

Then, because type Ia SNe are “standard candles” (that is, they all have the same intrinsic brightness, or at least a light curve which is tightly correlated with intrinsic brightness), we can use their observed luminosity to compute their “true” distance.

Comparing those two distances allows you to plot the cosmological distance scale as a function of redshift (our proxy for time). If the universe were _really_ expanding at a constant rate, the comparison would be a straight line.

Acceleration vs. deceleration made specific predictions for the shape of the curve. The type Ia SNe data falls right along the curve computed for a universe which was decelerating up until about 5-ish billion years ago (z > ~0.5), and has been accelerating since then.

[ Note to Ethan: please don’t cringe too much at my summary! I’ve engaged in some oversimplification, I’m sure, and have also probably not used all the correct astrophysical jargon. But I’m pretty sure I’ve got the mechanics right. ]
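The shape comparison described above can be sketched numerically. Omega_m = 0.3 and working in units of the Hubble distance c/H0 are illustrative assumptions, and the integration is a crude midpoint rule:

```python
import math

# Luminosity distance vs. redshift in two toy cosmologies, in units of the
# Hubble distance c/H0 (so no value of H0 is needed).

def d_lum_coasting(z):
    # Constant expansion rate ("coasting"): comoving distance = ln(1 + z)
    return (1 + z) * math.log(1 + z)

def d_lum_lcdm(z, omega_m=0.3, steps=10000):
    # Flat LambdaCDM: comoving distance = integral of dz' / E(z'),
    # with E(z) = sqrt(Omega_m (1+z)^3 + Omega_Lambda); midpoint rule.
    dz = z / steps
    return (1 + z) * sum(
        dz / math.sqrt(omega_m * (1 + (i + 0.5) * dz) ** 3 + (1 - omega_m))
        for i in range(steps)
    )

z = 0.5
print(f"coasting: {d_lum_coasting(z):.3f}, LambdaCDM: {d_lum_lcdm(z):.3f}")
```

The LambdaCDM model gives the larger distance at z = 0.5, so supernovae look dimmer than a constant-rate universe would predict, which is the signature the type Ia data showed.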

So no WIMPS, no anti-neutrinos, no SUSY, no double beta-decay, etc etc. And yet dark energy had to be invented to account for a very recent speeding up of the expansion. Gee how special we are.

I know this is heresy but Occam’s Razor says that just maybe the real cause is that we’re measuring the red shift incorrectly. There are hundreds if not thousands of anomalies, yet it can never be the red shift values. String theory is about to be discarded after 40 years. It happens that sometimes our basic assumptions prove to be wrong.

Anytime we have to invent things like multi-verse, dark energy etc to explain observations we should instead look at the theory and ask why?

John Urbanik: “I know this is heresy but Occam’s Razor says that just maybe the real cause is that we’re measuring the red shift incorrectly.”

That’s right, it’s heresy. That’s why no scientist has thought of this before, or at least mentioned it out loud, much less actually tried to figure out whether the cosmic distance ladder and our red shift measurements could be wrong. They don’t want to be excommunicated from science, after all.

*gigantic eye-roll visible from space*

Guess what? “Our measurements are wrong” is the first thing every (good) scientist thinks of when their data implies something remarkable. The key is they don’t just say “Occam’s Razor” and call it a day, they actually *check*. Oftentimes, that turns out to be the case. Other times, they check and check and look for error and find none. Eventually, as more data piles up yet no source of error sufficient to make the anomaly go away appears, it becomes more reasonable to take the data as being real. Occam’s Razor says to favor the simplest explanation *if and only if the more complicated explanation has no additional explanatory power*.

Scientists want to use the simplest explanation, and are constantly trying to figure out ways to explain things with less. Sometimes, however, the universe isn’t amenable to that goal and the only way to make sense of things is by going with something more complex.

So there are “10^90 neutrinos and anti-neutrinos left over from the Big Bang” but they “are completely undetectable to us.”

Really? Maybe not.

Assuming a CNB temperature of 2 K or so, what is the highest energy of CNB neutrinos in their black body spectrum? And are instruments sensitive enough to detect these higher-energy neutrinos and antineutrinos? And I assume that we would know that they are CNB because there would be equal numbers of neutrinos and antineutrinos. Versus, I assume that more contemporary neutrino production would produce many more neutrinos than antineutrinos. But I don’t know. Please educate me.

Well anyway, then I do a search on this learning hypothesis and find this paper:
Ultra High Energy Neutrinos: Absorption, Thermal Effects and Signatures, by Cecilia Lunardini, Eray Sabancilar, Lili Yang, June 2013 http://arxiv.org/pdf/1306.1808v1.pdf
Which has this interesting sentence: “The observation of absorption effects would indicate a population of sources beyond z ∼ 10, and favor top-down mechanisms; it would also be an interesting probe of the physics of the relic neutrino background in the unexplored redshift interval z ∼ 10–100.” (last sentence of the abstract)

Which sounds to me like there is a way to observe the CNB at the high end. But I don’t know; aside from that sentence and a few others (which maybe I understand, or maybe not), I have no idea what this paper is about. Too complex for me. Anyone care to interpret?

There’s no need for the rate of change of expansion to be linear, constant or any other figure. Therefore it could be higher in the past, slower now and higher in the future with absolutely no problem whatsoever.

@OKThen: You’re confusing two different (and absurdly confusingly named) “cosmic neutrino backgrounds.” The subject of Ethan’s post is the so-called “relic neutrinos”, those which participated in interactions before the weak-interaction freeze-out, and which became free-streaming when the Universe was about two seconds old (vs. 380,000 years for the relic photons of the CMB).

What you are talking about is the also expected background of _high-energy_ neutrinos which originate in cataclysmic events (supernovae, gamma-ray bursts, compact-object mergers, etc.). These neutrinos originate from a combination of high-energy nuclear reactions and particle acceleration regions (e.g., pi+ production and acceleration in shocks, followed by pi+ and mu+ decays producing neutrinos).

Such high-energy neutrinos are free-streaming from their point of origin, but would arise from innumerable “point sources” found at all redshifts since the earliest star formations.

If you want to calculate what fraction of the relic neutrinos are high energy, you can approximate it fairly easily.

1) The peak of the black-body spectrum for 2 K corresponds to 172 μeV (micro-electron-volts).

2) A neutrino detector like IceCube is sensitive to GeV and higher neutrinos; let’s be super generous and use 172 MeV for convenience.

3) The high end of the black body spectrum is very roughly exponential: the intensity falls off as exp(-E/Epeak).

So you want to calculate exp(-10^12). I think you’ll discover that no calculator you own will give you a non-zero value.
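You can verify that futility directly; the energy ratio below uses the generous 172 MeV threshold from step 2:

```python
import math

# Ratio of a generous 172 MeV detection threshold to the 172 micro-eV
# thermal peak of a ~2 K black body: a factor of 10^12 in energy.
E_over_Epeak = 172e6 / 172e-6

# The black-body tail goes as exp(-E/Epeak); the result underflows to exactly
# 0.0 in IEEE double precision (the smallest positive double is ~5e-324).
suppression = math.exp(-E_over_Epeak)
print(suppression)
```

Exactly as predicted: the suppression is so extreme that floating-point arithmetic cannot even represent it as a non-zero number.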

I believe Ethan is writing his blog to share his knowledge and wondering about our universe with the general public. How do you know, for example, that you are not sometimes abusing an 11-year-old child who gets excited about their opinion? I’m a scientist, and I have children who are excited about science, and I am worried about internet trolls like you. Please be nicer to others.

I get the impression that neutrinos aren’t as well-understood as people suggest. They’re classed as fermions, but on brute properties a neutrino is actually more like a photon than an electron. Maybe Michael can correct me on this:

Michael, has anybody ever detected a neutrino travelling at anything other than c, or so close to c that we can’t tell the difference? Also, if we started with a fast-moving electron, would the expansion of the universe slow it down? And if so, where did the energy go?

Michael Kelsey,
Thank you for that clarification. By the way, your explanations out here are always clear and appreciated, even if I don’t comment.

John Duffield
Well, if, and I suppose they do, neutrinos oscillate between different types (which have different masses); then by E^2 = (mc^2)^2 + (pc)^2 and all that, the neutrinos’ speeds must vary as they oscillate. Or so it would seem to me. I’ve read somewhere something like that. And also, since we’ve detected neutrinos at various energies, well their momentum p and hence their velocities must be changing.

So I guess it depends on what we mean by near the speed of light, i.e. 0.99c versus 0.97c or such.

I am no expert in particle physics by any means, just an interested amateur. However my understanding is that to state that a neutrino is more similar to a photon than an electron is just completely off base. A neutrino has an intrinsic angular momentum (spin, in particle physics shorthand) of 1/2. A photon has a spin of 1 (both in units of h-bar). This may not seem like a big thing, but in particle physics it means everything. Integral-spin particles (known as bosons) behave in a manner that’s completely different from half-integral-spin ones (fermions). That distinction alone is enough to demonstrate that neutrinos and photons are nothing alike.

In general terms, the photon is a neutral, spin 1 boson with zero rest mass. If there’s such a thing as a neutral spin 1/2 fermion with zero rest mass, that would be a completely different animal, even though both would necessarily travel at speed c.

@CB I always thought Halton Arp was a scientist. And my comment about no WIMP’s is accurate. Now maybe in the future we might discover WIMP’s but right now the LHC has not.

But thanks for proving my point about heresy and open minds. I’ve been reading a lot from string theory guys trying very hard to question those that challenge them. Not to mention that Ptolemy proved over and over again that his math was correct.

John Urbanik: “And my comment about no WIMP’s is accurate. Now maybe in the future we might discover WIMP’s but right now the LHC has not.”

Yes, now that you’ve clarified that you meant no WIMPs detected yet, though we’re far from excluding them. I thought you meant ‘no WIMPs’ as in they do not exist, a premature statement. You seem to take it as given that everything else like Dark Energy doesn’t exist. Pardon my erroneous extrapolation.

“But thanks for proving my point about heresy and open minds. ”

How did I prove your point? By mocking your statement that questioning the accuracy of red shift measurements was “heresy”?

Are you actually claiming that it is really a fact that nobody has actually investigated this question? Then you’re wrong, and haven’t even checked (that thing scientists do), and are just assuming for the sake of your pre-conceived narrative where you are Galileo and science is the Catholic Church (or whatever your particular bugaboo is; it doesn’t matter).

Or you know that in fact many investigations of the accuracy of red shift and other distance measurements have been made, and the error quantified, in which case you knew your statement was nonsense from the get-go and yet still maintained it for the sake of your narrative, which would now be closer to just trolling.

There is no scenario in which the statement that questioning red shift measurements is heresy is accurate, or in which it doesn’t deserve to be mocked.

@Ray Hinde: Nice poem! To answer the (rhetorical?) question — the only way we can identify a neutrino’s flavor is to see it interact destructively.

Specifically, when a neutrino has a “charge exchange” reaction, usually with a nucleus, it turns into a charged lepton, and changes either a neutron or proton in that nucleus into its counterpart. The simplest such reactions would be, for example, nu(e) + n -> e- + p, or with an anti-neutrino, nu(e)bar + p -> e+ + n.

Lepton flavor is (as far as we have been able to measure) a conserved quantum number. So when your big underground neutrino detector sees an electron track, then you know it was a nu(e) coming in. When you see a muon, then you know it was a nu(mu). And so on.

Similarly, neutrinos are produced in conjunction with their charged partner; for example, pi+ decays to mu+ and nu(mu) [and pi- to mu- anti-nu(mu)]. So in the beamline at CERN, we know we’re getting a muon neutrino beam because we make pions which decay in the transfer line. But by the time those neutrinos get to OPERA (700-ish km away), a whole bunch of them have turned into nu(e)!

Of course, I just said that lepton flavor is conserved, so how can the neutrinos change flavor. That’s the cool “beyond the Standard Model” physics which we don’t yet understand. Keep an eye on arXiv, and maybe someone will figure it out.

@John Duffield: You can do the arithmetic quite easily. The upper limit on the mass of any neutrino is currently 0.23 eV. The best neutrino detectors we have can get down to, maybe, a few hundred MeV energy for interactions.

Velocity (beta) is given by p/E, where p^2 = E^2 – m^2. You’ll get c for the special case of m=0 (p=E). But you can work out how many nines are involved with neutrinos.

Now consider that the (corrected) OPERA velocity measurement was good to a few percent. I think you can answer your own question.

As for identifying neutrinos with photons, Sean T addressed that quite clearly and simply. Neutrinos are fermions, photons are bosons. They have different quantum statistics, and obey different Hamiltonians. Thinking they are the same reveals a fairly fundamental lack of understanding of basic quantum mechanics.
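Working that arithmetic through, with the 0.23 eV limit from above and an illustrative 200 MeV detection energy (note the naive square-root formula rounds to exactly 1.0 in double precision, so the small-mass expansion is used instead):

```python
import math

# beta = p/E with p^2 = E^2 - m^2, so 1 - beta ~ m^2 / (2 E^2) when m << E.
m_eV = 0.23      # upper limit on the neutrino mass, eV (from the text)
E_eV = 200e6     # illustrative detection energy: 200 MeV

# Naive math.sqrt(1 - (m_eV / E_eV) ** 2) would round to exactly 1.0 in
# double precision, so use the leading-order expansion instead.
one_minus_beta = m_eV ** 2 / (2 * E_eV ** 2)
nines = -math.log10(one_minus_beta)
print(f"1 - beta ~ {one_minus_beta:.1e} (about {nines:.0f} nines)")
```

Roughly eighteen nines of c, which is why a few-percent velocity measurement like OPERA’s has no hope of distinguishing a neutrino from a photon by speed alone.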

@CB no, my issue is that physics today is starting to move away from real science and open discussion. Halton Arp for decades tried to get telescope time and was denied. His findings about quasars and anomalies were dismissed without any real discussion.

But like the future potential of WIMPs, I think we will change our view on velocity and redshift. When we have to ‘invent’ dark energy so the theories can fit the observations, that begs for a future rebuttal. Else why has dark energy so recently become such a dominant force?

Science has been about dollars and funding for some time now. For a couple of decades the standard model Higgs was pushed to the backroom. There was just no way that the standard model version of the Higgs could really be the answer. In the mid-2000s many bets were placed and now we find that the SM version prevails.

As for what is science that is another huge debate that is going on. See Peter Woit and Sean Carroll and how they differ. The whole concept of the multiverse has sparked debate on if this is science or not.

In fact here is how Linde recently defended the multiverse: “The multiverse is the only known explanation so in a sense it has already been tested.”

It also seems that you took great offense to my use of the term “heresy”. I picked that word because Hilton Ratcliffe used it in the title of his book “The Virtue of Heresy – Confessions of a Dissident Astronomer”. The word was also used in a number of comments about the work Fulton and Arp have done.

I’m not saying red shift as calculated today is wrong. I am saying that because physicists had to invent dark energy I have major doubts that it is correct. BTW earlier you said no scientist would ever question redshift. What about Burbidge, Hoyle, Narlikar and Sulentic, to name a few more?

But there is one interesting thing we’ve learned about these neutrinos. You see, we’ve known for a long time that neutrinos are all left-handed, which is to say that their spin always opposes their momentum, or that they’re spin -½.

But doesn’t that mean its spin is frame dependent? For example if you had a motionless neutrino what would its spin be? Two observers in different frames would see its spin differently.

John Urbanik
“that because physicists had to invent dark energy I have major doubts that it is correct”

I’m skeptical about dark energy but not because physicists invented it. They’ve hypothesized (invented) a lot of things. But then to their credit; they test those hypotheses with incredibly difficult experiments and observations. So many, many kudos to the working physicists of the world.

Oh and by the way, physicists are extremely harsh about their own hypotheses. I mean probably 99 out of 100 working hypotheses get reviewed and skewered. But the hypotheses that survive this brutal scientific process; well, even if you are a skeptic you’ve got to be amazed and admire a hypothesis like, for example, dark energy.

I mean, it has survived as the best or leading hypothesis to date. Dark energy, dark matter, these things are incredibly strong hypotheses. And even if you don’t like them, which I don’t personally, I have to admit that there is nothing better out there.

Now as a skeptic, I of course, have to have a learning hypothesis as to why I disagree. But…. since I am not a crackpot; I don’t blather on about my personal idea here.

I might ask an important idea that my hypothesis needs (i.e that could break my hypothesis) or that could inform my hypothesis. But I do realize that to be skeptical means being skeptical of a big picture explanation, narrative and ultimately hypothesis or theory; NOT being skeptical of experiment or observation.

So when Michael Kelsey above says, “lepton flavor is conserved, so how can the neutrinos change flavor. That’s the cool “beyond the Standard Model” physics which we don’t yet understand.” That is appropriate and necessary scientific skepticism. And that’s the kind of skepticism we need; and, by the way, working scientists probably have more of this kind of appropriate scientific skepticism than you or I. Their skepticism is the skepticism that we need to emulate.

OKThen, thank you for the reply. But I want to correct a misconception here. It is not my theory about dark matter. I’m sorry if I gave that impression. I’m not smart enough to have a theory. What I do, however, is read what others write.

I am going to borrow a response that Hilton Ratcliffe sent to Cliff Saunders on the subject of Dark Energy:

“The Big Bang Model, and its forebear, General Relativity, require some tuneable factors in the equation to work properly in the mathematical sense, and included in these are both Dark Matter and Dark Energy (which actually work against each other by attracting and repulsing respectively). The theory is so weak in fact, that these dark things would comprise over 96% of the model, and the stuff we observe, experiment with, and analyse in detail less than 4%. The model urgently needs observational support, and desperation sires rose-tinted spectacles. So the rotational anomaly in galaxies filled the need. By adding arbitrary, adjustable quantities of Dark Matter (a supernatural substance), and by awarding Dark Matter just the properties it needs for the job, they get galaxies to work in terms of their model, and consequently imply that their model has received observational support. It’s tragically funny.”

The link I provided earlier is to one of his scientific papers that was submitted. So while it is easy to dismiss a person such as myself I wonder why the scientific community ignores the work of these other scientists?

Quantum mechanics is counterintuitive. One thing you’ve forgotten is the uncertainty principle. How would you know where to look for your hypothetical “stationary neutrino”? If you know that the neutrino is stationary, you have zero uncertainty in its momentum. Therefore, you must have infinite uncertainty in its position.

In reality, you will never have infinite position uncertainty, so you will therefore have non-zero uncertainty in momentum. For objects with large rest energies (i.e., mass), this doesn’t really manifest itself in any significant way. E.g., we can observe a “stationary” automobile. For quantum particles, however, we usually cannot observe such a stationary particle if we are localizing its position in some way, which is absolutely something we MUST do to interact with it and make a measurement of it.

Short answer: you can’t observe the spin of a stationary neutrino because you can’t really observe a stationary neutrino.
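To put a rough number on that: if you tried to pin down a relic neutrino’s momentum to within its own thermal momentum (about 5e-4 eV/c, an illustrative figure), the minimum position uncertainty is macroscopic:

```python
# Heisenberg bound: dx >= hbar / (2 * dp). Using hbar*c = 197.3 eV*nm lets us
# work with momentum in eV/c and get dx directly in nanometres.
hbar_c = 197.3       # hbar * c, in eV * nm
dp = 5e-4            # illustrative momentum uncertainty, eV/c (thermal relic scale)

dx_nm = hbar_c / (2 * dp)
print(f"dx >= {dx_nm:.3g} nm, i.e. about {dx_nm * 1e-6:.2f} mm")
```

A fifth of a millimetre of irreducible fuzziness, for a single quantum particle, which is one way to see why “a stationary neutrino” isn’t something you could ever point at.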

@Wow no wonder you go by WOW, because your condescension is overwhelming.

It seems you are the expert (at least in your own mind) so why not do me the favor and define science for the rest of us? Or maybe your definition will just be as self-serving as everyone else’s.

And I thank you for letting me know that scope time is hard to come by. Gee, no one would ever have known that. But that red herring does not refute why they were denied scope time. I guess it’s just easier to stick to ad hominem attacks than address the issue.

I’m sure Copernicus and Galileo also had issues with heresy. It is 100% applicable. Dogma is still dogma no matter how much lipstick is slapped on it.

But let me ask one question. Did you even bother to read the book by Ratcliffe or the link I sent? Naw it’s just much easier to bash me.

John Urbanik: “BTW earlier you said no scientist would ever question redshift. What about Burbidge, Hoyle, Narlikar and Sulentic to name a few more?”

*concussion-inducing face-palm*

You said questioning redshift was “heresy”. I mocked that by sarcastically saying that of course it was, and that’s why no scientist has ever questioned it, because they didn’t want to be excommunicated. Then, in anticipation of a severe deficiency in the sarcasm centers, I explicitly stated that this was of course nonsense and in reality many scientists are investigating the question of whether our red shift measurements are wrong.

But hey, you may not have understood the sarcasm, you may not have even understood the plain statement of fact, but at *least* you understand that your own point was wrong and it’s not heretical in the least.

Even if some people find it fun to self-assign that label.

“I’m not saying red shift as calculated today is wrong.”

That’s good, because while they may be wrong, so far as we can tell they are not — not sufficiently to make the issues go away, anyway.

“I am saying that because physicists had to invent dark energy I have major doubts that it is correct. ”

Yes well physicists also had to invent the atom, the nucleus, and a bizarre non-classical, probabilistic view of the universe to explain the data in their experiments. That’s how it works. Then you come back and say “Well they also invented the Aether” and I say yes, they invented that, and kept it until experiment showed it to be wrong. The point is scientists inventing something that you think is weird “just” to explain the data isn’t an argument for or against it. The argument is what data do we have, and how do we support it or refute it.

And Dark Energy has more than just redshift measurements to support it. The CMB also shows evidence of Dark Energy, and in very close to the same proportion as our other data. If Dark Energy was just an invention made necessary because our red shift data was wrong, then how does that same invention also work so well on a completely different observation? Is Planck’s measurement data also wrong, and in a way that it makes it appear as though all our data and our model were actually correct?

It’s the same with Dark Matter, only here there are even more separate observations that all suggest — if you assume General Relativity is correct — that there’s another kind of matter out there, and they all agree on the basic properties and the quantity that must be there.

It’s a shame Ratcliffe and, I presume, you don’t see any value whatsoever in a model having multiple different tests from observations of vastly different natures and scales that all agree. You can claim that inventing something to match one observation is just a fudge, but when that same invention also matches a completely different *kind* of observation, and then again, and again, it actually starts to be hard to argue that it doesn’t mean anything. If our models are wrong, they are wrong in a way such that Dark Matter + Dark Energy exactly fills in the wrongness in many dimensions at once. So if you want to fix the model, you end up having to work very hard to get something that will very likely still end up looking like Dark Matter and Dark Energy.

People are still trying, of course. It’s not heretical.

And of course the matter is still far from settled. Dark Matter won’t be settled until we actually discover what it is. And who knows if/when Dark Energy will ever be resolved — if it comes down to errors in the data I’ll be shocked, though I can’t rule it out.

So go ahead and doubt if DE is the best answer. Nobody can say you’re wrong. Just don’t go saying that GR+DM+DE isn’t our current best model, or that there’s no observational support, or that it’s just an invention to make *one* set of data work. Because that would be flat wrong.

CB, thank you for your response. I appreciated it. I am very curious and read all I can. It’s when I read papers and books published by those who question redshift (for simplicity; they actually question much more) and see them basically ignored instead of refuted that I begin to wonder why.

Then with the most current results from the LHC putting major doubt on finding SUSY, WIMPs, extra dimensions, etc., and saying B-sub-s is as predicted by the SM, I come to the conclusion that much of the last 20-30 years in physics just might have followed the wrong path (aka ST). I know that is a huge gross oversimplification but hopefully you get my drift.

I would think the science world would be happy that amateurs like myself show interest in science. I ask these questions on forums like this because I sure can’t on Yahoo.

As for now I’ll stop and not derail the conversation anymore. Once again thank you for taking the time to answer some of my questions.

“I’m not smart enough to have a theory… I am going to borrow a response that Hilton Ratcliffe sent to Cliff Saunders on the subject of Dark Energy… I wonder why the scientific community ignores the work of these other scientists?”

1) Yes, you and every one else is smart enough to have a learning hypothesis; IF you want to learn.
2) Quoting someone else’s theories only makes sense (in my opinion) if it supports YOUR learning hypothesis and gives you a science insight or science question.
3) “why the scientific community ignores the work of these other scientists?” that’s not a physics question. The question is why is this work of science interesting or convincing to you and WHAT do you want to understand better. e.g. see my question #13, and see answer #15 Michael Kelsey of SLAC National Accelerator Laboratory.
4) That’s amazing, Michael is out here spending time answering our questions; helping us learn.

But if I have a longer question (an off-topic question), like “read this paper and tell me what’s wrong with it scientifically,” I probably will not get a scientist to spend the time to answer my question.

Out here, take advantage of Ethan’s blog hospitality and learn. Read and question to understand. And you will learn and people out here will try to give you answers.

I really like Fred I. Cooperstock’s ideas. He’s an emeritus physics professor and has written an excellent book. But his ideas have not been accepted. I’m easy to convince. But he has to convince professional physicists and astrophysicists. Out here, I have raised my questions appropriately about his ideas and have been answered to the level that I can understand. Until I can understand general relativity a lot deeper (I’m still studying and learning) I can’t ask another question about his work. He has to argue with the professionals and defend his ideas.

Personally, I am as excited about learning how, where and why I am wrong as I am about learning that I am correct. E.g.: Well yes, oops, then that is a very big problem; that pretty much torpedoes my idea unless I can get around it somehow. Damn; but I learned!!

Read Ethan’s post and the comments http://scienceblogs.com/startswithabang/?s=theories+wrong
Even most scientists’ theories are wrong. But they are part of the discussion. We all learn by and from discussion. Discussing is very important even when we are wrong. Maybe especially when we are wrong; because if we are really able to listen and understand why our best ideas are wrong, well then we are really open to learning.

If you know that the neutrino is stationary, you have zero uncertainty in its momentum. Therefore, you must have infinite uncertainty in its position.

Well yes fair enough. But imagine two observers traveling toward each other at high velocity. Now imagine a neutrino between them. Wouldn’t they each see the neutrino traveling in a different direction and so see it with a different spin? If the observers are traveling fast enough then uncertainty should not come into it and spin becomes frame dependent.

@ppnl: The topic you want to look up is “helicity suppression.” There is not a proper Wikipedia page about it (darn those grad students, just can’t get off their duffs….), but there is a discussion of it as part of the pion (“pi meson”) page.

Basically, what you are saying is correct, for the case of a _massive_ spin-1/2 particle. For example, in the decay pi+ -> e+ nu, the electron and neutrino must come out with opposite spin (the pion is spin 0). Since they come out back to back, that means they have to have equal helicity. That’s “impossible” in a weak decay, which always produces left-handed particles (and right-handed antiparticles). So the pi-e-nu channel is forbidden.

But wait! The electron has mass, so you can boost the decay into a frame where the electron’s momentum is reversed, but the spin direction is not. In that frame, the electron has the correct helicity. So the pi-e-nu channel, rather than being absolutely forbidden, is just very highly suppressed (by the square of the ratio of electron to muon masses, times a phase-space factor).
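The size of this suppression can be checked with the textbook tree-level formula for the ratio of the two leptonic pion decay widths; this is a standard-result sketch (PDG masses, rounded), not the full calculation with radiative corrections:

```python
# Hedged sketch: the standard tree-level ratio of pion decay widths,
#   R = Gamma(pi -> e nu) / Gamma(pi -> mu nu)
#     = (m_e/m_mu)^2 * ((m_pi^2 - m_e^2) / (m_pi^2 - m_mu^2))^2,
# with PDG masses in MeV (rounded); radiative corrections omitted.
M_PI = 139.570
M_MU = 105.658
M_E = 0.511

def pi_to_e_over_mu():
    helicity_factor = (M_E / M_MU) ** 2
    phase_space = ((M_PI**2 - M_E**2) / (M_PI**2 - M_MU**2)) ** 2
    return helicity_factor * phase_space

# The helicity factor alone is ~2.3e-5; with phase space the ratio
# comes out near 1.28e-4, close to the measured ~1.23e-4.
R = pi_to_e_over_mu()
```

The remaining gap between this number and experiment is mostly radiative corrections.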

Helicity suppression is also part of the reason the neutron lifetime is so long (the other part being the very small n-p mass difference). There is no other channel for the decay, so the suppression basically just reduces the decay rate.

This argument doesn’t apply (or didn’t 🙂) to neutrinos, since they are (were believed to be) massless. Massless particles always travel at ‘c’, in every frame, and you can never boost to a frame where they reverse direction. So the whole helicity suppression issue applies only to the massive (charged) lepton in the weak decay.

With our modern understanding that neutrinos do have some mass, the possibility to flip helicity does exist in principle, and should require some additional terms in calculating the rate for helicity-suppressed decays.

However, with the neutrino mass < 0.23 eV (compared to 0.511 MeV for electrons), that effect is suppressed by powers of that roughly 10^6 mass ratio, and is going to be immeasurably small for a very long time.

“3) “why the scientific community ignores the work of these other scientists?” that’s not a physics question.”

It’s not even coherent.

The only way that statement can be true is if “ignoring” requires anything other than “accepting”.

Theories other people come up with ARE discussed. That isn’t “ignoring”.

But, like the theory of a flat earth, it is discarded after discussion as being wrong.

That isn’t ignoring either. Unless you drop ideas, you will NEVER progress. Demanding old and oft-debunked ideas be continued to be discussed is only a way to halt science.

If someone comes up with a new theory about how to counter the evidence against, it is discussed AGAIN.

This is not ignoring.

And if the evidence disagrees, the idea will be discarded.

That isn’t ignoring either.

Moreover, there are a million theories out there. If your requirement is that a large group of prominent people should be discussing each of them all the time, then you’re being idiotic: this would require billions of scientists, all prominent enough to be noticed by people interested in, but not immersed in, science.

We have some scores of thousands, maybe, but any one person would know of only a few score of those.

So not being discussed by those you know of IS NOT ignoring it. It’s you ignoring the discussion that IS going on, and being too lazy to find out where the continuing discussion is going on.

“@Wow no wonder you have WOW because your condescendence is overwhelming.”

Only when my *condescension* [note CORRECT spelling] is to someone who is actually a light year below me, kid.

Then it would be entirely correct and appropriate to be condescending, right? Looking down on someone who IS beneath you IS the right way to look at someone who is beneath you. Otherwise you’re staring over their heads, and then you’d whine about being ignored, wouldn’t you.

And since it IS condescension, in what way does that invalidate the argument? You fail to do that, making it a pointless statement. You at least avoid making it an ad-hom by not making any conclusion from it in the unrelated argument, but that just makes it even MORE pointless to point it out.

If I point out that you CANNOT SPELL, as I did in the opening here, then all I’ve done is insult you.

Just like you did.

The difference being, I don’t think that insulting someone is wrong or deserves censure if it isn’t gratuitous and is also CORRECT.

***YOU*** claimed “real science”, I asked that you define “real science”. And ***NOW*** you go on about “science”. No “real” qualifier, which only means that you acknowledge that your “real science” was solely to make out that you’re the only “real scientist” and that everyone who accepts the consensus is not a “real scientist” and isn’t doing it right.

Except you can’t define what “real science” is and will refuse to do so

Emphatically.

Indicating that you were not engaged in seeking or imparting information, only in willy-waving and denigrating people who do this for real, because you just don’t like what they do.

Heresy is the claim that a super-powerful creator of the universe is wrong.

NOBODY thinks any scientist is the creator of the universe, never mind super-powerful. Not even Tony Stark.

Therefore someone may dislike or even hate someone who disagrees with them or another person, but it never becomes heresy to do so.

The claim of “heresy” is only brought up by those butt-hurt by their crackpot theories they prefer not being accepted, or to appeal to those who want those smart people in the world brought down a peg or two (god forbid they actually improve themselves rather than bring others down: that would be saying that they didn’t deserve better or that they have to work to do better, UNPOSSIBLE!!!).

@OKthen: I agree that if neutrino mass really is non-zero, and if it really does vary, the speed really must vary too. A photon moving at c has no mass, but when you trap it in a mirror-box it adds mass to that system. It’s still moving at c, but in aggregate its speed is zero. Check out photon effective mass, and you will appreciate that there’s a sliding scale between the two extremes. You don’t have to be an expert to understand this aspect of relativity.

@SeanT: A photon has no charge, a neutrino has no charge, an electron does. A photon has no mass, a neutrino has hardly any mass, an electron does. A photon travels at c, a neutrino travels at about c, an electron doesn’t. Yes the spin is different, photons and neutrinos are different. But really, it isn’t off base to assert that the neutrino is more like the photon than it’s like the electron. Do you have a washing line? Sight your eye down it, and twang it. The transverse wave you see is an analogy for a photon. Now grip the line and twist it, then let go. The rotational wave you see is an analogy for a neutrino.

@Michael: if you can’t slow down a neutrino, it really can’t be very similar to an electron at all. In this respect, it’s definitely more like a photon. Like SeanT said, it’s not the same as a photon, but it’s even less like an electron. With respect, I would urge you to consider this in light of the “beyond the Standard Model” physics which you don’t yet understand. Also look at TQFT and the standing-wave nature of the electron, and gamma-gamma pair production. See http://en.wikipedia.org/wiki/Two-photon_physics which claims that pair production occurs because pair production occurs? That’s wrong. By the way, do you know of any experiments featuring neutrinos and electrons where positrons have showed up?

As you say : “…because the speed of light is finite, you’re actually looking at it in the distant past. Since all the galaxies are expanding away from one another, and galaxies that are farther away are expanding away at a faster rate,….”

Distance is irrelevant, because we are not in the same time frame. Time is key. So the right conclusion should be: the farther back in time we look, the faster the galaxies appear to be expanding. And that is a huge difference in concept.

Ok, as I poorly understand it, helicity suppression works exactly because you can boost to a frame where the electron appears to be traveling in the opposite direction.

But if we take this statement at face value:

” You see, we’ve known for a long time that neutrinos are all left-handed, which is to say that their spin always opposes their momentum, or that they’re spin -½.”

Then helicity suppression cannot work on neutrinos because they will always be seen as spin -1/2 particles in any reference frame. They cannot be +1/2 particles because that would be the antiparticle.

This seems to be briefly mentioned in the wiki article on neutrinos:

“This change in spin would require the neutrino and antineutrino to have nonzero mass, and therefore travel slower than light, because such a spin flip, caused only by a change in point of view, can take place only if inertial frames of reference exist that move faster than the particle: such a particle has a spin of one orientation when seen from a frame which moves slower than the particle, but the opposite spin when observed from a frame that moves faster than the particle.”

So in effect the neutrino is a particle or an antiparticle depending on what frame of reference it is viewed from. In this case helicity suppression works but… I don’t know. This would seem to violate the symmetry between matter and antimatter. Well that symmetry is known to be broken so maybe that’s ok.

Since he was so quick to basically say I’m stupid because of spelling issues then does this mean he now fits his own definition?

“So you can’t define science, then? You admit it?”

And a tad more reading comprehension would reveal that my definition would be no more correct than yours.

But since you keep asking I would define real science as a system that uses observation and experimentation to describe and explain natural phenomena. And the scientific investigations need to use the scientific method.

“Heresy is the claim that a super-powerful creator of the universe is wrong.”

Heresy is actually defined as “1. opinion or doctrine at variance with the orthodox or accepted doctrine”. The definition is then further expanded to religious institutions. Of course other dictionaries have a different order in their definition.

I’ll let you have the last word as I’m off to friendlier forums. But if you ever wonder why your funding may get cut then maybe ask yourself if you’ve done all you could to be nice to the interested taxpayers.

Finally, I laughed at this line “A 400m runner hasn’t run at zero speed because he ended back where he started.”

See what proof is there that the runner actually moved? Maybe the runner was stationary. There is no observation to prove the runner was in motion, and both theories are valid till observation can prove whether the runner moved or was held in place.

@ppnl — I think you’ve caught the gist of what’s going on with helicity suppression. If you really want a deeper understanding, you’re going to have to get through at least a first-year quantum field theory course, together with a good class on group theory for physicists.

With massless neutrinos, there doesn’t exist any inertial frame into which you can boost them to flip their spins. If neutrinos _do_ have mass (and they must have mass in order for the observed flavor mixing to occur), then such a frame can exist.

Therefore, in principle, there could be a small probability to find right-handed neutrinos (or left-handed anti-neutrinos). However, we don’t have any mechanism within the Standard Model to produce them (the weak interaction couples _only_ to left-handed leptons). Which is one of the several reasons why neutrino mixing is so interesting to the HEP community.

@John Duffield: A neutrino is as completely _unlike_ a photon as you can get. It does not interact with the electromagnetic field, which means it doesn’t interact with electrically charged or magnetic entities. It only interacts with the weak field (W+, W- and Z0 particles). Its interaction with the weak field is 100% parity asymmetric (while EM is 100% parity symmetric). Neutrinos are _only_ produced in particle interactions either in pairs by themselves (a neutrino and an antineutrino), or in conjunction with a flavor-matched lepton (e+ and nu(e), e- and anti-nu(e), etc.).

If you don’t know enough about the state of the art in particle physics to understand this, then you really don’t know enough to competently criticize that state of the art.

That trapped photon is only going nowhere, its speed isn’t zero, it’s still moving like billy-o, but just back and forth.

“A 400m runner hasn’t run at zero speed because he ended back where he started.”

“See what proof is there that the runner actually moved?”

When you watched him run, idiot.

When did you see a photon stop?

Mr. Wizard’s analogy is FAIL. LOL A trapped photon can’t move by definition because it’s TRAPPED. If it can move it isn’t really trapped but CONFINED to a specific area. A very real and big difference. In addition if we observe the runner moving then that means the runner is somewhere along the path of the 400m track. We see the runner in different positions and the runner is not TRAPPED based on our observation. Yet if our only snapshots view the runner or photon in the same exact spot we cannot determine if the runner was stationary or moving. So Mr. Wizard your analogy is epic fail.

I wonder if you’re ever going to engage brain?
The above is a statement and is not a question so I guess this spotlights your inadequacies at punctuation.

Ridiculous, kid, a light year below me, kid, moron, MORONIC BUFFOON, idiot.
I would have thought someone that is a light year ahead could come up with some better insults. I’ve heard much more creative, ingenious and interesting insults walking down the street of any major city. Mr. Peabody it seems you are stuck at the Sesame Street level of insults.

The only way that statement can be true is if “ignoring” requires anything other than “accepting”.

Gee someone as smart as yourself should have known that there is the other possibility that I can REFUTE a statement and that would classify as not ignoring it. Please don’t tell me that this possibility escaped that steel trap mind of yours.

Demanding old and oft-debunked ideas be continued to be discussed is only a way to halt science.

So how soon will this be said about string theory? How ironic if that is your major field.

I’ve been pondering why you have so much built up anger. So I’ve come up with a theory to explain it. Since anger can easily be attributed to built up sexual frustration, I’ve concluded that you’ve never been laid! It completely fits all observations to date. The corollary from this is the requirement to try and produce alpha-male acts of superiority which you attempt all the time based on your responses to others. Or are you just trying to massively over compensate for something?

@Michael:
“Then, because type Ia SNe are “standard candles” (that is, they all have the same intrinsic brightness, or at least a light curve which is tightly correlated with intrinsic brightness), we can use their observed luminosity to compute their “true” distance.”

But didn’t a study by Richard Scalzo of Yale University call into question the validity of using Type Ia SNe as standard candles?

And there was a paper released in 2011, with three Nobel Prize winners among the authors, that studied the SN 2004dt, SN 2004ef, SN 2005M and SN 2005cf supernovae and concluded the light curve did not fit that of a Type Ia SN as defined for a standard candle.

And Bradley Schaefer and Ashley Pagnotta of Louisiana State University in 2011 also found a case where two white dwarfs were responsible for a Type Ia SN. This calls into question the “standard” portion of the standard candle.

Ratcliffe, Arp and others have always said that metallicity of the supernovae themselves, as well as the size, density and chemistry of their host galaxy can all impact the brightness and light curve we observe.

At some point the simple weight of all these anomalies should overcome the intrinsic resistance, and I predict profound changes will happen in the world of physics. Sort of how the LHC has almost completely burst the bubble of many physicists today.

@WOW you are really cracking me up. So much built in anger and frustration from you. I know you were never picked for any sports team and still looking for your first “living” and/or human girlfriend, so your comments just make me laugh.

The problem with questions about measurements at the edge of our abilities, and the point I believe Wow was making (which unsurprisingly didn’t come across well), is that you can raise lots of issues with them, but by their very nature there isn’t enough data to actually resolve the issue and show that we are really okay, or really wrong. So what is there to say? In the meantime the measurements appear to be reasonable, the results match up with other completely different observations, so most of science moves forward as though those measurements are correct within their error bars until such time as we have sufficient evidence to say otherwise.

The issue with Type Ia SNe isn’t as bad as it seems. While they’re called “standard candles”, which invites the assumption that they all go off with the same magnitude, the truth is closer to them being “standardizable”; only the shape of the light curve graph is the same, not the actual values. So a double-degenerate supernova is okay even though it will exceed the mass at which a single-degenerate Type Ia would occur; single-degenerate events vary more than you may have been led to believe anyway. Also, any “weird” events that don’t fit the model can typically be identified and not used for distance calculations.
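For what it’s worth, once a light curve has been standardized, the distance inference itself is simple. Here is a minimal sketch; the peak absolute magnitude of -19.3 is an assumed round number, and extinction and K-corrections are ignored:

```python
# Hedged sketch (not a real calibration pipeline): once a Type Ia
# light curve has been standardized, an assumed peak absolute
# magnitude M ~ -19.3 plus the observed apparent magnitude m gives
# a luminosity distance via the distance modulus
#   m - M = 5*log10(d_pc) - 5.
# Extinction and K-corrections are ignored here.
M_PEAK = -19.3  # assumed standardized absolute magnitude

def luminosity_distance_mpc(m_apparent, m_absolute=M_PEAK):
    mu = m_apparent - m_absolute       # distance modulus
    d_pc = 10.0 ** (mu / 5.0 + 1.0)    # distance in parsecs
    return d_pc / 1.0e6                # convert to megaparsecs

# A Type Ia peaking at apparent magnitude 24 lands at roughly
# 4.6 Gpc; one peaking at magnitude 14 at roughly 46 Mpc.
```

The standardization step (light-curve shape and color corrections) is where all the real work, and all the debate above, lives.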

John Duffield:

The statement that neutrinos are more like photons doesn’t contribute anything. The only sense in which they are similar is that their typical velocities are close to c, and can’t be slowed down.

Well right away there’s a problem as the photon’s velocity is always exactly c, while a neutrino’s velocity is *never* c, and can in fact be anything — this very article is about (predicted) neutrinos with velocities as low as 100km/s.

And then there’s “slowing down”. Neutrinos can be slowed down in an actual physical sense, but it is ridiculously impractical to do so as they interact so rarely. Photons, on the other hand, can never be actually slowed down such that they propagate at less than c, but can *trivially* be slowed down in a practical sense via passing them through a medium, as they interact readily with matter.

What insight, then, does this “they both go really fast and are hard to slow down” comparison give? The comparison of “fast” is a special (if common) case and not an essential point of similarity between the particles, and the senses in which they “can’t be slowed down” are completely and utterly different and therefore meaningless. So photons and neutrinos are at best *metaphorically* similar, and that metaphor breaks down the second you look closely at the actual behavior.
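To make the two senses of “slowed down” concrete, here is a small sketch; the 1 MeV energy, 0.2 eV mass, and n = 1.5 are all assumed illustrative numbers:

```python
# Hedged sketch of the two very different senses of "slowed down";
# the 1 MeV energy, 0.2 eV mass, and n = 1.5 are assumed
# illustrative numbers, not measured values.
import math

C = 2.99792458e8  # speed of light in vacuum, m/s

def neutrino_speed(total_energy_ev, mass_ev):
    """Speed of a massive particle: v = c * sqrt(1 - (m c^2 / E)^2)."""
    return C * math.sqrt(1.0 - (mass_ev / total_energy_ev) ** 2)

def photon_speed_in_medium(n):
    """Phase velocity of light in a medium of refractive index n."""
    return C / n

# A 1 MeV neutrino with an assumed 0.2 eV mass moves slower than c
# by only a few micrometers per second, while light in glass
# (n ~ 1.5) propagates at only about 2e8 m/s.
deficit = C - neutrino_speed(1.0e6, 0.2)
```

One “slowdown” is pure kinematics of a massive particle; the other is an interaction effect in a medium, which never applies to a neutrino in practice.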

What points of comparison remain? The actual quantum mechanical properties of the particles. And in this sense neutrinos and electrons are vastly closer than neutrinos and photons. Just by saying that both neutrinos and electrons are massive leptons, I’ve just described about half a dozen different properties that are shared between electrons and neutrinos, in an actual non-metaphorical sense, and which are not shared by photons.

The only thing shared by neutrinos and photons in a non-metaphorical sense is that they are quantum phenomena obeying the Uncertainty Principle. This doesn’t tell us anything other than that Quantum Mechanics is a good theory for describing lots of different phenomena.

Oh, I forgot about the whole “they’re both neutral” comparison, but that’s because it’s arguably the worst. See, a neutrino is neutral (and not, afawk, a composite of charged particles like a neutron) and therefore has no interaction with the EM field at all. On the other hand, the photon IS a disturbance in the EM field, so it has EM interactions with charged particles despite being neutral. If the photon were charged, it wouldn’t just change which EM interactions occur and what nuclear processes were allowed; it would change the very nature of the electromagnetic force and make QED look a lot more like QCD.

So even the property that is actually physically the same is still completely different in kind.

I know you said you know all this, and that we should think really hard about this in light of that, and, well, I did and I came up with “useless red herring”. Why don’t you help us out. Why don’t you tell us in what way this metaphorical-at-best physically-completely-false “similarity” between photons and neutrinos suggests physics beyond the Standard Model?

To me, the only thing the neutrino-photon comparison suggests as far as looking for new physics beyond the SM is to find a reason for the unexpected neutrino mass. As in it’s the difference, not similarity, that is interesting.

Science is a method of understanding nature through the following iterative method:
1) Guess how nature might work.
2) Calculate the consequences of that guess.
3) Compare those consequences to observation to see if the guess may be correct.

That’s it. You can define it however you want. If that’s at the core of your definition, then I agree, and if it isn’t, then I disagree.

Now of course there’s lots of details in the process of figuring out what the consequences of a guess are, and whether it matches with nature. Trivialities as far as the definition is concerned. And of course your guess could be directly inspired by observation; it’s a cyclical process with lots of feedback.

The more interesting case is when we have a guess but have yet to get through all the steps. Is String Theory science? Well it took a while before they computed any consequences that weren’t precisely the prediction of either GR or QM (not that this is a bad place for a theory to be). Now they have them, and unfortunately the necessary experiments either are or easily could be outside our ability to perform for the foreseeable future.

I say that it is still science, because they are following the above process. Nothing in the definition ever said making the necessary observations would be *easy*. Nature need not accommodate us in that way. However if you want to say that isn’t science then that’s just fine with me — excepting that you mean it isn’t science *yet*, not that it isn’t science in the same way astrology is not science and therefore should not be pursued by scientists. If we stopped every line of inquiry that hadn’t reached step 3 yet as “not science” we wouldn’t have gotten very far.

And to be clear, I don’t favor String Theory. I admire the mathematical achievement of unifying GR and QM in a single framework, but that admiration means bupkis as far as physical reality goes. I think it’s quite likely that at the *very least* those who hoped String Theory would solve some of the major questions in physics today have spent their careers in vain. But is it really in vain to find a promising idea and follow it through, only to find out it is wrong?

Well, I’d say no, but either way you’re talking about the majority of scientists throughout history, for at least some part of their career.

* I’m not ashamed to admit that my confidence comes from the fact that I’m cribbing.

Someone did find one, but the correlation was poor and it was a good many years ago now.

There were multiple possible causes that had nothing to do with the physics of neutrinos, however, so I can’t remember if it ever got officially debunked or just ignored because it really wasn’t going anywhere.

“the truth is closer to them being “standardizable”; only the shape of the light curve graph is the same, not the actual values”

It’s more that the shape of the light curve shows the physics going on and, since the physics has to be the same to give the same shape of curve, the same physics gives the same light magnitude.

The biggest error is estimating the extinction of light by the intervening gas and dust. Going from a star of 2% metallicity to one of 0% changes the values by an amount that is vanishingly small compared with the errors that the interstellar dust fraction introduces.

@Wow #76, regarding type Ia SNe light curves: In fact, CB was more precise than I was (yes, I simplified for the sake of argument). There do turn out to be multiple characteristic light curves involved, which makes the “standard candle” argument more complex.

First, there is some extinction involved, as you say, but that is fairly easily calibrated using nearby sources along the line of sight.

Second, the width of both the initial peak and the long tail are affected by the source’s motion, both cosmological and peculiar. So you have to apply a temporal scale factor to “correct” the light curve back to its own rest frame. It’s a bit touchy (but justifiable with good numbers) when the result of that correction is then used to infer a cosmological distance.

Third, the metallicity of the progenitor WD does have a noticeable effect on the initial peak shape. That in turn can influence both the scale correction above as well as the computation of absolute luminosity.

Would that be because of the improvements in discerning the effect of extinction? At some point, the error that introduces is going to drop low enough for something else to become relatively significant.

CB: regardless of how you classify neutrinos, there is an issue wherein neutrino “typical velocities are close to c, and can’t be slowed down”, along with “this very article is about (predicted) neutrinos with velocities as low as 100km/s”. It would be nice if Michael could assist us with this.

Has anyone given consideration to what the purpose of neutrinos might be in the grand scheme of things?

In the book ‘Neutrino’, Frank Close states that “the Sun produces ~2 x 10^38 neutrinos every second.” This is a vast number when one considers that there are, on average, 100 billion stars in a galaxy and 350 billion galaxies, amounting to ~3.5 x 10^22 stars within the universe. All these stars produce vast quantities of neutrinos, and this has been going on continuously for most of the lifetime of the universe. Neutrinos are expected to be stable, so it can only be presumed, as there is no redundancy within nature, that nature has intended a practical utilization for all these neutrinos!

Using @CB’s notion (or was it Richard P. Feynman’s?) — “1) Guess how nature might work” — is the same approach I have used in my paper, although with a bit more methodological reasoning involved:

Classification is irrelevant except as a tool for our brains. Actual properties are relevant, and photons/neutrinos share essentially none.

So if I understand you, the “issue” you mean is an apparent contradiction between the difficulty of slowing down neutrinos via particle interactions and the predicted CNB neutrinos being relatively slow. Is that correct?

Assuming so, that’s not a contradiction at all. The statement that neutrinos “can’t be slowed down” is a *practical* statement about the difficulty of slowing them down via particle interactions since those are so rare but does not imply that neutrinos with lower velocities are forbidden. The CNB neutrinos weren’t slowed by particle interactions, but by the expansion of space.

Just like how the CMB photons were also cooled by the expansion of space. But because photons are massless and REALLY can’t be slowed down in an actual physical sense, they still travel at c but have a longer wavelength.

This is precisely what one would expect if you had two different particles, one with mass, one without, that were emitted everywhere early on in an expanding universe. It’s their differences that make the difference.

Sorry I didn’t have time to do more than peruse the first few pages of your paper, so pardon if this is answered later. It appears as though you are predicting that the electric and magnetic forces can be separated below a certain energy like the electroweak separates into electromagnetic and weak in our common experience. At what energy level does your theory predict this occurs?

Deep space observations have profoundly changed our view of the universe. This has resulted in what is referred to as the standard model of cosmology, the Lambda-CDM model. According to this model, current times are absolutely unique. It tells us that the recently observed acceleration of the cosmic expansion just kicked in, and that it is only around the current age of the universe that this acceleration can be observed. At earlier times the acceleration would have been too small to be observable, and at later times distant galaxies will have accelerated out of view, rendering the cosmic acceleration again unobservable. The cosmic acceleration happens to be observable around the time cosmologists populate planet earth.

Copernicus taught us something profound: if you think your situation is special, you should probably think deeper.

Accelerated expansion does not just make late deep space observations impracticable, but rather poses strict fundamental limits. According to the Lambda-CDM model, ultimately all distant galaxies will permanently accelerate out of sight beyond a cosmic horizon that can effectively be considered ‘the edge of the observable universe’. It doesn’t matter how big a telescope future generations will be able to construct, if all distant galaxies have accelerated out of the observable universe, those future generations will not be able to even get a hint of the cosmic acceleration.

The net effect of all of this is that we are here just at the right time. Neither earlier astronomers nor later astronomers would be able to correctly predict the dynamics and ultimate fate of the universe. It is only us who can accomplish this feat.

How come we, scientific heirs of Copernicus, end up with such a strangely anti-Copernican standard model of cosmology? Are we really living at the very right moment in a mediocre location? Or is Lambda-CDM ready for revision?

@John Duffield: You seem to have some confusion about basic physics, namely, the momentum/energy relationships in special relativity. First, we’ll cover the basic numbers, and show that neutrinos do not travel _at_ the speed of light; then we’ll answer the question of how to change the speed of a neutrino.

The fundamental equation, which applies to all particles everywhere, is E^2 = p^2*c^2 + m^2*c^4. Particle physicists, like myself, hate carrying around pointless constants, so let’s work with a system of units where we can drop all the ‘c’s in casual conversation: Energy has units of eV (or MeV, GeV, ueV, etc.), momentum has units of eV/c, mass has units of eV/c^2, and we’ll let c=1, so that the special relativistic parameter beta = v/c is equivalent to speed.

With those conventions, *ALL* *PARTICLES* *EVERYWHERE* obey the fundamental equation E^2 = p^2 + m^2. From the definition of the Lorentz boost gamma = 1/sqrt(1 – beta^2), we have E = gamma*m, p = beta*gamma*m, so v = p/E. Massless particles, like photons, gluons, and gravitons, are a special case. Gamma is undefined for them (since v=beta=1), but for m=0, the fundamental equation simplifies to E = p, which is perfectly well defined.
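Those relations are easy to sanity-check numerically. Here is a minimal Python sketch (the electron mass of 511 keV and beta = 0.8 are purely illustrative choices) verifying the invariant E^2 − p^2 = m^2 and the velocity relation v = p/E:

```python
import math

def kinematics(m_eV, beta):
    """Given mass (eV/c^2) and beta = v/c, return (E, p) in natural
    units with c = 1: E = gamma*m in eV, p = beta*gamma*m in eV/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * m_eV, beta * gamma * m_eV

E, p = kinematics(511e3, 0.8)   # an electron at v = 0.8c, say

# The fundamental equation holds: E^2 - p^2 = m^2
assert math.isclose(E**2 - p**2, (511e3)**2, rel_tol=1e-9)
# And the velocity comes back out as p/E
assert math.isclose(p / E, 0.8)
```

The same two checks pass for any massive particle; for m = 0 the function blows up, exactly mirroring the point that gamma is undefined for massless particles.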

The fact that gamma is undefined is basically why helicity suppression (see one of my much earlier comments) applies to the charged (massive) leptons but not to neutrinos. Without gamma, you can’t apply a boost (massless particles travel at the same speed, c, in vacuum in all frames).

So now, John, let’s do the arithmetic to work out the speeds of the cosmic relic neutrinos, and let’s compare that to the speeds of neutrinos from radioactive decay, and from neutrino-beam experiments. We’ll do the last one first.

0) The mass of the most massive neutrino is known, from cosmological limits, to be < 0.23 eV. We'll take that upper limit as the value, for convenience.

1) Neutrino beams from accelerators have energies in the GeV range (from about 0.5 GeV or so up to 10-20 GeV). We want to compute the velocity = p/E for those neutrinos. So going back to the fundamental equation, v = sqrt(E^2 – m^2) / E = sqrt(1 – (m/E)^2). Let's be generous, to get the slowest neutrino we can from our beam, and use a beamline with E = 0.23 GeV. Plug in the numbers, v = sqrt(1 – (0.23 eV/0.23 GeV)^2) = sqrt(1 – (10^-9)^2) = sqrt(1 – 10^-18). I leave it to you to count the nines.

2) Radioactive decay. Beta decays are three-body decays, so the electrons and neutrinos have a broad spectrum with a cut-off at a few MeV, and a peak just a little below that. So let's be generous and suppose we have a neutrino from beta decay with an energy of 0.23 MeV. Plug in the numbers, same as before, and v = sqrt(1 – (0.23 eV/0.23 MeV)^2) = sqrt(1 – 10^-12). Again, I leave it to you to count the nines.

3) Relic cosmological neutrinos. As discussed in Ethan's post, these have a black-body spectrum which today has a temperature of 1.96 K. As noted in my earlier posting, that corresponds to 172 ueV at the peak of the spectrum. Since this is far below the rest mass of the neutrinos (0.23 eV), we know these neutrinos are non-relativistic, where v = p/m. What redshifts for a thermal relic is the momentum, and a typical momentum in the distribution is p*c ~ 2kT ~ 344 ueV, so v = 344 ueV/0.23 eV = 0.0015 = 448 km/s (multiplying by c to get back to conventional units). That's for a typical neutrino of the spectrum. The low energy tail certainly includes a non-trivial rate of neutrinos at Ethan's quoted 100 km/s level or below.
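The three cases above amount to a few lines of arithmetic. A hedged Python sketch (taking the 0.23 eV cosmological upper limit as the mass, as in the text, and pc ~ 2kT ~ 344 ueV as a rough typical relic momentum):

```python
M_NU = 0.23       # eV: cosmological upper limit, used as the mass
C_KM_S = 2.998e5  # speed of light in km/s

def speed_deficit(E_eV):
    """Return 1 - v/c for an ultra-relativistic neutrino of total
    energy E. Since v = sqrt(1 - (m/E)^2) ~ 1 - (m/E)^2 / 2 for
    m << E, we return the tiny deficit directly, which sidesteps
    the floating-point cancellation in computing sqrt(1 - 1e-18)."""
    return 0.5 * (M_NU / E_eV) ** 2

# 1) Accelerator beam, E = 0.23 GeV
print(speed_deficit(0.23e9))   # ~5e-19: v is 1 minus five parts in 10^19

# 2) Beta decay, E = 0.23 MeV
print(speed_deficit(0.23e6))   # ~5e-13

# 3) Relic neutrinos: non-relativistic, v = p/m with p*c ~ 344 ueV
v_relic = 344e-6 / M_NU        # in units of c
print(v_relic * C_KM_S)        # ~448 km/s
```

The deficit trick matters: evaluating sqrt(1 - 1e-18) naively in double precision just returns 1.0, which is precisely why "counting the nines" is left as an exercise.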

We've just shown that they do _not_ travel at 'c', but at a velocity which depends on their energy and momentum, just like every other massive particle in the Universe. How do you slow the neutrinos down? The cosmic relic neutrinos "lost energy" due to the cosmological expansion, just as the photons (and presumably relic gravitons) did. But what about terrestrial neutrinos, with a nice fat momentum?

Well, you shoot them through material, just like shooting bullets through water to slow them down. A neutrino can interact in two ways — elastically (like a billiard ball bouncing off something) or inelastically (changing into something else, i.e., a charged lepton, and producing fragments from whatever it hit). Let's ignore the inelastic collisions, since the neutrino disappears and isn't interesting any more.

In an elastic collision, the neutrino hits a nucleus (which has a much higher cross section than hitting an atomic electron), and transfers some of its energy to that nucleus. The fractional energy transferred is approximately the mass ratio (delta E/E ~ m(nu)/m(A,Z)), which, if you followed the math above, is about 10^-12 (assuming something like tungsten or lead as your target). So it will take a lot, a lot, a lot of interactions to get the neutrino's energy down by a factor of two, let alone down by a factor of 10^12 (compare (1) and (3) above).
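To put a rough number on "a lot, a lot, a lot": a back-of-envelope Python estimate (a lead nucleus is assumed as the target; the per-collision loss and the collision count are order-of-magnitude only):

```python
import math

m_nu = 0.23            # eV: cosmological upper limit on the neutrino mass
m_Pb = 207 * 931.5e6   # eV: mass of a lead nucleus, A ~ 207

# Fractional energy loss per elastic collision, delta E/E ~ m(nu)/m(A,Z)
frac = m_nu / m_Pb
print(frac)            # ~1.2e-12

# Each collision multiplies the energy by (1 - frac), so halving the
# energy takes roughly ln(2)/frac collisions
n_halve = math.log(2) / frac
print(n_halve)         # ~6e11 collisions just to halve the energy once
```

And that is before folding in how rare each individual neutrino interaction is in the first place, which is the real reason nobody sees neutrinos slow down in practice.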

So there is no reason in _principle_ why neutrinos "can't slow down" (as you have misconceived things), it is merely that the relative energies and cross-sections involved are so tiny that on practical, experimental scales, we don't see it happening.

@Wow #78: As I understand it, the extinction (scale height along the line of sight) can be calibrated using quasars and other very bright objects, by comparing multiple spectral lines from a single source. I am *NOT* an astrophysicist, so I cannot speak to methodology, and trust that when the authors of peer-reviewed papers say this works, it does (to whatever level is quoted in their error bars).

You are absolutely right that as observational techniques improve, the uncertainties on each SN data point will go down. Also, improved observations have already extended, and will continue to extend, the reach of type Ia SNe out to relatively high redshift (as I recall, there are now points out at z~6 or 7).

The point is that the expected curves for flat vs. always decelerating, vs. accelerating deviate more and more as you get out to higher redshift. If your data points get (a) smaller uncertainties, and (b) more lever arm out to higher z, then your fitted curves become a better and better match to “reality,” whatever that is.

It *could* have turned out that the data taken in the 90’s and early 2000’s contradicted the earliest acceleration results, regressing to the mean of simple deceleration. But that’s not what happened in practice. Instead, the SNe data, along with other observations, including CMB, weak lensing of distant clusters, and more, regressed to a mean that corresponds to an accelerated expansion which began 5-6 billion years ago (z somewhere around 0.5).

If you use the simplest equation of state for whatever-might-be-driving-that-expansion (specifically, a single constant number corresponding to Einstein’s Lambda), then you end up with the total mass-energy budget of the Universe having 70-ish percent whatever-that-is, 23-ish percent non-baryonic matter, and the rest stuff we know and love.

I think you have misunderstood the Copernican Principle. It does not say that there should be nothing at all interesting, convenient, fortunate, or in its particulars unique about your situation — otherwise the fact that earth is almost certainly unique in its fine details would disprove it. It certainly does not say that the evolution of the universe must be static so that it makes no difference when you are observing. It just says that your location should not be special as regards the operation of the universe.

Compare with the pre-Copernican view which put the earth at the literal center of the universe with the sun and planets, plus all the fixed stars, rotating around earth. That’s how you give the earth a special position.

Now what’s so special about the time we live in? Well, nothing as far as we know. “The point where humans started studying the cosmos” has been granted no special place in the laws of nature. Now are we *fortunate* that we live in a time where the effects of Dark Energy are measurable but have not yet rendered the rest of the universe invisible? Er, well, in a sense, but not very. We’re talking about a window that extends billions of years into the past, and perhaps a hundred billion years into the future, many times the current age of the universe. How “special” is our living in that time window?

And then think of all the other ways in which our study of the cosmos could be foiled (or enhanced) by particular circumstances. In just a handful of billions of years from now, Andromeda and the Milky Way will have merged, likely forming an enormous elliptical galaxy. If we’d been born at that time, the night sky would be a nearly uniform glow and it would be very difficult to see past it to the wonders beyond. There’d be no convenient Great Nebula in Andromeda for scientists to study and eventually figure out to be its own island universe at a previously unheard-of distance. Or what if we were orbiting a star inside a very large nebula? We’d be able to see basically nothing of what we see from earth, and who knows if astronomy would have ever begun, since there wouldn’t have been much interesting to look at.

So are we in a sense lucky that this isn’t the case and we conveniently have such a fine view of the heavens? Yeah. Much more lucky than having been born in the “very right” 100 billion year “moment”. But in an anti-Copernican sense? No. Earth still isn’t special in the grand scheme of things, it’s just one of trillions of rocky worlds in our galaxy that we happen to live on. Same with the time we live in.

Lambda-CDM will be ready for revision when there’s a compelling reason to do so. I don’t know what you have against it, that you feel the need to fish for these “scientists are ignoring basic principles of science!”-type arguments, but they hold no weight. I can understand if you just don’t like it because it’s weird and confusing and against common sense — but so has all of physics for over a hundred years. Time to get over it.

I would think that the simple fact that the Lambda-CDM model ignores quantum mechanics would be enough to call it into serious question. The biggest problem in Lambda-CDM is the way electromagnetic fields are handled, or sometimes simply ignored, in the stress-energy tensor of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric.

Lambda-CDM has a fundamental weakness dating from the 1934 book by Richard Tolman. First he treated a static DC electromagnetic field by the method of Reissner–Nordström, where the field counteracts gravity and bends space backward, causing time to pass more quickly and masses to repel each other. In later times it led to theories of accelerating galaxies driven by faint electromagnetic fields instead of dark energy. Then in the next paragraph, Tolman treated alternating AC electromagnetic fields as contributors to positive curvature, increasing the strength of gravity, because of the deflection of starlight passing near the sun.

It seems a bit irrational to have two different curvatures resulting from electromagnetic fields. Why can’t dark energy be eliminated from Lambda-CDM if the bending of light near the sun could be attributed to a tiny decrease of light speed in a gravitational field?

From what I’ve read changes to Lambda-CDM are being rejected in favor of finding a new theory with more dimensions. Six is a common number used.

As far as the rapid expansion is concerned, it is a “special event,” unlike your examples of galaxies colliding, which happen all the time.

As for the argument that this time is special let me try to put some numbers to it. What is the current age of the universe? Expressed in natural units, about 10^61 ticks have passed since the big bang. It is around this very time that we can observe the accelerated expansion of the universe. But only marginally so. It took us four centuries of telescope technology improvement before we managed to observe the cosmic acceleration. Observing cosmic acceleration is challenging, as one needs to look very, very deep into space to see any effect. This is because dark energy represents a cumulative effect. In a rough analogy, one can think of dark energy as a negative mass that increases proportional to volume. For a volume the size of our earth, the dark energy adds up to a negative mass corresponding to removing a single grain of sand from the entire earth.

Tiny as it might be from our earthly perspective, the dark energy effect does grow proportional to volume. It keeps growing with increasing size until we reach the size of the whole observable universe. At that size, the effect has grown from taking away a grain of sand into an effect that overpowers the total mass in the universe. At the scale of the universe, dark energy beats the deceleration due to gravitational attraction, and the result is a cosmic acceleration.
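The grain-of-sand analogy is easy to sanity-check in Python. Rough, assumed round numbers: a critical density of ~8.6e-27 kg/m^3, of which ~70% is dark energy:

```python
import math

rho_crit = 8.6e-27          # kg/m^3: approximate critical density
rho_de = 0.7 * rho_crit     # kg/m^3: dark-energy share of the budget

r_earth = 6.371e6           # m: mean radius of the Earth
v_earth = (4 / 3) * math.pi * r_earth**3   # m^3

# Dark-energy mass-equivalent contained in one Earth-volume
m_de = rho_de * v_earth
print(m_de)                 # ~6.5e-6 kg, i.e. a few milligrams
```

A few milligrams is indeed about one grain of sand, so the analogy holds up at the order-of-magnitude level.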

All of this applies to a cosmic age of around 10^61 ticks. Earlier on (any time up to about 10^60 ticks since the big bang) the size of the universe was so much smaller that the total dark energy effect was dwarfed by the forces of gravity. Had we lived around 10^60 ticks after the big bang, we surely would not have observed a cosmic acceleration. Later on (any time around 10^62 ticks or later) the acceleration will again be undetectable. This is not due to the relative weakness of the dark energy, but rather because the distant objects that currently allow us to observe cosmic acceleration will have accelerated out of view.

That seems like a very small window as expressed in natural units and rather special. And I have to ask why would the universe be so kind to us to make dark energy observable just at the very time we are capable of noticing it?

“I am *NOT* an astrophysicist, so I cannot speak to methodology, and trust that when the authors of peer-reviewed papers say this works, it does (to whatever level is quoted in their error bars).”

Well, I was. But no work in it, or not enough for a second-rater.

It works, but the errors were pretty damn big. Hipparcos helped a hell of a lot, and there will have been improvements in sensors, but I really don’t know where they were doing their improvements: in spectral/linear resolution or signal/noise values?

There was a fair bit of uncertainty when I were a lad, and I had concluded that the improvements with newer sensors hadn’t managed to reverse that situation, but I guess signal processing and computation will have helped, and that’s been something I’ve never kept up with.

I’ve recently been going on about the error in attributing the distance of an object by standard candle: the error introduced by a high-metal vs. low-metal progenitor, whose rarefied expanded atmosphere (hence metal-poorer than the overall body) produces the SN candle, is a lot less than the errors in the assumptions or measures of extinction.

“When I were a nipper…” the error introduced by the measures of extinction swamped any error available, because the spectral shift obscured the information needed to determine the total extinction.

But high-Z measures are no worse affected by extinction than low-Z observations, since each SN event’s light passes through exactly two galaxies to get to our observation: its home one and ours. High-Z ones don’t get much of a chance of passing through a third intervening galaxy to muck the picture up, or at least not often enough for the removal of such events to cause problems in too little data.

Your re-expression of the “problem” in natural units has changed nothing. The only “benefit” of expressing it this way is to take advantage of layman’s misunderstanding of exponentials and the tendency to view each increment of the exponent as the same. This is not a misunderstanding I suffer from. Do you? Or were you just hoping I did?

In any case: The time from 0 to 10^60 is only 1/100th the time from 10^60 to 10^62. The time window you are describing is much larger than the current age of the universe.

So what you are saying is in actuality: “Isn’t it bizarrely fortunate that we live in this *special* period between when the universe was a tenth its current age, and when it will be ten times older?”

Or in the more familiar terms I used before: “Isn’t it bizarre that we live in this *special* period sometime between billions of years in the past and hundreds of billions of years in the future?”

No, no it isn’t bizarre in the slightest. That’s the most ridiculous definition of “special”, and the most desperate appeal to the Copernican Principle, I’ve ever heard.

You, or whoever told you about the “problem” in natural units, are trying to lie with numbers in order to besmirch a theory that you/they don’t like for reasons that have nothing to do with this non-argument. If it’s not you, then free yourself from whatever liar bamboozled you; they are doing you no service. If it is you, then I’m afraid there’s no point to further discussion as that presupposes sincerity. And I thought I got through to you in some small way when I pointed out there’s more to DE than just red shift measurements… 🙁

CB, the natural unit argument about specialty was put forth by Johannes Koelman and was based on comments made by Ratcliffe. I do know that the exponential values are not additive. But even you “stretched” the values. If 10^61 is now then 10^60 is 1.5 billion years and 10^62 would be 150 billion, not hundreds of billions of years.

That still doesn’t invalidate that 1.5 billion years would be too soon to see the expansion and sometime around 100-150 billion years would be too late. And unless the acceleration changes over time, it is still a unique and special occurrence that is going on right now. This type of argument was used to counter the “void pocket” theory, where the Earth happens to exist in a low-density void pocket.
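For what it’s worth, assuming the “ticks” in natural units are Planck times (~5.39e-44 s, which is an assumption on my part), the conversion is trivial to check:

```python
T_PLANCK = 5.39e-44   # s: the Planck time (assumed to be the "tick")
YEAR = 3.156e7        # s: one year

def ticks_to_gyr(exponent):
    """Convert 10**exponent Planck ticks into billions of years."""
    return 10.0**exponent * T_PLANCK / YEAR / 1e9

print(ticks_to_gyr(60))   # ~1.7 Gyr
print(ticks_to_gyr(61))   # ~17 Gyr, the order of the current age (13.8 Gyr)
print(ticks_to_gyr(62))   # ~170 Gyr
```

So the 1.5 / 15 / 150 billion-year figures being argued over are all in the right ballpark, and the whole disagreement really is about whether a window spanning two powers of ten counts as “special.”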

I think I’ve mentioned that I love to read about physics and read just about anything. Now this guy might be a crackpot, but here is something that was written about Lee Smolin not too long ago. (BTW I can’t wait to read his newest book).

“Everyone who tolerates this disgraceful liar and demagogue as a part of the scientific community is an immoral bastard. Not only the internet crackpots – the likes of “Marcus”, “Peter Woit”, “Sabine Hossenfelder”, and similar subjects from the moral dumping ground of science – but also the very institutions whose official goal should be to support science actively do lots of things to protect this stunning degree of scientific misconduct.

Of course that what Smolin says doesn’t influence real science because every competent scientist has known that Lee Smolin is a crank for years or decades. And it is damn easy to see and prove that similar discrete models of spacetime simply cannot preserve the Lorentz symmetry.”

That guy makes WOW look like Mr. Rogers!

And yes you did help me in regard to DE and redshift. I’m now reading some recent articles and papers about using AGN as a standard candle.

Here is a small snippet from Lee’s upcoming book where he refers to those who support the Many Worlds Interpretation:

“I believe that these theorists, smart as many of them are, are making a big mistake. They are confusing a mathematical construction for a radical vision of a real world. Their physics is a branch of mysticism because it leads them to believe that everything we experience is an illusion, a veil which hides what is really real from us.”

John Urbanik:”CB, the natural unit argument about specialty was put forth by Johannes Koelman and was based on comments made by Ratcliffe.”

So they are the bamboozlers lying with numbers in a way that can only fool laymen. They can’t make an argument that would convince a scientist so they try to stir up “public opinion” instead. What class acts.

“I do know that the exponential values are not additive.”

So you knew your reformulation was identical to what I originally claimed, a hundred-billion-year stretch many times the age of the universe, yet still claim that we are in a “special” time and attempted to use your identical statement only with different units to bamboozle *me* into agreeing. Awesome. Now I feel like a schmuck.

” But even you “stretched” the values. If 10^61 is now then 10^60 is 1.5 billion years and 10^62 would be 150 billion, not hundreds of billions of years.”

You mean you have claimed a false precision. As if it only being 10 times vs 14 the age of the universe makes any difference to the failure of this point anyway.

“it is still a unique and special occurance that is going on right now. ”

‘Right now’ meaning anywhere in a range ten times the age of the universe, like claiming it’s a great coincidence we both live ‘right here’, where ‘right here’ is defined as within ten earth diameters from each other.

Anyway, yes it’s a unique really-long period, but so what? I reiterate: The Copernican Principle does not state that the universe cannot evolve, that there cannot be events that occur once and do not repeat.

Recombination will only occur once; does the CMB violate the Copernican Principle?

Star formation only started occurring after a finite amount of time, and can only continue for a finite time before all hydrogen available for star formation is used. Does that violate the Copernican Principle?

Heck, the window of time in which solar-mass or larger stars can form in the Milky Way (+ Andromeda) is likely shorter than the window for observing Dark Energy; does *that* violate the Copernican Principle? Just living around a Class G star means living in the DE-visible time-frame is practically inevitable, so it’s really *that* bizarre coincidence you should be talking about.

Eventually the last of the red dwarfs will burn up its fuel. This will take several orders of magnitude longer still (not that duration seems to matter), but a civilization that arose after that wouldn’t even know that stars had existed! The Era of Stars is a unique, one-time event. Does *that* violate the Copernican Principle?

No. No it does not.

Alternatively, you could remain consistent and claim that all of these things DO violate the Copernican Principle. And since we have overwhelming evidence that the universe does in fact change and evolve and that different epochs in the universe are different, this means the Copernican Principle is trivially false. That’d be based on a misinterpretation of the principle, but at least it’d be consistent.

But NOPE. Instead, the Copernican Principle is true, that means Dark Energy is probably false for violating it, yet none of the other things which are less speculative but equally vulnerable to the argument are made subject to it.

This is the argument put forward by Koelman and Ratcliffe, complete with unit abuse, and you buy it, and repeat it.

And when it’s pointed out what nonsense this is, they just wave the persecuted heretic flag and you lap it up even *more*.

Well then that’s it for me. I just can’t make progress against that mentality, and like I said I feel like a schmuck for trying to engage in honest discussion when this is the game being played. I can’t play that game, and I won’t try.

I wish I could play it. I’d make a ton in book sales.


@CB re comment #82: Photons have no mass or charge and travel at c. Neutrinos have no mass (to speak of) or charge and travel so close to c that we can’t detect the difference. That isn’t “share essentially none”. You said neutrinos can’t be slowed down, and in the next breath you talked of neutrinos doing 100km/s. Yes, that’s a contradiction. And to try to wriggle out of it, you asserted that neutrinos were slowed down by the expansion of space. How does that work then? Magic?

@Michael Kelsey re comment #85: With respect, I don’t have any confusion about basic physics. The “fundamental” equation E^2 = p^2*c^2 + m^2*c^4 isn’t fundamental, there’s a flip-flop between the momentum and mass terms in gamma-gamma pair-production and annihilation. You start with two E=hf massless gamma photons, you then get two E=mc² massive particles in the guise of an electron and a positron. With no excess separation energy they have no momentum, and they annihilate back to two E=hf gamma photons. The electron isn’t fundamental either, you can create an electron, and you can diffract electrons. The wave nature of matter is not in doubt. The electron can be modelled as a standing wave in a “Dirac’s belt” configuration. The mass of a body is a measure of its energy-content, and the electron is a body. Even a child can work out that electron mass is a measure of resistance to change-in-motion for a standing wave in a closed path, whilst photon momentum is a measure of resistance to change-in-motion for a linear wave in an open path. Accelerate a fermion and only then does E^2 = p^2*c^2 + m^2*c^4 apply. But there is no standing-wave configuration with an energy-content of 0.23eV. So you have never seen a neutrino at rest, and you never ever will. When you detect neutrinos moving at significantly less than c, please let me know. By the way, the Lorentz factor comes straight out of Pythagoras’s theorem, gluons are virtual, gravitons are hypothetical, and particles are neither bullets nor billiard-balls.

Wow’s science is strong, and his quick assessments are mostly correct. When he says your logic fails, pay attention.

The politics of the anti-science folks is to confuse with nonsense science-speak on difficult science topics, with the purpose to discredit scientists for not understanding their own science. But the anti-science folks always hit themselves in the butt as they go out the door. They can’t help themselves; they have to try to discredit the most credible person in the room. And on this post that person is Michael Kelsey, SLAC National Accelerator Laboratory. Oh well.

This constant of nature is too hot, this constant is too cold; and no constant of nature is correct

July 23, 2013

What motivates anti-science John Duffield; besides just being anti-science? Politics, insecurity, trypophobia…

If you look up John Duffield and pseudoscience, the first comment by John Duffield on another blog is, “What gets me about all this is that the “constants” aren’t constant anyway. The fine-structure constant isn’t constant, Planck’s constant might not be… So you know c is merely defined to be constant. And… Lambda isn’t constant either. How you go from all that to Goldilocks multiverses absolutely beats me.”

Apparently the only constant of nature is John Duffield’s anti-science attitude. Because unlike Goldilocks, there is no constant of nature that is just right for John Duffield.

@CB, for what it’s worth, I do thank you for taking the time to explain this to me. I have learned my lesson in regard to the Copernican Principle and won’t be using it as I did here. In fact I’ll probably be challenging others using your logic.

“It appears as though you are predicting that the electric and magnetic forces can be separated below a certain energy like the electroweak separates into electromagnetic and weak in our common experience. At what energy level does your theory predict this occurs?”

Electromagnetism is a unified field consisting of the magnetic and electric forces, which must remain under consideration as fundamental forces.

In 1931 Paul Dirac discovered that removing electrically charged particles from Maxwell’s equations for the electromagnetic unified field produces a duality symmetry: the fundamental electric and magnetic fields can be interchanged without changing the form of the equations. Adding the electrically charged particles back destroys the duality symmetry. Dirac proposed that hypothetical magnetic monopole particles exist, and that including them in the equations alongside electrically charged particles reinstates the duality symmetry. Using the fine-structure constant α, Dirac established the coupling strengths of the electric and magnetic charges to the electromagnetic unified field: electrically charged particles are weakly coupled, with coupling strength α ≈ 1/137, whereas the duality symmetry inverts the coupling strength for magnetically charged particles, which are strongly coupled, with strength 1/α ≈ 137.

Since force strengths are compared by the relative magnitude of their coupling constants, these hypothetical magnetic monopoles, if discovered, would establish magnetism as the strongest fundamental force! (Electric → ×137 → Strong → ×137 → Magnetic.)
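As a sanity check on the numbers quoted above, here is a minimal sketch computing α from its SI definition (approximate CODATA constant values). Note the 1/α inversion for the magnetic charge is the comment’s stated duality argument, not established physics:

```python
import math

# Fine-structure constant from SI constants (approximate CODATA values).
e = 1.602176634e-19       # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 299792458.0           # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.6f}")    # ~0.007297, i.e. ~1/137
print(f"1/alpha = {1/alpha:.3f}")  # ~137.036

# The duality inversion described in the comment swaps alpha for 1/alpha,
# taking the electromagnetic coupling from weak (~1/137) to strong (~137).
magnetic_coupling = 1 / alpha
```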

@Wow #91:

“1) Nature has oodles of redundancy.
2) Nature has no intent. It just is.”

You’re very good at making remarks without backing them up. These remarks are meaningless without justification.

WOW, did you write this review of Lee Smolin’s latest book Time Reborn?

The last three chapters resemble a speech of an Islamic fundamentalist preaching before the execution of a heretic who is being stoned to death. There isn’t a trace of science in those chapters. It’s pure religion and screaming that everyone must act to agree with Smolin’s unscientific delusions.

I don’t think you’ll run into that argument much outside those whose real interest is in crafting a narrative. “Mainstream science is so far off the rails they ignore basic principles, while we heretics work in the shadows on the truth!” It’s a romantic, seductive notion, sure. Which is great for a vacation in Paris, but in science maybe it’s better to recognize who is trying to seduce with narrative rather than educate with reason and avoid them.

Anyway, I’m not here to tell you what books to buy or not buy. I’m just glad to hear you weren’t engaging in a deliberate attempt to trick me. Any other result of our conversation is fine with me, but that would have made me sad.

OKthen re comment #107: I’m not anti-science. I’m pro-science. Very much so. What motivates me is seeing physics funding under continued pressure, arguably due to disillusionment among the public and politicians, and the rise of pseudoscience such as the Goldilocks multiverse.

Really?
Then discuss science, listen and learn from the scientists out here instead of trying to discredit them, keep your personal pseudoscience speculations to yourself, and stop shifting the discussion from science to politics.

I am discussing science, and I’m not trying to discredit scientists. If you check back you can see me asking questions of Michael in comment #17, and his partial response in comment #27. Note his final line. And note your own comments. Irony isn’t your strong suit, is it?

Stupid question: If the universe is 13.8 billion years old, how can the diameter of the universe be 100 billion light years? If space was expanding at the speed of light, the diameter could only be 27.6 billion light years, no?

@Wow
Hmmm. I’m sure your answer is correct but I’m still not seeing it. I’ve “led” the target by an additional 13.8 billion years in doubling the diameter to 27.6 billion light years. I just don’t see how you need to “lead” by the amount to get to 100 billion LY. Can you point me to a blog that illustrates this? Thx