From Omega Centauri on science, policy and communication: “Do we upfront acknowledge all the counter intuitive exceptions to the general rule, at the risk of boring people or obscuring the message, or do we leave those in an appendix that will only be read by the few most motivated readers? These sorts of things come up not just in climate change, but in many other areas where science and policy collide. And I don’t think there is any clear answer about how to handle them.”

Everyone has their own opinion on this, and because how to communicate science is itself an art form, not a science, there isn’t really a clear ‘follow-this-formula-for-optimal-results’ equation out there. My view on science literacy — or even my definition of science literacy — isn’t the mainstream one, although it is rooted in a significant amount of evidence.

I don’t think giving everybody the full suite of facts and just laying them out is effective, because people pick and choose which facts they remember to support their own position.

I don’t think belaboring the gory details and the exceptions is generally worthwhile, because highlighting the nuance, in general, means the overall message gets lost.

And I don’t think aspiring for everyone to be able to perform the scientific analysis themselves is a useful goal; most people seek only to convince themselves, not to draw a valid scientific conclusion.

I think the best path to science literacy — and this follows the scholarly work of Morris Shamos (RIP), which you can get for a penny on amazon.com — is to teach people an awareness of what the scientific enterprise is and to give them an appreciation for what science knows and does as a result. You can give them more or fewer details as you like, but that should be the main message. Anything more than that is going above and beyond what’s required, but that’s what I choose to look for. It’s sort of in the same vein as Star Trek.

Three members of the Star Trek crew beaming down off the ship. Image credit: CBS Photo Archive/Getty Images.

Star Trek brought us a glorious future, where advances in science and technology were used for the benefit of all members of all planets in the Federation, by picturing scientists as altruists. Rather than warmongers who brought us progressively more destructive weapons, scientists were researchers who sought out truth, knowledge, and positive applications towards the ends of peace and societal advancement. Many people don’t see scientists in this same light, but that’s what we’re doing; that’s what we’re striving for. We’re on the side of humanity. We have value. What we do and what we learn has value. All we’re asking — and yes, perhaps demanding — is that we all actually value it.

Popehat is a pretty good site in general for explanations about the law that go beyond what you’d get by simply reading the constitution for yourself and applying a reasonable standard to it. I am not an expert in all things, and the law is certainly one area where I’m a non-expert.

No matter its color, wavelength or energy, the speed at which light travels in a vacuum is always the same. This is independent of positions or directions in space and time. Public domain image.

From Mike Doonesbury on the speed of light: “If you got into a Ford Mustang and accelerated at 1 G for seven months and then integrated your speedometer, you would calculate a velocity greater than the speed of light. The reason you can do this is the Principal (sic) of Relativity, which states, briefly, that you can perform no experiment that will tell you how fast you’re going.”

That’s not quite what the principle of relativity tells us, though. It tells us that to observers in all reference frames, light will appear to move at the same universal speed: 299,792,458 m/s. It’s true that you can never calculate your velocity relative to empty space, because there is no such thing as absolute, empty space, but you can calculate it relative to the Chevy Nova you’re passing, any matter particles or even the rest frame of the CMB. You’ll never, under any measurement circumstances, calculate yourself moving faster than the speed of light. Adding up an extra “+9.8 m/s” for each second you go by is a calculation that you might be tempted to make, but that special relativity tells you is wrong.
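To see why that Galilean bookkeeping fails, here’s a quick back-of-envelope sketch in Python (my numbers, not Doonesbury’s: at 1 g, the naive sum actually passes c after roughly a year of proper time rather than seven months). Under constant proper acceleration a, the coordinate velocity after proper time τ is v = c·tanh(aτ/c), which approaches but never reaches c:

```python
import math

C = 299_792_458.0          # speed of light, m/s
G = 9.8                    # 1 g of proper acceleration, m/s^2

def naive_speed(tau):
    # Galilean "integrate the speedometer": just add 9.8 m/s every second
    return G * tau

def relativistic_speed(tau):
    # coordinate velocity after proper time tau at constant proper acceleration:
    # v = c * tanh(a * tau / c), which approaches but never reaches c
    return C * math.tanh(G * tau / C)

one_year = 365.25 * 86_400  # seconds of proper time

print(naive_speed(one_year) / C)        # > 1: the naive "calculation" exceeds c
print(relativistic_speed(one_year) / C) # < 1: the speed you'd actually measure
```

After one year of 1 g thrust, the naive tally says you’re moving faster than light, while any actual measurement against the Chevy Nova, matter particles, or the CMB rest frame puts you at roughly three-quarters of c.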

The oscillating, in-phase electric and magnetic fields propagating at the speed of light define electromagnetic radiation. Public domain image.

From David on a common misconception about the energy stored in light waves: “Aren’t the E and B fields in a traveling light wave out-of-phase, so that the energy density E.dot.E + B.dot.B remains constant?”

You might think so, but that’s not how it works. The quantity “E.dot.E + B.dot.B” only remains constant when you treat these as complex fields; we’re only ever seeing their real components. Alternatively, I also like Michael Kelsey‘s explanation:

That’s what you might think (each field’s time rate of change drives the other), but that isn’t the case.

If you combine Maxwell’s zero-source equations (i.e., the two curl relationships with no charges and no currents), what you get is a relation between the time derivative of one and the space derivatives of the other. The net result is that the two fields are in phase.
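For the skeptical, here’s that argument sketched out for a plane wave (a standard textbook calculation, not anything specific to Kelsey’s comment):

```latex
% Source-free Maxwell curl equations:
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{B} = \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}

% Insert a plane wave for E and solve the first equation for B:
\mathbf{E} = E_0 \cos(kz - \omega t)\,\hat{\mathbf{x}}
\;\Rightarrow\;
\nabla \times \mathbf{E} = -E_0 k \sin(kz - \omega t)\,\hat{\mathbf{y}}
\;\Rightarrow\;
\mathbf{B} = \frac{E_0 k}{\omega} \cos(kz - \omega t)\,\hat{\mathbf{y}}
           = \frac{E_0}{c} \cos(kz - \omega t)\,\hat{\mathbf{y}}
```

Both fields carry the same cos(kz − ωt) factor: they peak together and vanish together, exactly as Michael says.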

Galaxy cluster LCDCS-0829, as observed by the Hubble Space Telescope. This galaxy cluster is speeding away from us, and in only a few billion years will become unreachable, even at the speed of light. Image credit: ESA/Hubble & NASA.

From Sinisa Lazarek on a new theory of creation: “maybe it was a failed experiment.. excited the inflaton field in the wrong way and woooosh. ? this is all just an afterglow”

Four individual planetary nebulae — He 2-47, NGC 5315, IC 4593, and NGC 5307 — were imaged by Hubble in February of 2007. Image credit: NASA, ESA, and The Hubble Heritage Team (STScI/AURA).

From PJ on the planetary nebulae showcased on last week’s Mostly Mute Monday: “Absolutely spectacular ! It’s sometimes hard to get a handle on how much energy is expended in each of the photo’s.
Have a safe holiday, you lot. Back in a few weeks.”

Of course everyone should have a happy holiday season, regardless of what you believe, or whether you believe in anything supernatural at all. The comments section here is great evidence that we come from all over the world, and are of many different ages, education levels, political beliefs and ideologies. But for some reason or other, we all enjoy sharing in components of the story of this Universe, and that makes what we have here something rather special.

But for PJ, this is the best time of the year for being outside and enjoying a summer vacation. The fact that the darkest days of the Northern Hemisphere’s year coincide with the start of the Australian summer may explain why I always see so few Aussies (transplants excepted) at the annual AAS meeting the first week in January!

An illustration of how redshifts work in the expanding Universe. Image credit: Larry McNish of RASC Calgary Center, via http://calgary.rasc.ca/redshift.htm.

From Michael Kelsey on distances in the expanding Universe: “The “46 billion light years” you are explaining is the “comoving distance” (type that into Wikipedia!) between us and the CMB (surface of last scattering; type that into Wikipedia as well). That is, how far away is an object TODAY, if it emitted the light we see 13.8 billion years ago. But what we see is the light from 13.8 billion years ago, and at that time that same “object” was much, MUCH, closer to us than it is now.
So do we want to talk about the comoving distance, or the light transit time, or the original emission distance? Those are three entirely different quantities, and mixing them will lead to nothing but trouble.”

This is a very important point that I think needs highlighting. When we talk about distances in the expanding Universe, there are three things you might think of:

How far away this object was from us when it emitted the light we now detect.

How far away the object is from us “right now,” if we could somehow travel at infinite speed to this object and measure the distance we traversed.

How far the light — arriving right now — traveled from this object to our eye along its journey.

For objects close by, like within our own galaxy or local group, all three of these numbers are generally pretty close to identical. But for objects that are very distant, whose light we see from very long ago, the three definitions lead to very different numbers.

In the very distant past, everything was much, much closer together. The Universe has expanded, and light has always traveled at the speed of light. If you were to use the first definition to measure the earliest galaxy we’ve ever seen, from 13.4 billion years ago (0.4 billion years after the Big Bang), you would find it was only about 2.5 billion light years away when it emitted that light. If you were to use the second definition, which is the one I tend to use, you would get about 30 billion light years: the Universe has expanded by a factor of about 12 (that’s 1 + z) in the meantime. And if you were to use the third definition, you’d get 13.4 billion light years by definition: the light travel time. (Some NASA press releases use this definition.) At least you’ve got the information now, in case you’re interested.
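If you want to play with these three definitions yourself, here’s a minimal numerical sketch, assuming a flat ΛCDM cosmology with H0 = 70 km/s/Mpc, Ωm = 0.3 and ΩΛ = 0.7 (round, illustrative parameters; published analyses use slightly different ones):

```python
import math

H0 = 70.0                    # Hubble constant, km/s/Mpc (assumed)
OM, OL = 0.3, 0.7            # matter and dark-energy fractions (assumed)
C_KMS = 299_792.458          # speed of light, km/s
MPC_TO_GLY = 3.2616e-3       # 1 Mpc in billions of light years
HUBBLE_TIME_GYR = 3.0857e19 / H0 / 3.1557e16   # 1/H0 in Gyr

def E(z):
    # dimensionless expansion rate H(z)/H0 for flat Lambda-CDM
    return math.sqrt(OM * (1 + z) ** 3 + OL)

def integrate(f, a, b, n=100_000):
    # simple trapezoid rule
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

z = 11.1                     # roughly the redshift of the earliest known galaxy
hubble_dist_gly = (C_KMS / H0) * MPC_TO_GLY

comoving_gly = hubble_dist_gly * integrate(lambda zp: 1 / E(zp), 0, z)
emission_gly = comoving_gly / (1 + z)     # distance when the light was emitted
lookback_gyr = HUBBLE_TIME_GYR * integrate(lambda zp: 1 / ((1 + zp) * E(zp)), 0, z)

print(f"comoving distance:  {comoving_gly:.1f} Gly")   # roughly 30 Gly
print(f"distance at emission: {emission_gly:.1f} Gly")
print(f"light-travel time:  {lookback_gyr:.1f} Gyr")
```

The three numbers come out wildly different, which is exactly Michael’s point about not mixing them.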

From axil on microwave ovens and new fundamental physics: “One way to test this theory, is to demonstrate the reduction of the half life of an radioactive isotope in the regions of positive vacuum energy against the result produce in the regions of negative vacuum energy. The positive vacuum energy should produce a reduction in the half life in the radioactive isotope.”

Nice hypothesis you’ve got there. What were the results of your test? This seems like an easy one to:

write down your methods.

perform the experiment vs. a control (no microwave).

measure your results.

publish your conclusions.

It’s easy to report that “someone claims this thing is happening”; that’s pretty much all I’ve ever seen with LENR and the EmDrive. Now you have a new claim that contradicts the predictions of fundamental physics, and you claim it’s easy to test. What are the results? (If you don’t have them, you haven’t done science yet, BTW.)
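For what the “measure your results” step might actually look like, here’s a toy sketch (entirely synthetic data, invented for illustration): generate decay counts with a known half-life, then recover it with a log-linear fit, exactly as you’d do for your microwaved sample versus the control:

```python
import math

def estimate_half_life(times, counts):
    # log-linear least-squares fit: ln(N) = ln(N0) - lambda * t
    ys = [math.log(c) for c in counts]
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times, ys))
             / sum((x - xbar) ** 2 for x in times))
    lam = -slope                 # decay constant
    return math.log(2) / lam     # half-life

# synthetic dataset: a made-up isotope with a true half-life of 10 hours
true_t_half = 10.0
lam = math.log(2) / true_t_half
times = list(range(0, 50, 5))                        # hours
counts = [10_000 * math.exp(-lam * t) for t in times]  # noiseless, for clarity

print(estimate_half_life(times, counts))
```

Run the same fit on the microwave sample and the control, compare the two half-lives with their uncertainties, and then you have a result worth publishing; until then, you have a press release.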

Image credit: Tabby Boyajian and her team of PlanetHunters, via http://sites.psu.edu/astrowright/2015/10/15/kic-8462852wheres-the-flux/.

From Sinisa Lazarek on Tabby’s star and its uniquely odd behavior: “If you take into account that Tabby’s star was/is dimming at an incredible rate for the last hundred or so years. And then these dips that sometimes go as much as 20% of the brightness… and I had to extrapolate a hypothesis… I would say that the star is somehow dying. But totally unlike anything we know.
I’m almost thinking about something that could be inhibiting fusion from the inside. What if it’s not the light getting blocked by something, but the [star] itself getting “buggy”.”

The dimming is sketchy, to say the least. Over the 100+ years it’s been measured, the star may have dimmed overall (gone to higher visual magnitude numbers), but it hasn’t done so at a consistent rate. These observations are… questionable in their robustness. Have a look at the results directly from the 2016 paper.

If you throw out the observations from the 1800s, things look pretty constant from 1900 to 1970, and then they look pretty constant again from 1970 to the present. So I’m not sure that this is real.
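Here’s the kind of sanity check I mean, as a sketch with made-up magnitudes (not the real photometry): if the data are really two flat plateaus with a single jump around 1970, a step model will fit better than a steady linear fade, and the “century-long dimming” isn’t robust:

```python
def sse_linear(xs, ys):
    # least-squares straight line; return the sum of squared residuals
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

def sse_step(xs, ys, break_x):
    # two constant levels, one before break_x and one after
    sse = 0.0
    for group in ([y for x, y in zip(xs, ys) if x < break_x],
                  [y for x, y in zip(xs, ys) if x >= break_x]):
        mean = sum(group) / len(group)
        sse += sum((y - mean) ** 2 for y in group)
    return sse

years = [1900, 1910, 1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010]
# hypothetical magnitudes: flat near 12.25, then flat near 12.40 after ~1970
mags = [12.25, 12.26, 12.24, 12.25, 12.26, 12.25, 12.24,
        12.40, 12.41, 12.39, 12.40, 12.41]

print(sse_linear(years, mags), sse_step(years, mags, 1965))
```

On data like these, the step model wins handily, which is why a calibration jump in the archival plates is a live alternative to a genuinely fading star.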

An F-class star this young — a few hundred million years old — shouldn’t be dying. It should be steady and on the main sequence. Internal variations in the fusion rate couldn’t cause such rapid variability at the surface; heat transport from the core to the surface takes tens of thousands of years. Something funny is going on for sure, but the intrinsic explanations that can’t immediately be ruled out are hard to come by.

The CMS detector, prior to its final installation; CMS is the more compact of the two big general-purpose detectors at the LHC. Image credit: CERN/Maximilien Brice.

From Michael Kelsey on particle physics detectors: “a very minor nitpick. Bubble chamber tracks alone cannot determine all the properties of particles, not without significant assumptions (and thereby fairly large uncertainties). In fact, there’s no kind of detector which directly measures the mass of any subatomic particle. That’s always _inferred_ from the relationships between measurables like momentum, energy, or velocity.”

And this underscores the importance of having such a sophisticated detector. You can measure energy in the calorimeters placed outside the collision point: the electromagnetic and hadronic calorimeters (ECAL and HCAL). You can measure momentum and electric charge by applying a magnetic field and seeing how the charged particles’ tracks bend. Energy and momentum together enable you to reconstruct the mass. You can measure a particle’s lifetime by either the distance to a displaced vertex or the speed/distance it travels in the detector. And the reason CMS and ATLAS are so ginormous is that muons live so long and are so penetrating; if you want to measure them accurately, the detectors need to be that large.
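The mass reconstruction Michael describes is a one-liner once you have energy and momentum: mc² = √(E² − (pc)²). A minimal sketch in natural units (illustrative numbers only, not a real event record):

```python
import math

def mass_from_E_p(E, p):
    # invariant mass in natural units (GeV, with c = 1): m = sqrt(E^2 - p^2)
    return math.sqrt(E * E - p * p)

# a particle with the muon's mass, measured at p = 10 GeV
m_true = 0.10566   # GeV, muon mass
p = 10.0           # GeV, from track curvature in the magnetic field
E = math.sqrt(p * p + m_true * m_true)   # GeV, from calorimetry

print(mass_from_E_p(E, p))
```

This is why the mass is always inferred rather than measured directly: it only exists as the combination of two separately measured quantities, each with its own uncertainty.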

It’s been a while since my graduate particle physics, and even longer (1997! yeesh!) since I worked in experimental particle physics as an undergrad, so I may gloss over things or get sloppy about them from time to time. Thanks for making sure we get it right in the end, particularly for the sophisticated audience we have here!

And finally, for the last word this Christmas, we have a scenario worth exploring from Denier: ”
1 – If matter touches anti-matter in space the two pieces annihilate into energy.
2 – Time is just a dimension in space-time. It is all the same stuff, just in a different direction.
3 – Neutrinos are their own anti-particle and can flavor change between anti-matter neutrinos and matter neutrinos.
Why does a matter to anti-matter flip not instantly cause the neutrino to annihilate? In its timeline the matter neutrino is touching the anti-matter neutrino.”

If neutrinos are Majorana particles, and are their own antiparticles, then this is a legitimate question. In fact, even if that third part of the question is false, the Cosmic Neutrino Background is made up of equal parts neutrinos and antineutrinos, to the tune of about 300 neutrinos per cubic centimeter permeating all of space around us. And it’s true that if matter “touches” antimatter in space, they annihilate into pure energy. But “touching” has a particular meaning in physics, and it depends on the interaction cross-section. For neutrinos (and antineutrinos), that cross-section is not only tiny, it gets smaller at lower energies.

A neutrino-antineutrino interaction could produce any number of particles, but we have never observed one because the cross section is so tiny. The simple answer to your question is that neutrinos and antineutrinos don’t “touch” very frequently. Dark matter particles, in most models, are their own antiparticles as well, but we don’t see a mysterious radiation excess that we can attribute to them. If you interact through the strong or electromagnetic force, you’re easy to detect this way, as “touching” happens all the time. But if not, it’s like you don’t even touch at all.
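To put “don’t touch very frequently” in perspective, here’s a rough order-of-magnitude sketch (the cross-section is an assumed round number for an MeV-scale neutrino; cold relic neutrinos interact far more weakly still): the mean free path λ = 1/(nσ) of such a neutrino through solid lead comes out to about a light year.

```python
# mean free path of an MeV-scale neutrino through solid lead: lambda = 1/(n*sigma)
SIGMA = 1e-43                      # cm^2, assumed order-of-magnitude cross section
LEAD_DENSITY = 11.34               # g/cm^3
NUCLEON_MASS = 1.66e-24            # g (about 1 atomic mass unit)
CM_PER_LIGHT_YEAR = 9.461e17

n = LEAD_DENSITY / NUCLEON_MASS    # target nucleons per cm^3, ~7e24
mfp_cm = 1.0 / (n * SIGMA)

print(mfp_cm / CM_PER_LIGHT_YEAR)  # on the order of one light year of lead
```

A particle that sails through a light year of lead without scattering is, for all practical purposes, not “touching” anything, including its own antiparticles.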

Thanks for a great week and a great year! We’ll have plenty of wonderful stories this coming week and then it’s off to AAS in January to bring you the greatest Astronomy news to start off 2017. Thanks for sharing a bit of your world with me, and I’ll see you back here soon!