The masses of known binary black hole systems, including the three verified mergers and one merger candidate coming from LIGO. Image credit: LIGO/Caltech/Sonoma State (Aurore Simonnet).

“Many people today agree that we need to reduce violence in our society. If we are truly serious about this, we must deal with the roots of violence, particularly those that exist within each of us. We need to embrace ‘inner disarmament,’ reducing our own emotions of suspicion, hatred and hostility toward our brothers and sisters.” -Dalai Lama XIV

From subatomic scales to very human ones to the largest conceivable ones in the cosmos, we do our best to cover the entire Universe here at Starts With A Bang! There’s always so much to discuss that we’re never going to lack for potential topics, and this past week was not only no exception, it actually featured an extra article on top of our normal schedule. Come take a look back at what we’ve seen:

Thanks for reading and showing interest, and hopefully you’ve learned something along the way. Now, let’s see what else there is to learn on this edition of our comments of the week!

The theoretical ‘islands of stability’ (circled, and also to the top right of the line) in nuclear physics.

From eric on the island of stability: “The transactinides discovered so far are well “to the left of” (i.e. neutron poor) the island of stability. We have yet to figure out whether a high neutron value of the elements between about 110-130 would lend them more stability, because until someone comes up with some new idea, we don’t know a good projectile-target reaction that will reach that island. But I have good confidence that eventually, someone will get there and give us some empirical estimate of just how stable that island is.”

The whole concept of an island of stability, for those who don’t know, is closely related to the idea of filled electron shells in atomic physics. What are the most stable, non-reactive atoms? The ones with filled electron shells: helium, neon, argon, krypton, xenon, etc. Why? Because their electron shells are filled; if you take away an electron or add an electron, you get something much less stable. Well, atomic nuclei are thought to work in a similar fashion. You’ll notice that the supposed island looks “horizontal” in nature, and that is because we are pretty confident that it will take a specific number of protons and neutrons in the nucleus to produce a more stable (i.e., living for days-to-months instead of seconds-to-minutes) nucleus with a filled nuclear shell. The best candidates will have either 114, 120 or 126 protons paired with 184 neutrons; the 114th element (Fl) appears in the white circle, while the 126th is in the isolated island.

Neither of these isotopes has ever been successfully produced, but that’s where the true test will lie!

A better understanding of the internal structure of a proton, including how the “sea” quarks and gluons are distributed, has been achieved through both experimental improvements and new theoretical developments in tandem. Image credit: Brookhaven National Laboratory.

From Elle H.C. on how to probe the internal properties of particles: “Of course we ‘smash’ one thing into another but I was pointing at the differences in force and size between what’s hitting what.”

There are many ways to examine what’s going on inside a proton, meson, or at the quark/gluon level, and although you may not be aware, scientists are pursuing them all. These include holding a particle as still and stable as possible and looking for its decay rates and pathways, stimulating it in a variety of ways and seeing how it behaves, bombarding it with various particles (photons, neutrons, electrons, etc.) and seeing what interactions take place, colliding large numbers together at high energies, supercooling them and monitoring them, and performing deep inelastic scattering experiments. These teach us about different properties and different behaviors, all of which offer interesting things to study. But we are doing all of them; the LHC focuses on the last one and does it better than any machine that’s come before it.

You are free to advocate for greater investment in one approach over another, but don’t pretend that these experiments aren’t going on at all, and don’t pretend that one line of investigation is a replacement for another. They are all complementary.

Image credit: E. Siegel, of the GUT baryogenesis scenario.

From Sinisa Lazarek on the hunt for new particles: “…particle accelerators are not used only for probing new particles.”

This is very, very true, and there are many good reasons to invest in particle accelerators beyond studying particle properties and searching for potential new ones.

But make no mistake about it: there are almost certainly new ones that must exist. Something must be responsible for dark matter, for baryogenesis, for the lack of CP-violation in the strong interactions, for solving the hierarchy problem, for explaining neutrino masses, and for providing the gravitational force. The leading idea for what that “something” is, in all of these cases, is either a new particle or a suite of new particles. To declare “there are no new particles,” with no well-motivated alternatives on offer, is the height of silliness; to declare “there are no new particles within reach of the LHC or its immediate successors” is what’s referred to as the “nightmare scenario” at the LHC, and that is more reasonable and quite possibly true. But don’t mix these two up! New particles are out there, or if not, there’s something way weirder out there instead.

From Denier on Mars’ sideways tornadoes: “I watched a Mars-centric sci-fi movie last night that was as bad as a sideways tornado. It was ‘Life’ with Ryan Reynolds and Jake Gyllenhaal.
[…]
Seriously, don’t see this film.”

There have been two good movies about humans on Mars, as far as I’m concerned, and many, many bad ones. I really enjoyed The Martian when it came out, and I very much enjoyed the original Total Recall by Paul Verhoeven. Everything else I’ve seen in film that’s been Mars-focused has been a disaster, and “Life” doesn’t appeal to me as a candidate worth seeing.

Although, I do understand it deals with nihilism, depression, despair, and death as major themes. I did recently see a movie that employs those themes in a way I haven’t seen before and truly enjoyed. It’s called Kubo and the Two Strings. No sci-fi, but definitely worth checking out. (Also, you’ll note I linked to Rotten Tomatoes. In the absence of a movie critic whose recommendations I really appreciate — and I haven’t had one since The Filthy Critic stopped posting on bigempire.com — I recommend going to Rotten Tomatoes. If a movie gets over 90%, chances are it’ll be quite good. If it gets less than 80%… watch it at your own peril. In my experience.)

When you hear that someone made a basket, you’re already biasing your probabilities of success, even if you don’t realize it. Image credit: Shutterstock.

From Omega Centauri on the hot hand: “I always thought no effect was highly implausible. Humans are not automatons. We know humans have streaks of underperformance, due to injuries, illness, or distraction. Why not streaks (periods) of overperformance too.”

What’s interesting about this theory is how non-universal it is! Sure, there are some basketball players who exhibit tremendous amounts of streakiness, with Klay Thompson perhaps being the most sterling example of hot-and-cold performances. But there are others who exhibit no streakiness at all, performing at approximately the same level under any circumstances.

The big, unsurprising takeaway from all this? Some people are more consistent than others; some are more inconsistent; sometimes inconsistency gets you a win you otherwise wouldn’t get; sometimes consistency defeats inconsistent but superior talent. In other words, that’s why they play the games.
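On the statistical side, there is a subtle finite-sample effect worth knowing about, and it is part of why hearing that someone made a basket already biases your probabilities: even a perfectly streak-free 50% shooter will, on average, appear to shoot below 50% on the attempts immediately following a make, when you tally those attempts game by game. A quick simulation (illustrative only; the game length and shot counts here are made up) shows the bias:

```python
import random

random.seed(1)

def prop_after_make(shots):
    """Proportion of attempts immediately following a make that are also makes."""
    follows = [b for a, b in zip(shots, shots[1:]) if a == 1]
    return sum(follows) / len(follows) if follows else None

# Simulate many short "games" for a memoryless 50% shooter
# (hypothetical numbers: 10 shots per game, 100,000 games).
games, n_shots = 100_000, 10
props = []
for _ in range(games):
    shots = [random.randint(0, 1) for _ in range(n_shots)]
    p = prop_after_make(shots)
    if p is not None:
        props.append(p)

avg = sum(props) / len(props)
print(round(avg, 3))  # noticeably below 0.5, despite no hot (or cold) hand
```

The point of the sketch: if a real player’s post-make percentage merely matches their overall percentage in short samples, that can actually be evidence *for* a hot hand, not against it.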

Scottie Pippen, on fire, in a game of NBA Jam.

From Carl on the math of how to get to average: “It’s obvious that humans have “streaks” – simply turn the question around, and you’ll realize that when players feel a little ill or injured they don’t play as well.
That pulls their average down, so when they are 100% they must outperform their mean.”

And if you’re a basketball fan, it’s those memorable overperformances in some way that stick in your memory the best. Boom Shakalaka!

The Square Kilometre Array will, when completed, comprise thousands of radio telescopes, capable of seeing farther back into the Universe than any observatory that has measured any type of star or galaxy. Image credit: SKA Project Development Office and Swinburne Astronomy Productions.

From Omega Centauri on the configuration of SKA: “I have a question about the array. In the picture, it looks like the dishes are randomly distributed. Or is there some special algorithm that finds an optimal placement?”

There are some resources out there (Sinisa provides a good link to one), but for radio telescopes like this, the explanation is fairly simple: you want as much coverage as possible within a certain radius of your central point, so that most of the light-gathering power is centrally located. You also want long baselines, to get high resolution for the sources that are powerful enough to show up in individual dishes. So you get clusters whose dish locations follow Gaussian distributions.

The ideal configuration will follow a Gaussian distribution, albeit with discrete locations due to the finite number of telescopes available for the array. Image credit: Chong Teng, Deepankar Pal, Haijun Gong, and Brent Stucker.

This is like a scaled-up version of the Very Large Array, augmented by other arrays that both extend out the Gaussian distribution of the main array and also provide their own, smaller arrays to perform similar services. If you want a more detailed explanation, SKA has plenty of technical documentation available.
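As a toy illustration of that Gaussian layout, a few lines of code can draw dish positions from a 2D Gaussian and check that most of the collecting area sits in a dense core while a handful of outliers supply the long baselines. (This is a sketch only, with made-up numbers; SKA’s actual station placement comes from its own optimization process.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers: 500 dishes, core spread sigma = 2 km.
n_dishes, sigma = 500, 2.0
positions = rng.normal(0.0, sigma, size=(n_dishes, 2))  # (x, y) in km

# Pairwise separations ("baselines"): the longest ones set the finest
# angular resolution; the dense core sets the sensitivity.
diffs = positions[:, None, :] - positions[None, :, :]
baselines = np.sqrt((diffs ** 2).sum(axis=-1))
longest = baselines.max()

# For a 2D Gaussian, ~86% of dishes fall within 2 sigma of the center.
radii = np.hypot(positions[:, 0], positions[:, 1])
core_fraction = (radii < 2 * sigma).mean()
print(f"longest baseline: {longest:.1f} km, within 2 sigma of center: {core_fraction:.0%}")
```

The design tension is visible even in this toy: tightening sigma boosts central collecting power, while the rare far-flung dishes are what buy you resolution.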

This photo, of the completed integration of the Control Systems and the CrIS instrument aboard JPSS, shows some of the earliest systems to be finished on JPSS, which is nearly ready for its launch just a few months from now. Image credit: Ball Aerospace.

From Denier on NASA vs. NOAA: “Don’t get me wrong, the JPSS pair will provide some great data. It is a definite upgrade over the current POES series, but what does NASA Earth Sciences really add?”

Oh, a ton. You look at a NASA/NOAA collaboration and ask why it needs to be a collaboration? Why not just give the whole deal to NOAA? (Which, by the way, is facing its own budget cuts of almost 20% for the next fiscal year, as is the NSF.) And the answer comes down simply to expertise and experience. NASA Earth science adds the expertise of having successfully managed products and satellites like these for decades, along with the expertise of data retrieval and the experience of managing and maintaining the infrastructure, while NOAA has its own set of strong points. This is the type of collaboration that truly showcases how both organizations can shine brightest when they work together.

To directly answer your question, there are instruments that are going on board JPSS that have been direct outgrowths of previous successful NASA Earth science missions, such as Terra, Aqua and Aura, and it makes no sense to say, “oh, even though you have the experience and expertise, let’s cut you out and make NOAA do it all themselves.” You’re not a fan of reinventing the wheel; why would you cut the rear axle off your car like this?

From Carl on a thought experiment: “First, imagine an instrument that detects virtual pairs created from the quantum foam.
[…]
If I’m moving at 0.9 C compared to the CMB, what do we expect to measure? Zero average momentum in that frame of reference? If so, how did particles know to match the speed of my instrument?”

I want to be clear that these “virtual pairs” are not real particles, and you cannot detect them. They are a calculational tool used to compute the zero-point energy of a field in empty space. So there is real energy there, but it is energy inherent to space itself, not energy that you will see or experience as a particle to smack into you.

Now, Michael Kelsey recommended looking up “Unruh Effect” and “Unruh Temperature” as well, and I agree, these are related! Different fields have different zero-point energies, and the frame of reference that allows one field to be in its lowest-energy state may not be the lowest-energy state of other fields. The key is not velocity, however, but rather acceleration. It is the effect of gravitational acceleration that produces Hawking radiation, and by the equivalence principle, any acceleration should produce that same type of radiation. So if you can accelerate at 2.5 x 10^20 m/s^2, you can achieve a whopping temperature of 1 K.
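That last figure is easy to check yourself: the Unruh temperature is T = ħa / (2πck_B), and plugging in a = 2.5 × 10^20 m/s² does indeed give roughly 1 K.

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34  # J·s, reduced Planck constant
c = 2.99792458e8        # m/s, speed of light
k_B = 1.380649e-23      # J/K, Boltzmann constant

def unruh_temperature(a):
    """Unruh temperature (K) for proper acceleration a (m/s^2): T = hbar*a/(2*pi*c*k_B)."""
    return hbar * a / (2 * math.pi * c * k_B)

print(unruh_temperature(2.5e20))  # ≈ 1 K, as quoted above
```

Run it the other way and you can see why nobody has measured this directly: even reaching millikelvin Unruh temperatures requires accelerations around 10^17 m/s².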

The noise in the two detectors, shown in red and black, clearly exhibits correlations between them. Image credit: J. Creswell et al., arXiv:1706.04191v1.

From John on the robustness of the LIGO signal: “I’m not overly concerned about this criticism. Within a few years KAGRA and VIRGO will come on-line, and with the integration of data from these additional detectors, the degree of confidence in future detections should be much higher.
GR predictions have been tested in other ways (the geodetic effect and frame-dragging by Gravity Probe B, gravitational lensing, etc.) and has not yet been falsified.”

The issue at play here is not whether GR is wrong (it’s right), whether LIGO and other experiments point to the same successes as one another (they test different regimes that for the most part do not overlap), or whether the significance of results will improve in the future (it will). The issue, rather, is whether the LIGO results are independently reproducible from the same data and the same methods in a way that makes sense.
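To be concrete about what “correlations in the residuals” means, a toy version of such a check is a cross-correlation between two noise streams that share a common component at some time offset. (This sketch uses a plain Pearson cross-correlation on made-up data; the actual LIGO and Creswell et al. analyses are far more involved.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "detector" noise streams sharing a common component,
# offset by a known lag (hypothetical numbers throughout).
n, true_lag = 4096, 7
common = rng.normal(size=n + true_lag)
stream_a = common[:n] + 0.5 * rng.normal(size=n)
stream_b = common[true_lag:] + 0.5 * rng.normal(size=n)

def xcorr_at(a, b, lag):
    """Pearson correlation between a and b after shifting a by `lag` samples."""
    if lag >= 0:
        a_seg, b_seg = a[lag:], b[: len(b) - lag]
    else:
        a_seg, b_seg = a[: len(a) + lag], b[-lag:]
    return np.corrcoef(a_seg, b_seg)[0, 1]

lags = list(range(-20, 21))
corrs = [xcorr_at(stream_a, stream_b, lag) for lag in lags]
best_lag = lags[int(np.argmax(corrs))]
print(best_lag)  # recovers the injected offset
```

A correlation peak at the inter-detector travel time is expected for a real astrophysical signal; the dispute is over whether the *residuals*, after the best-fit waveform is subtracted, should show any such peak at all.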

After publishing this piece, I have received private correspondence from a number of people and groups — some of which are involved with LIGO directly — that have relayed the following pieces of information to me:

Several people from LIGO-Virgo have interacted with the Danish group.

The group was invited to (and did not) present their results to LIGO-Virgo.

Their methodology does not completely follow what LIGO-Virgo does, and LIGO-Virgo members have tried to get the Danish group to reflect on that fact, to no avail.

The LIGO-Virgo members have left frustrated about the lack of openness and responsiveness from the Danish group.

They no longer feel that a response is worth their time or energy.

The 30-ish solar mass binary black holes first observed by LIGO are likely from the merger of direct collapse black holes. But a new publication challenges the analysis of the LIGO collaboration, and the very existence of these mergers. Image credit: LIGO, NSF, A. Simonnet (SSU).

But that said, the LIGO group refuses to put out an official statement or paper addressing these claims. I believe that is a mistake, and allows doubt to be sown. I would much rather see them devote the effort (even if they feel it’s not worth it) and address what Jackson’s group has done, even if it isn’t getting scientific attention. If you can demonstrate that you’ve done it right, it’s an opportunity to educate the public in a tremendous way, especially when you’ve got them interested.

And this opportunity couldn’t come at a better time; the NSF budget has been slashed by ~$840 million, and LIGO’s path to achieving design sensitivity and being able to see lower-mass black holes and even potentially neutron-star mergers nearby is in jeopardy. Why wouldn’t you drum up public support, now, when it’s your best chance?

Comments

‘Several people from LIGO-Virgo have interacted with the Danish group.’

That’s nice.

‘The group was invited to (and did not) present their results to LIGO-Virgo.’

They wanted to wine em and dine them.

‘Their methodology does not follow what LIGO-Virgo does, completely, and they have tried to get them to reflect on that fact, to no avail.’

The question here is has LIGO ever published their ‘more intricate – data analysis’ that Sabine mentioned in her article?

‘The LIGO-Virgo members have left frustrated about the lack of openness and responsiveness from the Danish group.’

Well, if you can’t convince someone, that’s always frustrating; ask MM, he can tell you all about it.

‘They no longer feel that a response is worth their time or energy.’

To my knowledge it is the first ‘official’ criticism, so it’s not like they have to use their time to fight off any other criticism; at least they could have said, “sure, now is a bad time, but you’ll get a response in a month or two.”

‘But that said, the LIGO group refuses to put out an official statement or paper addressing these claims.’

Well, that’s the key issue. If you write something down then it is official and can be used against you as evidence, so better to invite someone over, wine ’em and dine ’em, convince them they are wrong by applying charm and some peer pressure, and the ‘problem’ is fixed without ever putting something on paper.

This thing already smelled like a con from the start, and now, by refusing to put something on paper, it is starting to smell worse.

“bombarding it with various particles (photons, neutrons, electrons, etc.) and seeing what interactions take place”

Seeing???

Well, I was always curious whether collisions at the LHC could cause tiny vibrations in spacetime and shake up surrounding matter, with the risk of disrupting protons, like how you can shake and break a glass from a distance with a speaker of strong enough amplitude.

Now, what this LIGO case shows us is that we possibly can’t detect such vibrations, because there is too much noise and we don’t even have templates for such things to spot them. We simply can’t see if we are starting to overheat the installation.

Physics has changed in strange ways. Theoreticians can now do without experimental confirmation while experimentalists find it ethical to work in absolute secrecy and launch fraudulent campaigns from time to time. So in 2010 “a select few expert administrators” deceived everybody, misled astronomers into wasting time and money on the fake, and “this became particularly useful starting in September 2015”:

https://www.researchgate.net/blog/post/a-null-result-is-not-a-failure
“…a blind injection test where only a select few expert administrators are able to put a fake signal in the data, maintaining strict confidentiality. They did just that in the early morning hours of 16 September 2010. Automated data analyses alerted us to an extraordinary event within eight minutes of data collection, and within 45 minutes we had our astronomer colleagues with optical telescopes imaging the area we estimated the gravitational wave to have come from. Since it came from the direction of the Canis Major constellation, this event picked up the nickname of the “Big Dog Event”. For months we worked on vetting this candidate gravitational wave detection, extracting parameters that described the source, and even wrote a paper. Finally, at the next collaboration meeting, after all the work had been cataloged and we voted unanimously to publish the paper the next day. However, it was revealed immediately after the vote to be an injection and that our estimated parameters for the simulated source were accurate. Again, there was no detection, but we learned a great deal about our abilities to know when we detected a gravitational wave and that we can do science with the data. This became particularly useful starting in September 2015.”

In the changed physics world the following C.S.I. activity is regarded as a normal scientific procedure:

http://nautil.us/issue/34/adaptation/the-gravity-wave-hunter
“I can tell you about Alan Weinstein’s reaction, and he’s a professor here at Caltech who works on the LIGO experiment. He said when they got the phone calls they were all incredulous because they couldn’t believe that it was real. They’ve been looking for gravitational waves for decades. He said at first he thought that it was a blind injection, that someone had put in a signal and they didn’t know about it and so they thought that they were going to have to go through this whole rigmarole again, to find out that at the end of the day it was a hardware injection. Then they thought that maybe it was double blind because no one seemed to know what was going on. Whoever did the injection didn’t tell anyone, and this is going to be a big secret, and then eventually it’s not going to be a real signal. But then everyone swore that they hadn’t done any injections, and so they were starting to think, “oh my gosh, maybe this is real!” And then Alan thought maybe it was a triple blind experiment, and that just means it’s a malicious hacker who somehow managed to erase all of their steps and get the perfect gravitational wave signal in the mirror, and then will announce that they’ve somehow engineered this in a few months, and embarrass the collaboration. But he also claims that a binary black hole merger is much more likely than someone with that level of computer hacking power who is interested in hacking LIGO.”

http://www.gizmodo.com.au/2016/04/black-hole-blues-gives-a-ringside-seat-to-discovery-of-gravitational-waves/
“Rai said, “Look, we went through every possible scenario for how you would inject a false signal, and tried to do it ourselves.” There were only a few people in the entire collaboration with sufficient access and knowledge to do something like that, and they interrogated them all. And you have to physically attach stuff, you can’t just do this telepathically, so they looked for little black boxes and things like that. It was like a C.S.I. experiment. So there’s no physical evidence. It would be very hard to fake a signal without being caught. And I don’t think anyone in the collaboration has that sophisticated a criminal mind. In fact, when they did a [deliberate] blind injection during the test run [of the earlier version of LIGO], they screwed it up a little. They got the orientation wrong.”

So in 2010 LIGO conspirators still did not have “that sophisticated a criminal mind” and “screwed it up a little” but then they improved and in 2015 everything was just fine:

http://www.thenational.ae/arts-life/the-review/why-albert-einstein-continues-to-make-waves-as-black-holes-collide#full
“Einstein believed in neither gravitational waves nor black holes. […] Dr Natalia Kiriushcheva, a theoretical and computational physicist at the University of Western Ontario (UWO), Canada, says that while it was Einstein who initiated the gravitational waves theory in a paper in June 1916, it was an addendum to his theory of general relativity and by 1936, he had concluded that such things did not exist. Furthermore – as a paper published by Einstein in the Annals of Mathematics in October, 1939 made clear, he also rejected the possibility of black holes. […] On September 16, 2010, a false signal – a so-called “blind injection” – was fed into both the Ligo and Virgo systems as part of an exercise to “test … detection capabilities”. At the time, the vast majority of the hundreds of scientists working on the equipment had no idea that they were being fed a dummy signal. The truth was not revealed until March the following year, by which time several papers about the supposed sensational discovery of gravitational waves were poised for publication. “While the scientists were disappointed that the discovery was not real, the success of the analysis was a compelling demonstration of the collaboration’s readiness to detect gravitational waves,” Ligo reported at the time. But take a look at the visualisation of the faked signal, says Dr Kiriushcheva, and compare it to the image apparently showing the collision of the twin black holes, seen on the second page of the recently-published discovery paper. “They look very, very similar,” she says. “It means that they knew exactly what they wanted to get and this is suspicious for us: when you know what you want to get from science, usually you can get it.” The apparent similarity is more curious because the faked event purported to show not a collision between two black holes, but the gravitational waves created by a neutron star spiralling into a black hole. 
The signals appear so similar, in fact, that Dr Kiriushcheva questions whether the “true” signal might actually have been an echo of the fake, “stored in the computer system from when they turned off the equipment five years before”.”

I’m not skilled enough in analysis to decide who is right and which method is OK or not. What I’m glad for is that they now showed where the complete methodology for the LIGO analysis is, and how it’s done (papers and links in the article). Not just the simple “tutorial” for everyone. That way, at least, everyone can have a go with the actual method that they used.

A generic problem. Unlike special relativity, general relativity is empirical, not deductive. If so, none of its predictions matter, including everything it says about gravitational waves. The models Sabine Hossenfelder criticizes are actually metastases from the primary tumor, general relativity:

Sabine Hossenfelder: “Many of my colleagues believe this forest of theories will eventually be chopped down by data. But in the foundations of physics it has become extremely rare for any model to be ruled out. The accepted practice is instead to adjust the model so that it continues to agree with the lack of empirical support.” http://www.nature.com.proxy.readcube.com/nphys/journal/v13/n4/full/nphys4079.html

Sabine Hossenfelder: “The criticism you raise that there are lots of speculative models that have no known relevance for the description of nature has very little to do with string theory but is a general disease of the research area. Lots of theorists produce lots of models that have no chance of ever being tested or ruled out because that’s how they earn a living. The smaller the probability of the model being ruled out in their lifetime, the better. It’s basic economics. Survival of the ‘fittest’ resulting in the natural selection of invincible models that can forever be amended.” http://www.math.columbia.edu/~woit/wordpress/?p=9375

Have watched the lecture… and I do understand the issue of the Danes a bit better now, although I have no idea if it has merit or not. At around 37 minutes in, he shows that even with their template-free analysis, they get the same result as LIGO, and with the same sigma. Even their plots match almost 98%.

So it’s not that they doubt LIGO had a detection (that’s clearly visible once the data is filtered); they doubt what that detection was. And they doubt LIGO’s second event, which they can’t reproduce from the available data.

What’s also interesting is that LIGO only has a limited view due to the noise, and is apparently only able to scan for certain BH mergers.

For my interest it raises questions about the LHC and the volumes of noise they have to deal with, it looks like they can only scan for specific events, anything out of the ordinary such as for instance detecting vibrations through the Higgs field is a no no, they are probably only able to see particles that form a kind of linear path-tree.

I also had to laugh at the thunderstorm right in between. Lightning is related to cosmic rays creating the first current paths; maybe the LIGO detectors got hit by the same cosmic outburst, or a mild earthquake, or a flash hitting something that released a mild wave of radiation.

Maybe the other detections were also electric discharges, but because those weren’t exactly in between, the results didn’t match as well. Etc. etc.

Anyway, the overall message was to get more results, more detectors, and more people looking critically at the data, and to be more patient before making a final claim.

At the LHC they need a massive number of results to claim a 5-sigma result and get official approval, while LIGO only had one result with a 6-sigma resemblance; it is a bit misleading to compare those two. But LIGO does play the media quite well, getting lots of attention.

A final thought is that the Danish group plays the part of the devil’s advocate; there is always a chance that something was overlooked.

Neither of these isotopes have ever been successfully produced, but that’s where the true test will lie!

A word of caution for the layfolk: “stability” is a relative term. It’s entirely possible that the nuclear shell effects Ethan talks about increase the stability of the isotopes in the ‘island of stability’ by a factor of 1,000 or even 1,000,000. But that may mean increasing their expected half-lives from nanoseconds to milliseconds. AFAIK nobody in the business expects these elements to be truly stable, or even stable enough to allow us to build up macroscopic supplies of them as we do the actinides. But hey, we won’t know for sure unless/until we produce them.

I did recently see a movie that employs those themes in a way I haven’t seen before and truly enjoyed. It’s called Kubo and the Two Strings. No sci-fi, but definitely worth checking out.

All the Laika stop-motion movies are worth checking out. Coraline is based on a Neil Gaiman novel and has the quirky/creepy feel you expect from him. Paranorman uses old zombie motifs to talk about bullying. Box Trolls is less deep but still a decent ‘just entertainment’ movie. Art is in the eye of the beholder, but I’d say Kubo is the best, followed by Coraline.