The Blog of Scott Aaronson
If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

As everyone knows, this was a momentous week in the history of science. And I don’t need to tell you why: the STOC and CCC accepted paper lists finally came out.

Haha, kidding! I meant, we learned this week that gravitational waves were directly detected for the first time, a hundred years after Einstein first predicted them (he then reneged on the prediction, then reinstated it, then reneged again, then reinstated it a second time—see Daniel Kennefick’s article for some of the fascinating story).

By now, we all know some of the basic parameters here: a merger of two black holes, ~1.3 billion light-years away, weighing ~36 and ~29 solar masses respectively, which (when they merged) gave off 3 solar masses’ worth of energy in the form of gravitational waves—in those brief 0.2 seconds, radiating more watts of power than all the stars in the observable universe combined. By the time the waves reached earth, they were only stretching and compressing space by 1 part in 4×10^21—thus, changing the lengths of the 4-kilometer arms of LIGO by 10^-18 meters (1/1000 the diameter of a proton). But this was detected, in possibly the highest-precision measurement ever made.
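Out of curiosity, here is a quick sanity check of those numbers in Python (a sketch only; the solar-luminosity value and the ~10^22 star count are round literature figures, not from LIGO, and the "more watts than all the stars" claim refers to the peak luminosity, roughly an order of magnitude above the 0.2-second average computed here):

```python
# Back-of-envelope check of the merger's quoted energy output.
# The stellar-population numbers below are order-of-magnitude guesses.
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
L_SUN = 3.828e26     # solar luminosity, W

energy = 3 * M_SUN * C**2     # ~3 solar masses radiated as gravitational waves
mean_power = energy / 0.2     # averaged over the ~0.2-second chirp

# Crude stand-in for "all the stars in the observable universe":
# ~10^22 stars at roughly solar luminosity.
starlight = 1e22 * L_SUN

print(f"radiated energy ~ {energy:.1e} J")
print(f"mean GW power  ~ {mean_power:.1e} W")
print(f"ratio to total starlight ~ {mean_power / starlight:.1f}")
```

Even the time-averaged power comes out within a factor of a few of this crude starlight total; the peak power clears it comfortably.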

As I read the historic news, there’s one question that kept gnawing at me: how close would you need to have been to the merging black holes before you could, you know, feel the distortion of space? I made a guess, assuming the strength of gravitational waves fell off with distance as 1/r^2. Then I checked Wikipedia and learned that the strength falls off only as 1/r, which completely changes the situation, and implies that the answer to my question is: you’d need to be very close. Even if you were only as far from the black-hole cataclysm as the earth is from the sun, I get that you’d be stretched and squished by a mere ~50 nanometers (this interview with Jennifer Ouellette and Amber Stuver says 165 nanometers, but as a theoretical computer scientist, I try not to sweat factors of 3). Even if you were 3000 miles from the black holes—New-York/LA distance—I get that the gravitational waves would only stretch and squish you by around a millimeter. Would you feel that? Not sure. At 300 miles, it would be maybe a centimeter—though presumably the linearized approximation is breaking down by that point. (See also this Physics StackExchange answer, which reaches similar conclusions, though again off from mine by factors of 3 or 4.) Now, the black holes themselves were orbiting about 200 miles from each other before they merged. So, the distance at which you could safely feel their gravitational waves, isn’t too far from the distance at which they’d rip you to shreds and swallow you!
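For anyone who wants to redo the arithmetic, here is a sketch of the 1/r scaling in Python. It assumes a round peak strain of h ~ 10^-21 at Earth and the ~1.3-billion-light-year distance; with those inputs it lands near the Ouellette–Stuver 165-nanometer figure, i.e. within the factor of 3 discussed above:

```python
# Scale the strain observed at Earth back toward the source, using the
# 1/r falloff of gravitational-wave amplitude (a rough sketch; the input
# strain and distance are round numbers from the announcement).
LY = 9.461e15        # light-year in m
AU = 1.496e11        # astronomical unit in m
MILE = 1609.34       # mile in m

h_earth = 1e-21      # peak strain observed at Earth
d_earth = 1.3e9 * LY # distance to the merger

def strain_at(r):
    """Strain amplitude at distance r, falling off as 1/r (not 1/r^2)."""
    return h_earth * d_earth / r

# Stretch of a ~2-meter human at various distances from the merger:
for r, label in [(AU, "1 AU"), (3000 * MILE, "3000 miles"), (300 * MILE, "300 miles")]:
    print(f"{label:>10}: {strain_at(r) * 2:.1e} m")
```

Note how forgiving the 1/r law is: dropping from 1 AU to 3000 miles buys a factor of ~30,000 in amplitude, not the ~10^9 you would get from an inverse-square falloff.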

In summary, to stretch and squeeze spacetime by just a few hundred nanometers per meter, along the surface of a sphere whose radius equals our orbit around the sun, requires more watts of power than all the stars in the observable universe give off as starlight. People often say that the message of general relativity is that matter bends spacetime “as if it were a mattress.” But they should add that the reason it took so long for humans to notice this, is that it’s a really friggin’ firm mattress, one that you need to bounce up and down on unbelievably hard before it quivers, and would probably never want to sleep on.

As if I needed to say it, this post is an invitation for experts to correct whatever I got wrong. Public humiliation, I’ve found, is a very fast and effective way to learn an unfamiliar field.

131 Responses to “The universe has a high (but not infinite) Sleep Number”

Really, really insignificant correction: in the physics lingo (which does not necessarily constrain you) gravity waves are waves in some medium due to gravity, whereas what was discovered for the first time this week is gravitational waves.

Moshe #1: Thanks! Fixed. (But I’m curious: do experts talking among themselves always say “gravitational waves,” or do they relax and say “gravity waves,” much like theoretical computer scientists talking among themselves will freely refer to an optimization problem as “NP-complete,” even though they know it’s oh-so-wrong to use NP-complete for anything other than decision problems?)

Not sure, not really considering myself an expert. It definitely seems on the pedantic side, since as far as I can tell the communities working on the two types of waves have no overlap (this is theorists, but maybe experimentalists need to know about dirty real world phenomena? e.g. are gravity waves a source of noise for LIGO?).

Adam #4: Yes, so I learned. 🙂 I guess the unintuitive part is that you expect the “amount of oomphiness” in a spherical expanding wave to fall off like the surface areas of the spheres. But crucially, the stretchiness and squeeziness of the waves is the square root of oomphiness.

(Incidentally, all the math and CS people who I talked to today also assumed it would have to fall like 1/r^2, which made me feel better that at least I wasn’t alone.)
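The scaling argument can be made concrete in a few lines of Python (a sketch of the dimensional reasoning only, with an arbitrary unit luminosity):

```python
# Why amplitude falls as 1/r: the radiated power spreads over spheres of
# area 4*pi*r^2, so the energy flux goes as 1/r^2, while the strain
# amplitude goes as the square root of the flux, hence as 1/r.
import math

def flux(r, luminosity=1.0):
    """Energy flux through a sphere of radius r (arbitrary units)."""
    return luminosity / (4 * math.pi * r**2)

def amplitude(r):
    """Strain amplitude ~ sqrt(flux), the 'square root of oomphiness'."""
    return math.sqrt(flux(r))

# Doubling the distance quarters the flux but only halves the amplitude:
assert abs(flux(2.0) / flux(1.0) - 0.25) < 1e-12
assert abs(amplitude(2.0) / amplitude(1.0) - 0.5) < 1e-12
```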

On the “do you feel the wave?” issue, it is tough to estimate how small of a wave amplitude one might feel, and probably harder to measure experimentally.

If “feel” means activating the pain sensors, a likely place to look might be in bones. Because the machinery that generates the pain signal is also affected by the G-wave, possibly in a perpendicular direction, it might be hard to figure out how big an amplitude is needed to notice it. Anyone out there know anything about this, or able to clarify the thoughts I just put down?

If “feel” is expanded to mean any detection, a good place to look is the brain, where a lot of info and thoughts are compressed into a small space. Hard to guess how this would interact with thinking. But maybe some of our best “sudden insight” ideas happened because a G-wave jostled some thoughts over some kind of potential barrier and into a “valley of good ideas”. Anyone know anything about this?

asdf #12: I got 1/(4×10^21) by plugging in the following: the LIGO arms are 4 km long, whereas the displacement was said to be 1/1000 the diameter of a proton, which is ~10^-15 meters.

So, one possibility is that the displacement was actually by 1/250 the diameter of a proton—that would explain it.

But notice that there’s also a factor-of-2 ambiguity: are we measuring the displacement from “normal” to peak length, or from trough to peak? And I suppose a second factor-of-2 ambiguity comes from the fact that LIGO has 2 arms, one of which compresses while the other expands. So, do these ambiguities together explain the factor of 4?

Scott: You can put me in the same camp of dolts that also thought the amplitude would fall at 1/r^2, until Adam corrected me. This made more sense when I remembered strain is proportional to force, which is proportional to field strength, which is proportional to the square root of wave power. With amplitude and strain decreasing at 1/r, you’d probably need to be within about 1au of the event to feel the gravitational waves at all.

Moshe: “as far as I can tell the communities working on the two types of waves have no overlap” Ha ha ha, you’re forgetting someone. 😉 I did read, though, that LIGO can hear the noise of surf crashing.

Technically minded Shtetl Optimized readers can find answers to pretty much any gravity-wave astronomy question in a pair of LIGO documents posted today, which respectively address the questions “How big is the signal?” and “How big is the noise?”. Together these two documents convey quite a lot more information than today’s Physical Review Letter.

LIGO Document P1500269 (How big is the signal?) “GW150914: First Results from the Search for Binary Black Hole Coalescence with Advanced LIGO” (also available as arXiv:1602.03839 [gr-qc])

LIGO Document P1500237 (How big is the noise?) “GW150914: The Advanced LIGO Detectors in the Era of First Discoveries” (also available as arXiv:1602.03838 [gr-qc])

Scott’s answer of “How big is the signal?” is accurate (AFAICT). The answer to “How big is the noise?” is summarized, dryly, in Figure 3 of the latter article (LIGO-P1500237).

The juicy summary is that, at full rated power, LIGO’s four-kilometer light-beams become more longitudinally rigid than equivalent bars of diamond, and LIGO’s suspended mirrors become more angularly unstable than pencils balanced on-point. The difficulty in dynamically stabilizing LIGO’s mirrors, and the noise induced by the stabilizing feedback, are sufficiently great that, in its initial run, Advanced LIGO was operated at only ~10% of its rated optical power.

Even so, the required controls complexity is amazing; LIGO-P1500237 notes that “More than 300 digital control loops, with bandwidths spanning from sub-Hz to hundreds of kHz, are employed to keep each Advanced LIGO interferometer operating optimally during observation” (this is why it’s far from easy to keep both LIGO detectors running simultaneously, even at less-than-designed optical power).

It’s remarkable too, that LIGO’s photons exert forces and torques stupendously greater than the forces and torques exerted by the gravity waves that the photons detect.

Conclusion From a controls point-of-view, the Advanced LIGO observatories are the most sophisticated devices ever operated … and the very good news is, there is plenty of scope for further improvements in observational sensitivity … sufficient to foresee (hopefully yet reasonably) that a fully tuned-up Advanced LIGO may observe new gravity-wave sources on a daily basis. Wonderful! 🙂

I have been having the reverse problem about gravity waves versus gravitational waves: I am thinking about whether I can measure solar g-modes (gravity waves that propagate in the sun’s core and are evanescent at the surface). If I say solar gravity waves to people, they think I am talking about gravitational waves; I have to say gravity waves like ocean waves for people to understand.

Is there any reason why, with different filtering algorithms, the LIGO’s are not super seismographs? To a zeroth order approximation, the right filter should be able to ID someone doing the Dougie after scoring a touchdown. The NFL might fund a couple more LIGO’s for enforcement of excess celebration rules.

So how sure are we of this detection? 1 part in 4×10^21 sounds like a very tiny effect.

Aren’t there a lot of statistical flukes, noise, errors, etc. that could be at play here?

I guess what I’m asking is what’s the chance that 6 months from now we realize that what we thought was an “effect” is, in fact, an artifact or analysis error etc.?

I mean, when we routinely get the statistics behind something like clinical trials wrong, or into controversy, and there the effect size is much, much larger, what are the things that could go wrong when you are detecting a 10^-18 meter change in a 4-kilometer reference? Isn’t there a LOT of methodological nuance behind trusting that the detection is real? Is there wide agreement among the experts on the power of the detection procedure?

(afterthought) Fermi Question At peak intensity, the energy flux at Earth’s surface of GW150914’s gravitational waves waves, expressed as a fraction f of the full moon’s optical-wavelength radiant flux at Earth’s surface, is most nearly:
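One hedged stab at this Fermi question in Python (the peak luminosity ~3.6×10^49 W and 410 Mpc distance are from the discovery paper; the full moon's ~2 mW/m^2 radiant flux at Earth's surface is a round literature value, not from this thread):

```python
# Compare GW150914's peak gravitational-wave flux at Earth with the
# full moon's optical radiant flux (a back-of-envelope sketch).
import math

MPC = 3.086e22        # megaparsec in m
L_peak = 3.6e49       # peak GW luminosity, W (discovery paper)
r = 410 * MPC         # luminosity distance to GW150914

gw_flux = L_peak / (4 * math.pi * r**2)   # W/m^2 at Earth
moon_flux = 2e-3      # full-moon radiant flux at Earth, ~W/m^2 (rough)

f = gw_flux / moon_flux
print(f"GW flux ~ {gw_flux * 1e3:.0f} mW/m^2, roughly {f:.0f}x full moonlight")
```

So, at peak, the event briefly outshone the full moon in gravitational waves by about an order of magnitude.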

Isn’t that what people mean when they say that gravity is weak compared to E&M? When you talk about the light emitted by all the stars in the universe, you seem to be comparing E&M to gravity, rather than making an absolute statement about gravity. On the other hand, when one talks about the difficulty of explaining the precession of Mercury, one seems to be making a purely gravitational statement. On the third hand, Einstein was driven by the problem of reconciling Maxwell’s equations with Newton’s laws, which other people didn’t find pressing because they were relevant at different scales.

Also, the topic of whether one would feel a 1% squash is again a comparison with E&M. Lots of bonds would suddenly find themselves 1% shorter or longer than usual. What we would or would not notice is their response to this. That sounds very different from the normal experience of being squashed 1% over the course of the day.

I did the same calculation and agree that you’d basically be in the near field zone before having a chance of feeling anything. The other issue is related to your last point, that because spacetime is so stiff (or your body so squishy) the mechanical coupling would be very poor, so even then I’m not sure you’d feel it. That’s why GWs pass unhindered through ‘normal’ matter.

P.S. Yes we usually do talk about ‘gravitational waves’ amongst ourselves, although nobody would really care if you called them gravity waves, so long as there’s no risk of confusion. It’s a shame ‘gravity waves’ was already taken. Anyone got a better name?

Would you be able to feel the waves at all? As a layperson, I view gravitational waves as “the coordinate system itself” compressing, and it is not clear to me that that can be “felt.” The only effect I know of is that the light will seem to go at a different speed.

KWillets #11: That would depend entirely on just how small that fraction is. But there’s no reason to worry; it is extremely difficult to extract energy from gravitational waves (that’s why they’re so hard to detect in the first place).

When you get “close” to a black hole, the coordinate r does not correspond to (proper) distance. That is true even in the case of a static black hole, and it is even more true for a rapidly evolving spatial geometry like this one. So a statement like “we are now 300 km from the black holes” does not have a clear meaning. When the LIGO people talk about the distance between the black holes, you should not take this literally. Same goes for masses and velocities, sorry.

Rahul #20: It was a 5-sigma effect (note, in particular, that exactly the same waveform showed up at both detectors, the one in Louisiana and the one in Hanford, Washington).

As a semi-Bayesian outsider, taking into account the possibility of systematic errors, software bugs, hackers (which were seriously investigated as a possibility!), etc. etc., I’d personally wager 97% odds that the result stands after 6 months.

Elbi #29: Well, I did say that the linearized approximation breaks down by the time you’re that close to the black holes. But perhaps I should’ve made a stronger statement: by the time you’re close enough to the black holes that you could clearly feel their gravity waves, you’re probably also close enough that we can’t even say very accurately how close you are in a coordinate-independent way. An outside observer would estimate that distance to be on the order of a few hundred kilometers.

Hi Scott:
I was thinking about this question too. I think you’d really feel mm-level vibrations.

As editor of the paper at PRL, who stands behind publishing the paper with observation claims, I am happy to give you 30:1 odds. So if the result stands in 6 months I get a bottle of wine, if not I give you three cases of wine (36:1 actually). Deal?

Robert #33: Deal. I expect (of course) to be sending you a bottle of wine, and if so, will consider it a well-deserved honor for your editing efforts. Just send me a reminder and your address when it comes time.

Great! Now that I’ve secured my bottle of wine, I offer some reasons why I am so sure.

• 5.1σ is a *lower bound* on the estimate of the strength of the signal. See Fig 4. They don’t have enough background data to show a false hit in one detector, let alone two at the same time. So the statistical significance is probably much greater than 5σ.

• PRL has published more than 30 (I’m sure) papers with 5σ results, and I’m not aware of any which turned out not to be real (let alone after 6 months).

• The signal is so strong that one does not need fancy schemes to extract it, you can see it by eye.

I think that leaves only the “evil genius” hypothesis. I would put the prior on that quite low, but there is one more piece of evidence.

• Buried in the paper is news of a second event (i.e., from a different merger at a later time on Earth). That one is not nearly as strong, with a false-alarm rate of one per 2.3 years. Given that LIGO was actively looking for evil geniuses then, this second event would require a super evil genius. My prior on that is quite low.

Robert #37: I’ll be happy to send you the wine; I’m sure you earned it! To explain my side: the only reason I didn’t go above 97% is that I’ve been spending a lot of time lately hanging out with the rationalist/LessWrong crowd, and they’re constantly hammering home messages like: to estimate the probability of some event (a Space Shuttle explosion, whatever), just find a reference class of similar events, and resist the temptation to think that all the specific knowledge you have about YOUR event makes it unique. And in this case, the most obvious “comparable event” in recent memory would seem to be … umm, err, BICEP2. The other thing they keep saying is, never, ever give something (say) 99.99% confidence, without first going through the mental exercise of imagining yourself placing 10,000 similar bets (one per day for much of your life) and winning every single one, because that’s the kind of epistemic confidence you’re asserting. So I tried to apply those principles—but I confess that my gut-level, intuitive probability that I’ll be sending you the wine is way above 97%.
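For concreteness, here is the arithmetic of the wine bet as a sketch in Python (one bottle against three cases, i.e. 36 bottles, per comment #33):

```python
# Convert Robert's betting odds into an implied probability:
# he risks 36 bottles against Scott's 1, i.e. 36:1 that the result stands.
def implied_prob(odds_for, odds_against=1):
    """Probability implied by odds_for : odds_against stakes."""
    return odds_for / (odds_for + odds_against)

p_robert = implied_prob(36)   # Robert's side of the bet
print(f"Robert's implied confidence: {p_robert:.1%}")   # 97.3%
```

So the 36:1 stakes imply roughly 97.3% confidence, just above the 97% figure discussed above.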

Cabrera’s PRL article described the never-repeated, never-explained observation that is today known as “The Valentine’s Day Magnetic Monopole”. Was this observation a fabulously important real event? A mundane statistical fluctuation? An instrumental artifact? A deliberate scientific fraud? Even today, no one knows.

That is why (as Robert Garisto’s comment notes), the “other” LIGO event, known as “LVT151012”, greatly increases our confidence that event GW150914 was not an isolated “Valentine’s Day Black-Hole Coalescence”.

The parameters of event LVT151012 are given in Table 1 and Figure 7 on page 14 of LIGO document P1500237 (link provided in comment #16). In essence, LVT151012 is a less massive, more distant cousin of GW150914: radiating 15 solar masses of energy rather than 28, from a distance of (roughly) 1100 megaparsecs rather than 410, yielding a P-value of 0.02 rather than 5×10^-6.

Note that in medical research at least, LVT151012’s P-value of 0.02 would be regarded as respectably publication-worthy.
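Converting those quoted false-alarm probabilities into one-sided Gaussian sigmas takes one line with Python's statistics module (a sketch only; the discovery paper itself quotes a tighter false-alarm bound for GW150914, hence its headline >5.1σ):

```python
# Turn a false-alarm probability into a one-sided Gaussian significance
# via the standard normal quantile function.
from statistics import NormalDist

def sigma(p):
    """One-sided significance (in sigmas) for false-alarm probability p."""
    return NormalDist().inv_cdf(1 - p)

print(f"GW150914:  p = 5e-6 -> {sigma(5e-6):.1f} sigma")
print(f"LVT151012: p = 0.02 -> {sigma(0.02):.1f} sigma")
```

On this conversion the 0.02 P-value of LVT151012 corresponds to only about 2 sigma, which is why it is a "trigger" (LVT) rather than a detection (GW).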

Moreover, seeing not one but two such gravitational-wave events, in the relatively short span of eighteen days of observing time, provides high confidence that longer observing runs in coming months, at the improved levels of sensitivity associated with larger LIGO optical power and better-tuned control loops, will yield a flood of gravitational-wave detections.

Conclusion Far more likely than not, LIGO events LVT151012 and GW150914 were both real black-hole coalescences, with each event serving to sharply diminish residual doubts that the other might have been an anomalous “Valentine’s Day event”.

Historical confection In coming months, the most likely problem that PRL editors will be facing in regard to gravitational waves, is the same problem that PRL editors have already faced, back in the 1960s, in regard to masers and lasers:

Theodore Maiman made the first laser operate on 16 May 1960 at the Hughes Research Laboratory in California, by shining a high-power flash lamp on a ruby rod with silver-coated surfaces. He promptly submitted a short report of the work to the journal Physical Review Letters, but the editors turned it down. Some have thought this was because the Physical Review had announced that it was receiving too many papers on masers — the longer-wavelength predecessors of the laser — and had announced that any further papers would be turned down. But Simon Pasternack, who was an editor of Physical Review Letters at the time, has said that he turned down this historic paper because Maiman had just published, in June 1960, an article on the excitation of ruby with light, with an examination of the relaxation times between quantum states, and that the new work seemed to be simply more of the same. Pasternack’s reaction perhaps reflects the limited understanding at the time of the nature of lasers and their significance.

In coming months, a flooding tide of papers grappling with “the nature of gravitational waves and their significance” will be a wonderful challenge for PRL editors! 🙂

As far as I can see, there’s a big difference between BICEP2 and LIGO in how much faith outside experts initially had in the results. When BICEP2 results were announced, many astrophysicists and cosmologists were immediately suspicious (mostly) about the foreground subtraction. On the other hand, I haven’t heard or read about any experts in GR/gravitational waves who have serious doubts about the LIGO result.

I understand. I would make bets like this 36x. It’s difficult to imagine a 10,000:1 bet I would take because one should never bet what one cannot afford to lose. So what I would bet on differs from my probability estimate.

btw, the probability that the BICEP2 result *as published in PRL* (again by me) is wrong is similarly minuscule. The PRL asserts that they detected B-mode polarization in that microwave band, which is surely true, and it makes very clear that the results could be due to dust, which, alas, they are. Remember, their experiment was great; it’s just that they had very little data from other sources about dust foregrounds, and they underestimated it (in the preprint). So the BICEP2 PRL is not wrong. (I’m actually rather proud of how the paper improved; just compare the abstract of the preprint and the PRL, and see the Note Added, to get a sense of it.) The results as announced at the BICEP2 press conference are another matter.

Moral: So one’s confidence should go up if the result is published in PRL. 😊

You would feel tidal forces quite a bit before you would feel any gravitational waves. Deviations from assumed Kerr behaviour would likely also completely swamp the effect, so you would have no way of knowing which ‘effect’ you are feeling, only an outside observer at infinity would be able to recreate exactly what you felt.

Note that there is a bit of a subtlety here. Gravitational waves arising from binary mergers really have a characteristic size R not much smaller than the Schwarzschild radius of the black-hole merger to begin with. In the near-horizon region, the human body is thus of a significantly smaller scale than the amplitude, and I am unsure which approximation breaks down first. Certainly extrapolating equations like the strain from the far-field approximations is going to give you a wrong answer so close to the horizon.

Scott, comment 38 got me curious. Not to distract too much, but my small experience with the literature about bias and making better decisions (e.g. Kahneman’s book) points in almost the opposite direction. You’d like to design all kinds of mechanisms to avoid using your gut feeling and mechanize the process of assigning probabilities. Imagining yourself in relevant situations is precisely what you are trying to avoid…

With regard to whether the result holds up, one potential issue I’m wondering about is fraud. After all, one notable feature of this experiment is the blind-injection system, which creates a fake signal that only a small group of people leading the project can tell is fake. They are supposed to reveal whether the signal was fake right before the group submits the results for publication, but what keeps them honest about it? This seems like something people would have thought of already, with some mechanism in place to deal with it, such as a tamper-proof append-only log attached to the blind-injection system, but I just don’t know what it is personally.

[This particular signal was supposedly during an experimental run before the blind-injection system was in place, but that claim shouldn’t increase our confidence by much.]

Yet another wonderfully thought-provoking article out of the LIGO collaboration is the roadmap “Prospects for doubling the range of Advanced LIGO” (PRD 91, 2015, available as arXiv:1410.5882 [gr-qc]).

In a nutshell, LIGO’s roadmap foresaw (in 2014):

It is currently anticipated that Advanced LIGO will undertake its first observing run (O1) in 2015, with subsequent runs of increasing sensitivity and duration in 2016-2017 (O2) and 2017-2018 (O3). Full design sensitivity should be achieved by 2019. We envisage that a squeezed light source will be installed after O2, with test mass and coating thermal noise upgrades being implemented following O3. […]

While expected detection rates are currently very uncertain, even in the most pessimistic scenario these upgrades will take Advanced LIGO from observing a few events per year to observing a few events per month, a critical improvement as the era of gravitational-wave astronomy begins.

In regard to LIGO’s planned quantum squeezing technology, the key ideas and historical origins of this technology are lucidly and even humorously surveyed by Carlton Caves in his on-line interview “Realizing Squeezing: An Interview with Carlton Caves” (LIGO Magazine vol 1(3), 2013, theme issue “Squeezed Light: From Inspiration to Application”).

It is notable that LIGO’s roadmap was published before the just-completed LIGO Observing Run 1 (O1).

Conclusion Would that more quantum roadmaps were as thorough, prescient, and fruitful as LIGO’s roadmaps. A question worth pondering is “How does LIGO do it?” 🙂

5 sigma claims in PRL that turned out to be false: several observations of the Theta+ pentaquark were published in PRL. Here’s one claiming a 7.8 sigma significance. Later higher-statistics experiments found little credible evidence for this state.

Moshe #46: The prediction advice is emphatically not to rely on gut feelings, but rather to trick your gut into going along with what’s known to work. So, as an example, it’s been found that people assign “99.999%” or “0.0001%” probabilities way more often than is warranted. So imagining yourself making similar bets 100,000 times and winning every single one is just one device for talking yourself out of that.

Sure, I understand that if you just assign probabilities based on how strongly you feel about the correctness of some statement, those numbers are not very meaningful. You can improve your gut feeling, but another approach to remedy this is to have a systematic way to *calculate* those numbers, especially in a regime (rare events) where developing intuition is difficult.

I find myself interested recently in models of decision making based on ideas of statistical inference and AI. So, in that context I found your statement surprising — not whether this is the correct way to do things (what do I know?) but that this advice comes from these specific circles.

This week’s gravitational waves discovery made me slightly less sure of the Church-Turing thesis. For if general relativity is correct, and not an emergent approximation to some form of quantum gravity, then hypercomputation must be possible, since the universe is somehow computing how fast the apple is falling, and according to GR this is continuous. This spectacular validation of general relativity, even in the realm of small effects like barely detectable gravity waves, slightly increases my posterior belief that GR will trump QM in realms where their predictions conflict. Currently there is zero empirical evidence either way, but it’s easy to make the case that GR is the more fundamental theory, since it deals with spacetime itself, whereas quantum field theory just assumes a flat spacetime sitting in the background.

Rahul #39. As I recall, I cast doubt on BICEP2 in this blog as soon as the news came out. I did not do any expert analysis; I just thought it “didn’t sound right”. I am admittedly prejudiced against all reputed evidence for inflation theory, as I favor the “we have no clue” theory of the origin of the universe.

Moshe #46. Which of Kahneman’s books? I was not familiar with him. Checking Amazon shows several interesting looking books on various aspects of decisions making. The list of books I want to read grows rapidly.

CATWW #53: Those are some amazing logical acrobatics! Even assuming (as most physicists do) that quantum gravity takes over at the Planck scale, I haven’t heard of any model that predicts it would’ve affected these results in any way whatsoever. So, while LIGO should maybe increase your confidence in GR-as-effective-field-theory from 99.97% to 99.997%, I don’t know of any reason why it should affect your views on quantum gravity (maybe someone else does).

Meh, is there a way to register a nickname here? It doesn’t have to be asdf, I can probably think of something a bit cleverer (“asdf” was a very quick thought as you could probably guess). This is posted from a working email address.

asdf #58, #59: Sorry, I don’t have a registration system set up. So how about this: “asdf” who hasn’t been posting here for years, you’re respectfully asked to pick a different nickname, to avoid confusion with the “asdf” who has been posting here for years.

Re: Scott #56. There seems to be one implication for quantum gravity, that is, an upper bound on the graviton mass. Quoting from the PRL paper:

“Finally, assuming a modified dispersion relation for gravitational waves [97], our observations constrain the Compton wavelength of the graviton to be λ_g > 10^13 km, which could be interpreted as a bound on the graviton mass m_g < 1.2×10^-22 eV/c^2.”
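As a sanity check on that quoted bound, the conversion from Compton wavelength to mass is one line (a sketch using hc ≈ 1.24×10^-6 eV·m):

```python
# Check the graviton mass bound from its Compton wavelength:
# m_g c^2 = h c / lambda_g.
HC_EV_M = 1.23984e-6      # h*c in eV*m
lambda_g = 1e13 * 1e3     # 10^13 km, converted to m

m_g = HC_EV_M / lambda_g  # graviton mass bound in eV/c^2
print(f"m_g < {m_g:.1e} eV/c^2")   # 1.2e-22, matching the quoted figure
```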

David #62: Thanks! Ed Farhi also made that point at our weekly quantum information group meeting. Of course, as far as I know, almost all physicists (including string theorists, LQGists, etc.) strongly believe that the graviton is massless—so, like many great experimental results, this one seems to fall into the category of constraining theoretical ideas that hardly anyone took seriously in the first place. Which isn’t to denigrate its importance at all! The only way ever to learn anything that shocks you, is to carry out investigations most of which will confirm what you already thought. (Or am I wrong, and were there any serious QG proposals that predicted a massive graviton?)

While I agree the LIGO result certainly looks solid, the “Blind Injection” plot arc is hard to ignore.

Here is an interesting aspect. How often in a breakthrough discovery does the signal perfectly match up to the computed profile? Is it a perfect match? Anyone care to estimate how closely the observed result matched the expected result? If an estimate is obtained, is it reasonable? Maybe it should match exactly, I don’t think many nonlinear or otherwise awkward terms are being ignored in the analysis. At any rate, this aspect is kind of a can of worms — perhaps a far-fetched COW, but a COW nevertheless.

So, when you combine the previous thought with the fact that blind injection has been done at times in the past, you have to consider the possibility that the blind injection function got triggered by accident, code rot, skulduggery, or who knows what. By no means am I suggesting that this happened, only that in a discovery of this magnitude, every nit must be picked.

BTW, I read somewhere about observatories searching the “likely location” to see if anything shows up, which would obviously change everything. But it seems to me the only data is the elapsed time between the two stations, so how do you compute a likely location? Considering the sky to be a two-degree-of-freedom phase space, one measurement should leave a one-DOF target area, with a width that can be estimated from the accuracy of the time lag.

Anyone know anything about this?

I think LIGO stations are in the works in Italy and Japan. This should allow some triangulation on subsequent events. Unfortunately, all four are at about the same latitude, which is likely to result in some near singular matrix inversions. A station in Antarctica would surely improve accuracy. Next up: One on the moon. That would finally justify sending humans into space.

Yeah, that was confusing me also, especially because the first few reports just said that they had the event located to about a hemisphere (which made some sense given the degrees of freedom). I’m not sure how they can narrow it down further, maybe based on the shape of the wave?

@Raoul Ohio: no, there are two degrees of freedom observed, namely the amplitude difference and the phase difference between the signals between the two detectors. The differences are caused because the two detectors measure distance in a different direction.
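For anyone who wants numbers: with two sites, the dominant localization information is the arrival-time difference, which pins the source to a ring on the sky (a cone around the Hanford–Livingston baseline). A minimal sketch of that geometry, using a rough ~3000 km baseline and the ~7 ms delay reported for GW150914 (both figures approximate):

```python
import math

C = 299_792_458.0        # speed of light, m/s
BASELINE = 3.0e6         # Hanford-Livingston separation, ~3000 km (rough value)

def source_cone_angle(delta_t):
    """Angle between the source direction and the detector baseline,
    given the measured arrival-time difference. A single time delay
    localizes the source only to a ring (a cone) on the sky."""
    x = C * delta_t / BASELINE
    if not -1.0 <= x <= 1.0:
        raise ValueError("delay exceeds the inter-site light-travel time")
    return math.acos(x)

max_delay = BASELINE / C             # ~10 ms light-travel time between sites
theta = source_cone_angle(0.007)     # ~7 ms delay -> cone of roughly 46 degrees
```

The width of the ring is then set by the timing uncertainty, exactly as Raoul's one-DOF argument suggests; amplitude and phase differences (and eventually a third detector) shrink the ring further.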

tod #72: Bertrand Russell once said that the great benefit of having coauthored Principia Mathematica was that afterwards, he (unlike many philosophers) could say everything in plain English—because everyone knew that he could be arbitrarily abstruse and incomprehensible if he wanted, and was just choosing not to be.

I completely understand this: I constantly feel like the only real advantage of being an MIT professor, and of whatever theorems I’ve proved, etc., is that I no longer care who thinks I’m a moron because I wonder aloud why the oomphiness of a gravity wave goes like the square of the squeeziness. If I’d had a better personality, I would never have cared in the first place, but I did. And crucially, my no longer caring how often I make a fool of myself in public means I can now learn new things and clear up my misconceptions much faster than before.

Raoul Ohio #66: It’s actually not so surprising that the events match up so precisely to the theory. First, you can numerically solve the exact theory, rather than having to make a whole lot of approximations to get the state space down to a manageable size before attempting a numerical solution; so there are no approximation errors, which is pretty rare. Second, it apparently takes 3 solar masses’ worth of energy over a rather short span of time to generate something we can see, which means even much closer sources of noise simply get swamped by the signal.

In some sense, what would be worrying would be a signal which did not match a theoretical solution.

Richard, jonas, and Peter, thanks for the info. This is a truly unique type of observation. When they get a couple more LIGO stations up and running, we can probably look forward to a whole new dimension of astrophysics discoveries. I can’t wait!

Here is another question. Somewhere (probably above) I read that there is not expected to be an EM signal. Presumably that is for a “pure” two black hole merger, with nothing else involved. In the real world, I would guess there are likely to be accretion disks and who knows what else involved. If nothing else, there should be an accretion disk merger, which might well cause a surge in accretion. So I would expect a big flash, possibly a micro or mini quasar, maybe like SS 433, or something else cool, possibly unthought of yet.

Anyone know anything about that?

While walking dogs a couple decades ago, I ran into a physics prof I had taken a course from a couple decades earlier. We discussed the then-new topic of gamma ray bursts, and guessed at what might cause them. I said I was betting on black hole collisions, because there ought to be enough energy to explain anything.

Could Scott or someone else please comment on the exact meaning of the 5 sigma number in this paper? I understand the meaning of this term in the context of, say, rolling dice to determine the expected roll of the dice, in which the variance of the average of the experiments decreases over time. But I cannot understand the meaning of 5 sigma from this paper. How can you really measure the accuracy of an experimental device like this? (What if North Korea detonated a nuclear bomb in the air or something?) Did numerical calculations somehow figure into this 5 sigma number?

In addition to possible gravitational-wave signals, the detector strain contains a stationary noise background that primarily arises from photon shot noise at high frequencies and seismic noise at low frequencies. In the mid-frequency range, detector commissioning has not yet reached the point where test mass thermal noise dominates, and the noise at mid frequencies is poorly understood. The detector strain data also exhibits non-stationarity and non-Gaussian noise transients that arise from a variety of instrumental or environmental mechanisms. The measured strain s(t) is the sum of possible gravitational-wave signals h(t) and the different types of detector noise n(t). […]

To measure the significance of candidate events, the PyCBC analysis artificially shifts the timestamps of one detector’s triggers by an offset that is large compared to the intersite propagation time, and a new set of coincident events is produced based on this time-shifted data set. For instrumental noise that is uncorrelated between detectors this is an effective way to estimate the background.

To say it another way: “If the fluctuations in the detection measure were stationary and Gaussian (which they are not), then the false-alarm probability associated to event GW150914, as estimated from a numerical analysis of timestamp-shifted observational datasets, would correspond to a signal-to-noise ratio in excess of 5-sigma.”
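The time-shift trick is easy to demonstrate on toy data. A minimal sketch (entirely synthetic numbers, not LIGO’s actual pipeline): shift one detector’s trigger stream by much more than the inter-site light-travel time, recount coincidences, and the resulting counts estimate the accidental background.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "matched-filter SNR" streams for two detectors (pure noise here).
n = 200_000
det1 = rng.normal(size=n)
det2 = rng.normal(size=n)

THRESHOLD = 3.5   # toy single-detector trigger threshold

def coincidences(a, b, shift=0):
    """Count samples where both streams exceed THRESHOLD, after circularly
    shifting stream b; a large nonzero shift destroys any true coincidence."""
    return int(np.sum((a > THRESHOLD) & (np.roll(b, shift) > THRESHOLD)))

foreground = coincidences(det1, det2)                 # zero-lag count
background = [coincidences(det1, det2, s)             # time-shifted counts
              for s in range(5_000, 105_000, 5_000)]

# The empirical background distribution calibrates how often noise alone
# produces a coincidence as loud as any zero-lag candidate.
mean_background = float(np.mean(background))
```

As the excerpt notes, this only estimates noise that is *uncorrelated* between detectors; correlated environmental glitches need separate vetoes.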

Further reading An overview of the dynamical mechanisms that presently restrict LIGO’s sensitivity — mechanisms that even at present are only partially understood — is presented in “Observation of Parametric Instability in Advanced LIGO” (PRL 2015, available as arXiv:1502.06058).

As the conclusion of the article notes, a small cadre of Soviet theoreticians is the hero of this story:

The authors would like to acknowledge the extensive theoretical analysis of parametric instabilities by our Moscow State University colleagues Vladimir Braginsky, Sergey Strigin, and Sergey Vyatchanin, without which these instabilities would have come as a terrible surprise.

One of the great and hopeful lessons of the gravitational wave saga (for me at least), has been the fundamental collegial integrity of the LIGO community, which ensured that even the most dismaying theoretical analyses of unwelcome device dynamics were reasonably accounted in the Advanced LIGO design and validation process.

Although LIGO has surmounted plenty of significant technical challenges, and plenty of challenges still remain, it has (to date) encountered no show-stopping “terrible surprises.” Long may this happy tradition continue!

I would also like to understand whether the types of events whose gravitational waves are detectable by LIGO can in principle be observed by other means, such as radio or even optical telescopes. E.g., if a supernova is observed, should we expect LIGO to be able to find a corresponding gravitational signal? If so, how rare would such a supernova be, in terms of size and distance, based on the observation history we have?

John #77; thanks for the reply; so if I now understand, they are saying “due to a numerical simulation of a Gaussian noise source of gravitational waves over a few millennia, we estimate the probability of this event happening as a result of this random noise to be less than .00001%” Correct? If so, this 5 sigma number is in some sense made up, isn’t it? Since it is kind of an artifact of the various parameters of the simulation? And this is kind of where Scott’s 3% estimate comes from, since he only has a 97% confidence in the simulation?

And there is still a separate issue, which is: they observe this signal, but it may not necessarily come from the collision of two black holes; it may come from some other, yet-to-be-understood event?

Steve #82, the distillation “due to a numerical simulation of a Gaussian noise source of gravitational waves over a few millennia [etc.]” is not quite correct … LIGO is rather following a long-established convention of describing statistical confidence-levels as arising from an equivalent five-sigma signal-to-(Gaussian)-noise ratio.

Fred: That’s one useful thing I would’ve learned had I studied electronics engineering!

There was exceedingly little doubt that gravitational waves exist. Not only has GR predicted them for a century, but the Hulse-Taylor pulsar gave an indirect yet essentially conclusive observation of them in the 1970s. So the reason to be excited about detecting them directly, I’d say, is indeed that you can use this as a whole new way to do astronomy. (We already learned something new about the sizes, distances, and frequencies of merging black holes.) Also, if there WERE a slight correction to GR, this is very plausibly how we’d notice it.

fred #88: My understanding is that, while gravitational waves could in principle be focused, it would require a science-fiction-level ability to tug black holes into desired positions and things like that.

In effect, Fred asks (#47) “What is the LIGO news all about?” Three answers are suggested:

• Physics News Gravity waves are real (what once was suspected, now is confirmed).

• Astronomy News We can now see the universe in new ways (see e.g. “A Gravitational Wave Detector with Cosmological Reach”, 2014, arXiv:1410.0612).

• Engineering News We can push quantum limits in new regimes (aka, there were no “terrible surprises”, per #78).

Needless to say, it’s not necessary that everyone agree regarding which is primary.

For me there’s a fourth dimension to the LIGO achievement, a systems engineering dimension, which reflects values that are well-captured in historian Stephen B. Johnson’s The Secret of Apollo (2002):

Systems approaches emphasize integrative features and the elements of human cooperation necessary to organize complex activities and technologies. Believing that humans are irrational, I find the creation of huge, orderly, rational technologies almost miraculous. I had never pondered the deeper implications of cooperative efforts amid irrationality and conflict, and this project has enabled me to do so. […]

I sincerely hope that this work helps others recognize that the ‘systems’ in which we all take part are our own creations. They help or hinder us, depending upon our individual and collective goals. Regardless of our feelings about them, they are among the pervasive bonds that hold our society together.

Conclusion The enduring values of the Apollo Program are well-reflected in the LIGO program (for me at least).

—–

Scott’s #73 reminded me of a maxim by Leszek Kołakowski:

“A modern philosopher who has never once suspected himself of being a charlatan must be such a shallow mind that his work is probably not worth reading.”

As with modern philosophy, so with scalable fault-tolerant quantum computation?

Fred #88: yes. The already-observed gravitational lenses in the sky also bend gravitational waves in exactly the same way as they bend light. That is because gravitational waves travel at the same speed (the speed of light), so within geometrical optics it is the same. The frequency is different, but I don’t think that will matter, as the lenses are so big.

Can anyone clarify what exactly was going on with the blind injections? It seems to me that an experiment like LIGO would be way more concerned about false positives, but blind injections seem to be designed to fish out false negatives (which I wouldn’t have thought would be a concern).

thepenforests wonders “Blind injections seem to be designed to fish out false negatives (which I wouldn’t have thought would be a concern).”

The history of science provides plenty of poignant examples of false negatives.

A paradigmatic example is Cornelis Jacobus Gorter’s “Negative Result of an Attempt to Detect Nuclear Magnetic Spins” (1936), which anticipated by a full decade the Nobel-winning 1946 observations of Edward M Purcell, Robert V Pound, and Henry C Torrey. The successful 1946 experiment used a similar apparatus and experimental protocol to Gorter’s failed 1936 experiment; even today it is not entirely clear why Gorter’s early attempt failed.

Lesson-learned It is prudent to systematically verify, to the maximum degree feasible, that if a signal is present, then the experiment will see it.

That’s why the LIGO program ran two independent signal-detection protocols simultaneously, known as “PyCBC” and “GstLAL” (per LIGO Document P1500269 of comment #16).

Has this been actually worked out? Do gravitational waves obey the same laws in terms of geometrical optics? Contrary to EM, gravity is very nonlinear (i.e., contrary to photons, I suppose, gravitons do interact with each other). As a consequence, it probably doesn’t satisfy a superposition principle, right? Does it make sense to immediately import intuition from interferometry, geometrical optics, etc?

Maybe graviton-graviton interaction is just so weak that geometrical optics is a good approximation in any interesting range. Then again, if the gravitational lens is a black hole or something like it and we’re deep in the strong-field regime, maybe not.

Lewikee #97: Think about it. If it fell off as 1/r2, then knowing how strong it still is when it reaches the earth, it would have to be really, really strong even (say) 1 AU away from the black holes. Because it falls off as 1/r, it can already be imperceptible at an AU away.
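To put numbers on this: the strain (amplitude) falls off as 1/r, while the radiated energy flux falls off as 1/r², which is consistent, since flux goes as the amplitude squared. A back-of-envelope sketch using the approximate figures from the post (a ~10⁻²¹ strain at ~1.3 billion light-years), not a GR calculation:

```python
# Back-of-envelope: scale the ~1e-21 strain observed at Earth up to closer
# distances using h ~ 1/r. All figures approximate (from the post above).
LY_M = 9.461e15            # metres per light-year
AU_M = 1.496e11            # metres per astronomical unit

H_EARTH = 1e-21            # peak strain at Earth (approximate)
R_EARTH = 1.3e9 * LY_M     # ~1.3 billion light-years, in metres

def strain_at(r):
    """Strain at distance r, assuming amplitude scales as 1/r."""
    return H_EARTH * (R_EARTH / r)

body_height = 2.0                             # a ~2 m person
stretch_1au = strain_at(AU_M) * body_height   # ~1.6e-7 m, i.e. ~160 nm
```

This lands close to the 165 nm figure from the Ouellette–Stuver interview; whether you get 50 or 165 nm depends on the factor-of-3 conventions a theoretical computer scientist tries not to sweat.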

I could be wrong, but my intuition tells me that provided the wavelength of a gravitational wave is greater than the length of my body, it could not harm me regardless of how intense it is. The reason is that the wave will not cause the atoms in my body to move through space, and so will perform no work on it; that is to say, it will cause no damage. Instead the wave will cause the very space inside my body to expand, so the wave might cause me to look taller to you, but I would feel just the same as I always did, although you would look squashed to me. The detected waves were many miles in length, so I think they’d be of no danger no matter how strong they were.

For the same reason I don’t think you could extract work from the accelerating expansion of the universe, even if you could build machines of cosmological size, although at one time I thought you could. Suppose you had a spool of string on an axle, and you extended the string to cosmological distances and tied a weight to the end of it. I thought there would be torque on the spool, so the axle would rotate. The axle could then be connected to an electric generator, and it seemed to me you’d get useful work out of it. Of course you’d have to constantly add more mass-energy in the form of more string to keep it operating, and while the amount of mass per unit length of string would remain constant, the amount of energy per unit length you’d get out of it would not: because the universe is accelerating, it would grow without bound.

Then I realized my brainstorm wouldn’t work: there would be no torque on the spool, because the weight on the end of the string was not moving through space with respect to the spool; instead, space itself was moving. It’s as if I grabbed the string with my left hand 4 feet from the spool and with my right hand 1 foot from the spool, and pulled my hands in opposite directions: I would create tension in the string between my hands, but I’m doing no work and producing no torque on the spool. The same would be true of the Dark Energy that’s accelerating the universe. If I pulled hard enough the string would break, and if Dark Energy got strong enough every atom in the string would pull apart from every other atom in it; that’s what happens when space expands. But there would still be no torque on the spool, and no useful work could be extracted from the spatial expansion.

Daniel supposes “Contrary to photons, I suppose, gravitons do interact with each other.”

Quantum electrodynamics (QED) theoretically predicts that photons do interact with one another. The cross-section for direct “light-by-light scattering” is too small to be observed in terrestrial laboratories (by any means discovered to date), but it does naturally explain an observed astronomical cut-off in gamma-ray photons that appears beginning at 80 TeV: photons of greater energy are lost to scattering from relic microwave photons.

If we accept — as observations indicate — that QED and quantum gravity alike have well-defined weak-field limits, then it is interesting to contrast the strong-field pathologies of both theories.

In QED, when electric fields become sufficiently strong, it is thought that electron-positron pairs non-perturbatively emerge from the background, such that the created charges act to reduce the E-field below the critical level.

In quantum gravity the situation is more complicated. For sufficiently massive black holes, the metric curvature at the event horizon can be arbitrarily low; thus classical GR should work just fine. Inside the event horizon something non-classical/non-perturbative presumably happens, but it is far from clear just what this internal quantum dynamics might be.

It would be nice if this unseen internal quantum black-hole dynamics acted to preserve quantum unitarity and macroscopic causality … but as to how this might work in detail, no one knows. So many credible postulates have been advanced in the literature that no one of them is considered (so far) to be very convincing.

Shtetl Optimized readers may enjoy the sympathetic resonance of Ars Technica’s article with Peter Sterling and Simon Laughlin’s new textbook Principles of Neural Design (2015), which just this week won the prestigious American Publishers Award for Professional and Scholarly Excellence (PROSE award).

#94, #95: Blind injections also have a social/cultural role in reducing false positives (and disciplining the messaging of results) – they reduce the likelihood that a “signal” will be prematurely leaked by someone in the collaboration before being fully vetted, since the potential leaker will be concerned that they may be leaking a blind injection. This facilitates the effective functioning of the internal procedure for vetting/verifying/announcing potential signals.

Yes, ok, photons do interact. I knew that, but that was not the question I was raising. In the classical domain, EM satisfies a superposition principle and GR doesn’t. Does that change the laws of interferometry and geometrical optics for gravitational waves, compared to EM waves? Has this been studied?

However the answer “YES” isn’t much use without a considered appreciation of the conditions that are associated to this answer.

These conditions are surveyed in the Google/Martinis group’s eprint “What is the Computational Value of Finite Range Tunneling?” (2015, arXiv:1512.02206 [quant-ph]). For example, the Google article discusses ties to the theory of neural networks; an overlap thus naturally arises with Sterling and Laughlin’s Principles of Neural Design (of comment #104).

A more mathematically minded survey of considerations relevant to D-Wave’s computation technology can be found in (for example) Haegeman, Mariën, Osborne and Verstraete “Geometry of Matrix Product States” (JMP 2014, available as arXiv:1210.7710 [quant-ph]). This work naturally overlaps with Google’s open-source package “TensorFlow” for programming machine learning applications; the TensorFlow package is documented in the Google white paper “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems” (2015). Unsurprisingly, a Google search finds this white paper. 🙂

Conclusion Short answers to “D-Wave speed-questions” are associated to a trans-disciplinary pool of research that is wide and widening; deep and deepening — the accelerating and seemingly unbounded widening and deepening of this research pool is very good news (as it seems to me) for the entire STEAM community.

Hopefully these references, and reflection upon their implications, will in some measure help to relieve “D-Wave irritation syndrome” … if only to replace it with “D-Wave uncertainty syndrome”, which nowadays pretty much everyone suffers from (well, me at least).

It would be more nearly correct to say, “In the weak-field/low-energy limit, EM and GR both satisfy a superposition principle, whereas in the strong-field and/or high-energy limit, perturbation theory fails for both theories.”

Explicit computational recipes are known that effectively repair the pathologies of EM; the theory is said to be “renormalizable”. No renormalization recipes are presently known for GR; however string theory provisionally suggests frameworks in which GR renormalization recipes may someday be formulated.

In an ultra-minimal nutshell, the pathologies of EM manifest themselves only locally in space-time and thus are feasible to repair, whereas the pathologies of GR manifest themselves globally — as black-hole event horizons, for example — such that EM renormalization methods don’t work for GR.

Note also that EM too exhibits nonperturbative dynamics, for example the QED dynamics of (hypothetical) elements with atomic number Z>137 is nonperturbative and hence problematic to calculate; this limit can be approached in heavy-ion colliders.

@thepenforests #94: my favorite story of a false negative comes from the low-temperature lab of Kamerlingh Onnes in Leiden. He was the first to liquefy helium, discovered superconductivity, and made numerous measurements of the low-temperature properties of materials, including the heat capacity of liquid helium. He measured an anomalous heat capacity at around 2.2 kelvin, but ignored it, because nothing interesting was ‘expected’ to happen there. Of course we later learned that it was the superfluid transition, a discovery that had to wait another 25 years.

@John #85, I would like an explanation of the following excerpt from the paper:

“GW150914 is the strongest event of the entire search… Measured on a background equivalent to over 67,400 years of data and including a trials factor of 3 to account for the search classes, its false alarm rate is lower than 1 in 22 500 years. This corresponds to a probability [of at most] 2 × 10−6 of observing one or more noise events as strong as GW150914 during the analysis time, equivalent to 4.6σ.”

This is the source of my confusion. In particular, I do not understand this 67,400 number. It suggests to me some numerical simulations of the background noise. Consequently, at the moment, I consider the sigma numbers to be essentially fabricated.

I did read the IOP description before my initial post, but the present experiment involves a continuous sample space of measurements, i.e. I don’t see how it relates to flipped coins or colliding particles.

The ultrashort answer is “yes” and the longer answer is “yes, but the contextual details are messy.” See for example the essay “Opening Remarks/Historical Development of Vacuum Concepts” by Walter Greiner, which Google Books kindly shows us in the collection “Quantum Electrodynamics of Strong Fields.”

Observation Equally in QED, QCD, and GR, the postulate that quantum field theories prescribe unitary dynamics rests upon exceedingly shaky theoretical foundations; so much so that the QCD case is a Clay Institute Millennium Prize Problem.

So these problems aren’t easy or new; they’ve been around for awhile! 🙂

Conclusion We can hope that any quantum dynamical framework that cleans up the pathologies that are associated to quantum black hole evolution, will also help to clean up the pathologies that are associated to high-Z hydrogenic systems in quantum electrodynamics, and quark/gluon scattering pathologies in quantum chromodynamics.

• “We expect different time shifts to yield independent event counts […]” and so the LIGO analyses do vary the relative time-shifts (from LHO and LLO) in estimating the random false-alarm rate; this is how LIGO’s analysis (in effect) expands the duration of the dataset.

Opinions may vary, but for most STEM professionals (including me), and most journal editors too, LIGO’s analysis method is mathematically well characterized and statistically reasonable, and therefore constitutes a robust approach to estimating error rates.
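For the curious, the paper’s significance numbers hang together under a simple back-of-envelope check: multiply the false-alarm rate by the coincident analysis time to get a false-alarm probability, then convert to a Gaussian-equivalent sigma. A rough reproduction, stdlib only (the ~16-day analysis time is an approximation on my part):

```python
from statistics import NormalDist

far_per_year = 1 / 22_500       # false-alarm rate: < 1 per 22,500 years
t_obs_years = 16 / 365.25       # ~16 days of coincident data (approximate)

# For p << 1, P(at least one background event) is approximately rate * time:
p = far_per_year * t_obs_years  # ~2e-6, matching the paper's quoted bound

# Gaussian-equivalent significance ("how many sigma"):
sigma = NormalDist().inv_cdf(1 - p)   # ~4.6, as quoted
```

So the sigma figure isn’t “fabricated”; it is a conventional re-expression of an empirically estimated false-alarm probability on a Gaussian scale.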

Let me return it by shouting out his new nonfiction writing guide (optimized for long, discursive blog posts about possibly-controversial topics—i.e., what he writes 700 of per week). A guide to that type of blog post by Scott Alexander is like a playwriting guide by Shakespeare, and it does not disappoint. There are several things he mentions where I’ve failed in the past, and where his guide motivates me to improve—e.g., the need to appease members of a tribe by “proving you’re not a typical outgroup member” before asking them to consider any idea they associate with their outgroup, and the need to imagine every single sentence of your post pulled out of context and used to sum up your humanity. Also, shorter paragraphs, and the judicious use of images to break up the flow of text.

A future SI and/or SSC essay that many people would welcome (me at least) would address the simple question “What is rationality?” Definitely the Wikipedia page isn’t much help! At least, it’s not much help in relating traditional notions of rationality to the post-rational (?) descriptions of cognition that are emerging in works like Nina Strohminger’s and Shaun Nichols’ “Neurodegeneration and identity” (2015), from which the following excerpts are taken:

Injury to the moral faculty plays the primary role in identity discontinuity. Other cognitive deficits, including amnesia, have no measurable impact on identity persistence. […] Our results mark a departure from theories that ground personal identity in memory, distinctiveness, dispositional emotion, or global mental function. […] The essential-moral-self hypothesis proposes that moral capacities are the most central part of identity. […]

This investigation dovetails nicely with existing social-cognition work demonstrating that moral character is at the heart of person perception. […] A key question for future research is whether the privileging of moral traits we observed in recognition of other people extends to recognition of the self.

Discussions of these evolving (?), post-rational (?), empathy-positive (?) notions find a reasonably friendly reception on progressive (?) websites like Mathematics Without Apologies; not so much on rationalist (?) sites like Slate Star Codex and Shtetl Optimized. That is why a thoughtful Shtetl Optimized essay on the general theme “Evolving notions of rationality” would be welcomed by many (including me most definitely).

Conclusion Recent research helps us to appreciate more of the reasons why rationality and rationalism aren’t easy to talk about — clearly or otherwise, scientifically or otherwise, logically or otherwise, dispassionately or otherwise.

Which is all the more reason — isn’t it? — to try to speak more clearly about rationality and rationalism. We can at least begin by defining these terms as plainly as we can! 🙂

In Kennefick’s article on Einstein we learn about John T. Tate, an editor of the Physical Review. That man was the father of the deservedly more celebrated mathematician John T. Tate, a recipient of the Abel Prize. Both men are called John Torrence Tate, and I am dismayed by the incredibly ridiculous narcissism of some men who want to give their son the exact same names and surname that they bear. Have they no sense of the grotesque at all?

It’s odd — isn’t it? — that nominative projection is deprecated when individuals do it, yet celebrated when corporations and politicians do it. Have corporate entities and politicians no sense of the grotesque at all? 🙂

I am curious about the following (sorry if it was asked before): that LIGO witnessed an event releasing more energy than all the stars in the observable universe combined, less than 2 billion light-years from us, seems rather major. How frequently can we expect events of such magnitude to occur?

I think they have since seen one additional, but much weaker, event. Maybe more.

On a slightly related note: I am under the impression that LIGO event #1 came fairly soon after the new version of LIGO was up and running. Let’s call the elapsed time to that event T_1, and the elapsed time until now T_now. Obviously doing statistics on a population of 1 is risky, but a zeroth-order approximation would be that so far we should have seen Θ(T_now/T_1) similar events. I think we have seen maybe one.

Does anyone know the current value of T_now / T_1 ?

If this ratio is large, say 10, 100, or 1000, the possibility of a glitch associated with startup becomes a more attractive option for consideration. I have no reason to think that LIGO #1 was a bad signal, but every nook must be examined.
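The zeroth-order estimate above is just a Poisson rate argument, and easy to sanity-check. A sketch with made-up numbers (T_1 and T_now here are placeholders, not LIGO’s actual figures):

```python
import math

T_1 = 4.0      # days from turn-on to event #1 (hypothetical placeholder)
T_NOW = 120.0  # days of observing since turn-on (hypothetical placeholder)

expected = T_NOW / T_1   # zeroth-order expected number of comparable events

def prob_at_most(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**n / math.factorial(n)
               for n in range(k + 1))

# If the rate implied by event #1 held, how surprising is seeing <= 1 more?
p_quiet = prob_at_most(1, expected)
```

With these placeholder numbers, a large expected count and only one observed event would indeed be surprising; with a more realistic T_1 of weeks, much less so. That is the whole question of what T_now/T_1 actually is.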

If a gamma ray signal was in fact detected about 0.5 seconds later, that is pretty strong evidence in my book.

As for the expectation that black holes merging should be “dark” — the astronomical amount of energy involved would make any minor “higher order” effects into a heck of a flash.

I am wondering if you ever revisited this question. In particular, I would be very interested in hearing your current thoughts on the answer by David Simmons, which was given 5 years after you originally asked the question.

Jeremy #127: Thanks; I hadn’t seen that answer by Simmons, and found it well-thought-out and interesting. In particular, I agree that it’s ill-defined to talk about BBα(n) where α is the Church–Kleene ordinal—I would only go up to BBβ(n) for ordinals β<α.

Now, the position of Feferman, which Simmons talks about, would only seem to go up to BBβ(n), where β is the Feferman–Schütte ordinal, which is lower than the Church–Kleene ordinal. Maybe I could be persuaded to adopt that more conservative position, were I given a good argument for it that I understood! But in any case, Feferman and I seem to agree on the highest-level point, which is that the largest succinctly definable numbers are going to be of the form BBα(n) for some countable ordinal α, and the “quantitative” question is simply which ordinals α we take to be well-enough defined (a question which might, in turn, depend on our “higher-up” beliefs about set theory).

I’m curious: how can the energy from these waves ever be recovered? How can they be “received”? Is that in any way overlapping with “detected”?
For EM, we have the time-reversed equivalent of emission just as often as we have the emission, but for these gravitational waves that would be absurd… so does that mean mass is just constantly disappearing in the form of unrecoverable and quickly dispersing gravitational waves?

As a computer science student who slightly regrets not knowing as much physics as he should, I try to keep myself a little physics literate too by taking the occasional physics elective and self-studying a little. In this endeavor, I find you to be a great inspiration, Scott. Did you take a lot of physics courses in your undergrad / grad school, or is this self-learnt?

Frequency? Duration of the disturbance? For instance, if your body were hit by a compression wave with a displacement magnitude of a millimeter per meter, but imparted over a duration of 1 minute, you would not feel it. But if that compression happened over a nanosecond, you would most likely shatter, as in a nuclear explosion. So isn’t it the energy density of the event that matters, and not the compression distance?

Also, about the fall-off with radial distance: please explain why gravitational-wave energy fall-off wouldn’t follow the spatial rule of 1/r^2. What does this say about relativity?
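On the last question: there is no conflict with the 1/r^2 intuition. The energy flux of the wave does fall off as 1/r^2; it is the strain amplitude h, which an interferometer measures directly, that falls off as 1/r, since flux scales as the amplitude squared. Schematically:

```latex
F \;\propto\; f^2 h^2, \qquad h \;\propto\; \frac{1}{r}
\quad\Longrightarrow\quad F \;\propto\; \frac{1}{r^2}.
```

This is exactly why direct detection is feasible at all: an amplitude-measuring instrument loses sensitivity with distance only linearly, not quadratically.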