While this coming week will see the start of some incredible stories from the American Astronomical Society’s annual meeting (schedule here; let me know if there’s anything you’d like to see an article on), there’s always time to look back on the last articles of 2016:

From Sinisa Lazarek on long-term dimming of Tabby’s star: “The overall century dimming I got from this paper which is linked on wiki. https://arxiv.org/abs/1601.03256
Am not skilled to judge anyone’s findings on this matter. But agree with you that when looking at the magnitude measurements you posted, it’s far less apparent or true.”

Well, the table above is from the very paper you linked to; long-term studies are very difficult to do because of differences in equipment. Looking at Tabby’s star, its brightness varies by about 0.2 magnitudes, mostly decreasing over time. But the two reference stars vary by 0.11 and 0.12 magnitudes, respectively, with both increases and decreases. That also underscores why the Kepler data is so interesting; as Sinisa discovered later (quote from this paper):

Over the first ~1000 days, KIC 8462852 faded approximately linearly at a rate of 0.341 +/- 0.041%/yr, for a total decline of 0.9%. KIC 8462852 then dimmed much more rapidly in the next ~200 days, with its flux dropping by more than 2%. For the final ~200 days of Kepler photometry the magnitude remained approximately constant…
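
As a quick sanity check on those numbers, here’s a minimal sketch (my own arithmetic, assuming a purely linear fade over that first ~1000-day span):

```python
# Sanity check: does 0.341%/yr sustained over ~1000 days give the
# quoted ~0.9% total decline? (Assumes a purely linear fade.)
rate_pct_per_yr = 0.341   # quoted fading rate, percent per year
span_days = 1000.0        # approximate duration of the slow fade
decline_pct = rate_pct_per_yr * span_days / 365.25
print(round(decline_pct, 2))  # ~0.93%, consistent with the quoted ~0.9%
```

The slight excess over 0.9% simply reflects the fact that the ~1000-day figure is approximate.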

So why is the flux dropping? And why does it drop in such strange, irregular patterns? And why is it so different from all the other stars we know of? This is one of the most fun mysteries in science, and no matter how it turns out, we’re going to learn something incredible about the Universe.

Different ways of measuring cosmological distances in the expanding Universe. Image credit: Wesino at English Wikipedia.

From Omega Centauri on distances in the Universe: “From one standpoint, that of determining the inverse square law dimming of the light, only one of these “distances” will give the correct result (there is also a factor due to the decrease in frequency, but that is a separate multiplicative factor), there should be only one correct distance. I think this distance is the same as that which determines the reduction in the solid angle of the sky the object occupies.”

You have to be very careful to get those multiplicative factors correct, especially when you go to large distances. We have an (incorrect) intuitive notion that as you look farther and farther away, objects will appear fainter (as 1/distance^2) and smaller (as 1/distance) on the sky… but that’s only partly true. Once you reach a redshift of about 0.1, corrective, redshift-dependent terms come into play. There’s actually a minimum angular size that objects reach, so beyond a certain redshift, stars and galaxies appear larger again! If we had a 10-meter-class telescope in space, we could resolve the internal structure of pretty much any galaxy in the Universe. That’s pretty incredible!
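
To see that turnover concretely, here’s a minimal sketch (my own toy calculation, assuming a flat ΛCDM cosmology with H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) of the apparent angular size of a 10 kiloparsec galaxy as a function of redshift:

```python
# Minimal sketch: angular size vs. redshift in flat Lambda-CDM.
# Assumed parameters: H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7.
import math

C_KM_S = 299792.458
H0 = 70.0
OMEGA_M, OMEGA_L = 0.3, 0.7

def comoving_distance_mpc(z, steps=2000):
    """Comoving distance via trapezoidal integration of (c/H0) dz / E(z)."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1.0 + zi) ** 3 + OMEGA_L)
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight / e
    return (C_KM_S / H0) * total * dz

def angular_size_arcsec(physical_kpc, z):
    """Angular size of an object of fixed physical size at redshift z."""
    # Angular diameter distance = comoving distance / (1 + z), in kpc:
    d_a_kpc = comoving_distance_mpc(z) / (1.0 + z) * 1000.0
    return math.degrees(physical_kpc / d_a_kpc) * 3600.0

# A 10 kpc galaxy appears SMALLER out to z ~ 1.6, then LARGER again:
for z in (0.5, 1.6, 5.0):
    print(z, round(angular_size_arcsec(10.0, z), 2))
```

The angular size shrinks out to a redshift of roughly 1.6 and then grows again, which is exactly why sufficiently distant galaxies stop getting smaller on the sky.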

An older view (pre-main injector) of Fermilab, as I remember it best from 1997. Image credit: Fermi National Accelerator Laboratory, a.k.a. Fermilab.

From Michael Kelsey on my introduction to physics research: “what experiment did you work on?”

There comes a time in every aspiring scientist’s life when they’re exposed to research for the first time, and it rarely looks like what you’d expect. In the summer of 1997, I started working for a professor who was testing various detectors for the D0 experiment on the fixed-target beamline. I got to tune the electromagnets focusing the beam, perform voltage tests on the detectors, study angular variations in the detectors’ sensitivities, and carry out other duties like that. More than anything else, it was an incredible experience for learning about myself: what I was interested in, and what I was (and wasn’t) passionate about.

My best memories of that summer were of the time I spent with Roger Dixon and Erik Ramberg, who led groups of undergrads (maybe 10-12 of us) on some incredible journeys through particle physics. I also remember Drasko Jovanovich, whose Fermilab badge ID# was 7. (Mine was 8000-something.) Experimental particle physics turned out to be something I was “good enough” at, but not great at; still, getting exposed to it at all was an incredible, formative experience for me.

A multistage rocket that lost and jettisoned mass as it moved faster and faster would be required to reach speeds approaching the speed of light, like the Super Haas rocket shown here. Image credit: Dragos muresan, under c.c.a.-s.a.-3.0.

From G on acceleration via spaceship/propulsion: “What’s a reasonable consensus estimate of the highest velocity that could be reached using hydrogen fusion as the means of propulsion?”

As Michael Kelsey said, there is no upper limit, but there is a terrible tradeoff: the longer you want to accelerate, the more fuel you need to bring. If your fuel is not energy-efficient (chemical is worse than nuclear, which is worse than antimatter), this gets you into trouble quickly. Why? You need to accelerate not only your payload, but also all the remaining fuel you have on board. This is why rockets jettison their used-up stages: so you don’t have to keep accelerating all that mass. There’s no technical limit on the speed you can reach (or the amount of time you can accelerate for), but depending on your fuel’s efficiency (hydrogen fusion converts only about 0.7% of its fuel’s mass into energy), you’re going to be limited by the mass/size of your initial rocket, fuel included.
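
As a rough illustration (my own back-of-the-envelope sketch, using the standard relativistic rocket equation and treating hydrogen fusion as converting ~0.7% of the fuel’s rest mass into exhaust kinetic energy):

```python
# Minimal sketch (my numbers, not a consensus figure): final speed of a
# relativistic rocket, Delta-v/c = tanh((v_ex/c) * ln(m_initial/m_final)).
import math

def exhaust_speed_fraction(efficiency):
    """Exhaust speed (as a fraction of c) if a fraction `efficiency`
    of the fuel's rest mass becomes exhaust kinetic energy."""
    return math.sqrt(efficiency * (2.0 - efficiency))

def final_speed_fraction(mass_ratio, efficiency=0.007):
    """Relativistic rocket equation for a given initial/final mass ratio."""
    vex = exhaust_speed_fraction(efficiency)  # ~0.118c for fusion
    return math.tanh(vex * math.log(mass_ratio))

# Even enormous fuel loads stay well below the speed of light:
for ratio in (10, 100, 1000):
    print(ratio, round(final_speed_fraction(ratio), 3))
```

Even a 1000-to-1 initial-to-final mass ratio (999 parts fuel for every part of payload) only gets a fusion rocket to roughly two-thirds of the speed of light, which is why the fuel tradeoff is so punishing.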

Light and ripples in space; as the light passes through non-flat space, it changes how an observer at any other location perceives the passage of time for the light. Image credit: European Gravitational Observatory, Lionel BRET/EUROLIOS.

From ketchup on how we talk about time in the expanding Universe: “You mean the merger was detected on September 14th, 2015. Since it happened over a billion light years away, and gravitational waves travel at the speed of light, the merger itself was hundreds of millions of years ago.”

We have a lot of conventions in astronomy, and many of them make people unhappy. Which is unsurprising, since a convention in how we refer to things is a choice, and not everyone has the same personal preferences. Since there are no such things as absolute space or time, however, we talk about the arrival of the first signal of the “event” as when the event occurred.

For these black holes that created LIGO’s first event, the merger did occur some billion+ years ago; the gravitational waves traveled through space for all that time; they arrived on Earth about 16 months ago; we detected them as soon as they arrived. But when we talk about when the event occurred, we’ll conventionally say it occurred the moment it was detected. Measurements of the CMB are occurring right now, and we can talk about the CMB “today,” even though the light is from 13.8 billion years ago. It’s not right or wrong to make a different choice, but this is how people talk about this.

From Wow on IMBHs (intermediate mass black holes): “A black hole loses MOST of its mass in ejection during the supernova. A 60 solar mass star may form a 6 solar mass black hole. […] Therefore to get to mid size, a black hole has to accrete the extra matter.”

Kind of. If you form a massive enough star (or a metal-free-enough star that’s modestly heavy), you can get direct collapse, where essentially 100% of the star’s mass becomes a black hole. Intermediate mass black holes, however, are outstanding candidates for dynamically relaxing and sinking towards the galactic center, where they contribute to the merger and growth of the central, supermassive behemoths present. If you want to look for them, the best place is within the central few hundred parsecs of a galactic center, and we expect to find many between, say, 20 and a few hundred thousand solar masses. The large black holes in LIGO’s first detected merger (36 and 29 solar masses merging into a 62 solar mass remnant, with about 3 solar masses radiated away as gravitational waves) were the first black holes robustly found in this mass range.

A clumpy dark matter halo with varying densities and a very large, diffuse structure, as predicted by simulations, with the luminous part of the galaxy shown for scale. Image credit: NASA, ESA, and T. Brown and J. Tumlinson (STScI).

From Omega Centauri on dark matter and star wars: “I realized a couple of weeks back, watching “A New Hope”, that the Jedi knew of dark matter a long time ago. ObiOne (sic) describing the force “it binds the galaxies together…””

Dark matter does surround us, it does penetrate us, and it does bind the galaxy together. But unless we can figure out how to coax it into interacting with either itself or some form of matter in a way that goes beyond standard gravitation, we’re never going to directly detect it. Or, you know, use it to move material objects or choke an incompetent Captain.

From Amos Dettenville on Einstein’s blunders: “I think the first item on your list is wrong. You do a dis-service to your readers by contributing to the spread of the erroneous claims of Ives (anti-relativity crank) and Ohanian, et al. Genuine scholars like Stachel and Torretti long ago debunked the mistaken ideas that you’ve repeated in your Forbes article. For a good recent overview of the facts, see https://arxiv.org/abs/1407.8507.”

There is a colorful history to this controversy, one that involves anti-semitism and relativity denialism, but also some genuine flaws and incomplete analysis/understanding in the early days. As is usually the case, I was aware of some of it (Ives, Hasenöhrl) but not all of it (the Field paper). What I said in the article, which I believe still holds up, is:

Einstein was only able to derive E = mc^2 for a particle completely at rest. Despite also inventing special relativity — founded on the principle that the laws of physics are independent of an observer’s frame of reference — Einstein’s formulation couldn’t account for how energy worked for a particle in motion. In other words, E = mc^2 as derived by Einstein was frame-dependent!

The big advance of von Laue, who I think deserves a ton of credit, is that in 1911 he was the first to realize that inertial mass doesn’t mean what it does in classical (Newtonian) mechanics. Around that time, von Laue formulated a description of a continuous flow of matter in special relativity. “Mass” doesn’t fully characterize the inertial behavior of an extended physical system, and that’s the incompleteness of Einstein’s E = mc^2. Max von Laue made it clear that you need, at minimum, 10 functions, which would become the 10 independent components of the stress-energy tensor in General Relativity. The story I’m most familiar with comes from this book, which is an incredibly informative look at the original source material.
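
For context, the frame-independent relation that completes the picture (a standard textbook result, my addition here, not something specific to von Laue’s papers) is:

```latex
E^2 = (pc)^2 + (mc^2)^2
```

which reduces to E = mc^2 only in the rest frame, where the momentum p = 0; and for an extended system, even this must be generalized to the full stress-energy tensor.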

Even people whom we otherwise dislike, for whatever reasons, may make some amazing statements of wisdom. I have quoted Oprah, Aquinas, Ken Ham and Crowley, among others, and I won’t stop quoting someone who says something interesting or thought-provoking just because they say or believe other things that I think are insane. The Crowley quote, by the way, was:

The joy of life consists in the exercise of one’s energies, continual growth, constant change, the enjoyment of every new experience. To stop means simply to die. The eternal mistake of mankind is to set up an attainable ideal.

And finally, from Mark Thomas on our finely-tuned Universe: “If ‘initial conditions’ are very close it will not produce an exact outcome of our Universe but a similar 4 dimensional Standard Model Universe with near similar dimensionless constants. To obtain the very exact same ‘initial conditions’ which would produce an exact replica of our Universe might be very improbable. This might produce a vast array of similar Universes as the ‘initial condition’ spectrum may be very large.”

One of the interesting questions we can ask ourselves is how close a Universe’s initial conditions need to be to our own to give rise to a similar Universe. If you vary some of the fundamental constants by a small amount, you’ll get a Universe that’s only slightly different from our own, one where intelligent life is still possible. But vary other constants by just a little, or the wrong constant by just enough, and that will ruin everything! There is an incredible book that I’ve just started reading that addresses this very concept: A Fortunate Universe by Geraint Lewis and Luke Barnes. In fact, maybe I’ll finish it and do a review sometime soon.

In any case, hope your new year is off to a great start and hope that 2017 is full of happiness, wellness and fulfillment for all of you!

Comments

So we can easily have black holes in the 5-10 stellar mass range, from a star that maybe goes hypernova and leaves a remnant too massive to be a neutron star. We can have black holes in the tens to maybe up to around a thousand solar masses if you have a very metal-poor (population III) and very massive star that collapses straight into a black hole via photodisintegration. In a galactic core, a black hole will grow and grow via accretion of mass near it, and can grow to maybe 100,000 solar masses in a dwarf galaxy, up to maybe billions of solar masses in the largest galaxies. How does one make a black hole in the thousands to the few tens of thousands of solar masses? A star greater than about a thousand solar masses would, I think, find it impossible to achieve hydrostatic equilibrium. That leaves accretion as the only mechanism to increase mass to that point, but in a mass-rich region like a galactic core, what’s to stop a really big black hole from continuing to accrete until it becomes supermassive?

While digging around YouTube during these holidays (ahh, the spare time)… just came across something that blew me away. For all you astronomy buffs and DIY guys and girls… found something which (if you’re in the US) is a mind-blowing value for money… and even if most likely no one will do it… at least it’s a wet dream of sorts 😉

So here it is… there are people selling used genome sequencers on eBay… the particular model that’s interesting to us is the Roche 454 FLX genome sequencer.

Which… if you wanted to go out and actually buy one new, would set you back around… ghm… $80,000-100,000 😀

The rest I leave to your imaginations… but it makes me wanna cry. Sure… there are lots of headaches here… like how to attach a 10 kg cryo camera to a mount… in other words, you will need some steelwork prior to using this… but this is like finding a Ferrari for $2,000.

The performance of this CCD is WAAAAAY beyond what amateur astronomers can get for $2k. With this, you can do actual science, in some respects getting better-quality images/data than some professional telescopes.

The biotech that comes up dirt cheap in auctions on eBay is ridiculous. Back in 2013, when the FDA banned 23andMe from providing health-related genetic marker data to end users, I’d toyed with opening up a lab across the border in Mexico to fill the niche using sequencers purchased off eBay.

A week ago I had no idea that genome sequencers use CCDs, nor that they rely on precise acquisition of very faint photon emissions, making them basically the same ones that we like for astronomy/astrophotography. Nor that they can be found for $2k 😀 Funny how much you can learn from a video of a guy tearing down junkyard tech, plus some research afterwards 😀