AstroWright
http://www.personal.psu.edu/jtw13/blogs/astrowright/
New site -- update your feeds (Wed, 02 Jul 2014)

Penn State has moved over from its old Movable Type software to a WordPress-based solution at sites.psu.edu. All old links and such should still work, so this should be transparent to most users, except that the layout is different. I'm also hoping that commenting will become easier, and that this new site is better at keeping out the spambots that are constantly posting ads for Oakley sunglasses in the comments.

If you subscribe to this blog you might need to re-establish your RSS feed or whatnot tomorrow, when the new site goes live.

One NSF proposal per year (Tue, 24 Jun 2014)

Science magazine recently interviewed me for an in-depth/opinion piece they ran last week here (paywalled). The gist of it is that the NSF is worried that acceptance rates are getting too low (15% now) and that this is putting a big burden on reviewers:

Later this summer, NSF's astronomy division intends to announce a new policy that will "strongly encourage" scientists to submit just a single proposal for each annual funding cycle. The voluntary cap is designed to boost success rates, which would please applicants. It's also meant to ease the workload and frustration levels of peer reviewers poring over proposals that they know have little chance of getting funded.

The "one proposal" rule would, apparently, include being on a proposal in any capacity, including being a co-I (but would only apply to AAG's). I'm quoted here:

A strict limit could be "a disaster," worries Jason Wright..."having a ton of co-PIs with different skills is what makes an application so strong."... [He] says a ceiling would also change his approach to grant writing: "If you only get one shot, I'd put in more sprawling proposals, and ask for more money."

These are excerpts from a long phone conversation (spontaneous oral hyperbole doesn't translate well to formal text in Science).

The basis for the Science piece was a presentation James Ulvestad, the division's director, gave to the Astronomy and Astrophysics Advisory Committee (AAAC). He's apparently going to start by asking pretty-please in a "dear colleague" letter, and, if people keep submitting lots of proposals, implement a formal rule.

In the meantime, the AAAC is going to perform a demographics survey of AAG proposers to see if this solution really fits the problem.

Here is a comment I submitted to the piece, which is still not published:

I encourage the AAAC to ensure that its survey includes plenty of input from low-effort co-Is, science PIs, admin PIs, and soft-money researchers.

Let me elaborate on my concerns regarding why a "strict limit" would be a "disaster" (apologies for the hyperbole, which was made in an oral conversation):

1) This will hurt soft-money folks and young researchers most -- running dry can end their careers; tenured faculty can try again next time.

2) Many proposers add co-Is (not necessarily "a ton" of them) who provide specialized service at low effort that significantly strengthens the proposal. Many researchers will have to decline to join collaborations as co-Is for, say, 2 weeks' effort if it means they can't submit a proposal to fund their own group. This effect will not help the submission-rate problem, but it will do structural damage to collaborations and make proposals weaker.

3) Acting as administrative-PI is an important role for tenure-line researchers to play for postdocs and adjuncts of various sorts who, for purely bureaucratic reasons at their host institutions, cannot PI proposals themselves. This rule would discourage this source of collaboration and professional development.

4) If fewer than 15% of proposals feature duplicate proposers, as this letter suggests, then this policy won't actually have a big effect on success rates, but it will hurt the quality of the science in those proposals.

Finally, Ulvestad neglected two more ways to boost success rates (perhaps beyond his control, but not to be ignored): more funding, and, to a lesser extent, more support and less stigma for those thinking about leaving the field. The 1-proposal plan feels a lot like treating a minor symptom (reviewer burnout) of a much bigger disease. As someone who has served as an NSF reviewer, I do not support this "1 proposal" plan.

So, I would ask that this request be modified to limit only full PI-ships to one per year, allowing multiple low-effort co-I and admin-PI roles. But even with this less strict rule, I still worry about the effects on the careers of soft-money and untenured researchers, and suspect that it will cause us to unnecessarily lose a lot of really good people.

A Hard Rain's A-Gonna Fall on the Dark Side of the Moon III: The Crust is Made With Refractories (Mon, 09 Jun 2014)

The primary problems with my original idea (described in Part II) for addressing the Lunar Farside Highlands Problem (described in Part I) are that the crust of the Moon formed long after the Earth had cooled down, and that the elements that form the Moon's crust (that is, that distinguish it from the mantle) are actually refractories, not volatiles: the heat of the Earth should actually concentrate them, not cause them to vaporize and flee to the farside.

Calcium and aluminum are very concentrated in the crust of the Moon. This is because when you add calcium and aluminum to silicates and then cool them, you get minerals like the anorthositic plagioclase feldspars, which are what floated to the top of the magma ocean that covered the Moon after the giant impact and formed the crust.

When Arpita showed us the vaporization temperatures of the various elements, we were at first disappointed, because it meant my idea wasn't just wrong: it was exactly the opposite of how things should have gone. But then we realized we could reverse the model, and we had a new explanation that actually made more sense.

Since the farside crust of the Moon is about twice as thick as the nearside crust, the farside hemisphere apparently has a lot more calcium and aluminum than the nearside, because that's what its crust is made of. But how does Earthshine accomplish this?

As the Moon was forming (and the Earth was re-accreting lots of material, too), lots of material was in vapor phase. As material condensed it formed rocks that crashed into the proto-Moon, partially remelting and revaporizing. There was lots of stochastic, localized heating from collisions. The Moon would have had a thick rock-vapor atmosphere. It was a messy, multi-phase, non-equilibrium system.

And since the Earth was very hot, the whole mess had a very hard time cooling down. In particular, the nearside of the Moon and the atmosphere there were very hot, and so was the disk that was forming the Moon. But the outer part of the disk would be colder, the part of the disk in the shadow of the Moon would be colder, and the farside of the Moon was colder. All other things being equal, a temperature gradient of this sort should lead to condensation fronts and gradients: as you move away from the Earth the refractories have an easier time condensing out of vapor phase, then much further away the volatiles can finally condense. A temperature gradient should lead to a chemical gradient.
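
To make the condensation-front idea concrete, here is a toy sketch. The 50% condensation temperatures are approximate solar-nebula values (after Lodders 2003), not protolunar-disk values, and the temperature profile is an arbitrary illustrative power law rather than a model of the real system:

```python
# Toy condensation front: which elements are solid at each distance from the hot Earth?
# Approximate 50% condensation temperatures at solar-nebula pressures (Lodders 2003);
# protolunar-disk pressures would shift these, so treat them as illustrative only.
T_cond = {"Al": 1650.0, "Ca": 1520.0, "Mg": 1340.0, "Si": 1310.0, "Na": 960.0}

def disk_temperature(r_earth_radii, T_inner=2500.0, p=0.5):
    # Arbitrary illustrative profile: hot near the post-impact Earth, cooler farther out.
    return T_inner * r_earth_radii ** (-p)

for r in (1.5, 3.0, 6.0, 12.0, 25.0):
    T = disk_temperature(r)
    condensed = [el for el, Tc in T_cond.items() if T < Tc]
    print(f"r = {r:4.1f} R_Earth, T = {T:4.0f} K, condensed: {condensed}")
```

With any profile of this sort, the refractories (Ca, Al) condense closer to the Earth than the more volatile species do, which is the point of the post: a temperature gradient becomes a chemical gradient.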

And a chemical gradient is what we see on the Moon! What's more, the chemical gradient that is responsible for the crust looks like a condensation sequence, which is exactly the signature you expect from a temperature gradient. There is also some early observational evidence to support a continuous differentiation mechanism versus stochastic deposits (Ohtake et al. 2012), which seems encouraging.

We identified three possible mechanisms by which, in this mess, refractories might preferentially find their way to the farside because of Earthshine.

1) The outer part of the disk might preferentially "feed" the farside of the Moon. If refractories were more commonly in solid phase in the outer parts of the disk, then they might preferentially collect there.

2) Impacts re-vaporize materials on the surface, as well as the impactor. In the cold of the farside, refractories are likely to always remain solid, but on the hot nearside even refractories vaporize and mass deposition is equally inefficient for all species.

3) The farside lunar atmosphere might form a cold trap for refractories. Re-vaporized material enters the lunar atmosphere, finds its way to the cold trap, and snows out. Gaseous disk material entering the farside shadow would condense out Ca/Al-rich grains, which might then be more likely to be deposited onto the still-forming Moon.

If I had to bet, I'd bet on scenario 3 being the most important.

After the farside got preferentially polluted with refractories, the rest was deterministic: over the next thousands to millions of years, as the magma cooled into mantle and crust, the extra Ca and Al formed extra feldspars, which floated to the surface and formed an extra-thick crust. And that's why it looks so different: all that extra armor on the back prevented impacts from penetrating down to the interior magma, so the maria couldn't form there.

So, bottom line:

1) The Moon has probably always had the same face towards the Earth, even during formation

2) The Moon's formation was a messy business, with vapor phase likely to have been important

3) Earthshine was an important component of the thermal energy budget of the post-giant-impact system, and should have produced a chemical gradient in the protolunar nebula and protolunar atmosphere

4) The present-day chemical dichotomy on the Moon looks an awful lot like the result of the condensation gradient one expects from Earthshine.

5) The lunar farside highlands are the result of a primordial chemical gradient caused by tidal locking and the temperature gradient caused by the hot post-impact Earth and, possibly, the shadow of the proto-Moon.

I've compressed this storyline a bit for clarity, and because my memory of the exact order and importance of various conversations has faded over the three years this project has spanned. Also, there were many other people who helped shape our initially inchoate ideas into the coherent thesis of Arpita's paper as we shopped this idea around; among them are Matija Ćuk, Bethany Ehlmann, Andrew Ingersoll, James Kasting, David Stevenson, Yuk Yung, Gary Glatzmaier, and Kevin Zahnle. Arpita also contributed to this series of blog posts.

A Hard Rain's A-Gonna Fall on the Dark Side of the Moon II: Thinking About Earthshine (Wed, 28 May 2014)

Last time, I discussed the Lunar Farside Highlands Problem, and how I first got interested in it.

Back in 2011, Diana Valencia was giving our departmental colloquium when she mentioned as an aside CoRoT-7b, an apparently rocky planet so close to its parent star that its star-facing side is not just molten, but partly gaseous. The gases form an atmosphere that blows around the planet, and some minerals preferentially freeze out on the cold side, creating very different star-side and space-side minerals. Crazy!

Anyway, I was wondering if the CoRoT-7b situation was at all analogous to the young Moon after the giant impact that formed it: after the giant impact, the Earth would have been hot -- probably thousands of Kelvins -- and the Moon would have only been a few Earth radii away. This is quite similar geometrically and thermodynamically to CoRoT-7b orbiting its parent star. Might the Earth have vaporized the Moon's crust and caused it to redeposit on the farside?

Arpita asked, shortly after Diana's talk, about a possible 2nd year project, something short and not at all what she had been working on. I mentioned my idea, and she ran with it. I decided to ask Steinn Sigurðsson to help, since I had never written a theory paper before, and we needed a broad theorist to help with the tricky bits. With Steinn offering us the appropriately reckless encouragement to head off into a field we all had zero training in, Arpita started a self-directed crash course in lunar geology.

Penn State has very few people that would call themselves "planetary scientists". Jim Kasting, who fills the role nicely but declines the title, patiently listened to my idea about the lunar highlands over lunch, and politely gave me a rough lay of the land about why he thought that wasn't right. I wasn't too dismayed, though -- Jim wasn't very familiar with the proposed solutions to the lunar highlands problem and I chose to interpret his politeness as gentle encouragement to learn more about the problem.

We enlisted the help of two other planetary types in the University Park penumbra. The first was Neyda Abreu of Penn State DuBois, a meteoriticist who has a lab here on the University Park campus. Neyda graciously met us over coffee several times to help us out with the geology details that we weren't sure about, like the typical sequence of crust formation and how to pronounce geology words like "plagioclase".

Once Arpita had come up to speed on the lunar geology jargon, she got to work answering some questions.

First: When did the Moon tidally lock? It seems naive to think that the side we see now was the side that formed facing us, but could it be true?

Answer: The tidal locking timescale goes as the sixth power of the distance between the Moon and Earth, so when it formed it was very strongly tidally damped. It would have lost an e-folding of rotational energy every 100 days or so, and since this is less than the time it took to form, the Moon essentially formed tidally locked. Unless a subsequent impact spun it up (or down) again, the Moon has always had the same side facing the Earth!

So, check off that box. The nearside might have a thinner crust because it's the nearside, not vice versa.
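
As a rough illustration of how steep that sixth-power scaling is, here is a minimal sketch. The ~100-day e-folding time comes from the answer above; the ~4 Earth-radii formation distance and the pure (a/a0)^6 scaling are simplifying assumptions for illustration only:

```python
# Toy scaling of the tidal despinning (e-folding) timescale with Earth-Moon distance.
# Assumes ~100 days at an assumed formation distance of ~4 Earth radii, scaled as (a/a0)^6.
tau0_days = 100.0   # e-folding time of rotational energy just after formation (from the text)
a0 = 4.0            # assumed formation distance, in Earth radii

for a in (4.0, 10.0, 20.0, 60.0):   # 60 R_Earth is roughly the Moon's distance today
    tau_years = tau0_days * (a / a0) ** 6 / 365.25
    print(f"a = {a:4.0f} R_Earth  ->  e-folding time ~ {tau_years:.2g} yr")
```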

Second: Would the Earth really keep the nearside that much warmer?

Answer: Yes. When the Moon was forming, the nearside cooled slowly towards half the temperature of the Earth looming large in the sky (40 degrees across!), while the farside cooled more quickly towards 250 K. Of course, the Moon does not necessarily reach these equilibrium temperatures, but because it has further to cool, the farside will cool faster. The hot Earth essentially insulates the nearside against radiative cooling.
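
Here is a back-of-the-envelope version of that nearside number, assuming the post-impact Earth radiates as a blackbody (I've picked 2500 K as an illustrative "thousands of Kelvins" value) and that the sub-Earth point re-radiates from one face:

```python
import math

# Irradiance at the sub-Earth point from a blackbody Earth subtending angular radius theta:
# F = sigma * T_earth**4 * sin(theta)**2, so local radiative equilibrium gives
# T = T_earth * sqrt(sin(theta)).
def nearside_equilibrium_temp(T_earth_K, angular_diameter_deg):
    theta = math.radians(angular_diameter_deg / 2.0)
    return T_earth_K * math.sqrt(math.sin(theta))

T_earth = 2500.0   # assumed post-impact Earth temperature ("thousands of Kelvins")
print(nearside_equilibrium_temp(T_earth, 40.0))
# ~1460 K at the sub-Earth point; averaging over the whole nearside hemisphere
# pulls this down toward the "half the temperature of the Earth" quoted above.
```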

Third: Can we get the crust to collect on the back somehow? This was much trickier. Over coffees, Lynn Carter of Goddard Space Flight Center, a bona fide lunar scientist and occasional visitor to State College, helped us a lot in sorting out the basics of how the Moon's crust forms, what it is composed of, and what the Earth might have had to do with any of this.

Once we had a feel for what the literature did (and didn't) say about the Moon's formation, Arpita was able to quickly identify the primary problem with my original idea -- that the young Earth vaporized the nearside lunar crust -- which doesn't work at all. But that actually led us to a big insight...

A Hard Rain's A-Gonna Fall on the Dark Side of the Moon I: The Lunar Farside Highlands Problem (Thu, 22 May 2014)

Arpita Roy, Steinn Sigurdsson, and I just finished a long project in lunar geology theory. It's been a trip! You'll hear more about it in the near future, I think, so I'm leading up to it in one of my "slow bloggings."

It all started with a talk by Diana Valencia here at Penn State which, for reasons I'll get into later, got me thinking about the Lunar Farside Highlands Problem.

I remember, the first time I saw a globe of the Moon as a boy, being struck by how different the farside looks. It was all mountains and craters, not at all like the frontside with its broad, dark seas of basalts ("maria"). Why? I also remember never being able to get a good answer to my question, and it's bothered me ever since.

When the Luna 3 spacecraft flew around the Moon on Oct 7, 1959, it transmitted back to Earth the first images of the part of the Moon we never see. To everyone's surprise, there were very few maria and lots of mountains. The reason for this remains a major mystery in lunar formation, called the "Lunar Farside Highlands Problem".

First image of the farside of the Moon, compiled by the Soviet Luna 3 mission

The Moon is thought to have formed via a giant impact on the Earth by a Mars-sized impactor. The idea is that this impact shaved a significant fraction of the Earth's mantle off, and that this material, along with most of the impactor, went on to form the Moon, very near the Earth. Since then, the Moon has slowly drifted away due to tides, but the shared origin of the Earth and Moon explains their very similar compositions.

Artist's renditions of the Moon-forming giant impact. The material thrown off eventually formed a disk around the Earth that eventually formed the Moon.

I should note that recent studies have shown that the chemical and isotopic compositions of the Earth and Moon are nearly identical, not just close, which may actually be inconsistent with the Giant Impact Hypothesis (see Linda Elkins-Tanton's letter in Nature Geoscience, wondering if we need to start all over again!)

Most models that seek to explain the lunar dichotomy take one of two approaches: construct a "just-so" story about how stochastic processes just happened to give us a two-faced Moon, or else assume that the Moon had some initial, inherent dichotomy (like a thicker farside crust) and then show that the difference in appearance naturally follows from that.

So, for instance, if the farside crust is thicker, then very large impacts are less likely to pierce the crust and create the upwelling of mantle material that creates the maria on the Moon. The nearside, having a thinner crust, thus has several maria, while the farside is mostly mountains. This makes sense... but WHY does the farside have a thicker crust?

One thing to note is that the thickest part of the crust probably HAS to be pointed directly away from the Earth (or maybe directly at it). This is because the crust has a lower density than the rest of the Moon, so a dichotomy leads to a gravitational quadrupole that tides can act upon. Aharonson, Goldreich, and Sari showed that the configuration we have today is the energy-minimum state (i.e., if you spun the Moon up, it is most likely to eventually settle back down with this side facing us). There is also a stable configuration where the other side faces us, but it is less likely. Unless the Moon has been spun up by some giant impact, the side we see today has been the side that faces the Earth since it became tidally locked.

But we're still left wondering WHY the farside has a thicker crust in the first place. Suggestions usually invoke stochastic processes: from asymmetric heating on the two hemispheres (but WHY is it asymmetric?) to asymmetric bombardment (maybe because the orbit of the Moon made impacts hit one side preferentially).

A nifty paper by Jutzi and Asphaug suggested that maybe when the Moon formed, it had a companion moon that formed exactly opposite it in the same orbit. This companion moon's orbit would not have been stable, but it might have been just properly placed to make a very soft collision with the Moon as they both orbited the Earth. This would have plastered the companion moon against the side of the Moon, giving it a thicker crust there. Then tides would have reoriented the Moon so that face would be the farside. This "slap-on Moon" idea is really neat (you need a slow collision to make it work, which is what would happen to a companion moon, and the idea of a companion moon is totally plausible).

However, reflectance spectra from the SELENE lunar orbiter (nicknamed "Kaguya" by the Japanese, after the legendary Japanese moon princess!) seem to indicate that elemental abundances (specifically the Mg-to-Fe ratio) vary continuously from farside to nearside. Ohtake et al. argue that this suggests a continuous differentiation mechanism rather than a foreign source that was "slapped on", whether it be a companion moon or impact deposits.

OK, that's enough for now. Next time: how Diana got me thinking about this, and our approach to the problem.

Images in these posts are linked to their source. Parts of this post were written by Arpita Roy.

Best*. Photometry. Ever. V: "Explosive Debonding" and What Is the Best Photometry Ever? (Thu, 22 May 2014)

Last time, I showed the great results Ming got at Palomar using the holographic diffuser -- nice, stable stellar images in marginal seeing and with a sub-optimal diffuser. But that was way back in March -- where's the progress?

Well, we had hoped to post the results of the new diffuser, but we've had two problems since then. One is weather -- Ming seems to have the worst weather luck at Palomar, and so we haven't had any successful observations since Fall. The second is explosive debonding. More specifically, this email Ming forwarded to me from Palomar:

I regret to advise you [never a good start -- JW] that our P200 NIR camera WIRC has suffered a catastrophic failure of its Hawaii-2 IR array...

The specific failure of the WIRC array was explosive debonding & separation of the semiconductor from its substrate. This is apparently a well-known failure mechanism of the H2 array family, and we believe it is most likely due to thermal cycling over the ~ 10-year lifetime of the camera. We are evaluating our operational procedures to mitigate the likelihood of this particular failure in a repaired WIRC (e.g. minimize thermal cycling as we do with TSpec) going forward.

Or, as I put it on the social media:

which got these responses:

Yes, it's nice to be working with a well-staffed and -funded observatory like Palomar! It's still not clear if we're going to get an engineering grade old H2 array or a newer RG array. If they install the old array, Ming will have to re-characterize the non-classical nonlinearity in the array to achieve this sort of precision again. Eventually, we hope to be able to move over to the RG detector, which would mean that the photometry WIRC can achieve will now be even better than we had promised (but also that all that work on the nonlinearities won't be needed any more -- good problems to have, I guess).

So, what counts as the Best. Photometry. Ever.? It depends on the metric you use; I can think of three.

1) One way to think of it is in terms of relative error in the flux received; basically, if the true flux is constant at F, then how much does your measurement of F vary from observation to observation? If your sources of noise aren't too red, then things will improve with the number of photons you receive (ideally as sqrt(N)), so the more photons you have the better your precision. This means if you observe a bright star on a big telescope for a long time you have an advantage. This is the regime where we are: a Ks=10 star on a 5-meter observed for 3.5 hours. The scatter in our 30-minute bins is 110 ppm -- and by this metric I think 110 ppm is the best NIR ground-based precision ever. This is comparable to Bryce Croll's WASP-12b (Ks=10.2) JHKs-band photometry, which had very high precision as well. Their ~150-point bin (corresponding to ~35 min) reached ~170 ppm with the CFHT, a 3.6-m telescope. Jacob Bean's group has achieved ~150 ppm in K band (in one of their spectral channels) using the 6.5-m Magellan multi-object spectrograph on a Ks=10.5 star, although it's unclear what their 30-min scatter was.

2) Another way to do it is to measure the quality of the technique in a more normalized way: if someone can achieve 120 ppm in 15 minutes then their technique is better than what we have achieved, even though 120 ppm is more than 110 ppm, because if they went for the full 30 minutes they'd certainly get below 110 ppm. This is a "fairer" way to judge things.
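
To put quoted precisions on that common footing, here's a minimal helper. It assumes pure white noise, so that the scatter scales as one over the square root of the bin length -- an assumption that real, red-noise-afflicted light curves only approximate:

```python
import math

def rescale_precision(ppm, bin_minutes, target_minutes=30.0):
    """Rescale a quoted scatter to a common bin length, assuming white (uncorrelated) noise."""
    return ppm * math.sqrt(bin_minutes / target_minutes)

# The example from the text: 120 ppm in 15-minute bins...
print(rescale_precision(120.0, 15.0))   # ~85 ppm in 30-minute bins -- indeed below 110 ppm
```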

By this metric, one of the best ground-based observations ever (but in the optical) is John Johnson's photometry of HD 209458 in 2006 using the orthogonal parallel transfer imaging camera (OPTIC) on the UH 2.2m. The idea here is very similar to holographic diffusion, but at the detector instead of within the optical path: you shift the charge in the CCD around to smear out the starlight into a shape of your choosing (John chose a square). It worked really well: John got 470 ppm in 1.3 minutes. If you bin that down assuming Gaussian noise, that corresponds to 90 ppm in 30 min.

Of course, this metric still incorporates the stellar brightness and telescope into the problem.

3) The true test of a technique would compare to the photon limit. As a fraction of the photon limit, I think the best you can do is count photons with very low background. High-energy / X-ray astronomy might approach this limit in some sense, because when you only have a few detections your photon noise from the source dominates. Plus you're doing absolute photometry. Of course, the challenge isn't to find a source with 3 counts per hour and negligible background, it's to come similarly close to the photon limit in the limit of LOTS of photons. I'm not sure what the record is here -- I imagine a curve of how far above the limit you are as a function of the number of source photons you detected.

The record by this metric is probably something like Brown and Charbonneau's HD 209458 observations with the Hubble Space Telescope, where they got 110 ppm in 80 seconds (so, about 4 times better than John's observations of the same star with a similarly-sized telescope and similar exposure times). The photon limit for these observations was 80 ppm, so they were only 38% above the photon limit!

But their telescope was in space. That's cheating.

OK, that's our story (so far!).

David Hogg has some nifty ideas about truly optimal photometry, and we've swapped data and "secret sauce" with his group to see if there are algorithmic improvements to be had over aperture photometry. We're still about 2x the photon noise limit, so there is room to run, no matter how you measure our photometry.

I invite readers to submit their nominations for the Best. Photometry. Ever., along with which of the 3 metrics I've used they are "best" in (or suggest a new one!)

Subgiants everywhere (Tue, 06 May 2014)

This minimum -- the Maunder Minimum -- presumably also corresponded to a time when the Sun was slightly dimmer than usual (sunspots appear when the Sun is very slightly brighter than usual), and indeed this period is connected with the so-called Little Ice Age in Europe, so there's a climate connection.

In her seminal work on stellar activity, Sally Baliunas measured activity levels, which should track starspot number, for many Sun-like stars to put the Sun in cosmic context. She found that something like 30% of Sun-like stars were in states analogous to the "Maunder minimum," with very low activity levels. This discovery had potentially profound implications for global climate models of Earth -- if the Sun spent 30% of its time in Maunder minimum states, then perhaps the current climate was unusually warm just because the Sun is brighter (it's the Sun!).

It turned out, though, that many of those "Maunder minimum" stars weren't really Sun-like at all: they were subgiants. Why hadn't anyone noticed this before? Because subgiants look a lot like dwarf stars, except that they're brighter. The effects of their lesser gravity are actually pretty subtle in their spectra, so if you don't have parallaxes (distances) it's very easy to mistake them for dwarfs.

During my colloquium a question came up from the audience while I was showing this figure:

The dark circles in this figure are the "Maunder minimum" analogs that I had found, including those Sally had identified. The gray line is the Main Sequence, where truly Sun-like stars live. The boxed stars are "Sun-like" stars (and subdwarfs) with anomalously low activity, the rest are actually evolved (luminosity goes up, so they are over-luminous, meaning bigger, meaning they are subgiants).

The question from the audience was: "whose sample is that?" I answered that it was a combination of mine, Sally's, and the sample of Todd Henry. The response came back:

Well, fortunately for me, Todd was wrong (a rarity!). Those were his stars and, because he didn't have parallaxes when he constructed his sample, he had inadvertently included lots of subgiants in the sample. Before HIPPARCOS, people just didn't appreciate how common subgiants are in magnitude limited samples!

A similar issue came up in the context of John Johnson's thesis work measuring the planet occurrence rate around subgiants: John used HIPPARCOS to find subgiants and look for planets around massive stars, but Jamie Lloyd argued that those stars can't be as massive as John thinks, because such stars are rare. We've gone back and forth over this, and I'm hoping the issue will be settled soon.

ANYWAY, this is all a lead-up to a new paper on astro-ph by (future Penn State Hubble Fellow) Fabienne Bastien. I last wrote about Fabienne's work in the context of "flicker" -- the short-period brightness fluctuations in stars that seem to predict both surface gravity and radial velocity "jitter".

Well, she has a new paper out using "flicker" to measure surface gravities (and, thus, luminosities) of lots of Kepler stars. She finds that a stunning 50% of the bright Kepler stars are subgiants. Since subgiants are large, this means that the stars are larger than we thought, which means that a large number of the "small" planets Kepler has discovered are actually 20-30% larger than we thought, especially for stars more massive than 90% of the Sun (so Kepler-186 is OK).

Crazy stuff. Subgiants, subgiants everywhere!

Best*. Photometry. Ever. IV: Promotion and Results (Fri, 02 May 2014)

In the last installment, we discussed our efforts to characterize the holographic diffuser and convince Palomar to install it on WIRC. Today... results!

With Palomar and Heather on board, and lab proof-of-concept photometry in hand, we went to NASA for funds to support Ming's research. We applied for an Origins of Solar Systems proposal with Ming as Science-PI and me as administrative-PI, because as a CEHW postdoc Ming did not have PI privileges at Penn State. Ming wrote a beautiful proposal showing all the great things one can do with the diffuser at Palomar, along with supporting spectroscopic observations from other observatories.

In November, we learned that our proposal had been accepted! We were thrilled. Ming was now (mostly) supported for 3 years, and had a major grant to put on his faculty applications. Ming's postdoc was up, so we went to the astronomy department administration and argued that now that Ming has grant experience, we should bring him on as soft-money faculty. The university agreed, and Ming is now a Research Associate at Penn State.

The panel review of our proposal was remarkably good. I don't mean "positive", although it was that, too -- I mean that sometimes reviewers really read the proposal and "get it", and sometimes they don't (for better or for worse). This panel "got it," which is a testament to Ming's proposal-writing skills (and the team's comments, especially Heather's and Jason's -- MZ). In particular, there was a very fair weakness noted by the panel:

The diffuser has been tested in the lab but not yet used in actual observations with a telescope. The actual performance on-sky is somewhat uncertain.

Indeed. Not even a month after we read those words, Ming was doing the test that would determine if the concerns identified by the panel would hound us, or evaporate.

Installing new hardware on a workhorse instrument like WIRC is not a trivial matter. After much back and forth we determined that the diffusers we needed were at the very limit of what our vendor could produce -- they were not used to such tiny angles. We requested a tophat diffuser that would turn point sources into uniform disks -- the ideal shape for photometry.

Eugene Serabyn's team at JPL used their Palomar testbed to test the diffuser when it arrived. The results were disappointing -- the diffuser was Gaussian, not tophat, and highly speckled, not uniform. The vendor was very understanding and after some back and forth agreed to send along a better one at no charge -- but in the meantime we moved ahead with the in-hand diffuser. The speckles were actually not going to be a problem because natural seeing at the telescope would wash them out, and the relatively coarse pixels of WIRC (15 microns = 0.25") would bin them, anyway (and we could always defocus a bit, if necessary!) The Gaussian profile had broad wings that would necessitate a wider-than-optimal extraction aperture, and that would give us excess background noise from the sky. But for a test run, this would do, especially since Ming's test run was coming up and the new one wouldn't arrive for a while.

Next, Palomar had to install the diffuser. WIRC is a cryogenic infrared instrument, meaning that the components are very cold, and in fact they are in a vacuum chamber. Changing anything means slowly letting air in and letting the system come up to room temperature, which hardware really doesn't like to do. Do this too many times, and eventually some piece of delicate equipment (maybe your $300,000 detector!) will fail. So in general one wants to minimize the number of times this happens by queuing up all of your repair and upgrade work until the next scheduled maintenance. The next time things were due, in went the test diffuser.

Ming tells the story:

We made an on-sky test observation with the diffuser and WIRC on December 21, 2013 (Ming Zhao & Joe O'Rourke, a grad student of Heather's). We were barely able to observe that night due to high humidity and passing fog, and were interrupted by fog later on. So the conditions were not quite photometric and we could only observe a test star. Nonetheless, we took 40s exposures continuously for ~3.5 hours, and the PSFs were stabilized quite well by the diffuser throughout the observation. The result, analyzed using standard aperture photometry, is shown in the figure.

The top panel shows the differential light curve of a Ks~10 magnitude star normalized against 8 nearby references. The raw RMS scatter is 1235 parts per million (ppm, or 1.235 millimagnitudes). The middle panel shows the scatter of 9-point bins (8 min), and the bottom panel shows the scatter of 30-point bins (28 min). The light curve still has some time-correlated noise, so the scatter doesn't follow 1/sqrt(N) exactly. The 28-min-binned RMS is 110 ppm in the bottom panel. And 100 ppm is roughly where our current noise floor is. The bottom line is that we have reached an RMS of ~100 ppm with 30-min bins for a Ks=10 star, which is, to our knowledge, among the best precisions achieved in the NIR from the ground.

The most important thing is that we should be able to reach this precision regularly, even under non-optimal conditions, and even with this sub-optimal diffuser. So we don't have to bet on luck any more -- a much less unsettling situation! We hope we can get exciting scientific results in the upcoming observing runs this year, despite all the tough luck we have had with weather and instruments!

Also, the new, tophat, smooth diffuser has arrived and Heather has scheduled another lab test with Eugene's team. It will have a much more compact PSF with sharper boundaries, which means fewer pixels, which means less background, which means higher precision.
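
For readers who want to compute the same binned-scatter statistic from their own light curves, here is a minimal sketch (a plain RMS of bin means, as in the description above; Ming's actual pipeline details may differ):

```python
import numpy as np

def binned_rms(flux, points_per_bin):
    """RMS scatter of bin means for a normalized differential light curve.

    `flux` is the relative flux time series (already normalized against
    reference stars); bins are consecutive, non-overlapping groups of points.
    """
    n_bins = len(flux) // points_per_bin
    trimmed = np.asarray(flux[: n_bins * points_per_bin])
    bin_means = trimmed.reshape(n_bins, points_per_bin).mean(axis=1)
    return bin_means.std(ddof=1)

# Example with fake white noise at 1235 ppm per point; real data with time-correlated
# noise will not bin down as exactly 1/sqrt(N).
rng = np.random.default_rng(0)
fake = 1.0 + 1235e-6 * rng.standard_normal(315)   # ~3.5 h of 40 s exposures
for n in (1, 9, 30):
    print(n, f"{binned_rms(fake, n) * 1e6:.0f} ppm")
```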

So why does the diffuser work so very well? Ming has put together a great animation showing the "before and after" effects of a diffuser on the distribution of light. As you'll recall from Episode I: The Problem, variable flux on a few pixels makes photometry very difficult. What you want is perfectly stable poor focus or seeing.

Click below for links to the results in movie form: typical seeing, typical defocused seeing, and the diffuser. Note the change in pixel scale on the rightmost image:

Best*. Photometry. Ever. III: Prototyping Holographic Diffusers (Tue, 29 Apr 2014)

In the first installment of this series, Penn State Research Associate Ming Zhao presented "the problem" of precise NIR photometry.

In the previous installment, we made a detour into the not-so-distant past and revealed how fate had intervened to give Suvrath Mahadevan the idea to install holographic diffusers on a spectrograph that needed to be blurrier. Suvrath recommended we try the same thing for photometry.

In this installment, Ming and I describe the efforts of many people to prove the diffuser could actually do what we needed.

We immediately looked into this possibility in consultation with Suvrath, and searched for places that make diffusers and contacted all of them. Unfortunately, none of them made diffusers that fit our specifications, but luckily, some of them did have development capability. After a few months of searching and back-and-forth communications, we were referred to a vendor that sent us some sample diffusers to further investigate the feasibility of Suvrath's idea for photometry.

We tested the samples on sky with Davey Lab's rooftop telescope with the kind help of Prof. David Burrows and Lea Hagen, a dedicated and hard working graduate student who was in charge of the telescope back then. We inserted the diffusers into the optical path and imaged some stars, but the results were disappointing -- we did not get the nicely diffused images we had hoped for.

We tested the samples' performance in Suvrath's lab with the help of Sam Halverson, a bright graduate student who works on instrumentation in Suvrath's group. Those tests were critical, as they allowed us to learn that the diffuser will only work when the beam sizes of the stars are much larger than those of the diffusing sub-structures in the diffuser (i.e., collimated beams that fill the diffuser are best).

Test images of laser light through the prototype diffuser (left) and without the diffuser (right) taken in Suvrath Mahadevan's lab at Penn State. The diffuser successfully spreads the light out over many pixels, just like we want.

We started thinking about diffusers for the MINERVA array, but found it would be hard there because the filter wheels are not in a collimated beam, and the result would be a lot like our rooftop tests. But the lab tests showed us this would actually work great (in principle) on the Wide-field Infrared Camera (WIRC) at Palomar (where Ming does his secondary eclipse work), since the beam there is collimated at the filter wheel, then re-imaged onto the detector. So Ming discussed this with his colleague Prof. Heather Knutson at Caltech. She agreed that we should put a diffuser at the Palomar 200-in, and none of what happened next would have happened without a big push from her.

We still had to convince Caltech to put one of these things into WIRC, and we still weren't sure that it would actually work on sky. Fortunately, there was a Palomar Science Meeting coming up (thanks for advocating for me to go, Jason -- MZ). Heather managed to get Ming on the schedule to give a long talk about the great secondary eclipse work they had been doing together. Ming ended his talk with the results of our diffuser tests and argued that Palomar would be much more efficient at this work with a diffuser installed (no more praying for perfectly bad seeing!). Heather later followed up with Palomar administrators and convinced them that this was the right way to proceed.

Then, over the next several months, Heather led the effort to communicate with the vendor, spent large funds on the development and purchase of the diffuser, and coordinated with Caltech's staff on the implementation. We worked closely with her to ensure the quality and the specs of the newly developed diffuser met our requirements, since the opening angle we needed was at the manufacturing limit. She also coordinated with Eugene Serabyn's team at JPL to test the final product at their Palomar testbed (more on this next time). Finally, by the end of last year, a diffuser was delivered and installed on one of WIRC's filter wheels, and everything was ready for on-sky test, thanks to all the teamwork and efforts of the Palomar staff and everyone involved.

Best*. Photometry. Ever. II: Fate Intervenes (Fri, 25 Apr 2014)

In part 1 of this series, Ming Zhao outlined the problem: how to get a nice, uniform spread of light across lots of pixels when trying to do sensitive infrared photometry? We discussed lots of options, but it was a conversation with another assistant professor here at Penn State, Suvrath Mahadevan, that showed us the way. The story starts way back in Suvrath's grad student years...

[cue wavy flashback effects...]

Vintage photo of Suvrath, then a young graduate student in snowy Florida

Suvrath was at the University of Florida, working with his thesis adviser Jian Ge (both of them formerly of Penn State, actually) on the Kitt Peak Exoplanet Tracker. The Exoplanet Tracker was an externally dispersed interferometer (I worked on one, too -- TEDI, at Palomar, with James Lloyd, Dave Erskine, and Jerry Edelstein, who was also my wife's thesis adviser... wait, I'm getting off track).

Anyway, ET was a planet-finding instrument that measured precise radial velocities by passing starlight first through an interferometer and then through a low- or medium-resolution spectrograph. The problem was that while the interferometer was great for precise velocities, the fringes it produced in the spectra were terrible for instrument calibration. Normally, one sends spectrally featureless light through a spectrograph to get a "flat field" and measure the instrumental response. They had built a much more stable version of the interferometer than the one run by a piezo, but the new version couldn't be jiggled around to wash out the fringes (sound familiar?). Without flat-fielding, the project wouldn't work, and they couldn't take flat fields.

What's worse, they wanted to do wavelength calibration using an arc lamp to take out the effects of a tilted slit, and the fringing of the interferometer made this very hard as well. The fringes were the whole secret sauce that made the ET and MARVELS projects work at all, and yet they were confounding all of the normal ways one calibrates instruments.

Suvrath had been working on the problem to no avail, and decided to put it down for a while and switch gears. As he turned and got up, a copy of the Edmund Optics catalog fell to the ground and opened up to a random page. Suvrath picked it up and...

light poured down from the heavens, choirs sang, a gentle breeze tousled Suvrath's flowing locks as his eyes widened and he saw the very answer he sought on the open page...

A holographic diffuser.

A holographic diffuser is an optic that scrambles the directions of light that passes through it. Sort of like frosted glass makes everything behind it (very very) fuzzy, these do the same except they don't block or reflect any light the way frosted glass does, so you don't lose any light.

If you put a holographic diffuser in the light path of a camera or spectrograph, then light that normally would come to a nice sharp focus will get diffused out into a Gaussian, or tophat, or some other shape that the designers choose. Up close they look like clear wafers with little worm shapes etched into them -- I think that the direction light gets redirected is basically a function of what part of the diffuser it hits.

So, Suvrath and Scott Fleming, another student of Ge's, installed a holographic diffuser in ET (and, later, MARVELS) to make "milky flats" -- these diffusers washed out the fringes caused by the interferometer and allowed them to calibrate their instrument. They also allowed for "non-fringing" arc exposures to be taken through (and in spite of) the interferometer.

Scott Fleming, one of the implementers of holographic diffusion for ET and MARVELS

Problem solved. Project saved. Magic book of fate to the rescue.

Or, I should write, problemS solved, because years later, in a conversation about Ming Zhao's project, Suvrath would suggest these diffusers to solve the problem of the telescope that wouldn't defocus enough, and the seeing that was never quite bad enough. If we installed a holographic diffuser at Palomar, it might blur out the stars just the way we needed, and in a very stable, predictable way.

Maybe. These things have a way of being much more complicated than you think they should be. Stay tuned for the next installment...

Best*. Photometry. Ever. I: The Problem

I was having coffee with David Hogg, and he asked, essentially, "What's so hard about photometry? Why can't we do Kepler from the ground?" I gave the usual slew of answers, but he wasn't sure this wasn't fundamentally a (solvable!) data analysis problem. I told him about Ming Zhao's efforts at Palomar to get outstanding photometry for secondary eclipse work on hot Jupiters, and that I thought we might have achieved the highest-precision ground-based O/IR photometry ever, at about 3x the photon limit on a bright star with a 5-meter.

Since then, our two groups have been exchanging tidbits and ideas and data, trying to produce the best possible ground-based photometry.

Hey exoplanet and astronomy geeks: What are the most precise ground-based photometry systems? Please send references!

In this first installment of a series, Penn State Research Associate (and NASA Origins of Solar Systems award recipient) Ming Zhao guest blogs "The Problem":

How to get ultra-high precision differential photometry?

The key is to understand the systematics of your measurement and calibrate them well, and/or keep your instrument as stable as you can so that the instrumental systematics don't affect your differential measurements. The latter was essentially what the Kepler spacecraft was doing until its reaction wheels failed. Before that, Kepler had achieved <10 parts-per-million (ppm) precision on bright stars and had detected thousands of small planets by keeping its pointing extremely stable over time.

Similarly, by calibrating the instrumental effects in addition to highly stable pointing, astronomers pushed the limits of both the Spitzer Space Telescope and the Hubble Space Telescope to allow high-precision measurements of the dauntingly tiny signatures from exoplanetary atmospheres. [For more details on Ming's secondary eclipse work, how it works, and how it uses precise photometry, look at these links -- JTW]

Because of the Earth's gravity and atmosphere, it is a lot harder to do that from the ground. Gravity makes it difficult to keep the pointing extremely stable with a gigantic telescope, and induces flexure in the optics that causes astigmatism. The atmosphere makes the point-spread function highly variable with time. These effects are usually very tiny and do not affect most astronomers, but they are disastrous for high-precision measurements of exoplanetary signals.

To address these issues, one common approach astronomers take for ground-based observations is to defocus the telescope so that the PSF is spread out over many pixels. This is key to mitigating the difficult-to-calibrate inter-pixel variations of a detector, as it makes the instrumental systematics more Gaussian-like. It also has the advantage of significantly improving the observing efficiency, since it takes longer to saturate the detector with a defocused image.

Like other groups that were carrying out this type of study, we were also facing these issues when we started using Caltech's Palomar 200-in Hale telescope to measure thermal emission from exoplanets. After improving the guiding stability of the telescope, we were able to get precision of better than 200 ppm using the defocusing approach under the best conditions. However, due to astigmatism, the defocused images always have bright spots that cause highly time-correlated systematics and also damage the observing efficiency.

As a result, our best precisions could only be reached sporadically, when the atmospheric seeing was consistently bad for a period of several hours. This basically means that our observations were uncontrollable in some sense and were based almost completely on luck -- we crossed our fingers to hope for the worst, most stable seeing every time (quite the opposite of other astronomers!)

This is unsettling. So Jason and I brainstormed a few times to find ways to address this problem. We thought of dispersing the light, creating artificial dome seeing, better calibrating the optics, and Jason even thought of shaking the camera in some patterns [It turns out engineers really really really don't like it when you suggest deliberately shaking instruments -- JTW]. But none of these approaches was simple and practical to implement, since we didn't have the flexibility to modify an existing instrument. Jason then discussed this with our local instrumentation expert, Prof. Suvrath Mahadevan. Suvrath inspired us with a brilliant idea...

How do you express a spectrum in a universal way, without appealing to units?

Spectra are fundamental to astronomy. When we disperse light from stars, we are mathematically taking the power spectrum of the electric field as a function of time, but physically we are sorting the light by energy/wavelength/wavenumber/frequency/color, i.e. making a rainbow out of it. Some colors are better represented than others: the quantification of this observation is the spectrum. Why do we need units for this?

Astronomers have lots of funny names and units for the spectrum. If your spectrum is very coarse then you might call it a spectral energy distribution (SED), or just refer to the "broadband photometry" or "N-band magnitude" of the object (where N is some filter). If you are just looking at whether redder or bluer photons are more common you might refer to its spectral index (especially in the high- and low-energy regimes). If you have a very narrow-band or high resolution spectra of an absorption or emission line you might just refer to a "line profile".

But it's units where we get really creative. The fundamental unit of radiation is intensity, or specific surface brightness. It measures the energy of a given color per unit time (power) coming from (or going into) some patch of sky as it crosses some surface. It might be the upward-going intensity off of the surface of the Sun, or the downward-going intensity striking a telescope primary mirror from a patch on the Moon. Units are W/m^2/sr/Hz or W/m^2/sr/cm (with the last bit depending on whether you like to divide your light up by frequency or wavelength).

Astronomers, being astronomers, are not content to just use one set of units. We use cgs versions, we multiply by big numbers from there (Janskys), we use brightness temperature (so Kelvins is a unit of intensity!), we express wavelength differences as velocities (so Kelvin kilometers per second (K km/s) is a flux!), we count photons instead of ergs, and so on. And of course we love to take the base ten logarithm and multiply by 2.5, make it go backwards, and call it a "magnitude" (for back-compatibility with naked-eye Greek astronomers, of course).

What's more, the whole "surface brightness"/"per patch of sky" thing is generally glossed over in favor of just measuring "flux", the total amount of energy collected per area per second (something David Hogg disapproves of, and he has a good point). Flux has units of W/m^2.

If you disperse the light you collect, then you have to specify how big your color/frequency/energy/wavelength/wavenumber bins are to express your spectrum in physical units. We call this a specific flux or flux density or spectral irradiance, and the units are W/m^2/Hz or W/m^2/cm or W/m^2/eV or W/m^2/Å (or W/m^2/Gyr^-1, I suppose). If all you are doing is choosing between units of wavelength (cm vs. Å), then these units differ by just a constant factor, but switching to energy changes the underlying shape of the spectrum, which is annoying to deal with (as students calculating the Wien peak of the Planck function the world over have discovered). This is because your bins are bigger for bluer photons if you use energy, but smaller if you use wavelength. When you have uneven bin sizes, your histogram gets distorted.
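
As a concrete illustration of how the choice of binning variable moves the apparent peak, here is a quick sketch using the standard Wien displacement constants and a roughly solar temperature (the specific numbers are just for illustration):

```python
# Where does a 5800 K blackbody peak? The answer depends on whether you bin
# the spectrum per unit wavelength (B_lambda) or per unit frequency (B_nu).
c = 2.998e8            # speed of light, m/s
T = 5800.0             # roughly solar effective temperature, K

b_lambda = 2.898e-3    # Wien displacement constant for B_lambda, m K
b_nu = 5.879e10        # Wien displacement constant for B_nu, Hz/K

lam_peak = b_lambda / T               # peak of B_lambda
nu_peak = b_nu * T                    # peak of B_nu
print(f"B_lambda peaks at {lam_peak * 1e9:.0f} nm")            # ~500 nm (green)
print(f"B_nu peaks at {c / nu_peak * 1e9:.0f} nm equivalent")  # ~880 nm (near-IR)
```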

This unfortunate situation is one reason that astronomers often publish spectra with νF_ν or λF_λ on the y-axis: by multiplying the flux density by the wavelength (λ) or frequency (ν), they recover units of flux and become agnostic to the (arbitrary) choice of wavelength vs. frequency binning. In fact, since λF_λ = νF_ν, you can even switch between them in a paper (I've been guilty of this).

Richard Wade pointed me to an interesting paper in The Observatory, here, by Disney and Sparks called "On Sensible Units for Apparent Flux", published in 1982. It begins:

Gentlemen, --

The day must surely come when the present Babel of units to describe the apparent fluxes of astronomical objects is replaced by a more rational system....

...we feel that the sooner astronomers openly debate amongst themselves what they want the sooner action is likely to come. Without laying claim to any originality but in the hope of stimulating such a discussion, we suggest a unit which we have presumptuously named the Herschel.

We will pause to *sigh* about the whole "gentlemen" thing and acknowledge that "ladies" (and all other astronomers) might also be interested in their discourse.

OK, moving on, Disney and Sparks go on to suggest that astronomers adopt a measure of apparent luminosity, the brilliance, B(x). As a function, B(x) accepts as an argument the base-10 log of the frequency in Hz and returns λF_λ = νF_ν. Its units are Herschels, such that a source emitting one bolometric Solar luminosity per decade of frequency centered at x, from a distance of 1 parsec from Earth, delivers 1 Herschel of brilliance. They define the base-10 log of the brilliance measured in Herschels to be the strength of the signal.

(Above, a figure from Disney and Sparks, which confusingly and unnecessarily plots intensities with a variety of scaling factors, after arguing that we need a more uniform system!)

Now, I would quibble with their choices of parsecs, Solar bolometric luminosities, and base-10 logarithms (as they said folk would). I don't see what's wrong with SI (or cgs) units and natural logs (does that make me a Jansky fan?). But I appreciate the effort.

OK, with that table-setting out of the way, let me contribute to the discussion Disney and Sparks sought to have.

In a recent paper I wanted to express the spectrum of a galaxy as a composite of many underlying sources -- dust, stars, nonthermal emission, and so on -- each responsible for some fraction of the total luminosity, L. But I didn't want to wed myself to any particular set of units -- I just wanted to express the shape of the spectrum gosh darn it.

Plots of theoretical spectra often have something like "νFν (arbitrary units)" on the y-axis; the pedant in me says that the units of flux are not arbitrary and if you want to plot a dimensionless quantity you should just do it. I also wanted to parameterize away the distance and luminosity as a "nuisance" term, so I wrote down:

f = νFν (4πd²/L)

and called f the "dimensionless SED" of the object, or its "dimensionless spectrum". It's nice because it's area-normalized to 1, so it can be applied equally well to the flux or the intensity, and it has no preference for things like bases of logarithms (except the natural one, I guess). To plot it you still have to choose units for your x-axis, but that is unavoidable.
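As a sanity check on that normalization claim, here is a toy numpy calculation (a blackbody standing in for a real SED; the temperature and frequency grid are arbitrary choices of mine) showing that f integrates to 1 over d(ln ν):

import numpy as np

# Physical constants in SI units
h, c, k, sigma = 6.62607e-34, 2.99792458e8, 1.380649e-23, 5.670374e-8

def planck_nu(nu, T):
    # Blackbody specific intensity B_nu in W/m^2/Hz/sr
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

T = 5800.0                                  # a roughly Sun-like temperature
lnnu = np.linspace(np.log(1e12), np.log(1e16), 4000)
nu = np.exp(lnnu)

# For a blackbody sphere, nu*F_nu*(4*pi*d^2/L) reduces to pi*nu*B_nu/(sigma*T^4):
# the distance and radius cancel out, which is the whole point of the construction.
f = np.pi * nu * planck_nu(nu, T) / (sigma * T**4)

# The dimensionless SED should integrate to 1 over d(ln nu).
print(np.sum(f) * (lnnu[1] - lnnu[0]))     # ~1.00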

I like this because it conforms to our intuitive sense of what a "spectrum" is: a shape, without any appeal to arbitrary choices of units. For instance, lasers emit (close to) delta functions, which, expressed as dimensionless SEDs, need no normalization because delta functions are already area-normalized to 1 (in physics, anyway).

Now, you can't use this to express how bright an object is, but for that you can use Herschels, which simply scale the dimensionless spectrum by the apparent brightness (though I think I would prefer something like Jy Hz).

What do folks think? Useful? Interesting, at least? Too obvious to even write about? Am I missing existing jargon for the "dimensionless spectrum" of an object?

As a kid I found my parents' old LP's: The Rolling Stones (Let It Bleed!), Bob Dylan (Blood on the Tracks!), Big Brother and the Holding Company (Cheap Thrills!). As a result, I feel I have a good appreciation for the roots of modern rock and roll.

So it's good to see that the kids these days are acknowledging the classics. New Penn State grad student Ben Nelson, working with Eric Ford, has been, as he put it, "remastering the RV classics" by reanalyzing the LONG data streams of radial velocities for some of the longest-known and best-observed systems, like 55 Cancri (5 planets, one transiting) and GJ 876 (4 planets, probably, with strong mean-motion resonances).

My quick and usually-good-enough approach to fitting multiplanet systems measured with multiple telescopes is to use the RVLIN approach I developed with Andrew Howard (published here, code available here, parameter uncertainties available thanks to Sharon Wang's work here). But this approach does not incorporate planet-planet interactions (usually not a problem -- they are too small to detect for almost all systems) and is a strictly "frequentist" chi-squared approach, which is decidedly out of fashion in astronomy these days.
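For the curious, here is roughly what that kind of frequentist fit looks like in code. This is not the actual RVLIN code; it's just a bare-bones Python sketch of non-interacting Keplerians plus per-telescope offsets, minimized in a chi-squared sense, and every function and variable name below is my own invention:

import numpy as np
from scipy.optimize import least_squares

def kepler_solve(M, e, tol=1e-10):
    # Solve Kepler's equation M = E - e*sin(E) by Newton iteration.
    E = M.copy()
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_one_planet(t, P, K, e, omega, tp):
    # Radial velocity of the star due to a single, non-interacting Keplerian orbit.
    M = np.mod(2 * np.pi * (t - tp) / P, 2 * np.pi)
    E = kepler_solve(M, e)
    true_anom = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                               np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(true_anom + omega) + e * np.cos(omega))

def rv_model(params, t, tel_index, n_planets):
    # Sum of Keplerians plus one zero-point offset per telescope.
    rv = np.zeros_like(t)
    for i in range(n_planets):
        P, K, e, omega, tp = params[5 * i: 5 * i + 5]
        rv += rv_one_planet(t, P, K, e, omega, tp)
    offsets = np.asarray(params[5 * n_planets:])
    return rv + offsets[tel_index]

def residuals(params, t, rv_obs, rv_err, tel_index, n_planets):
    # Normalized residuals; summing their squares gives chi-squared.
    return (rv_obs - rv_model(params, t, tel_index, n_planets)) / rv_err

# Hypothetical usage: t, rv_obs, rv_err are arrays of times, velocities, and
# uncertainties; tel_index says which telescope took each point; p0 holds initial
# guesses for (P, K, e, omega, tp) per planet plus one offset per telescope.
# fit = least_squares(residuals, p0, args=(t, rv_obs, rv_err, tel_index, 2))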

Ben, as any good Eric Ford grad student will, brings to the problem a rigorous Bayesian (Markov chain Monte Carlo, or "MCMC") approach that generates parameter posteriors. He also incorporates dynamical effects, so that planet-planet interactions are not just accounted for but can help constrain the physical parameters of the system. His code also naturally handles the independent radial velocity time series from the four telescopes that have observed these exoplanetary systems, including potential offsets between data streams from different detectors on the same telescope. It also independently determines the quality of the data (the "instrumental jitter") for each detector/telescope.
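To make the offset-and-jitter bookkeeping concrete, here is a minimal sketch of what such a likelihood can look like. This is emphatically not Ben's code (his model comes from a full N-body integration run on GPUs); it's just the standard Gaussian likelihood with a per-instrument zero point and a jitter term added in quadrature to the quoted errors, with names I made up:

import numpy as np

def log_likelihood(theta, t, rv_obs, rv_err, tel_index, model_fn, n_tel):
    # Parameter vector: orbital parameters, then one offset and one jitter per instrument.
    orbit_params = theta[:-2 * n_tel]
    offsets = theta[-2 * n_tel:-n_tel]
    jitters = theta[-n_tel:]

    # model_fn returns the RV predicted by the orbital parameters alone
    # (a sum of Keplerians in the simple case; an N-body integration in Ben's).
    model = model_fn(orbit_params, t) + offsets[tel_index]

    # Jitter inflates each instrument's error bars in quadrature, and the
    # normalization term penalizes jitter values larger than the data demand.
    var = rv_err**2 + jitters[tel_index]**2
    return -0.5 * np.sum((rv_obs - model)**2 / var + np.log(2 * np.pi * var))

# A sampler (emcee's ensemble sampler, or a differential-evolution MCMC like Ben's)
# then explores this likelihood multiplied by whatever priors you choose.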

Oh, and he also incorporates dynamical stability constraints, so that configurations that are unstable over the long term (10⁸ years) are not part of the final posterior sample.

Oh, and he does the whole thing on a supercomputer with multithreading.

Oh, and the "supercomputer" in question is actually a cluster of graphics processor units (GPU's), which are cheap and fast but much trickier to hack into doing this sort of calculation than a "proper" supercomputer.

Really, the whole thing is a tour-de-force of how to do the problem "right".

Ben is also an old-school hip hop fan. Apparently, the coincidence of the initials of Markov Chain, Monte Carlo, and "Master of Ceremonies" has been too much to resist in astronomy. First we had "emcee: The MCMC HAMMER" (http://arxiv.org/abs/1202.3665), public code by Daniel Foreman-Mackey that samples MCMC ensembles very cleverly.

Ben's own contribution to the genre is RUN-DMC. Applying RUN-DMC to 55 Cnc, Ben finds that each planet has something interesting to teach us:

b and c are near a mean motion resonance, but not actually in the 3:1 resonance. They may, however, be apsidally locked at 180 degrees with a large libration amplitude (something Eugene Chiang refers to as the "metronome" formulation of the simple harmonic oscillator problem, as opposed to the usual "pendulum" formulation about 0 degrees). Note that the period ratio incorporating planet-planet interactions (blue) differs by many sigma from the purely Keplerian solution (orange). (The green solutions are osculating elements, I think -- you have to average over large time intervals to determine a robust period ratio, which gives you the blue cloud.)

d's revised period and eccentricity make it one of the best Jupiter analogs known (though it has inner massive planets, so 55 Cnc is not a good Solar System analog). For reference, Jupiter has P = 4332 days and e = 0.05.

The transiting planet, e, is probably reasonably well aligned with the other 4 planets (within 60 degrees, based on dynamical stability), and has a density of 5.5 (+1.3/-1.0) g/cc, very close to Earth's (5.5, though the mass of e is at least 8 times higher than Earth's, so it probably has more volatiles and maybe a big atmosphere).

f is in the Habitable Zone, but its amplitude is still too low to get a good handle on its eccentricity.

Incidentally, when Ben gave a talk in our department about this code, several of our department's freshmen were in attendance as part of an assignment in my First Year Seminar class to attend a department talk. They said they were confused, in particular about why everyone else laughed when Ben announced that the name of the code was RUN-DMC. They had never heard that term before.

Now that makes me feel old. Run was the King of Rock! There is none higher! They're in the Rock and Roll Hall of Fame for goodness' sake!

Back when I was a graduate student at Berkeley, I hosted the astronomy department Movie Night, which included sending teasers of movies we would screen in the department. I had a lot of fun with these, but one of my favorites was the one I did for Groundhog Day, arguing (tongue-in-cheek) that it is one of the most influential films ever made.

In honor of the late Harold Ramis, who died on Monday, here it is, slightly revised:

How much wood would a woodchuck chuck if a woodchuck would chuck wood?

A: Just as much wood as a woodchuck could chuck if a woodchuck would chuck wood.

I post this insight to illuminate one of the many fascinating corners of the cultural phenomenon that is Groundhog Day. Before I explain, let's lay some groundwork.

Before the arrival of Christianity in parts of Europe, many pagan cultures based their annual celebrations on agricultural events associated with the seasons. The most important of these events was often the celebration of the vernal equinox, a rebirth ceremony marking the arrival of baby crops and animals after winter. The other "quarter days", the autumnal equinox and the winter and summer solstices, were also marked and celebrated (not just by Europeans, but by cultures around the world). Perhaps the most famous example of this practice stands today in the ruins of Stonehenge, where the alignment of the stones marks the position of the setting sun on the quarter days. Likewise, the ruins of Tulum near Cozumel, Mexico feature long holes in the stone which (just like in Indiana Jones and the Raiders of the Lost Ark) allow sunlight to illuminate a chamber only on one of the quarter days.

In addition to these holidays, cross-quarter days celebrated the days midway between quarter days. Perhaps the most famous relic of the pagan cross-quarter days is Halloween, whose imagery is still totally divorced from the Christian holiday that attempted to supplant it (the Eve of the Feast of All Saints, or "All Hallows' Eve'n"). You see, the Roman Catholic Church, as it spread across Europe, associated many Christian holidays with these quarter and cross-quarter holidays in an attempt to ease pagans into the faith. Thus, Saturnalia and Yuletide became Christmas, the vernal equinox celebrations (complete with those images of fertility, rabbits and eggs) became Easter, All Saints' Day supplanted the precursors to Halloween, and the 2nd of February, midway between the winter solstice and the vernal equinox, became Candlemas.

Candlemas, or the Purification of the Blessed Virgin, marks the 40th day after the birth of Christ and the day, under Mosaic law, that Mary went to the temple to be purified after the birth of a son. The pagan traditions and symbolism remained, however, as Candlemas offered a convenient marker that spring was six weeks away. Scottish tradition held that the weather on this day foretold whether spring would come early or late that year:

If Candlemas day be dry and fair,
The half o' winter to come and mair,
If Candlemas day be wet and foul,
The half of winter's gone at Yule.

I guess it rhymes in a Scottish accent.

Anyway, tradition holds that Roman legions brought this rule of thumb to the Germans, who associated it with the hedgehog and its shadow (since if shadows were cast on that day then "Candlemas day be dry and fair" and winter is only halfway over). From there, the Pennsylvania Dutch (as in "Deutsch", not as in Holland) brought the tradition to the New World, but were frustrated by the lack of hedgehogs here. To compensate, they pinned the predictive power on the local equivalent, the woodchuck (or "groundhog").

To this day, the Punxsutawney Groundhog Club of Punxsutawney, PA promotes their local woodchuck, Phil, and every 2nd of February gathers around him as he emerges from his hole, with television crews recording the event for filler segments on news broadcasts across the country.

Then in 1993, Harold Ramis overthrew thousands of years of reverent tradition with "Groundhog Day", a film about a weatherman, who, disgruntled at being upstaged and out-predicted by Punxsutawney Phil, is damned by the gods to repeat his day of shame until he learns the true meaning of love and, I guess, Groundhog Day.

As a result, a popular reference to Groundhog Day is now more likely to refer to a repetitive daily routine or eerie repeat of a previous experience than to the ancient February 2nd holiday. It is a true testament to the power of the cultural force of Harold Ramis that his film so effortlessly supplanted and all but erased millennia of Catholic and pagan tradition.

In my book, that makes "Groundhog Day" one of the most influential films ever made.

Center for Exoplanets and Habitable Worlds Research Associate Chad Bender has been hunting down signals buried in the noise for a while now. Way back when, I blogged about his work in the Kepler-16 system, where he used Hobby-Eberly Telescope data to dig the very weak spectrum of the faint star in this amazing binary star system (which has a giant planet orbiting both stars!) out of the combined light. This is tricky because the light of the bright star almost completely washes out the signal of the fainter star, but Chad exploits his knowledge of the likely spectrum of the faint star and his knowledge of its orbital motion to figure out exactly where it must be, which gives him a lot of leverage on the problem.

Astronomers have been measuring the molecular chemistry of exoplanet atmospheres for more than a decade. But most of those detections require a very specific geometry in which the planet passes in front of its star as viewed from Earth (commonly referred to as "transiting"), so the total number of planets that have been probed is still very small.

We used the NIRSPEC spectrograph on the Keck II telescope, located on Mauna Kea in Hawaii, to obtain high-resolution spectroscopy of the planet and measure the water in its spectrum. This measurement was exceedingly difficult because the planet is about 10,000 times fainter than its parent star, but they are so close together on the sky that the data we received at the telescope contain the blended light from both the planet and the star.

Only after advanced processing were we able to separate out the planet's signal.
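Very roughly, the trick is to model away the star and the telluric lines and then cross-correlate what's left against a model planet spectrum over a range of Doppler shifts, looking for a cross-correlation peak that moves with the planet's orbital motion. Here is a cartoon version of that last step (not the actual pipeline; the function and array names are mine):

import numpy as np

c_kms = 2.99792458e5  # speed of light in km/s

def ccf(wave, resid_flux, template_wave, template_flux, velocities):
    # Cross-correlate a residual spectrum (star and tellurics already removed)
    # against a model planet template Doppler-shifted over a grid of velocities.
    out = np.empty_like(velocities)
    for i, v in enumerate(velocities):
        # Shift the template to velocity v and resample it onto the data wavelengths.
        shifted = np.interp(wave, template_wave * (1 + v / c_kms), template_flux)
        out[i] = np.dot(resid_flux - resid_flux.mean(), shifted - shifted.mean())
    return out

# Hypothetical usage: 'velocities' spans the planet's plausible orbital velocity range,
# and a peak in ccf(...) that tracks the orbital phase is the detection.
# velocities = np.arange(-150.0, 150.0, 1.0)   # km/s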

This difficult endeavour was carried out by Caltech graduate student Alexandra Lockwood and Penn State graduate student Alexander Richert. Also integral were John Carr, from the Naval Research Lab, and Travis Barman, from the University of Arizona, who provided computer models of the star and planet spectra, and Geoff Blake and John Johnson, who provided access to the Keck Observatories.

You can access the full paper, which appeared in The Astrophysical Journal Letters on February 24, 2014.

If you can't get through the paywall, download the pre-print from arXiv.org.