
In 1572 C.E., a “new star” appeared in the sky, gradually brightening until, at peak, it rivaled Venus, and remaining visible for nearly two years. This event, now known as Tycho’s supernova, helped usher in a new age of science in which the heavens were no longer fixed. Nearly 450 years later, we know this object as Tycho’s supernova remnant (Figure 1). The shock wave from this supernova has expanded to a radius of nearly 4 parsecs and is still racing into the interstellar medium at over 6000 km/s in places. Because the remnant is so young (and our telescopes are so great!), a particularly exciting aspect of Tycho is that we can watch it expand over baselines of a few years. This video shows the expansion of the remnant over five epochs of observations between 2000 and 2015 taken by the Chandra X-ray Observatory.
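To see why a baseline of just a few years is enough, a rough estimate helps. The sketch below assumes an illustrative distance of ~3 kpc to Tycho (a value in the middle of commonly quoted estimates); everything else comes from the numbers above:

```python
# Rough estimate of the angular displacement of fast ejecta in Tycho's
# supernova remnant over the 2000-2015 Chandra baseline.
# The ~3 kpc distance is an assumption for illustration, not a measured value.

KM_PER_PC = 3.086e13
SECONDS_PER_YEAR = 3.156e7
ARCSEC_PER_RAD = 206265.0

v_km_s = 6000.0          # shock velocity (from the text)
years = 15.0             # observing baseline, 2000-2015
distance_pc = 3000.0     # assumed distance to Tycho (~3 kpc)

distance_traveled_pc = v_km_s * years * SECONDS_PER_YEAR / KM_PER_PC
shift_arcsec = distance_traveled_pc / distance_pc * ARCSEC_PER_RAD
print(f"{shift_arcsec:.1f} arcsec")
```

A shift of several arcseconds is large compared to Chandra's ~0.5 arcsec on-axis resolution, which is why the expansion is plainly visible across just a few epochs.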

Tycho’s supernova remnant is known to be the result of a Type Ia supernova, and while we have a lot of observational data, we do not yet have a great understanding of what the progenitor systems of these supernovae are. It seems that they result from the thermonuclear explosion of a white dwarf star in a binary system, but whether the companion star is another white dwarf or an AGB or red giant star is unknown (both models have pros and cons). We also do not understand how the explosion begins in the degenerate star and propagates through it. However, models of the explosion offer different predictions for the motion of the ejecta after the star has exploded. As an example, Figure 2 shows the velocity distribution of silicon, an element produced in the supernova, for two different explosion models from Seitenzahl et al. (2013); the difference between the two is the number of ignition points for the detonation inside the white dwarf. Plotted is the velocity of the ejecta along cuts through the three coordinate planes. As can be seen, the degree of symmetry in the velocities of the expanding ejecta differs between the two models.

Figure 2: Slices through all three coordinate axis planes for the velocity distribution of Si shortly after the explosion. A model with many ignition points for the detonation is shown on the left; a model with only a few is shown on the right.

The question then becomes whether we can constrain these models with observations. My collaborators and I studied new and archival observations of Tycho’s supernova remnant, identifying nearly sixty dense knots of ejecta from the supernova that had a measurable proper motion between the epochs. Of course, this only gives the two-dimensional motion in the plane of the sky, which is only part of the story. However, these ejecta knots, dominated by emission from silicon and sulfur, are also quite bright and Chandra’s CCD cameras produce a spectrum for every pixel. We were able to use the redshift and blueshift of the Si and S lines in the X-ray spectrum between 1.8 and 2.6 keV to measure the velocity along the line of sight. Measuring an accurate shift in the lines required a significant effort to ensure that the atomic physics was properly accounted for. (The rest energy of X-ray lines can be a function of the temperature and ionization state of the gas; see Williams et al. 2017 for more details on this).
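The conversion from a measured line-energy shift to a line-of-sight velocity is just the nonrelativistic Doppler formula. A minimal sketch follows; the rest and observed energies below are illustrative numbers, not measurements from our paper, and, as noted above, the effective rest energy must itself be corrected for the temperature and ionization state of the gas:

```python
# Line-of-sight velocity from the Doppler shift of an X-ray emission line.
# E_rest and E_obs below are made-up illustrative values; in practice the
# effective rest energy depends on the plasma temperature/ionization state.

C_KM_S = 2.998e5  # speed of light in km/s

def los_velocity(e_obs_kev, e_rest_kev):
    """Positive = receding (redshifted line), negative = approaching."""
    return C_KM_S * (e_rest_kev - e_obs_kev) / e_rest_kev

# Example: a Si line with an assumed rest energy of 1.865 keV observed at 1.84 keV
print(f"{los_velocity(1.84, 1.865):.0f} km/s")
```

Shifts of a few tens of eV in the Si and S lines thus correspond to line-of-sight velocities of thousands of km/s, comparable to the proper-motion velocities in the plane of the sky.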

Our results showed that there is no measurable asymmetry in the expanding ejecta, meaning that, as far as we can infer from these data, the explosion of Tycho was symmetric. Models with more ignition points inside the white dwarf better reproduce the observations in this case. The knots have velocities ranging from just over 2000 km/s to well over 6000 km/s, and the 3-D velocity vectors all point away from the center of the remnant (as one would expect). Our work represents the first 3-D map of the expanding ejecta from a Type Ia supernova and is an important step in understanding these stellar explosions. We should not stretch these results too far: Tycho is just one supernova remnant, and there may well be more than one way to explode a Type Ia supernova. Still, results like this are encouraging, and with Chandra and the next generation of X-ray telescopes, studies of more Type Ia remnants are possible.

Stars evolve through many phases in their lifetimes. Towards the end, when they have fused all of the Hydrogen (H) in their cores to Helium (He), they expand, cool down, and become giants. During these final stages, stars also begin to lose a significant amount of mass from their surfaces. A star like our Sun will lose approximately 50% of its mass, while more massive stars (> 8 Msun) will lose as much as 80% to 85%. Nearly all stars will eventually shed their outer layers and expose their hot, ultradense cores. These remnants are called white dwarfs and, unless they experience mass transfer from a companion, their fate is to slowly cool with time.

However, a small percentage of stars (less than about 2%) are massive enough to reach the high central densities and pressures needed to undergo a different fate: iron (Fe) is created in the core, but because Fe fusion cannot continue to generate energy, the star eventually becomes unstable and gravitationally collapses on itself, creating a core-collapse supernova (CCSN). While it is commonly adopted in astronomy that stars with initial masses greater than 8 Msun will end their lives as a CCSN, we still do not fully understand at which stellar mass this transition occurs.

Knowledge of this transition mass is important because it sheds light on stellar evolution processes like mass-loss and convection/dredge up, and on the CCSN rate and how large of an effect these supernovae have on the production of elements (important to understand chemical evolution), on the energetics/feedback in galaxies, and on star formation.

Determination of the CCSN Mass Transition

There are three main methods to determine this mass. They all have their challenges and limitations, but jointly they may be able to begin to precisely constrain this transition mass:

1) Supernova studies. With the ability of the Hubble Space Telescope to resolve stars out to nearby galaxies, when a nearby CCSN occurs there are likely deep, resolved archival images of the progenitor star. Additionally, the stellar population of lower-mass stars that formed with the progenitor can subsequently be observed. Using stellar evolutionary models and the photometry of the progenitor and/or of the nearby lower-mass stars that formed with it, it is possible to infer the progenitor’s mass (e.g., Smartt 2009, Smartt 2015, Williams et al. 2014, Jennings et al. 2014). Within the past several years, the sample of nearby supernovae has increased enough to provide meaningful statistics, indicating that the lowest-mass stars that undergo supernovae have masses of ~9.5 Msun (Smartt 2015; Figure 1). These data also suggest that stars more massive than 18 Msun likely collapse but do not undergo an observable supernova.

Figure 1: Nearby supernovae are displayed with their inferred initial masses and errors. These masses are based on STARS and Geneva models (Eldridge & Tout 2004; Hirschi et al. 2004). In solid black, a trend fit to the data suggests a lower limit of ~9.5 Msun. In dashed black, a second trend illustrates the expected number of stars at each initial mass based on a Salpeter initial mass function. This shows that such a large number of 10 to 17 Msun supernova progenitors should be accompanied by a smaller but observable number of higher-mass supernova progenitors. This suggests that stars with masses greater than ~18 Msun may not undergo standard observable supernovae.

2) Stellar evolution models. A second method to estimate the CCSN transition mass is to use stellar evolution models to determine at which mass the core will reach the necessary conditions to create an Fe core and induce core collapse (or produce an electron-capture CCSN). While a variety of models currently exist to account for the complex physical processes involved, many of them are beginning to converge around 9.3–9.8 Msun (Eldridge & Tout 2004, Poelarends 2007, Ibeling & Heger 2013, Doherty et al. 2015). Figure 2 illustrates the models from Doherty et al. (2015) and the transition masses at differing compositions, with solar represented by Z = 0.02.

Figure 2: This diagram illustrates the evolutionary result of a star with a given initial mass and composition (Z = total mass fraction of a star that is not H or He), where solar composition is Z = 0.02. This model distinguishes between the different types of white dwarfs (WDs) and illustrates where white dwarf formation ends and supernovae begin. These models suggest a narrow region where electron-capture supernovae (ECSN) occur, followed by the traditional Fe-core CCSN, and show that composition is an important secondary variable in these mass transitions.

3) White dwarf studies. Another way to constrain the transition mass is to study the remnants of the stars that do not undergo CCSN, i.e., white dwarfs. Spectroscopic analysis of high-mass white dwarfs in star clusters, together with evolutionary models, allows us both to directly determine the mass of the white dwarf and to infer the mass of the white dwarf’s progenitor. This is known as the initial-final mass relation (IFMR). Figure 3 illustrates the IFMR derived from white dwarfs across multiple clusters and a broad range of masses (Cummings et al. 2015; 2016a; 2016b; in prep.), with evolutionary timescales based on PARSEC stellar models (Bressan et al. 2012). The IFMR provides a direct constraint on the mass loss that occurs during stellar evolution and on how it varies with initial mass. In terms of constraining the CCSN transition mass, very few ultramassive white dwarfs have yet been discovered in star clusters, but our current project is to survey for more white dwarfs approaching the highest mass a white dwarf can stably have (the Chandrasekhar mass limit, 1.375 Msun). The initial mass of a star that will create a Chandrasekhar-mass white dwarf will also define the CCSN transition mass, which will provide a critical check of the other methods. Cautious extrapolation of our current relation already suggests consistency with a CCSN transition mass near 9.5 Msun.

Figure 3: The current IFMR based on the spectroscopic analysis of white dwarfs that are members of star clusters. The spectroscopically determined mass and cooling age of each white dwarf, compared to the age of its cluster, can be traced back to the initial mass of the star that formed that white dwarf. The solid line represents a fit to the data, while the dashed line represents the theoretical IFMR of Choi et al. (2016). This shows a relatively clean relation, with progressively more massive white dwarfs forming from progressively more massive stars. The highest masses, however, remain poorly constrained; precise measurement of where this relation reaches the Chandrasekhar mass (1.375 Msun; the upper limit of the y-axis) will define the progenitor mass at which stars begin to undergo CCSN.

The three methods described above, mostly independent from each other, are beginning to suggest a convergence around a CCSN transition mass of ~9.5 Msun. This implies that the commonly adopted value of 8 Msun, which is based on older and more limited data and theoretical models, has led us in the past to overestimate the number of CCSN by ~30%. Such an overestimation would greatly affect our understanding of the chemical evolution, energetics, and feedback in galaxies. Furthermore, the maximum mass for CCSN inferred by Smartt (2015), 18 Msun, further decreases the number of expected CCSN in a stellar population. This may solve the supernova rate problem that resulted from assuming that all stars with a mass greater than 8 Msun should undergo a CCSN. Under that assumption, the number of predicted CCSN is twice the observed rate (Horiuchi et al. 2011). But if only stars in the 9.5–18 Msun mass range undergo CCSN, based on the standard Salpeter stellar initial mass function, the new estimated CCSN rate is almost exactly half of the original estimate. This would bring observations and models into remarkable agreement.
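The factor-of-two reduction quoted above can be checked with a back-of-the-envelope Salpeter calculation. In this sketch the 100 Msun upper cutoff is an assumption (the result is insensitive to it because the IMF is so steep):

```python
# Fraction of would-be CCSN progenitors that remain if the exploding mass
# range shrinks from 8-100 Msun to 9.5-18 Msun, for a Salpeter IMF
# dN/dM ~ M^-2.35. The 100 Msun upper cutoff is an illustrative assumption.

def salpeter_count(m_lo, m_hi, alpha=2.35):
    """Integral of M^-alpha dM between m_lo and m_hi (arbitrary normalization)."""
    p = 1.0 - alpha
    return (m_hi**p - m_lo**p) / p

old = salpeter_count(8.0, 100.0)    # assume all stars above 8 Msun explode
new = salpeter_count(9.5, 18.0)     # only 9.5-18 Msun stars explode
print(f"new/old rate ratio: {new / old:.2f}")
```

The ratio comes out near 0.47, i.e., the predicted CCSN rate roughly halves, in line with the resolution of the supernova rate problem described above.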

Future work

Significant work remains to be done. More nearby supernovae need to be observed, with subsequent study of their progenitors, to further constrain their lower and upper mass limits. More ultramassive white dwarfs need to be discovered in star clusters, a focus of our research group, to refine our understanding of higher-mass stars and their mass-loss processes. Increasing the white dwarf sample size will provide an independent measurement of the CCSN transition mass, and it may also begin to provide an important observational test of the effects of differing stellar composition on mass loss, evolution, and this mass transition. For stellar evolution models, a more coordinated approach between these observations and theory will improve the models, which in turn affects the inference of progenitor masses in the supernova and white dwarf studies. An iterative process, aiming for self-consistency across all steps, should bring a more precise convergence on the mass transition at which supernovae begin to occur. In any case, the assumption that all stars with masses greater than 8 Msun will undergo a CCSN is appearing more and more inaccurate.

Sometimes you design a perfectly good experiment based on years of experience and a wealth of previous data. You develop some models and carry out simulations that show you’ve designed an experiment that can recover your models. This lets you write a really compelling proposal and – eventually – you have the opportunity to carry out that experiment. And then the results surprise you. Because simulations are just that, simulations: they’re only as good as the physics you know to put in. You can’t account for what Donald Rumsfeld famously called the “unknown unknowns”. We found things we weren’t expecting, but on the other hand we found some things we were expecting and learned some new things as well.

My example is a spectroscopic monitoring program that was undertaken with HST in Cycle 21 with the goal of probing the inner structure of an active galactic nucleus (AGN; the most luminous of these are called quasars). Together with its ground-based counterpart, this program is known as the AGN Space Telescope and Optical Reverberation Mapping (AGN STORM) project. Our goal is to understand how the supermassive black holes at the centers of galaxies are fueled.

The current paradigm for AGN inner structure (Figure 1) is that at the center of these systems is a central supermassive black hole (typically a million to several billion solar masses) surrounded by a hot accretion disk that extends out to tens of gravitational radii (Rg = GM/c², where M is the black hole mass). On scales of a few hundred to several thousand gravitational radii, there is diffuse gas that absorbs the ionizing radiation from the accretion disk and reprocesses it within minutes into emission lines. The emission lines are strongly Doppler broadened because they arise in the deep gravitational potential of the black hole. The geometry and kinematics of this “broad-line region” (BLR) remain elusive since these properties cannot be deduced from direct imaging: the BLR projects to less than 100 mas (milliarcseconds) even in the most favorable cases. What we know about the inner structure of AGNs is based on flux variability.
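To put these scales in physical units, here is a quick sketch; the 10^8 Msun black hole mass is an illustrative choice, not a value from the text:

```python
# Physical scale of the gravitational radius Rg = GM/c^2 and of a BLR at
# ~1000 Rg, for an illustrative 1e8 Msun black hole (an assumed mass).

G = 6.674e-8          # gravitational constant, cgs
C = 2.998e10          # speed of light, cm/s
MSUN = 1.989e33       # solar mass, g
CM_PER_LTDAY = C * 86400.0

m_bh = 1e8 * MSUN
r_g = G * m_bh / C**2     # gravitational radius
blr = 1000.0 * r_g        # "few hundred to several thousand Rg", per the text

print(f"Rg  = {r_g:.2e} cm")
print(f"BLR ~ {blr / CM_PER_LTDAY:.1f} light-days")
```

An Rg of ~1.5 × 10^13 cm (about 1 AU) puts a BLR at ~1000 Rg at several light-days, which is why reverberation lags of days to weeks are the natural probe of this region.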

Figure 1. Classic schematic of the inner structure of an AGN from Urry & Padovani (1995). Here I restrict attention to the central black hole, surrounding accretion disk, and the broad-line region.

The continuum radiation from the accretion disk varies with time (as I’ll describe elsewhere) and the broad emission lines respond, but with a delay due to the mean light travel time across the BLR. This is the basis of the technique known as “reverberation mapping” – the emission lines appear to “reverberate” in response to the changing continuum, and measurement of the timescale can be converted to a size estimate (for a technical primer on reverberation mapping, see Peterson 1993). While gas is spread throughout the BLR, the response of any particular emission line is relatively localized to where some combination of emissivity (photons emitted per unit volume) and responsivity (rate of change in emissivity per unit continuum change) is maximized for that line. At any given time, the highest-ionization lines respond more rapidly than lower-ionization lines, demonstrating ionization stratification of the BLR. For any given emission line, the radius at which the peak response occurs depends on the mean continuum brightness: the peak response occurs at longer lags when the AGN is brighter (Figure 2). What makes this interesting is that if you compare the measured lags with the line widths, you find that the Doppler width ΔV is inversely correlated with the time lag τ (Figure 3), consistent with ΔV ∝ τ^-1/2, which is what you’d expect if the dynamics of the BLR are dominated by the gravitation of the central black hole – strictly speaking, it implies a 1/r² force, so radiation pressure will have the same signature, but that’s a detail we can worry about later. In any case, without knowing the net motion of the BLR – which could be inflow, outflow, rotation, or most likely some combination of all of these – we can construct a “virial product” ΔV²cτ/G that is proportional to the black hole mass. Actually getting the mass, though, requires knowing more about the structure and kinematics of the BLR, as well as its orientation.
In the absence of this knowledge, we parameterize our ignorance into a single dimensionless parameter f defined by M = f × ΔV²cτ/G. If, for example, the BLR is a simple flat disk (it’s not…), lags are insensitive to inclination, as long as the emission-line photons aren’t absorbed within the disk, and orbital velocities project as the sine of the inclination i, so f = 1/sin²i. Our goal is to determine the structure and kinematics of the BLR, which is equivalent to knowing f and thus M for a particular AGN. It turns out this is hard.
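As arithmetic, the virial product is trivial to evaluate. A minimal sketch with made-up inputs follows: the 5-day lag and 5000 km/s line width are illustrative, and f = 4.3 stands in for an ensemble-average scale factor of order a few:

```python
# Black hole mass from the reverberation virial product M = f * dV^2 * c * tau / G.
# The lag and line width below are illustrative inputs, not AGN STORM results.

G = 6.674e-8            # cgs
C = 2.998e10            # cm/s
MSUN = 1.989e33         # g
DAY = 86400.0           # s

def virial_mass(lag_days, dv_km_s, f=1.0):
    """Mass in solar masses implied by lag tau and Doppler width dV."""
    return f * (dv_km_s * 1e5)**2 * C * lag_days * DAY / G / MSUN

vp = virial_mass(5.0, 5000.0)          # virial product (f = 1)
m  = virial_mass(5.0, 5000.0, f=4.3)   # scaled by an assumed ensemble-average f
print(f"virial product: {vp:.1e} Msun, mass: {m:.1e} Msun")
```

Note how everything hinges on f: the virial product is a measurement, but the mass is only as good as the scale factor, which is the whole motivation for mapping the BLR directly.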

Figure 2. The relationship between the size of the broad-line region as measured from the Hβ emission line and the luminosity of the AGN (Bentz et al. 2013).

Figure 3. The relationship between emission-line Doppler width and reverberation lag for multiple emission lines in four AGNs. The ΔV ∝ τ^-1/2 dependence is expected for a system dominated by the gravity of the central black hole. The dashed lines are the best fits to the data, and the solid lines have a forced slope of -1/2. Based on data from Peterson & Wandel (2000) and Onken & Peterson (2002).

Reverberation signals are quite weak: over the BLR light travel time, the continuum and emission-line fluxes generally vary only a few to several percent, at most. Over many light travel times, the flux variations can be larger, 10% or more. This tells us right away that we’re going to need high signal-to-noise, homogeneous spectra that are well-sampled in time over a long duration. This also tells you something about why progress in reverberation mapping has been slow – it requires a lot of telescope time. Consequently most reverberation experiments are carried out on relatively small telescopes on apparently bright, relatively nearby AGNs. Even then, the data are generally of insufficient quality to discern the structure and kinematics of the BLR, so the factor f remains undetermined. We can, however, compute an ensemble average value for f if we have another mass indicator that we trust. The one we have been using is the M–σ relationship, the apparently tight correlation between central black hole mass and the stellar velocity dispersion σ of the host galaxy bulge that has been found for non-active galaxies. If you plot the virial product versus σ for AGNs, you see a relationship that is parallel to the M–σ relationship, and if you multiply the virial product by a factor of 4 or 5, the two relationships are indistinguishable (Figure 4). Thus <f> ≈ 4–5. There are only a few AGNs where the black hole mass can be measured directly by stellar dynamics, and these show consistency with the reverberation estimates to within the uncertainties of around a factor of 3 or so.

Figure 4. The relationship between black hole mass and host galaxy bulge velocity dispersion, known as the M–σ relationship. The red points are for quiescent (non-active) galaxies and the blue and green points are for AGNs. From Grier et al. (2013).

But we still want to know what the BLR gas is actually doing and, in the process, make more accurate mass measurements. Moreover, we’d really like to get reverberation measurements for the strong UV lines, like Ly α λ1215 and C IV λ1549: much of the BLR emission is in these lines, and we know from several International Ultraviolet Explorer (IUE) reverberation programs from over 20 years ago that the lags for the UV lines are about half those of the hydrogen Balmer lines in the optical, so they probe a different part of the BLR. The IUE data were ground-breaking, but not of high enough quality to determine the structure and kinematics of the BLR, only the mean lags. That would require Hubble and its superb spectrographs.

Hubble time is hard to get, especially if you need a lot of it. We knew that we’d need a really compelling science proposal and a seamless technical case for the large allocation to do a reverberation program right. We started out assuming that a realizable cadence would be one observation per day, that a single visit must yield spectra of the required quality in a single orbit, and that the program would have to be completed with no significant gaps in one observing season. This put constraints on the luminosity of the AGN (since the BLR size depends on it), its apparent brightness, and its location on the sky. We further desired a target AGN that was previously well-studied so we could avoid AGNs where the UV emission lines were strongly self-absorbed and so we could accurately model its behavior to determine how many orbits we would actually require – the number of orbits was the most difficult parameter to pin down, since sometimes AGN flux variations behave in ways favorable for a reverberation-mapping experiment, and sometimes they don’t. Our success rate on the ground is typically around 60% or so, so this is kind of a high-risk business. After lots of simulations, we determined that NGC 5548 was the best target and that our best estimate of the required number of visits was 180. None of our simulations succeeded with fewer than 100 visits, about half succeeded with 150 visits, but all of them succeeded with 180 visits.

We first submitted this proposal in Cycle 12 in 2003. We were finally awarded the time to carry this out in Cycle 21 – this is either a case study in perseverance or obsession, I’m still not sure which. It was a challenging program to schedule and execute, but the schedulers did a wonderful job, and we wound up with 171 epochs with only a few one- or two-day gaps due to safing events, against which our program was robust, as anticipated in our simulations. We had to deal with some complications, such as moving to different positions on the detector to avoid detector degradation from geocoronal Ly α, but this only complicated the data reduction and didn’t adversely affect the final results. A major amount of work went into completely recalibrating the Cosmic Origins Spectrograph because our data-quality requirements exceeded the specifications of the standard pipeline reduction.

The final light curves are beautiful (Figure 5), although some of the behavior was surprising, even in our initial quick-looks based on the standard pipeline reduction. For the first 60 days of the program, things looked nominal – the emission-line light curves looked like a smoothed and time-shifted version of the continuum light curve, though the time shifts (or lags) were shorter than we expected. After this, the behavior of the emission lines began to deviate from the expected linear response in a complicated way.

Figure 5. Light curves based on HST COS spectra obtained in the AGN STORM project. The top panel shows the continuum variations and the lower panels show the light curves for Ly α, Si IV λ1400, C IV λ1549, and He II λ1640. From De Rosa et al. (2015).

Equally disturbing was the fact that the UV resonance lines were strongly absorbed (Figures 6 and 7). Recall that one criterion for target selection was weak or absent absorption in the emission lines. While narrow absorption features had previously been detected in NGC 5548, a combined XMM/HST campaign the previous year (Kaastra et al. 2014) had shown strong and variable broad absorption for the first time (Figure 8). The broad absorption weakened toward the end of the 2013 campaign, and all we could do at that point was hope that trend would continue into 2014. Our first spectra in early 2014 showed, however, that variable broad absorption was still present, though weaker than in 2013. This added another layer of complexity to the analysis.

Figure 6. The top panel shows the mean C IV profile during the AGN STORM program. Note the strong narrow and broad absorption features shortward of line center. The middle panel shows the rms residual profile, which isolates the variable part of the emission line. The bottom panel shows the mean reverberation lag in each velocity bin. In all cases, black is for the entire campaign, gray is for the first half, and orange is for the second half. From De Rosa et al. (2015).

Figure 7. The mean, rms, and reverberation lag profiles as in Figure 6, but for Ly α. The broad (damped) absorption shortward of the broad emission line is due to interstellar absorption in our own Galaxy and the narrow emission superposed on it is geocoronal Ly α emission.

Figure 8. Historical C IV profiles for NGC 5548. The cyan profile from 1993 shows no broad absorption. The black profile is from AGN STORM and shows weak broad absorption compared to what was observed a year earlier (green, red, and blue profiles) by Kaastra et al. (2014). Figure courtesy of G. Kriss.

The data product that we most desired was a projection of the BLR geometry and velocity field into the two observable parameters, Doppler velocity and time delay (Figures 9 and 10). This “velocity–delay map” is essentially the observed response of the emission lines to an instantaneous (“delta function”) outburst by the continuum source. Recovery of the velocity–delay maps for the various emission lines was complicated by the non-linear emission-line response during much of the campaign and by the strong broad absorption features. Nevertheless, we were able to recover velocity–delay maps for the three strongest lines, Ly α, C IV, and H β. All of them show the signature of an inclined disk with a fairly sharp outer boundary, though the response of the far side of the disk is surprisingly weak. The weak response of the far side suggests that fewer ionizing photons are reaching the far side than the near side: this might also explain the surprisingly small lags (since mostly we’re seeing the response of the near side) and the anomalously small equivalent widths of the lines (i.e., the emission lines are weak compared to the continuum).
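The disk signature mentioned above can be sketched from first principles: each circular Keplerian ring maps to an ellipse in the velocity–delay plane, with delay τ = (R/c)(1 − sin i sin φ) and line-of-sight velocity V = V_K sin i cos φ at ring azimuth φ. In the sketch below, the mass and inclination match the values quoted in the Figure 9 caption, but the 10 light-day ring radius is an assumption for illustration:

```python
import math

# (velocity, delay) locus of a circular Keplerian ring seen at inclination i.
# Each ring traces an ellipse in the velocity-delay plane -- the "disk
# signature" seen in the recovered maps. The ring radius here is assumed.

G = 6.674e-8          # cgs
C = 2.998e10          # cm/s
MSUN = 1.989e33       # g
DAY = 86400.0
LTDAY = C * DAY       # one light-day in cm

def ring_locus(m_sun, r_ltdays, incl_deg, n=8):
    m = m_sun * MSUN
    r = r_ltdays * LTDAY
    v_kep = math.sqrt(G * m / r)                 # circular orbital speed
    si = math.sin(math.radians(incl_deg))
    pts = []
    for k in range(n):
        phi = 2.0 * math.pi * k / n              # azimuth around the ring
        v_los = v_kep * si * math.cos(phi) / 1e5            # km/s
        tau = (r / C) * (1.0 - si * math.sin(phi)) / DAY    # days
        pts.append((v_los, tau))
    return pts

# Example: 6e7 Msun black hole at inclination 50 deg (Figure 9 caption values),
# ring at an assumed 10 light-days
for v, tau in ring_locus(6e7, 10.0, 50.0, n=4):
    print(f"v = {v:+8.0f} km/s, delay = {tau:5.2f} d")
```

The near side of the ring (small delay) and far side (large delay) sit at the top and bottom of the ellipse; a weak far-side response, as observed, hollows out the large-delay half of the signature.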

Figure 9. Preliminary UV velocity–delay map based on AGN STORM data. The upper left panel is the velocity–delay map for Ly α + N V, Si IV, C IV, and He II; the orange dashed ellipses trace the faint disk signature for a mass of 6 × 10^7 solar masses at an inclination of 50°. The lower left panel shows the variable part of the line profile: the average for all time delays is in black, and the averages for binned lags of 0–5 days, 5–10 days, 10–15 days, and 15–20 days are shown in blue, green, orange, and red, respectively. The upper right panel shows the “delay map” (i.e., integrated over all velocities) for Ly α, Si IV, C IV, and He II in red, orange, green, and blue, respectively, and in black for the entire spectrum. Figure courtesy of K. Horne.

Figure 10. Preliminary velocity–delay map for He II λ4686 and H β λ4861 from AGN STORM optical spectra. Panels are as in Figure 9. In the upper right panel, He II is shown in blue, H β is in red, and the core of H β is in orange. Based on data from Pei et al. (2016). Figure courtesy of K. Horne.

The latter two points are things we know because NGC 5548 is such a well-studied AGN: there have been almost 20 reverberation campaigns – mostly ground-based optical, but also two UV campaigns, one involving HST – that included this source. NGC 5548 is essentially a “control” object in the sense that while some properties of this AGN are expected to change over timescales long compared to reverberation (luminosity, BLR radius), others are not (black hole mass, inclination) – if reverberation mapping is working as it should, we should get the same mass every time. Because we have this wealth of archival data, we could tell that something odd was happening in NGC 5548 rather than erroneously conclude that NGC 5548 is simply an odd source.

So what exactly is going on with NGC 5548? A couple things. First, we find that the narrow absorption lines are varying. This provides a strong diagnostic of the unobservable ionizing continuum as each line responds to the continuum at the ionization energy of the relevant ion. As the continuum at the ionizing energy increases, the ionization level increases so the line becomes weaker. For example, singly-ionized silicon has an ionization potential of 16.3 eV, so when the continuum at 16.3 eV (~760 Å) increases, more of the silicon becomes doubly ionized and the equivalent width of Si II λ1526, which arises from singly ionized silicon, decreases. However, this pattern is broken after the first 60 days of the campaign. The lower-ionization absorption lines are still following the pattern, but the higher ionization lines are not responding anymore. While the continuum that drives the variability of the broad Balmer emission lines (just shortward of 912 Å) is still varying with the observable continuum (~1150 Å), the higher energy continuum (at wavelengths shorter than, say, ~500 Å) is not. So at least part of the reason the emission line response is changing is because the shape of the ionizing spectrum has changed. As an aside, we were also able to determine where the narrow absorption arises, based on the “recombination time”, i.e., the timescale to return to the lower ionization state when the continuum becomes faint again. The narrow absorption arises ~1 – 3 pc from the black hole, in the same gas that produces the [O III] λλ4959, 5007 emission lines seen prominently in the optical spectrum (Peterson et al. 2013).
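The energy-to-wavelength conversion used here is just λ = hc/E, or λ(Å) ≈ 12398.4/E(eV); a quick check of the Si II number quoted above:

```python
# Wavelength of the continuum photons responsible for a given ionization
# energy: lambda(Angstrom) = hc/E = 12398.4 / E(eV).

HC_EV_ANGSTROM = 12398.4  # hc in eV * Angstrom

def ev_to_angstrom(e_ev):
    return HC_EV_ANGSTROM / e_ev

print(f"{ev_to_angstrom(16.3):.0f} A")   # Si II ionization potential -> ~760 A
```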

Second, the broad absorption is also present, but weaker than it was in 2013 (Figure 8). While we don’t know where the broad absorption arises, it’s likely that it occurs on the BLR scale. It also stands to reason that if there are absorbers along our line of sight that there are absorbers along other sight lines as well. We can speculate that, in fact, there is very heavy absorption between the accretion disk and the far side of the BLR, which would account for the weakness of the emission lines, the unexpectedly short lags, and the faint response of the far side seen in the velocity–delay maps.

I’ve focused this discussion almost entirely on the BLR because that was the original goal of the experiment. Our preliminary analysis confirms the black hole masses that we’ve estimated from the simpler sort of reverberation analysis described earlier. We’ve learned that the BLR in NGC 5548 is at least in part a disk seen at moderate inclination, and we’ve concluded that there is a lot of absorption on different scales – failing to anticipate the importance of strong variable absorption was the one omission in our original simulations, simply because we didn’t expect it to be a factor. The presence of absorbing gas has complicated the analysis, revealing a richer, more complex environment than we’d anticipated. While we think we have the basic ingredients now, we’ve still got a lot of detailed modeling to do. So far, the AGN STORM project has produced 6 papers (see references below) and several more are in preparation.

Some of the more important things we found had to do with the accretion disk itself, and I’ll have more to say about that another time.

In 2009, there was a call for ambitious proposals to use Hubble for projects that were beyond the scope of what a typical time allocation could accomplish. Hubble time is usually doled out in “orbits.” One orbit of Hubble takes about 90 minutes yielding 45 minutes to an hour of observing time (because the Earth typically blocks a portion of the sky from view). A typical proposal will be for a few orbits of observing time. In this particular call, proposers were asked to consider projects needing at least 450 orbits.

Two teams responded to this call with very ambitious proposals to observe representative patches of sky to search for the most distant galaxies, study the assembly of galaxies over cosmic time, trace the formation of black holes in the centers of galaxies, and study distant supernovae. The proposals were similar in many respects, and the time allocation committee recommended merging the two teams. Thus the CANDELS collaboration was formed, with the participation of nearly 100 astronomers with diverse backgrounds and interests. The time allocation was 902 orbits, the largest in the history of the Hubble telescope.

Why did so many astronomers – on the proposal teams and the time allocation committee – think this kind of observation was important? And what have the observations revealed?

The answer to the first question goes back to a fundamental assumption of cosmology – that the universe is basically the same in all directions. Obviously this assumption breaks down on small scales (otherwise there wouldn’t be planets, stars, and galaxies), but it appears generally true when averaging over scales larger than about 10 million light years. The Hubble observations allow us to measure the past: to observe galaxies and supernovae that are so distant that their light has taken billions of years to reach us. Any single Hubble image will have both nearby galaxies and galaxies for which the light-travel time is more than 13 billion years (the universe itself is 13.8 billion years old). To get a reasonably fair census of the distant universe, we need to point at places that are out of the plane of the Milky Way galaxy. We need to take fairly long exposures to collect enough photons. We should observe these same patches at other wavelengths (from X-ray to radio). All else being equal, we should divide the total area into several patches that are disjoint on the sky to reduce systematic errors due to foreground dust or large-scale cosmic structures. Hence the CANDELS survey: a public Hubble survey of the most-studied patches of sky, coordinated with observations from other major observatories.

The CANDELS observations were completed in 2013, and so far over 200 papers have been published using the data. It’s possible to give only a taste of the scientific results in this blog article. There are many more summaries on the CANDELS blog site.

Cosmic Dawn

Ever since the installation of the WFC3 camera on Hubble in 2009, the race has been on to identify the most distant galaxies. It was unclear at the outset which strategy would be most successful: taking very deep exposures over a tiny area, shorter exposures over a wider area, or pointing at galaxy clusters and using gravitational lensing to magnify galaxies in the background. Over the course of several years, Hubble has done all three, and the current record holders are in one of the CANDELS fields and in the background of a cluster of galaxies. Follow-up observations of a bright candidate in the CANDELS GOODS-North field suggest that it is at a redshift z=11.1, about 400 million years after the Big Bang (Oesch et al. 2016).
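The correspondence between redshift z = 11.1 and an age of roughly 400 million years can be checked with a simple flat-ΛCDM age integral. This is an illustrative back-of-the-envelope estimate with round Planck-like parameters, not the calculation used by Oesch et al. (2016):

```python
import math

# Rough Planck-like cosmological parameters (illustrative values)
H0 = 67.7                          # Hubble constant, km/s/Mpc
OMEGA_M = 0.31                     # matter density parameter
OMEGA_L = 1.0 - OMEGA_M            # dark-energy density (flat universe)
GYR_PER_HUBBLE_TIME = 978.0 / H0   # 1/H0 in Gyr (978 ~ km/s/Mpc -> 1/Gyr)

def age_at_redshift(z: float, steps: int = 100_000) -> float:
    """Age of the universe (Gyr) at redshift z, by numerically
    integrating dt = da / (a * H(a)) from a = 0 to a = 1/(1+z)."""
    a_max = 1.0 / (1.0 + z)
    da = a_max / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * da  # midpoint rule
        # H(a)/H0 = sqrt(Om/a^3 + OL) for flat LCDM (radiation neglected)
        total += da / (a * math.sqrt(OMEGA_M / a**3 + OMEGA_L))
    return total * GYR_PER_HUBBLE_TIME

print(f"{age_at_redshift(11.1):.2f} Gyr")  # ~0.4 Gyr after the Big Bang
print(f"{age_at_redshift(0.0):.1f} Gyr")   # ~13.8 Gyr today
```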

Aside from the lure of seeing the most distant galaxies, there is much to learn from studying statistical properties enabled by the large survey – with samples now approaching 1000 galaxies within the first billion years and 10000 within the first two billion. (Prior to the installation of WFC3 and the CANDELS survey, there were only a handful of good candidates identified at these early times.) There appear to be enough of these very young galaxies to explain the rather rapid “re-ionization” of the universe. About one billion years after the Big Bang there was a huge injection of energy that stripped 99.99% of the electrons away from the protons in the hydrogen between galaxies. The observations show that there was enough energy in young galaxies to explain this, although we are not yet certain that enough of the photons at just the right energy to ionize hydrogen can escape, because the gas within the individual galaxies might absorb most of it. Galaxies in the first billion years have bluer colors than their counterparts at later epochs – probably because they have not yet had enough time to build up the heavy elements needed to form large amounts of dust and to lower the temperatures of young stars. Nevertheless, in spite of being bluer, few if any of the galaxies show the very blue signature expected of galaxies forming their first generation of stars. Comparing the evolving numbers and stellar masses of galaxies to the theoretically predicted numbers of gravitationally bound dark-matter “halos” leads to the conclusion that the star-formation rates are almost – but not entirely – governed by the somewhat clumpy inflow of gas as the gravitational pull of the newly formed dark-matter halos draws in more gas from the surrounding intergalactic medium.

Figure 1: The left panel shows the number of very distant galaxies identified by the CANDELS survey (red) and deeper surveys (blue) since the WFC3 camera was installed on Hubble. The right panel shows the estimate of the “cosmic star-formation rate” – the number of stars formed per year in a fixed volume of the universe – as a function of time since the Big Bang.

The addition of infrared wavelengths – both from Hubble and from the Spitzer and Herschel observatories at longer wavelengths – has been essential for searching for galaxies that are either full of dust or shutting off their star formation. Such galaxies are red enough that they are difficult to pinpoint as distant-galaxy candidates in the Hubble images alone, or are entirely invisible in them. Massive dusty or “quenched” galaxies are expected to be extremely rare in the early universe because there simply hasn’t been time for them to form. Nevertheless, there are dozens of interesting candidates found in the CANDELS fields when inspecting the infrared images. These will be high-priority targets for spectroscopy with JWST and ALMA, which will be able to confirm their distances.

Cosmic High Noon

The overall cosmic rate of star formation peaked at a redshift z ≈ 2, when the universe was about 3-4 billion years old. The CANDELS observations provided the first large samples of galaxies with high-resolution images spanning wavelengths from the rest-frame ultraviolet to the optical. The longer wavelength data from Spitzer helps to pin down the total stellar masses of the galaxies, by providing extra sensitivity to some of the oldest, reddest stars. Using samples of tens of thousands of galaxies, we are able to assess the successes and failures of our current theoretical understanding of galaxy evolution, and provide some clues to guide future developments. The observations tell us that something is “quenching” the star formation in massive galaxies as early as 2-3 billion years after the Big Bang. These quenched galaxies emerge as very compact “red nuggets,” which must grow substantially in size over the next ten or so billion years, increasing in mass mostly by merging with neighboring galaxies rather than forming new stars in situ. The compact star-forming progenitors of these galaxies (blue nuggets) appear to be present in sufficient numbers to account for the red nuggets, but we do not yet entirely know how or why star formation is shutting down. The blue nuggets have a somewhat higher incidence of active nuclei: central black holes that are accreting gas at a high rate, and perhaps heating the gas that would otherwise cool to form stars. Quenched galaxies have higher central densities of stars than most star-forming galaxies, so the thought is that when sufficiently large amounts of gas collect in the center, this triggers a burst of star formation and perhaps also feeds an active nucleus. The energy feedback from the star formation and the nucleus are sufficient to shut off subsequent star formation.
High-resolution computer simulations of forming galaxies suggest that the trigger for this gas funneling is a mix of gravitational instabilities within a star-forming disk of gas and mergers with surrounding galaxies. When dust is included in these simulations, they look remarkably like the galaxies we see, but differ enough in their statistical properties (for example their colors) that we know that some aspects of the physical models are not quite correct.

Figure 2: Computer simulations vs. observations. The bottom panels show some of the highest-resolution hydrodynamical simulations of galaxies that have yet been constructed on supercomputers. The images in the middle show the same galaxies viewed from two different camera directions and placed at a large distance from the telescope so that our view matches what we might see from Hubble. The top panels show galaxies selected from the CANDELS survey. Qualitatively, the computer simulations do a very good job of matching what we see in deep observations.

Towards the present day

CANDELS has provided us with large enough samples of galaxies that it is possible to try to find examples of what the Milky Way galaxy might have looked like in the past. We can attempt to match progenitors to descendants in the overall population of galaxies by isolating galaxies that hold about the same rank when galaxies are ordered by stellar mass (from biggest to smallest). Figure 3 shows a visual summary of the results of this kind of effort – in what might be considered to be a family tree of the Milky Way. The progenitors are smaller, bluer, and generally do not have the familiar spiral-plus-bulge structure that we see in present-day galaxies. The same study provides a way to infer the amount of cold gas that ought to be present as fuel for star formation, and these predictions are being tested with ongoing observations from the ALMA observatory.
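The rank-matching idea above can be sketched in a few lines: assume galaxies roughly preserve their position in the stellar-mass ordering over time, so the Nth most massive galaxy at an early epoch is paired with the Nth most massive at a later epoch. This is a toy illustration with invented masses, not the actual CANDELS analysis (which ranks by comoving number density):

```python
def match_by_rank(early_masses, late_masses):
    """Pair each early-epoch galaxy with the late-epoch galaxy holding
    the same stellar-mass rank (both sorted from most to least massive)."""
    early_sorted = sorted(early_masses, reverse=True)
    late_sorted = sorted(late_masses, reverse=True)
    return list(zip(early_sorted, late_sorted))

# Toy stellar masses, in units of 1e10 solar masses (invented numbers)
progenitor_candidates = [0.8, 2.5, 0.3, 1.1]
present_day_galaxies = [6.0, 1.5, 9.5, 3.2]

for early, late in match_by_rank(progenitor_candidates, present_day_galaxies):
    print(f"progenitor {early:.1f} -> descendant {late:.1f}")
```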

Figure 3: Examples of progenitors of a Milky-Way-mass galaxy taken from the CANDELS survey. Redshift and time (in billions of years since the big bang) run along the horizontal axis. The figure has been divided into three panels for convenience; the earliest times are at the bottom and the latest times are at the top. The galaxies are shown to the same physical scale and the colors are a fair representation of their rest-frame colors. The position along the vertical direction illustrates how blue (or equivalently, hot) the galaxy is, with red toward the top and blue toward the bottom.

Core-collapse supernovae (SNe) are the explosions of massive stars (>8 Msun) that have reached the end of their lifetimes. No longer able to support themselves radiatively by nuclear core burning after depleting their fuel, the stars collapse and release gravitational energy that rips the star apart entirely. The resulting explosions exhibit differences in their spectra and light curves that can be grouped into one of several subclasses.

From a theoretical perspective, these differences once seemed straightforward. Single-star models indicate that the strength of a stellar wind increases as a function of the star’s initial mass and metallicity (Heger et al. 2003). In turn, stronger winds can remove more of a star’s outer envelope, resulting in the distribution of observed SN subclasses. Accordingly, a Type II SN has hydrogen in its spectrum, suggesting a lower mass (~8-25 Msun) red supergiant (RSG) progenitor. In contrast, a Type Ic SN has neither hydrogen nor helium in its spectrum, suggesting a higher mass (>40 Msun) progenitor.

Direct images of the individual stars before they explode provide the strongest observational constraints, but are difficult to obtain because they require deep, high-resolution, multi-color, pre-explosion imaging. Before the Hubble Space Telescope (HST) was launched, one of the few progenitors directly observed was the progenitor to SN 1987A in the Large Magellanic Cloud (LMC) at just 0.05 Mpc. The Cerro Tololo Inter-American 4-meter telescope obtained several images of the LMC between 1974 and 1983 (Walborn et al. 1987). The direct observations showed a progenitor consistent with a blue supergiant, which contradicted most stellar evolution theory and set the field on the course it is still on today.

Figure 1: The famous SN 1987A both before (right) and during (left) the explosion. The exploding star, named Sanduleak -69° 202, was a blue supergiant.

Ground-based imaging is only sufficient for detecting progenitors out to 1-2 Mpc. HST extended this range out to about 20 Mpc. Cost and time, however, prohibit HST from obtaining pre-explosion imaging of the thousands of galaxies within this volume. Instead, these data must be obtained serendipitously via other science programs. The number of galaxies with pre-explosion imaging has grown steadily since HST was launched in 1990. With only a few SNe within this volume each year, a statistically significant sample of SNe with corresponding HST pre-explosion images was not accumulated until the mid-2000s (Smartt 2009). As predicted by theory, Type II SNe had RSG progenitors. The most mystifying result, however, was that the Type I SNe (i.e., those without hydrogen) had no confirmed massive (and thereby luminous) star progenitors, even to very deep limits.
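The detection ranges quoted above follow from the distance modulus, m = M + 5 log₁₀(d / 10 pc): a luminous progenitor simply becomes too faint to pick out at large distances. A quick sketch, taking M ≈ -7 as a rough absolute magnitude for a luminous red supergiant (an illustrative value, and extinction is ignored):

```python
import math

def apparent_magnitude(abs_mag: float, distance_mpc: float) -> float:
    """Apparent magnitude from the distance modulus
    m = M + 5*log10(d / 10 pc), ignoring extinction."""
    distance_pc = distance_mpc * 1.0e6
    return abs_mag + 5.0 * math.log10(distance_pc / 10.0)

M_RSG = -7.0  # rough absolute magnitude of a luminous red supergiant
for d in (2.0, 20.0):  # Mpc: ~ground-based limit vs ~HST limit
    print(f"{d:5.1f} Mpc -> m = {apparent_magnitude(M_RSG, d):.1f}")
```

The same star appears around magnitude 19.5 at 2 Mpc but only around 24.5 at 20 Mpc, which is roughly why HST's deeper imaging extends the search volume so dramatically.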

This mystery remains unsolved but may involve binary star progenitor systems, which are now known to account for ~75% of massive star systems (Sana et al. 2012). As opposed to single-star systems, where stars lose their envelopes through their winds, a binary companion star can remove the outer envelope of the primary via tidal stripping. This process allows for increased mass loss from lower mass, less luminous stars that may evade detection in pre-explosion imaging. This scenario has long been preferred for a specific subclass referred to as the Type IIb (i.e., a hybrid of the Type II and Ib subclasses), since most, but not all, of the outer hydrogen envelope is removed.

Figure 2: This illustration shows the key steps in the evolution of a Type IIb supernova. Panel 1: Two very hot stars orbit about each other in a binary system. Panel 2: The slightly more massive member of the pair evolves into a bloated red giant and spills the hydrogen in its outer envelope onto the companion star. Panel 3: The more massive star explodes as a supernova. Panel 4: The companion star survives the explosion. Because it has locked up most of the hydrogen in the system, it is a larger and hotter star than when it was born. The fireball of the supernova fades. (Credit: NASA, ESA, and A. Feild (STScI))

While the primary (i.e., exploding) star in the binary system may be too faint to be detected in the pre-explosion imaging, the companion star may be bright enough to test the binary hypothesis. As the primary star loses mass, the companion will gain mass and become more luminous and blue. Despite these changes, detecting the companion star in a binary system is not straightforward. The stellar spectrum of the companion will peak towards the ultraviolet (UV). Since most serendipitous pre-explosion imaging does not include UV observations, a UV search for the companion must wait until the SN has faded. To date, a companion star has been observed in only a single instance, for the Type IIb SN 1993J in M81 at just 3.5 Mpc (Maund et al. 2004, Fox et al. 2014).

Figure 3: This is an artist’s rendition of supernova 1993J, an exploding star in the galaxy M81 whose light reached us 21 years ago. The supernova originated in a binary system where one member was a massive star that exploded after siphoning most of its hydrogen envelope to its companion star. After two decades, astronomers have at last identified the blue helium-burning companion star, seen at the center of the expanding nebula of debris from the supernova. HST identified the UV glow of the surviving companion embedded in the fading glow of the supernova. (Credit: NASA, ESA, and G. Bacon (STScI))

The future of progenitor detections lies with HST and the James Webb Space Telescope (JWST). HST offers UV-sensitive instruments that allow us to search for the binary companions to these stripped-envelope SNe. JWST will offer more than 7 times the light-collecting area of HST. While JWST lacks the UV capabilities necessary for companion star searches, it will increase the sensitivity to primary stars that peak at redder wavelengths. This increased sensitivity will not only provide stronger constraints on the progenitors, but will also allow progenitor searches to extend out to larger distances, thereby increasing the search volume and sample size. These new progenitor discoveries will have direct implications for our understanding of star formation, stellar evolution models, and mass-loss processes.
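The gain in search volume from that factor-of-7 collecting area can be estimated on the back of an envelope: for a fixed exposure, collected flux scales with aperture area, and a point source's flux falls as 1/d², so the limiting distance grows as the square root of the area ratio and the volume as its cube. A sketch (illustrative scaling only; real sensitivity gains also depend on wavelength, backgrounds, and detector performance):

```python
import math

area_ratio = 7.0  # JWST vs HST light-collecting area, as quoted above

# Flux limit scales as 1/area; point-source flux falls as 1/d^2,
# so limiting distance d scales as sqrt(area) and volume as d^3.
distance_gain = math.sqrt(area_ratio)  # ~2.6x farther
volume_gain = distance_gain ** 3       # ~18.5x more search volume

print(f"distance: x{distance_gain:.1f}, volume: x{volume_gain:.1f}")
```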

This Month’s Featured Author

Dr. Brian Williams received his B.S. from Florida State University in 2004 and his Ph.D. from North Carolina State University in 2010. He was a NASA Postdoctoral Fellow at NASA Goddard Space Flight Center for three years, after which he worked as a research scientist at NASA GSFC with Universities Space Research Association. He arrived at STScI in February of 2017, and is currently a Support Scientist in the Science Mission Office. His research interests include supernovae and supernova remnants, shock physics and particle acceleration, and dust in the interstellar medium.