The Vapor Cell Atomic Clock

John Coffer, Jeremy Milne (in back), and James Camparo stand in front of their laser-pumped, rubidium-atom, vapor-cell-clock test bed.

Advanced atomic clocks suitable for space deployment must allow for extended periods of autonomous constellation operation, which enhances system robustness, and can also lower a mission control station’s workload. Air Force space programs that depend on precise timekeeping, such as GPS, Milstar, and Advanced EHF (AEHF), constrain the size, weight, power, and environmental sensitivity of their spacecraft atomic frequency standards.

James Camparo, Electronics and Photonics Laboratory, said, “The specific objective of this effort is to develop prototypes of advanced rubidium vapor-cell and cesium atomic-beam spacecraft clocks, and to aim the development of these prototypes toward improving performance while reducing the overall size, weight, and power of the clock.” The development of these prototypes is designed to help solve the scientific and engineering problems confronting next-generation spacecraft clocks. Those working on this effort also include John Coffer and He Wang, both of the Photonics Technology Department.

The operation of an atomic clock requires the creation of a population imbalance between two atomic states connected by a microwave transition: the greater the imbalance, the better the frequency stability of the clock. In current rubidium clocks, such as those used for GPS, the population imbalance is created by optical pumping with a discharge lamp. For these devices, fractional population imbalances of ~0.1 percent are typical. Theoretical work conducted by Camparo and his team has shown that the population imbalance could be increased by nearly two orders of magnitude using a diode laser. Additionally, efforts are underway to use coherent (laser-induced) atomic excitation processes to generate atomic clock signals without a population imbalance. These efforts are aimed at chip-scale atomic clocks and take advantage of a phenomenon called coherent population trapping (CPT). While most research organizations focus on ground-based standards, The Aerospace Corporation’s laser-pumped rubidium clock activities concentrate on compact devices suitable for space applications. Two significant problems in this area include understanding the origin of excess noise in laser-pumped clock signals (and developing means for its mitigation), and creating means for smart-clock technology (i.e., a clock that senses and corrects perturbations that could lead to frequency instability).
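The scaling at work here can be illustrated with a back-of-the-envelope calculation: in a passive atomic clock, short-term fractional frequency stability improves with the atomic line Q and the signal-to-noise ratio of the clock signal. The numbers below (line Q, lamp-pumped SNR, and a shot-noise-limited SNR that grows with the square root of the signal) are illustrative assumptions, not measured values for any Aerospace device.

```python
import math

def allan_deviation(q_atomic, snr, tau):
    """Rough short-term fractional frequency stability of a passive
    atomic clock: sigma_y(tau) ~ 1 / (Q * SNR * sqrt(tau)).
    Textbook scaling; exact prefactors depend on the clock design."""
    return 1.0 / (q_atomic * snr * math.sqrt(tau))

# Illustrative (hypothetical) numbers: atomic line Q of ~1e7 and a
# shot-noise-limited SNR, so a ~100x larger population imbalance
# yields a ~10x SNR (and stability) improvement.
q = 1.0e7
snr_lamp = 1.0e3                       # hypothetical lamp-pumped SNR
snr_laser = snr_lamp * math.sqrt(100)  # ~100x population imbalance

for tau in (1.0, 100.0, 10000.0):
    print(tau, allan_deviation(q, snr_lamp, tau),
          allan_deviation(q, snr_laser, tau))
```

Under these assumptions, the laser-pumped clock's stability curve sits a factor of ten below the lamp-pumped curve at every averaging time.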

Michael Huang adjusts a diode laser used to generate an atomic clock signal. In this experiment, the microwave signal is superimposed on an optical carrier, a technology that has allowed atomic clocks to reach “chip-scale” dimensions.

In the cesium-beam clocks used in GPS, Milstar, and AEHF, a population imbalance between atomic states is achieved by passing an atomic beam through state-selecting magnets, which transmit less than 1 percent of the atoms in the beam. Previous Aerospace studies showed that 100 percent of the beam could be used if magnetic state selection were replaced with laser optical pumping. In addition to increasing the clock signal, optical state preparation uses the clock’s cesium supply efficiently, increasing clock lifetime. Though laser-pumped beam clocks in many other laboratories are large instruments in carefully controlled environments, the efforts at The Aerospace Corporation focus on compact, lightweight devices suitable for spacecraft use.

A second major application of lasers in cesium beam clocks relates to atomic momentum manipulation. Using lasers to slow the speed of atoms (i.e., longitudinal cooling) increases the time that the atoms spend in a microwave cavity, thus narrowing the clock transition’s line shape. Transverse cooling results in the beam’s collimation and “brightening,” thus improving the clock’s signal-to-noise ratio. A significant technological problem addressed in this area is the creation of a cold, continuously operating (as opposed to pulsed) atomic-beam clock for use onboard spacecraft.
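The linewidth narrowing from longitudinal cooling follows from simple transit-time arithmetic: slower atoms spend more time in the microwave interaction region, and the Fourier-limited linewidth is roughly the inverse of twice that transit time. The interaction length and atom speeds below are hypothetical illustrations, not parameters of any actual clock.

```python
def transit_linewidth_hz(interaction_length_m, atom_speed_m_s):
    """Fourier-limited width (Hz) of the clock transition for atoms
    crossing a microwave interaction region of the given length:
    delta_nu ~ 1 / (2 * T), with T the transit time. Prefactors of
    order one depend on the interrogation scheme (e.g., Ramsey)."""
    transit_time = interaction_length_m / atom_speed_m_s
    return 1.0 / (2.0 * transit_time)

# Thermal cesium beam (~200 m/s) vs. a laser-cooled beam (~10 m/s),
# both over an assumed 0.1 m interaction region.
print(transit_linewidth_hz(0.1, 200.0))  # thermal beam: 1000 Hz
print(transit_linewidth_hz(0.1, 10.0))   # cooled beam: 50 Hz
```

Slowing the beam by a factor of 20 narrows the line by the same factor, which is the motivation for a cold, continuously operating beam.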

Camparo said, “Over the years, MOIE atomic clock investigations have provided the basis for continuous technical support to the Air Force and national security space programs. This support has primarily been to on-orbit anomaly resolution, assistance in manufacturer clock development efforts, and simulations of system-level timekeeping.”

In the coming year, The Aerospace Corporation’s research team will continue to operate its atomic-clock flight simulation test bed for Milstar/AEHF rubidium atomic clocks. In particular, this will include exercising the rubidium clock under stressing conditions and developing means to mimic the behavior of a mixed Milstar/AEHF constellation. The team is also investigating the operation of the RF-discharge lamps that produce the atomic signal in the rubidium clocks flown on Milstar, AEHF, and GPS satellites. These investigations have shown that RF power variations in the lamp’s circuitry affect the lamp’s operation primarily through heating of the rubidium vapor within the lamp. This may help explain the anomalous on-orbit clock frequency jumps observed for a number of GPS satellites. The team also continues to examine integrity monitoring for the GPS system, whereby the clock autonomously senses that a problem has occurred and sets the satellite’s navigation message to nonstandard code. While the second harmonic signal from the rubidium clock is used as a status-of-health indicator, it is not well understood how this signal depends on various clock parameters; research is aimed at answering that question. Finally, the team constructed a Monte Carlo simulation of AEHF system timekeeping and used it to verify the contractor’s ability to meet certain system-level requirements.

Advanced Visible and Infrared Focal-Plane Sensors

Space-based electro-optical (EO) imaging systems collect vast quantities of data across various spectral regimes from a wide range of orbital altitudes. These systems range in size and complexity from units as small as consumer cameras to structures as large as NASA’s Webb infrared (IR) telescope/observatory with its 20-foot-diameter primary mirror. At the heart of an EO system, focal-plane imaging chips convert optical data into electronic analog (and eventually digital) signals for each pixel.

An Aerospace study, “Advanced Visible and Infrared Sensors,” has been investigating characteristics of these devices—in particular, signal, noise, and image quality. Funded by the Mission Oriented Investigation and Experimentation (MOIE) program, the study has examined how those properties are analytically modeled, as well as their experimental characterization. The experimental work is particularly important in diagnosing anomalies and design errors and in describing the devices’ fundamental imaging properties, thus providing feedback for design improvement.

Terry Lomheim, distinguished engineer in the Sensor Systems Subdivision, explained that “Visible and IR focal-plane devices are complex, mixed-mode (analog and digital) light-sensing integrated circuits (ICs). The most familiar ones—charge-coupled devices (CCD) and complementary metal-oxide semiconductor (CMOS) focal planes designed for detecting light in visible wavelengths—are part of cell phone cameras, camcorders, and digital still cameras. They consist of single (monolithic) silicon IC chips with numerous pixels wherein light enters through the frontside of the device.” Lomheim is the team’s principal investigator, and his coinvestigators are Jonathan Bray and Bruce Lambert of the EO Device Evaluation Lab and Jerry Johnson and Jeff Harbold of the Visible and IR Sensor Systems Department.

One motivation behind the project, Lomheim noted, is the fact that lower payload mass, power, and volume result in lower sensor-system life-cycle costs: “Smaller payload mass and power level increase compatibility with commercial spacecraft buses, for instance, and allow the use of lower-cost launch systems,” he said. “Improved radiation hardness may allow the use of orbital altitudes that are associated with higher space radiation dose levels, but are more cost-effective in terms of overall sensor constellation architecture.” Moreover, he said, advances in payload signal processing can reduce the cost of ground systems. “Visible and IR camera systems that collect images in many spectral bands, measure changes in the polarization of light, or operate at extremely low light levels all might enhance the information-extraction ability—and therefore the utility—of space EO sensor missions.”

The DOD and NASA have used advanced versions of these devices for several decades, and the architectures are maturing, with higher detection efficiencies, improved sensitivity, higher frame rate (the rate at which unique images are consecutively produced), larger pixel formats, and on-chip analog-to-digital conversion (ADC). On-chip ADC enables these devices to operate in a “photons in, bits out” manner.

Advanced devices include extremely thin silicon imagers that collect light through the backside for enhanced sensitivity, as well as hybrid imagers. In a hybrid imager, a grid of light-sensing pixels is mated to a corresponding grid of pixel unit cells inside a readout integrated circuit (ROIC). These pixel unit cells process the signal photocurrent, converting it to signals in the voltage domain; each contains a photocurrent/charge-to-voltage conversion preamplifier with a minimum of three transistors.

The wide variety of EO camera applications dictates a wide diversity of focal-plane requirements for parameters such as line rate, frame rate, dynamic range, linearity, operability, noise/sensitivity levels, and radiation hardness. As a result, a broad range of operating characteristics is needed, one that includes distinctly different focal-plane pixel unit cell electronics, multiplexing circuits, numbers of ADCs, operating modes, and operating temperatures. Aerospace has been examining how to optimize focal-plane designs to meet the appropriate signal-to-noise and image-quality requirements despite the limitations of the detecting material technologies and the IC manufacturing process.

In the area of advanced signal and noise modeling, Lomheim’s team has concentrated on focal plane arrays (FPAs) with built-in ADC capability, novel unit cell ROICs, circuits optimized for processing multispectral and hyperspectral data, and new detector technologies that span the visible to longwave IR region. Special pixel unit cells capable of wide dynamic range and low noise may further enhance these applications.

In 2009, the team also concentrated on predictive signal and noise models for focal-plane devices that use smaller photolithographic design features. These devices will be manufactured using 0.18-micron CMOS design rules and must function at cryogenic temperatures, a new operating regime for the key transistors in the mixed-mode pixel unit cell circuits.

Aerospace is also studying image-quality measurements. CCD visible focal-plane technology—the workhorse for advanced imaging systems since the mid- to late 1970s—is gradually being replaced by frontside- and backside-illuminated monolithic CMOS and hybrid silicon PIN visible focal-plane approaches. (A PIN photodiode serves as a light-receiving element to convert an optical signal into an electrical one.) New system applications of interest include panchromatic and multispectral sensors that require large-area, high-frame-rate two-dimensional arrays also capable of ultrawide dynamic range (i.e., full sunlight to night imaging). Key figures of merit for these focal planes include the modulation transfer function (MTF) or, equivalently, the point spread function (PSF); noise floor; well capacity; and uncorrected pixel gain and offset nonuniformity effects. Precise characterization of these parameters for a high-performance visible focal plane requires precise multicolor calibration of the optical system. Image quality is sensitive to spatial noise effects, which are determined empirically by nonuniformity and nonlinearity characterization over the pixel dynamic range. When the low end of this dynamic range covers lunar illumination, simulation in a laboratory setting requires optical setups involving multiple light sources and extreme “light tightness.”

In another image-quality measurement activity, Aerospace’s MTF and spot-scan characterization capabilities have been refined to enable precision-staring pixel spot-scanning over a wide range of spectral wavelengths. In this technique, a small spot of light is generated and moved around a pixel for diagnostic purposes. The Aerospace effort involved a confocal microscopic setup aimed at detailed pixel inspection in support of the spot-scanning work. The work of Lomheim’s team improves the corporation’s ability to cover these new measurement regimes to support SMC and other customers developing large, small-pixel visible/IR focal planes for an ultrawide dynamic range.

The MOIE project has scrutinized the process of modeling the focal-plane sensors, with productive results. Understanding the properties of new imaging devices is vital to the design and planning of imaging systems, and one way Aerospace is achieving this understanding is through modeling the spectral MTF and PSF characteristics of the latest focal-plane pixel designs. The MTF characterizes the response of the array to increasing spatial detail in the scene being observed; the PSF describes the system’s response to a point source, such as a star. Such modeling will provide crucial design guidance in the development of these large arrays.
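The MTF and PSF form a Fourier-transform pair, which a short numerical sketch can make concrete. Here a hypothetical Gaussian PSF stands in for measured spot-scan data (real pixel PSFs are more structured); the MTF is the magnitude of its two-dimensional Fourier transform, normalized to unity at zero spatial frequency.

```python
import numpy as np

# Sample an assumed Gaussian PSF on a pixel grid, then compute the
# MTF as the normalized magnitude of its 2D Fourier transform.
n, pitch_um, sigma_um = 64, 1.0, 2.0
x = (np.arange(n) - n // 2) * pitch_um
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_um**2))
psf /= psf.sum()  # unit volume, so the MTF is 1 at zero frequency

# ifftshift centers the PSF for the FFT; fftshift puts DC at the
# middle of the output array for easy indexing.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf) / np.abs(otf)[n // 2, n // 2]
```

A broader PSF (more blur) produces an MTF that falls off faster with spatial frequency, which is the quantitative link between pixel design and image sharpness.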

Aerospace-developed narrowband Er:YAG laser seed source. An Er:YAG crystal is configured in a nonplanar ring oscillator (NPRO) geometry to achieve narrowband output at 1645 nanometers with a line width less than 1 megahertz. The output of the NPRO will seed a larger Q-switched laser to generate high-peak-power narrowband pulses for eye-safe LIDAR applications. The observable green emission, derived from an optical upconversion process, traces the infrared optical path within the NPRO resonator.

In CMOS visible-imager MTF and PSF modeling, Aerospace has refined a numerical two-dimensional Fourier transform methodology for converting empirical spot-scan PSF data into a precision MTF description of pixel response. This has proven successful, showing that certain effects previously thought to be data anomalies are real physical effects in the pixel response. The effort employed a new process involving the mapping of multipixel data into a single effective pixel grid, which allows much shorter data-collection times and avoids data uncertainties associated with systematic drifts and slow instabilities in the spot-scanning optical setup. The effort also demonstrated how model development directly informs experimental work and vice versa.

The Aerospace sensor project has completed a significant upgrade to its experimental color-dependent spot-scan capability. The updated configuration includes additional diagnostic tools that more completely characterize the operation of the system and a confocal microscope fitted into the optical system for more precise determination of spot focus. The new configuration permits acquisition of highly accurate and repeatable wavelength-dependent pixel response data with time-reduction factors as high as 100.

Aerospace used two independent experimental techniques to derive wavelength-dependent MTF data for two CMOS imagers: a tilted-knife-edge method, with an Offner relay optical reimaging system, and the spot-scanning method described above. These techniques quantified the impact of design and manufacturing variations on the color-dependent MTF characteristics of the imagers. Specific diffusion-related and pixel circuitry layout effects were precisely correlated to the measured spectrally dependent MTF degradation.

This MOIE project’s improved, efficient MTF/PSF laboratory characterization capability has enabled the detailed color-dependent characterization of a frontside-illuminated CMOS imager (developed by JPL) using precision spot-scanning and corresponding/confirming tilted-knife-edge MTF characterization. Lomheim described the imager’s electronics: “This CMOS imager has a spacing between adjacent photodiode pixels of 9 microns. Its photodiodes are formed between an n well and p epitaxial layer, characterized by a lower doping level and hence a much deeper depletion depth than would prevail for typical cell phone camera CMOS imagers. For this device, the photodiode area is inscribed toward the center of a pixel pitch and surrounded by pixel electronics and an opaque contact along one direction and pairs of overlying metal lines along the orthogonal direction.”

This type of detailed pixel-level examination of the relationship between the device manufacturing layer parametrics and the imager’s EO imaging capability is essential to improving this technology and guiding it toward the future goals and requirements of Aerospace customers.

Lasers for Space Applications

In 1971, Aerospace performed its first illumination of a Defense Support Program satellite in orbit to calibrate the sensor on board. The illumination from the ground was accomplished with a hydrogen fluoride (HF) laser, which emits light near 3 microns. For the next 25 years, this laser was used for all Aerospace satellite illuminations and became the cornerstone of Aerospace’s laser beacon effort. Aerospace’s success led to an increasing demand for this capability, as well as the desire to illuminate satellites from multiple ground sites. This prompted the need to develop a more reliable, transportable, and user-friendly replacement for the HF laser. By the mid-1990s, an Aerospace Mission Oriented Investigation and Experimentation (MOIE) effort began for this purpose, and led to the development and implementation of two solid-state 3-micron sources—an Er:YAG laser and an optical parametric oscillator (OPO). World-record output power and efficiencies were achieved with both devices.

Ongoing research involves the evaluation and development of new laser technologies for improving defense capabilities in remote sensing and satellite sensor calibration.

“Our most recent laser development efforts have focused on a 3-micron wavelength-agile source for remote detection of toxic chemical species; a narrowband eye-safe 1.6-micron laser for various light detection and ranging (LIDAR) applications, including cloud, wind, and plume detection; and a 4.5-micron laser source for national security space applications,” said Todd Rose, principal investigator of the project and laboratory manager in the Photonics Technology Department (PTD). Coinvestigators from PTD include DaWun Chen, senior scientist, and Steven Beck, department director.

“Frequency-agile laser sources are useful for remote sensing applications that use differential absorption LIDAR, or DIAL, to detect trace chemical species in the gas phase. DIAL can be used to track plumes of toxic industrial chemical vapor formed by accidental or adversary-caused release near populated areas or other areas of interest. Detection of multiple species in a timely manner requires laser systems whose frequency (color) can be tuned quickly and accurately to select spectral absorption features of target gas species,” Rose said. The team is working on a rapidly tunable 3-micron DIAL source, which is based on a nonlinear optical approach called difference frequency generation and optical amplification. “The goal of this effort is to demonstrate a 10-watt wavelength-agile system using an available high-power 37-kilohertz-repetition-rate Nd:YAG laser pump and commercially available telecom tunable laser diodes,” Rose said.
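The DIAL retrieval underlying this approach is a ratio measurement: comparing lidar returns at the absorbed (“on”) and reference (“off”) wavelengths yields the mean gas density in a range cell. The sketch below uses the textbook DIAL equation with made-up signal levels and cross sections; none of these numbers describe the Aerospace system.

```python
import math

def dial_concentration(p_on, p_off, delta_sigma_m2, range_cell_m):
    """Standard DIAL retrieval: mean number density (molecules/m^3)
    in a range cell from the ratio of lidar returns at the absorbed
    ('on') and reference ('off') wavelengths:
        N = ln(P_off / P_on) / (2 * delta_sigma * delta_R)
    The factor of 2 accounts for the two-way (out and back) path."""
    return math.log(p_off / p_on) / (2.0 * delta_sigma_m2 * range_cell_m)

# Hypothetical example: a 10% differential attenuation over a 100 m
# range cell, with an absorption cross-section difference of 1e-24 m^2.
n_est = dial_concentration(p_on=0.9, p_off=1.0,
                           delta_sigma_m2=1.0e-24, range_cell_m=100.0)
print(n_est)  # molecules per cubic meter
```

Because only the ratio of the two returns enters, common factors such as transmitter power and receiver efficiency cancel, which is what makes DIAL robust for trace-gas sensing.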

For defense applications, a tunable OPO is being developed to provide output near 4.5 microns. This device will be pumped with a 20-watt, 1.9-micron thulium fiber laser and will generate midwave-infrared output via a nonlinear optical process similar to difference frequency generation. A second approach, using a pulsed holmium YAG laser to pump an OPO, is also being pursued. Other LIDAR applications, such as the characterization of winds in the vicinity of aircraft or target identification on a battlefield, necessitate the use of eye-safe sources. A compact narrowband Q-switched (pulsed) Er:YAG laser is being constructed for this purpose. A key component of this laser system is a new Aerospace-developed nonplanar ring oscillator (NPRO) that provides narrowband seed light at 1.6 microns. This is the first demonstration of an NPRO operating in this important eye-safe wavelength region.

Preliminary analysis at Aerospace suggests that detecting tests of particle injection schemes from space will be quite challenging, especially for unannounced small-scale, localized trials with short-term observable effects.

Active lidar sensors, such as the Cloud-Aerosol Lidar with Orthogonal Polarization instrument (CALIOP) on NASA’s CALIPSO spacecraft, can detect aerosol layers with relatively high sensitivity and provide accurate aerosol heights and horizontal positions; however, a long revisit time limits their suitability for continuous global monitoring. Images courtesy of NASA.

Climate researchers and theorists have suggested that industrialized nations could combat global warming by injecting aerosols—sulfur dioxide, aluminum oxide, or manufactured nanostructured particles—into the stratosphere to actively manage the amount of solar radiation that filters through. Proposed means for lofting these aerosols into the stratosphere include large-caliber guns, rockets, balloons, tethered hoses, aircraft, and even photophoresis (the process whereby small particles suspended in gas or liquid move away from a sufficiently intense light source).

As promising as this might appear at first glance, there are many potential downsides. The influence of aerosols and clouds is the largest source of uncertainty in climate models and forecasts, and the uncertainties and risks involved in particle injection are significant. Ideally, any experimentation with solar radiation management would be based on a global consensus regarding what strategy to pursue and how to pursue it. In reality, a single state might unilaterally attempt action. One reasonable fear is that a country may begin experimenting with solar radiation management, even at the risk of adversely affecting neighboring nations or the planet as a whole.

The Aerospace Corporation has been investigating how space-based sensors could be used to detect and track injected particles to help enforce any future international agreements concerning climate engineering and solar radiation management. Initial findings suggest that any effort to do so will be difficult (see sidebar, Aerospace Support to Global Climate Research).

A Postulated Scenario

A country or organization developing a particle-injection capability would need to conduct extensive developmental testing. Initial tests with natural particles like sulfur dioxide or aluminum oxide might be used to evaluate various injection methods and to quantify factors such as particle aggregation, dispersion, and persistence. These early experiments would lead to larger experiments and eventually to a full-scale test deploying a huge quantity of particles. Such a full-scale test would be easy to detect—but by then, it would be too late to do anything about it. The ability to detect small precursor tests would provide the international community with more options for intervening or possibly deterring unsanctioned activities altogether.

Developing a system to detect such activities presents an enormous challenge because of the wide range of unknowns—for example, the type of material released, particle size, amount released, altitude of release, release mechanism, and area of dispersion (initial density). In addition, the process of dispersion itself is highly variable: estimates of eddy diffusivity in the stratosphere can vary by more than an order of magnitude. Requirements for data access and dissemination, redundant verification, reliability, and operational control would be similar to current systems for monitoring arms control agreements. Geopolitical constraints and possible funding mechanisms would also be important considerations.

To determine the requirements for a space-based monitoring capability, Aerospace considered, as a typical scenario, a small clandestine test involving an aircraft release of 1 to 10 metric tons of precursor gases or engineered particles. Detecting this type of unannounced test (which could be conducted anywhere on the globe) would require nearly continuous global monitoring. The aerosol cloud would not stay together more than a few hours at detectable levels, and the detection threshold would vary greatly depending on background and sensor technique. The maximum size of the cloud at detectable levels might be on the order of a few kilometers. The high wind speeds and shear prevalent in the stratosphere would transport the cloud hundreds of kilometers downwind while shredding it into filaments. As a rough quantitative example, 1 metric ton of sulfur released over an initial volume of 10⁷ cubic meters is estimated to have a mean particle density of 1000 particles per cubic centimeter after 1 hour and 100 particles per cubic centimeter after 10 hours. This calculation assumes an eddy diffusivity of 100 square meters per second horizontally and 0.1 square meters per second vertically. These concentration values would change greatly depending on the parameters and the modeling technique assumed.

System Requirements

The purpose of the clandestine tests would be to assess injection techniques and better understand the effectiveness of the particles in changing the albedo. Objectives would include demonstration of the delivery mechanism, observation of aerosol formation and growth rates, observation of particle dispersion characteristics, observation of vertical spreading and motion, observation of evolving particle size distribution and location, observation of particle attitude (where relevant), measurement of albedo levels, and validation of associated models.

In terms of space sensor requirements, these goals translate to an ability to quantify aerosol optical depth or extinction coefficients in the stratosphere as a function of wavelength, enabling an estimate of particle density and size distribution. Spectral information would also be used to discern particle composition. Specialized algorithms would have to be developed (most likely from ground-based reference test data) to differentiate particle shapes, orientations, and makeup. Quite a bit of uncertainty surrounds the derivation of these parameters from the directly observed radiance and backscatter measurements, so a significant research program would be needed to substantiate the baseline science and establish confidence in the retrieval methodologies.

The ability to accurately determine the altitude of an aerosol layer would be critical for determining its origin—but not sufficient in itself. For example, with the exception of volcanic aerosols and some phenomena associated with specific polar regions and seasons, natural clouds generally do not extend into the stratosphere. Thus, an aerosol cloud in the stratosphere could be a good indication of human intervention. However, at higher latitudes, jet aircraft do fly above the tropopause—so in these regions, it may be difficult to distinguish normal jet contrails and cirrus clouds from a particle injection test. Also, because observed instantaneous aerosol optical depth values can change by a factor of two or more from day to day, only very large spikes in sensor measurements would indicate intentional particle injections.

The required sensor revisit rate, spatial resolution, and measurement accuracy all depend upon the type of sensor, the dispersal rate, and other characteristics of the aerosol tests, especially during the first minutes to hours after injection. Other critical parameters to monitor (in addition to ambient conditions) are particle size distribution and spatial distribution as the plume spreads out.

System Architecture

The types of space-based sensors that would be most effective in detecting intentionally injected aerosols are passive multispectral imagers, both reflective and emissive, and active laser-based sensors or lidars. These two types of sensors have complementary advantages and deficiencies and would need to be used in combination to be most effective.

For sensors with nadir-viewing geometry, such as NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS), the combination of background clutter and relatively short column depth makes it difficult to detect and characterize aerosol concentrations with low optical depths (i.e., roughly 0.1–0.3 or less). Even thin high cirrus clouds, consisting of rather large ice crystals, are difficult to detect or measure with these instruments.
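A crude Beer-Lambert estimate shows why such small optical depths are hard to see from nadir: a thin layer perturbs the two-way (down and back) signal by only a few percent, which must then be separated from surface and cloud clutter. The two-way path is a simplifying assumption for reflected sunlight.

```python
import math

def two_way_signal_change(optical_depth):
    """Fractional change in a nadir-viewed two-way signal caused by
    an aerosol layer of the given optical depth: 1 - exp(-2 * tau).
    A small tau produces a small perturbation that competes with
    variable surface and cloud background."""
    return 1.0 - math.exp(-2.0 * optical_depth)

for tau in (0.01, 0.1, 0.3):
    print(tau, two_way_signal_change(tau))
```

At an optical depth of 0.01 the signal changes by about 2 percent, well inside the day-to-day variability of natural aerosol backgrounds.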

Solar occultation sensors (which view the atmosphere tangentially against the backdrop of the sun as it sets or rises) are significantly more sensitive to small aerosol concentrations as a result of very long viewing paths and related factors; however, viewing is limited to times and regions correlating to occultation events, resulting in spotty coverage for any given orbital pass. In addition, the sensing geometry results in poor horizontal resolution and geolocation capability.

Active lidar sensors, such as the Cloud-Aerosol Lidar with Orthogonal Polarization instrument (CALIOP) on NASA’s Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) spacecraft, can detect aerosol layers with higher sensitivity than the nadir-looking passive sensors and provide accurate aerosol heights and horizontal positions. In particular, the low background density in the stratosphere (less than 10 particles per cubic centimeter at 20 kilometers) means that even fairly diffuse particles can be detected with lidar.

One of the challenges in detecting injection tests lies in distinguishing intentionally injected particles from naturally occurring particles. There may be some peculiarities with regard to spectral region, polarization, or geometrical behavior that would allow for their differentiation. For instance, nonspherical particles tend to depolarize the scattered photons from a polarized light source. Thus, if the scattered signals are resolved polarimetrically, lidar sensors can provide some information regarding the shape of the aerosols.

This view of Earth’s limb from space shows the various layers of the atmosphere. The pinkish-white layer constitutes the stratosphere, the region where an aerosol injection would probably occur.

The main disadvantages of using lidar sensors are the small field of view and relatively high power requirements. For example, CALIOP’s ground swath is only 100 meters wide, resulting in a 16-day revisit time, far too long for a single spacecraft to accomplish an effective monitoring mission.
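The 16-day figure is set by CALIPSO's orbit repeat cycle; a rough geometric estimate makes clear how far a 100-meter swath is from gap-free coverage. The orbit rate below is an assumed typical value for a low Earth orbit, not CALIPSO's exact parameters.

```python
import math

EARTH_RADIUS_KM = 6371.0
ORBITS_PER_DAY = 14.6  # assumed typical value for ~700 km LEO

def revisit_days(swath_km):
    """Days to sample the full equatorial circumference with a
    nadir-pointing swath, assuming non-overlapping tracks. Purely
    geometric; ignores orbit-design details and track spacing."""
    equator_km = 2.0 * math.pi * EARTH_RADIUS_KM
    # Each orbit crosses the equator twice (ascending + descending).
    sampled_per_day_km = 2.0 * ORBITS_PER_DAY * swath_km
    return equator_km / sampled_per_day_km

print(revisit_days(0.1))    # 100 m swath: thousands of days
print(revisit_days(100.0))  # hypothetical 100 km swath: ~2 weeks
```

Under these assumptions, gap-free coverage with a 100-meter swath would take on the order of 10,000 days, which is why a wider footprint (and hence more laser power) or a constellation is needed.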

Increasing the footprint of an orbiting lidar sensor would probably entail an increase in laser power, allowing the beam to be either spread out or split into multiple spots while maintaining sufficient power density for high sensitivity. While high-power solid-state and fiber lasers have been demonstrated on the ground, considerable development will be required to qualify any of these to meet the challenging requirements for use in space.

Detection of a particle injection test would require extensive analysis of the temporally and spatially colocated passive multispectral sensor data and lidar data. However, even with advanced spacecraft-based sensor systems, detection of the small tests would be difficult, given the background noise and infrequent revisit rate of a single spacecraft. A large constellation of spacecraft would reduce the revisit time, but the huge cost of such a system weighed against the benefit of quickly detecting a small particle-injection test would make it impractical. Given the high level of uncertainty and the lack of a background reference data set, the job of detecting, identifying, and monitoring such tests for treaty purposes will need to be shared and cross-checked by numerous assets (see sidebar, Pros and Cons of Solar Radiation Management).

Conclusion

Deterrence of unsanctioned and clandestine solar radiation management activities will require monitoring systems that can reliably detect early test phases involving relatively small amounts of particles. Doing so from space will be very challenging. Indeed, future treaty negotiations may need to consider alternative methods of monitoring such activities. As with monitoring nuclear tests, detecting clandestine particle-injection experiments and development activities will require a combination of techniques on the ground and in space. Still, given the strong need for improved understanding of the role of aerosols in the stratosphere, as well as the need to monitor volcano dust for airline safety, the impetus may exist for the development of a multifunction system of space-based sensors.

Acknowledgments

The authors would like to thank Carl Rice and Richard Walterscheid for their helpful advice related to this article.

An earlier version of this article appeared in the proceedings of the American Institute of Aeronautics and Astronautics Space 2010 Conference and Exposition.