Strained Silicon Photonic Devices for Optical Modulation

Phase response of a strained silicon modulator.

The demand for compact, low-cost components for optical communication, signal processing, and optical sensing is driving the rapid development of silicon photonics. Silicon, the chemical element at the heart of semiconductor electronics and integrated circuits, is the basis of most of today’s computers. Crystalline silicon can also be used to make photonic components, allowing electronics and photonic functionality to be integrated on a single chip-scale platform. Most of today’s optical systems rely on discrete optical fiber components or free-space optical elements. However, these components take up considerable space, thereby limiting the overall capacity and functionality that can be engineered into a small volume. National security space applications will benefit from the development of next-generation integrated high-speed optoelectronic circuitry based on silicon. The benefits of using silicon components in these systems include increased bandwidth capability, smaller physical footprints, and reduced power consumption.

A key building block in any photonic system is the electro-optical modulator, an optical waveguide device that imprints an electrical waveform or data stream onto an optical carrier. With this device, analog waveforms or data bits are encoded by shifting the amplitude or phase of the light that passes through it. The phase or amplitude shifts are generated by the applied electrical waveform, which alters the refractive index of the modulator material (the electro-optic effect). Because of the cubic symmetry of the silicon crystal lattice, however, this material does not naturally exhibit this effect. Other techniques, such as charge-carrier depletion and thermo-optic tuning, have been used to modulate the refractive index of silicon, but these have limited data bandwidth capabilities and consume excessive electrical power.

By introducing strain into the silicon crystal structure, silicon can exhibit electro-optic activity. Aerospace researchers are exploring this strained silicon approach. Andrew Stapleton, a member of the technical staff, Photonics Technology Department, is the principal investigator of a team that is using strained silicon to make electro-optical modulators. The research team includes Peter DeVore, Heinrich Muller, and Todd Rose, all of the Photonics Technology Department. These team members have supported several national security space programs in which lithium niobate optical modulators have been used. The team is applying findings from this work to its research into silicon straining techniques. The researchers have found that the silicon modulators they are developing do not appear to degrade in the space environment, unlike lithium niobate modulators, which are currently being used for this application.

Scanning electron microscope image of a cross section of the device showing a one-micrometer-wide buried silicon waveguide.

To achieve strain in silicon waveguides, a thin layer of silicon dioxide is deposited over the crystalline silicon optical waveguides. The strained silicon modulators are fabricated at Aerospace with commercially available silicon-on-insulator wafer substrates. These wafers have a 0.26-micrometer-thick silicon device layer over a 2.0-micrometer layer of silicon dioxide. A thicker silicon substrate lies under the oxide. The top silicon layer is sufficiently thin to support a single transverse optical mode of light at a wavelength of 1.5 micrometers. Before the dielectric straining layer is deposited onto the device, the waveguide patterns are first defined in a photoresist mask on top of the wafer substrates. Then, a sulfur hexafluoride plasma etching process is used to transfer the waveguide patterns from the mask to the underlying silicon layer, and the photoresist mask is removed. Once the optical device layer is fabricated, a 1.5-micrometer silicon dioxide top layer is deposited in a plasma-enhanced chemical vapor deposition system at 300 degrees Celsius. Because of the difference between the coefficients of thermal expansion for silicon and silicon dioxide, the silicon waveguide core is strained as the structure cools to room temperature.

Because strain distorts the silicon crystal lattice’s symmetry, strained silicon exhibits an electro-optic effect so that the index of refraction of the crystal waveguide, and hence the phase of the transmitted light, can be controlled with an applied electric field. “The first experiments involved simple optical phase modulators in which gold electrodes were deposited on the waveguide samples,” said Stapleton. The researchers placed the silicon modulators in a free-space optical interferometer, quantifying the optical phase change under various voltages. This technique allows for estimating the strained silicon’s nonlinear susceptibility, a key performance metric.

Numerical simulations using COMSOL (a software program for modeling and simulating scientific and engineering problems) indicate that 2.5 percent of the total voltage was applied across the silicon waveguide, with the bulk of the voltage dropped across the electrically insulating oxide above and below the waveguide core. From this analysis, the nonlinear susceptibility was estimated to be 2.7 picometers per volt. While this value is less than that of gallium arsenide or lithium niobate (99 and 360 picometers per volt, respectively), further improvement is expected if the magnitude of the strain in the waveguide can be increased.
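To connect the susceptibility number to device behavior, a simple phase-shift estimate can be made. The sketch below uses the figures above (a 2.7-picometer-per-volt susceptibility and 2.5 percent of the drive voltage across the core) together with device dimensions that are purely illustrative assumptions (a 10-volt drive, a 1-micrometer field gap, and a 1-centimeter interaction length); convention-dependent factors of two in the susceptibility definition are glossed over:

```python
import math

# Illustrative phase-shift estimate for a strained-silicon phase modulator.
# chi2 and the 2.5% voltage fraction come from the text; the drive voltage,
# gap, length, and refractive index are assumptions for illustration only.
CHI2 = 2.7e-12       # nonlinear susceptibility, m/V (2.7 pm/V)
N_SI = 3.48          # refractive index of silicon near 1.5 micrometers
WAVELENGTH = 1.5e-6  # operating wavelength, m

def phase_shift(v_drive=10.0, v_fraction=0.025, gap=1e-6, length=1e-2):
    """Optical phase shift from the strain-induced electro-optic effect."""
    e_field = v_drive * v_fraction / gap  # field actually across the core
    delta_n = CHI2 * e_field / N_SI       # small-perturbation index change
    return 2.0 * math.pi * delta_n * length / WAVELENGTH

print(f"phase shift: {phase_shift():.2e} rad")
```

The milliradian-scale result for a 10-volt drive illustrates why increasing the strain, and with it the susceptibility, is the focus of further work.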

For this reason, the Aerospace research team has been working to understand and quantify strain in fabricated devices under various waveguide geometries and processing conditions. One analytical technique being used is micro-Raman spectroscopy. In this technique, light from a blue laser is focused on the waveguide sample, exciting the optical phonon modes of the silicon crystal lattice. The scattered light emitted from the sample surface is then collected and analyzed under a microscope. Unstrained silicon has a characteristic Raman peak at 520 reciprocal centimeters.
When the analyzed sample is a strained silicon waveguide, the Raman peak shifts to either a higher or lower wave number (corresponding to compressive or tensile strain, respectively). The magnitude of this shift can be used to quantify the level of strain in the waveguide.
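To first order, the conversion from Raman peak shift to strain is linear. The helper below sketches this conversion; the shift coefficient is a representative literature value for silicon chosen for illustration (the true coefficient depends on the strain geometry), not a number reported by the team:

```python
# Convert a measured silicon Raman peak position to an estimated strain.
# The shift coefficient is a representative literature value (an assumption);
# its magnitude depends on whether the strain is uniaxial or biaxial.
REF_PEAK = 520.0      # unstrained silicon Raman peak, reciprocal centimeters
SHIFT_COEFF = -337.0  # peak shift per unit strain, cm^-1 (illustrative)

def strain_from_raman(peak_cm1):
    """Positive result = tensile strain (peak shifted to lower wavenumber)."""
    return (peak_cm1 - REF_PEAK) / SHIFT_COEFF

# A peak measured 0.8 cm^-1 below the unstrained value implies tensile strain.
print(f"strain: {strain_from_raman(519.2):.2e}")
```
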

Raman spectra collected from a region of the unstrained silicon substrate (solid dots) and a one-micrometer-wide strained waveguide (open dots). The lines are numerical fits to a Lorentzian profile.

“The silicon waveguide samples studied with micro-Raman spectroscopy were found to be under tensile strain,” said Stapleton. Similar Raman data from other waveguides with different widths indicates that the magnitude of this strain increases as the width of the waveguide is reduced. For example, the strain in 0.5-micrometer-wide waveguides was approximately twice the strain observed in 1.0-micrometer-wide waveguides. These findings indicate that the electrical-to-optical conversion efficiency (i.e., gain) of strained silicon modulators can be significantly improved by fabricating devices with narrower waveguides.

In addition to developing a silicon optical phase modulator, the research team has demonstrated a fully integrated optical intensity modulator containing two strained silicon phase modulators along with waveguide splitters and combiners in a Mach-Zehnder interferometer configuration. This demonstrates that strained silicon devices can be integrated with other optical components to form more complex optical systems on a single chip.

While the development of strained silicon modulators is very much in its infancy, this approach points to the feasibility of devices that are more chemically inert and tolerant of space environments compared to current state-of-the-art lithium niobate modulators. Further work will need to focus on improving the overall modulation efficiency of strained silicon modulators before this class of devices can be considered a viable substitute.

Toward a Better Understanding of Electrostatic Discharge

If current spacecraft power trends toward increased solar array size and operating voltage continue, the risk of on-orbit performance losses from electrostatic discharge (ESD) will also increase. Without proper electrical charge mitigation, solar arrays and other satellite components are subject to significant charging in the space environment. Such charging can lead to substantial and frequent electrostatic discharge, which is a sudden flow of electricity between two differently charged objects, sometimes causing an on-orbit anomaly.

Spacecraft ESD is a complex phenomenon that can disrupt normal operations and even destroy solar array circuits and other electronics. Studies have been conducted since the 1970s to expand the awareness and understanding of these events, leading to the development of analysis tools such as NASCAP-2K (NASA/Air Force Research Laboratory [AFRL] spacecraft charging analysis program). This next-generation spacecraft charging analysis code can create realistic geometric models, simulate charging, and calculate and display surface potential/currents, space potentials, and particle trajectories.

As sophisticated as many of these analysis tools have become, they still lack some of the critical parameters and physics needed to make accurate predictions for ESD in the space environment. Decades of laboratory tests have attempted to establish electrical charge dynamics and propagation, but there is limited consensus on the basic mechanisms and fundamental quantities of these effects, including propagation velocity, which helps to determine the magnitude of current transients. Spacecraft developers therefore continue to rely on the testing of components in a simulated space environment to create high-performance designs.

In an effort to develop more accurate prediction techniques, Jason Young, a member of the technical staff, and Mark Crofton, a senior scientist, both of the Propulsion Science Department at The Aerospace Corporation, have been conducting research to better understand the dynamics and scaling of electrostatic discharge on large satellite components, including solar arrays and multilayer insulation. They are conducting this testing in ESD1, Aerospace’s new space-simulation vacuum facility, built to carry out electrostatic discharge testing in a more optimized way (most of the historical Aerospace work was performed in the EP2 electric propulsion chamber).

View from end port of ESD1 vacuum chamber in D1. The US Round Robin Coupon from NASA GRC is mounted inside for inverted gradient charging tests.

“Although we know a great deal about how surface charge builds up on spacecraft components, many details of how that charge is released are uncertain. This makes it difficult to properly predict the shape, peak amplitude, and duration of the discharge current for array-size structures, which is the ultimate determinant for risk of component damage or failure,” said Young.

Over the past several decades, the entrenched paradigm for electrostatic discharge has been the brushfire model. According to this model, an ESD current radiates outward at a constant speed from a single central arc point, instantly clearing all surface charge along the propagating front. As the electrical charge is cleared from dielectric surfaces like coverglass, currents are generated on adjacent conductors, like solar cells.

The brushfire model assumes the propagation of ESD occurs at a fixed speed that is independent of surface morphology or composition. However, experiments on complex test components that measure this propagation have often produced varied results.
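The brushfire model's predictions for a concentric-ring electrode geometry are easy to sketch numerically. The toy simulation below (illustrative values throughout; not the team's analysis code) clears surface charge behind a front expanding at constant speed and records the neutralization current induced on each ring annulus:

```python
import numpy as np

def brushfire_ring_currents(ring_edges, v=1e5, sigma=1e-5, dt=1e-8, t_max=6e-6):
    """Toy brushfire model: a charge-clearing front expands at constant speed
    v (m/s) from a central arc over a surface of charge density sigma (C/m^2).
    Returns the time axis and one induced-current trace per ring annulus."""
    t = np.arange(0.0, t_max, dt)
    front = v * t  # radius of the propagating front at each time step
    traces = []
    for r_in, r_out in zip(ring_edges[:-1], ring_edges[1:]):
        crossing = (front >= r_in) & (front < r_out)
        # current = rate of charge cleared inside this annulus
        traces.append(np.where(crossing, sigma * 2.0 * np.pi * front * v, 0.0))
    return t, traces

# Five annuli with edges at 0.1 m intervals (illustrative geometry).
t, traces = brushfire_ring_currents([0.1, 0.2, 0.3, 0.4, 0.5])
onsets = [int(np.argmax(tr > 0)) for tr in traces]
peaks = [float(tr.max()) for tr in traces]
```

The model predicts strictly sequential ring onsets and peak currents that grow linearly with ring radius, which is exactly the behavior the ring-coupon measurements contradict.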

Young and Crofton’s research has so far revealed that ESD is much more complicated than the brushfire model suggests. Instead of using a complex, reduced-scale solar array coupon, which is the standard component for electrostatic discharge testing, the researchers have used a one-square-meter panel of concentric ring electrodes covered in Kapton, a polyimide film used in flexible printed circuits. The panel has been irradiated with electrons to create a net positive surface charge that is similar to conditions found in geosynchronous Earth orbit during an extreme space weather event. Electrical arcs were generated from a defect at the test panel’s center, resulting in induced electrical currents on the ring electrodes.

“By measuring these currents, we could determine the radial progression of an electrostatic discharge event. The data show that instead of a sequential removal of charge from the innermost to outermost ring, charge removal begins on all rings nearly simultaneously, with the charge removal on more distant rings taking more time. The data also show that peak current amplitude is not proportional to ring radius, which is in contradiction to the brushfire theory,” said Young.

Surface charge neutralization current for a single ESD event as a function of time and radial distance from inception point on the ring coupon. (Inset) Image of the Kapton-covered ring coupon.

The researchers have also found that during an ESD event, electrical current grows more or less linearly with time, with brief, intermittent increases and decreases of current rapidly propagating from inner to outer ring electrodes. Because the inner ring electrodes are closer to the panel center and have smaller surface areas, their released charge fraction changes more rapidly. As the remaining surface charge decreases, the electrode current declines exponentially.

“These characteristics of electrostatic discharge are reminiscent of a diffusion process, which has important implications for components operating in space. Since electrostatic discharge propagation has been found to be dispersive in these tests, a peak anomaly current does not necessarily increase linearly with distance, as was originally believed. In fact, it seems to increase more slowly, suggesting more favorable scaling for large spacecraft components,” said Young.

Moving forward, Young and Crofton plan to introduce more variations in the ring panel’s surface and morphology and the composition of the electrical arc initiation point. The researchers are hoping to correlate the ring electrode current patterns using external diagnostic instruments such as Langmuir probe arrays and imaging spectrometers. Once these instruments are calibrated, they could then be transitioned to test more complex satellite components, such as solar array coupons and multilayer insulation sheets, in which the electrode patterns are not as optimized for measuring electrical current.

The Aerospace researchers are also participating in a series of tests with AFRL and other ESD researchers around the country. In similar setups at multiple facilities, various solar array coupon designs are being tested to measure ESD propagation characteristics in simulated geosynchronous and low Earth orbit charging environments. The team is also working with a pulsed laser that can produce ESD on demand. Interestingly, the researchers have found that the ESDs produced at specific intervals by the laser varied more significantly in magnitude and duration than those produced spontaneously. This technique is anticipated to produce insights into ESD initiation thresholds and electrical arc rates.

Through the development and application of new approaches like these, the team hopes to augment the current database and understanding of ESD in the space environment, improve qualification test procedures, and reduce the risk of disruptive on-orbit ESD events.

Mobile Code and COAST: Potential Allies to Cybersecurity in the Cloud

Cloud computing presents an attractive option for hosting digital services where storage and computational demands may vary significantly over time. Computing clouds are elastic infrastructures that can be quickly expanded or contracted in response to variations in service demands or computational needs. These modern clouds rely on virtual machines to emulate the hardware and peripherals of physical servers, with multiple virtual machines executing on a single physical server. Modern large-scale clouds, offered by commercial vendors such as Amazon, Google, and Microsoft, are provisioned on a massive scale. For example, the Amazon cloud is estimated to contain approximately 500,000 high-performance physical servers, and the Google cloud is believed to be significantly larger.

Government agencies can reduce costs by sharing a cloud with other organizations and users, while the cloud infrastructure transparently manages service load variations behind the scenes. Ample cloud computing and networking resources simplify the design, implementation, and deployment of critical information services, while spikes in service demands can be accommodated within minutes. However, these resources and the flexibility they offer come at a price: this same massive cloud-computing infrastructure can be used to launch cyberattacks against other information services within and outside the cloud.

Mobile code is a technology that allows the seamless transfer of running computations from one network location to another, or equivalently from one virtual machine to another, and therefore is an attractive complement to cloud-based virtual machines. Mobile code simplifies the construction and deployment of cloud-based services by allowing programs to be dynamically moved to different virtual machines as the cloud is expanded or contracted. In addition, services using mobile code can be deployed on the fly to modify or augment services in response to evolving missions or the immediate, urgent demands of a cyberattack.

However, due to its easy deployment and flexibility, mobile code is also a superlative tool for mounting attacks against a system. Thus the same mobile code technology that promotes system flexibility and adaptivity also puts systems at considerable risk. Can the benefits of mobile code technology be enjoyed without increasing the risk of a cyberattack? At a minimum, the mobile code and its supporting infrastructure must be secured against attack—a difficult and vexing problem for which no compelling solution has been articulated.

To counter these threats, Michael Gorlick, senior engineering specialist, Computer Systems Research Department, and Richard Taylor, a professor of information and computer sciences at the University of California, Irvine, are investigating mobile code options for securing against cyberattacks that arise from within the cloud. Gorlick and Taylor have developed COAST (COmputAtional State Transfer), an architectural style based on mobile code for Internet-scale network services, in which the mobile code itself becomes a critical element in a coordinated defense against cyberattacks.

“COAST is based on capability security, which fuses the designation of a service and access to that service into a single, unforgeable reference—a capability,” said Gorlick. The COAST architectural style prescribes cloud-based mobile code systems with a set of simple yet powerful rules. It defines a set of mechanisms that allow programs to defend themselves against hostile or malicious visitors, halt communication with ill-behaved programs in the cloud or network, and prevent unrecognized programs from communicating directly with the COAST-based services residing within the cloud.

“COAST confines visiting mobile code with restrictive functional, resource, and communication capabilities, defining what the mobile code can do, and limiting its resource consumption and communication powers. The visiting code has only the powers granted to it and nothing more,” said Gorlick. For example, if a program is not explicitly granted a capability to read a sensitive database, then it will not be allowed access to it.

COAST denotes individual programs with capability uniform resource locators (CURLs), which are the carriers for communication across a network. “These CURLs are cryptographic structures that are impossible for any other computation to guess, counterfeit, or modify without being caught,” said Gorlick. Programs within COAST rely on communication by introduction. For example, program X cannot communicate with program Y unless it holds a CURL explicitly allowing the communication. This communication by introduction helps to ward off denial of service attacks, eliminates spurious communications, hinders theft of service, detects the misuse of communications, assigns blame, and revokes communication privileges as needed. CURLs also enforce communication constraints, including the total number of times that X can communicate with Y, the maximum rate and bandwidth at which X can communicate with Y, and the span of time during which the communication is permitted. The mechanisms of mobile code can enforce complex communication constraints that implement temporal, spatial, and collaborative requirements.
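The flavor of capability security that CURLs provide can be illustrated with a short sketch. This is not the Motile/Island implementation (which is written in Scheme and uses cryptographic structures); it is a hypothetical Python analogue showing how an unguessable reference can bundle the designation of a target with use-count and lifetime constraints:

```python
import secrets
import time

class Capability:
    """Hypothetical analogue of a CURL: an unguessable reference to a target
    program, bundled with communication constraints (use count and lifetime).
    A real CURL is a cryptographic structure; this sketch only shows the idea."""
    def __init__(self, target, max_uses=3, lifetime_s=60.0):
        self.token = secrets.token_hex(16)  # unguessable designation
        self._target = target
        self._remaining = max_uses
        self._expires = time.monotonic() + lifetime_s

    def send(self, message):
        """Deliver a message only while the capability's constraints hold."""
        if time.monotonic() > self._expires:
            raise PermissionError("capability expired")
        if self._remaining <= 0:
            raise PermissionError("use count exhausted")
        self._remaining -= 1
        return self._target(message)

# Communication by introduction: Y can reach X only via a granted capability.
inbox = []
cap = Capability(inbox.append, max_uses=2)
cap.send("hello")
cap.send("world")
# A third send would raise PermissionError: the constraint is enforced.
```

Because the only route to the target runs through the capability object, exhausting or revoking it cuts off communication entirely, which is the property that lets CURLs bound rates, use counts, and time spans.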

Controlling the propagation of capability among programs is critical to minimizing the risks of cyberattack.

Gorlick and Taylor have now implemented Motile, a mobile code programming language consistent with the precepts of COAST, and Island, a decentralized, peering infrastructure capable of safely executing mobile Motile programs. Motile/Island resolve the conundrum of safe mobile code by repeatedly applying capability security at all levels of a given infrastructure. The same capability security that protects Island against errant or deliberately malicious Motile programs also protects the infrastructure itself from cyberattacks. Both the Motile compiler and the Island infrastructure are written in Scheme, a programming language with a long history in mobile code experimentation. The Island infrastructure is the service-oriented architecture equivalent of service providers and consumers; a single “island” can simultaneously act as both a provider and consumer.

“By embedding specific mobile code into CURLs, we can ensure that whenever program Y attempts to exploit a capability designated for program X, its effort will fail. Controlling the propagation of capability among programs is critical to minimizing the risks of cyberattack, and mitigating the likely damage of an attack is a core element of our ongoing research,” said Gorlick.

Gorlick and Taylor are also exploring how COAST can be used in a variety of space settings and domains, including for the distribution and manipulation of high-bandwidth satellite telemetry. COASTcast, a demonstration application implemented with Motile/Island, permits users to view, share, and manipulate real-time, high-definition satellite video streams. Their research suggests that COAST-based architectures are feasible in domains characterized by multiple, continuous data streams, including for satellite ground stations.

“One distinguishing characteristic of the COAST architectural style is its complete indifference to the actual location or precise formulation of mobile computations. In particular, Motile/Island can execute on a broad scale of platforms and in a variety of network environments, from embedded single-board computers to a cloud of virtual machines,” said Gorlick.

Gorlick is also working with Larry Miller, principal engineering specialist, Software Engineering Subdivision, and David Wangerin, Computer Systems Research Department, to consider new spacecraft designs in which all spaceborne services, including payloads, are implemented in the COAST architectural style. The researchers have found that mobile code has the potential to simplify many troublesome aspects of spaceborne architectures, including autonomous fault recovery, hot updates (updating onboard software without pausing or rebooting), and processor failure.

For example, automated responses can simply abandon a processor if it fails, and by using mobile computations, reconstitute the services elsewhere within the spacecraft, on another nearby spacecraft, or perhaps (real-time constraints permitting) on the ground. A testbed infrastructure based on low-cost, single-board computers is now being constructed for prototyping common satellite bus subsystems such as thermal control, electrical power distribution, and attitude control. Here, the challenges include adapting COAST to the rigors of spaceborne systems, recasting traditional embedded designs for mobile computations, and ensuring the effective and timely execution of mobile computations on resource-constrained platforms.

What do all these topics have in common? They were all supported by the Aerospace Technical Investment Program (ATIP).

Aerospace considers it vitally important to stay on the cutting edge of research and technology, and ATIP is one way it accomplishes that goal. ATIP is an internal company program that allows Aerospace scientists and engineers to complete research and development on a variety of important topics that might otherwise be neglected.

“The ATIP program serves multiple objectives, from innovation to supporting a responsive technical staff,” said Sherrie Zacharius, vice president of Technology and Laboratory Operations. “Leveraging technology and innovation to help our customers overcome today’s and tomorrow’s challenges is critical, and ATIP investments play a crucial role.”

Mark Goodman, principal director of Strategic Planning at Aerospace, agreed on the importance of ATIP.

“The ATIP program is important because it is a key source of innovative ideas at the company, and because it gives the technical staff the chance to work with advanced technologies in their fields,” he said.

Types of ATIP Projects

Independent Research and Development (IR&D): IR&D projects are targeted research and development projects that often lead to the invention of novel technologies or new technical capabilities. IR&D projects vary in length, but are typically less than four years.

Engineering Methods (EM): Like IR&D, EM projects are shorter-term, focused development efforts. The emphasis in this case is on the development of software and engineering tools.

Sustained Experimentation and Research for Program Applications (SERPA): The goal of the SERPA program is to carry out research in broad technology topic areas that are continually advancing and are of critical, ongoing importance to the corporation and the national security space enterprise.

Long-Term Capability Development (LTCD): LTCD projects focus on the development and/or maintenance of technical capabilities and tools for which there is a sustained corporate need.

ATIP is managed by the Research and Program Development Office (RPDO), with Principal Director Dr. Randy Villahermosa at the helm. Villahermosa, with the help of senior scientist Dr. Terence Yeoh, has the exciting, yet challenging, task of sorting through competing research proposals and generally managing the ATIP budget.

“We are stewards of this precious corporate resource and see it as both a privilege and a responsibility. We’re never satisfied when it comes to finding ways to nurture and support research and development at Aerospace,” Villahermosa said.

To be as effective as possible, ATIP comprises four different programs (see sidebar). This portfolio of options allows ATIP to be both responsive and forward-looking.

RPDO is currently administering about 300 different ATIP projects involving approximately 700 Aerospace employees.

One of those employees is Dr. Eugene Grayver, a senior engineering specialist who is working on an IR&D project trying to send a strong signal and receive a weak signal simultaneously on the same antenna at the same frequency.

“This is a very exciting [project] because the expected result has been considered impossible for many years,” he said. “I’ve been thinking about this problem for a few years now and it started to seem less ‘impossible’ than some sources claim.”

Grayver considers ATIP essential. “Having employees work on these ‘out-there’ projects … keeps our skill current and exposes us to the latest and greatest developments from industry and academia,” he said. “We have to be able to see novel solutions to the problems faced by our customers and the contractors.”

Dr. Allyson Yarbrough, a principal engineer, is working on an IR&D project assessing a commercial capacitor for possible use in spacecraft. She also expressed the importance of ATIP.

“Our ATIP program is an incubator for innovation,” she said. “Four of the five patents I have earned have been associated with an ATIP project.”

An individual ATIP project can certainly be valuable and interesting. However, ATIP is also valuable on a larger scale. An example of this is the propulsion capabilities at Aerospace. Dr. Tom Curtiss, director of the Propulsion Science Department, explained that his department has benefited greatly from ATIP.

SERPA money has been used to build up the electric propulsion, chemical propulsion, and multipaction facilities. IR&D funding is also used to look at new technologies. Most recently, ATIP has supported the development of a new Propulsion Research Facility. The propulsion labs are used to support DOD programs (e.g., AEHF and WGS), as well as civil and commercial programs. Clearly, they would not be what they are today without the contributions of ATIP.

Of course, ATIP is not the only way for employees to do research and development work at Aerospace.

Nonetheless, ATIP plays an important role at Aerospace, supporting innovation and enhancing the company’s technical reputation. It also provides an important outlet for scientists wishing to do research.

“This isn’t just about allocating funds and reviewing proposals,” Villahermosa said. “It is about people wanting to pursue these endeavors that mean a lot to them.”

If you’re interested in reading more about those endeavors, check out these articles featuring projects that had ATIP support:

The Vapor Cell Atomic Clock

John Coffer, Jeremy Milne (in back), and James Camparo stand in front of their laser-pumped, rubidium-atom, vapor-cell-clock test bed.

Advanced atomic clocks, suitable for space deployment, must allow for extended periods of autonomous constellation operation, which enhances system robustness. Additionally, advanced atomic clocks can lower a mission control station’s workload. Air Force space programs that depend on precise timekeeping, such as GPS, Milstar, and Advanced EHF (AEHF), place constraints of size, weight, power, and environmental sensitivity on the spacecraft atomic frequency standards.

James Camparo, Electronics and Photonics Laboratory, said, “The specific objective of this effort is to develop prototypes of advanced rubidium vapor-cell and cesium atomic-beam spacecraft clocks, and to aim the development of these prototypes toward improving performance while reducing the overall size, weight, and power of the clock.” The development of these prototypes is designed to help solve the scientific and engineering problems confronting next-generation spacecraft clocks. Those working on this effort also include John Coffer and He Wang, both of the Photonics Technology Department.

The operation of an atomic clock requires the creation of a population imbalance between two atomic states connected by a microwave transition: the greater the imbalance, the better the frequency stability of the clock. In current rubidium clocks, such as those used for GPS, the population imbalance is created by optical pumping with a discharge lamp. For these devices, fractional population imbalances of ~0.1 percent are typical. Theoretical work conducted by Camparo and his team has shown that the population imbalance could be increased by nearly two orders of magnitude using a diode laser. Additionally, efforts are underway to use coherent (laser-induced) atomic excitation processes to generate atomic clock signals without a population imbalance. These efforts are aimed at chip-scale atomic clocks and take advantage of a phenomenon called coherent population trapping (CPT). While most research organizations focus on ground-based standards, The Aerospace Corporation’s laser-pumped rubidium clock activities concentrate on compact devices suitable for space applications. Two significant problems in this area include understanding the origin of excess noise in laser-pumped clock signals (and developing means for its mitigation), and creating means for smart-clock technology (i.e., a clock that senses and corrects perturbations that could lead to frequency instability).

Michael Huang adjusts a diode laser used to generate an atomic clock signal. In this experiment, the microwave signal is superimposed on an optical carrier, a technology that has allowed atomic clocks to reach “chip-scale” dimensions.

In the cesium-beam clocks used in GPS, Milstar, and AEHF, a population imbalance between atomic states is achieved by passing an atomic beam through state-selecting magnets. These magnets transmit less than 1 percent of the atoms in the beam. Previous studies conducted by The Aerospace Corporation showed that 100 percent of the beam could be used if magnetic state selection were replaced with laser optical pumping. In addition to increasing the clock signal, optical state preparation uses the clock’s cesium supply efficiently, increasing clock lifetime. Though laser-pumped beam clocks in many other laboratories are large instruments in carefully controlled environments, the efforts at The Aerospace Corporation focus on compact, lightweight devices suitable for spacecraft use.
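A crude shot-noise estimate shows why using the whole beam matters: if detection is shot-noise limited, SNR scales as the square root of the number of atoms contributing to the clock signal. The beam flux below is a made-up illustrative number.

```python
import math

# Shot-noise sketch: magnetic state selection passes <1% of the beam,
# while laser optical pumping can prepare ~100% of it. With
# shot-noise-limited detection, SNR ~ sqrt(N_detected).
total_flux = 1e10        # atoms/s reaching detection (illustrative assumption)
frac_magnetic = 0.01     # <1% transmitted by state-selecting magnets
frac_optical = 1.0       # ~100% with laser optical pumping

snr_magnetic = math.sqrt(total_flux * frac_magnetic)
snr_optical = math.sqrt(total_flux * frac_optical)
print(f"SNR gain from optical pumping: {snr_optical / snr_magnetic:.0f}x")
```

Under this simple model, a hundredfold increase in usable atoms buys a tenfold SNR improvement, in addition to the cesium-supply savings noted above.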

A second major application of lasers in cesium beam clocks relates to atomic momentum manipulation. Using lasers to slow the speed of atoms (i.e., longitudinal cooling) increases the time that the atoms spend in a microwave cavity, thus narrowing the clock transition’s line shape. Transverse cooling results in the beam’s collimation and “brightening,” thus improving the clock’s signal-to-noise ratio. A significant technological problem addressed in this area is the creation of a cold, continuously operating (as opposed to pulsed) atomic-beam clock for use onboard spacecraft.
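The line-narrowing argument can be made concrete with the standard transit-time estimate: the clock transition's linewidth is roughly the inverse of twice the time T = L/v that an atom spends in the microwave cavity. The cavity length and beam velocities below are illustrative assumptions, not parameters of an actual Aerospace device.

```python
# Transit-time line narrowing: linewidth ~ 1/(2*T), with T = L/v the
# time the atom spends interacting with the microwave field.
def linewidth_hz(velocity_m_s, cavity_length_m=0.2):
    transit_time = cavity_length_m / velocity_m_s   # T = L / v
    return 1.0 / (2.0 * transit_time)

thermal = linewidth_hz(200.0)   # roughly thermal-beam cesium velocity
cooled = linewidth_hz(5.0)      # laser-slowed beam (illustrative)
print(f"thermal beam linewidth: {thermal:.0f} Hz")
print(f"cooled beam linewidth : {cooled:.1f} Hz")
```

Slowing the atoms by a factor of 40 narrows the line by the same factor, which is the payoff of longitudinal laser cooling described above.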

Camparo said, “Over the years, MOIE atomic clock investigations have provided the basis for continuous technical support to the Air Force and national security space programs. This support has primarily involved on-orbit anomaly resolution, assistance in manufacturer clock development efforts, and simulations of system-level timekeeping.”

In the coming year, The Aerospace Corporation’s research team will continue to operate its atomic-clock flight simulation test bed for Milstar/AEHF rubidium atomic clocks. In particular, this will include exercising the rubidium clock under stressing conditions and developing means to mimic the behavior of a mixed Milstar/AEHF constellation. Also investigated is the operation of RF-discharge lamps that produce the atomic signal in the rubidium clocks flown on Milstar, AEHF, and GPS satellites. These investigations have shown that RF power variations in the lamp’s circuitry primarily affect the lamp’s operation via heating of the rubidium vapor within the lamp. This may have implications for explaining anomalous on-orbit clock frequency jumps observed for a number of GPS satellites. The team also continues to examine integrity monitoring for the GPS system, where the clock autonomously senses that a problem has occurred and sets the satellite’s navigation message to nonstandard code. While the second harmonic signal from the rubidium clock is used as a status-of-health indicator, it is not understood how this signal depends on various clock parameters; research is aimed at addressing that question. Finally, the team constructed a Monte Carlo simulation of AEHF system timekeeping and used it to verify the contractor’s ability to meet certain system-level requirements.

Advanced Visible and Infrared Focal-Plane Sensors

Space-based electro-optical (EO) imaging systems collect vast quantities of data across various spectral regimes from a wide range of orbital altitudes. These systems range in size and complexity from units as small as consumer cameras to structures as large as NASA’s Webb infrared (IR) telescope/observatory with its 20-foot-diameter primary mirror. At the heart of an EO system, focal-plane imaging chips convert optical data into electronic analog (and eventually digital) signals for each pixel.

An Aerospace study, “Advanced Visible and Infrared Sensors,” has been investigating characteristics of these devices—in particular, signal, noise, and image quality. Funded by the Mission Oriented Investigation and Experimentation (MOIE) program, the study has examined how those properties are analytically modeled, as well as their experimental characterization. The experimental work is particularly important in diagnosing anomalies and design errors and in describing the devices’ fundamental imaging properties, thus providing feedback for design improvement.

Terry Lomheim, distinguished engineer in the Sensor Systems Subdivision, explained that “Visible and IR focal-plane devices are complex, mixed-mode (analog and digital) light-sensing integrated circuits (ICs). The most familiar ones—charge-coupled devices (CCD) and complementary metal-oxide semiconductor (CMOS) focal planes designed for detecting light in visible wavelengths—are part of cell phone cameras, camcorders, and digital still cameras. They consist of single (monolithic) silicon IC chips with numerous pixels wherein light enters through the frontside of the device.” Lomheim is the team’s principal investigator, and his coinvestigators are Jonathan Bray and Bruce Lambert of the EO Device Evaluation Lab and Jerry Johnson and Jeff Harbold of the Visible and IR Sensor Systems Department.

One motivation behind the project, Lomheim noted, is the fact that lower payload mass, power, and volume result in lower sensor-system life-cycle costs: “Smaller payload mass and power level increase compatibility with commercial spacecraft buses, for instance, and allow the use of lower-cost launch systems,” he said. “Improved radiation hardness may allow the use of orbital altitudes that are associated with higher space radiation dose levels, but are more cost-effective in terms of overall sensor constellation architecture.” Moreover, he said, advances in payload signal processing can reduce the cost of ground systems. “Visible and IR camera systems that collect images in many spectral bands, measure changes in the polarization of light, or operate at extremely low light levels all might enhance the information-extraction ability—and therefore the utility—of space EO sensor missions.”

The DOD and NASA have used advanced versions of these devices for several decades, and the architectures are maturing, with higher detection efficiencies, improved sensitivity, higher frame rate (the rate at which unique images are consecutively produced), larger pixel formats, and on-chip analog-to-digital conversion (ADC). On-chip ADC enables these devices to operate in a “photons in, bits out” manner.

Advanced devices include extremely thin silicon imagers that collect light through the backside for enhanced sensitivity, as well as hybrid imagers. In the hybrid imagers, a grid of light-sensing pixels is mated to a corresponding grid of pixel unit cells inside a readout integrated circuit (ROIC). These pixel unit cells process the signal photocurrent, converting it to signals in the voltage domain. Each one contains a photocurrent/charge-to-voltage conversion preamplifier with a minimum of three transistors.
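The preamplifier's basic function, converting integrated photocurrent into a voltage, can be sketched in a few lines. The photocurrent, integration time, and sense-node capacitance below are hypothetical values chosen for illustration, not parameters of any particular ROIC.

```python
# Sketch of the pixel unit cell's charge-to-voltage conversion:
# photocurrent integrates onto the sense-node capacitance during the
# exposure, and the readout sees V = Q/C.
ELECTRON_CHARGE = 1.602e-19  # coulombs

def pixel_output_volts(photocurrent_a, integration_s, node_capacitance_f):
    charge = photocurrent_a * integration_s   # Q = I * t
    return charge / node_capacitance_f        # V = Q / C

v = pixel_output_volts(photocurrent_a=1e-12,       # 1 pA signal (assumed)
                       integration_s=10e-3,        # 10 ms frame (assumed)
                       node_capacitance_f=10e-15)  # 10 fF sense node (assumed)
electrons = 1e-12 * 10e-3 / ELECTRON_CHARGE
print(f"collected ~{electrons:.0f} e-, output swing {v:.2f} V")
```

The same arithmetic also shows why the sense-node capacitance sets the trade between conversion gain and well capacity.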

The wide variety of EO camera applications dictates a wide diversity of focal-plane requirements for parameters such as line rate, frame rate, dynamic range, linearity, operability, noise/sensitivity levels, and radiation hardness. As a result, a broad range of operating characteristics is needed, one that includes distinctly different focal-plane pixel unit cell electronics, multiplexing circuits, numbers of ADCs, operating modes, and operating temperatures. Aerospace has been examining how to optimize focal-plane designs to meet the appropriate signal-to-noise and image-quality requirements despite the limitations of the detecting material technologies and the IC manufacturing process.

In the area of advanced signal and noise modeling, Lomheim’s team has concentrated on focal plane arrays (FPAs) with built-in ADC capability, novel unit cell ROICs, circuits optimized for processing multispectral and hyperspectral data, and new detector technologies that span the visible to longwave IR region. Special pixel unit cells capable of wide dynamic range and low noise may further enhance these applications.

In 2009, the team also concentrated on predictive signal and noise models for focal-plane devices that use smaller photolithographic design features. These devices will be manufactured using 0.18-micron CMOS design rules and must function at cryogenic temperatures. This represents a new operating regime for the key transistors in the mixed-mode pixel unit cell circuits.

Aerospace is also studying image-quality measurements. CCD visible focal-plane technology—the workhorse for advanced imaging systems since the mid- to late 1970s—is gradually being replaced by frontside- and backside-illuminated monolithic CMOS and hybrid silicon PIN visible focal-plane approaches. (A PIN photodiode serves as a light-receiving element to convert an optical signal into an electrical one.) New system applications of interest include panchromatic and multispectral sensors that require large-area, high-frame-rate two-dimensional arrays also capable of ultrawide dynamic range (i.e., full sunlight to night imaging). Key figures of merit for these focal planes include the modulation transfer function (MTF) or, equivalently, the point spread function (PSF); noise floor; well capacity; and uncorrected pixel gain and offset nonuniformity effects. Precise characterization of these parameters for a high-performance visible focal plane requires precise multicolor calibration of the optical system. Image quality is sensitive to spatial noise effects, which are determined empirically by nonuniformity and nonlinearity characterization over the pixel dynamic range. When the low end of this dynamic range covers lunar illumination, simulation in a laboratory setting requires optical setups involving multiple light sources and extreme “light tightness.”

In another image-quality measurement activity, Aerospace’s MTF and spot-scan characterization capabilities have been refined to enable precision-staring pixel spot-scanning over a wide range of spectral wavelengths. In this technique, a small spot of light is generated and moved around a pixel for diagnostic purposes. The Aerospace effort involved a confocal microscopic setup aimed at detailed pixel inspection in support of the spot-scanning work. The work of Lomheim’s team improves the corporation’s ability to cover these new measurement regimes to support SMC and other customers developing large, small-pixel visible/IR focal planes for an ultrawide dynamic range.

The MOIE project has scrutinized the process of modeling the focal plane sensors, with productive results. Understanding the properties of new imaging devices is vital to the design and planning of imaging systems, and one way Aerospace is achieving this understanding is through modeling the spectral MTF and PSF characteristics of the latest focal-plane pixel designs. The MTF characterizes how the array’s response falls off as spatial detail in the observed scene increases. The PSF describes a system’s response to a point source, like a star. Such modeling will provide crucial design guidance in the development of these large arrays.

Aerospace-developed narrowband Er:YAG laser seed source. An Er:YAG crystal is configured in a nonplanar ring oscillator (NPRO) geometry to achieve narrowband output at 1645 nanometers with a line width less than 1 megahertz. The output of the NPRO will seed a larger Q-switched laser to generate high-peak-power narrowband pulses for eye-safe LIDAR applications. The observable green emission, derived from an optical upconversion process, traces the infrared optical path within the NPRO resonator.

In CMOS visible imager MTF and PSF modeling, Aerospace has refined numerical two-dimensional Fourier transform methodology for converting empirical spot-scan–generated PSF data to a precision MTF description of a pixel response. This has proven successful and has clarified certain effects, thought to have been data anomalies, as real physical effects in the pixel response. The effort employed a new process involving the mapping of multipixel data into a single effective pixel grid. This allows much shorter data-collection times and avoids data uncertainties associated with systematic drifts and slow instabilities in the spot-scanning optical setup. The effort also demonstrated how model development directly affects experimental research work and vice versa.
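The core of that methodology, recovering an MTF as the normalized magnitude of the two-dimensional Fourier transform of a measured PSF, can be sketched with a synthetic Gaussian PSF standing in for spot-scan data (the array size and PSF width below are arbitrary choices, not measurement parameters).

```python
import numpy as np

# PSF-to-MTF conversion: the MTF is the normalized magnitude of the
# 2D Fourier transform of the point spread function.
n = 64
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))   # synthetic Gaussian PSF

otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf) / np.abs(otf).max()           # normalize so MTF(0) = 1

print(f"MTF at zero frequency: {mtf[n//2, n//2]:.3f}")
print(f"MTF at mid frequency : {mtf[n//2, n//2 + 16]:.3f}")
```

For real spot-scan data the PSF array is empirical rather than analytic, but the transform step is the same; the multipixel-to-single-pixel mapping described above improves the quality of that empirical PSF before the transform is applied.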

The Aerospace sensor project has completed a significant upgrade to its experimental color-dependent spot-scan capability. The updated configuration includes additional diagnostic tools that more completely characterize the operation of the system and a confocal microscope fitted into the optical system for more precise determination of spot focus. The new configuration permits acquisition of highly accurate and repeatable wavelength-dependent pixel response data with time-reduction factors as high as 100.

Aerospace used two independent experimental techniques to derive wavelength-dependent MTF data for two CMOS imagers: a tilted-knife-edge method, with an Offner relay optical reimaging system, and the spot-scanning method described above. These techniques quantified the impact of design and manufacturing variations on the color-dependent MTF characteristics of the imagers. Specific diffusion-related and pixel circuitry layout effects were precisely correlated to the measured spectrally dependent MTF degradation.

This MOIE project’s improved, efficient MTF/PSF laboratory characterization capability has enabled the detailed color-dependent characterization of a frontside-illuminated CMOS imager (developed by JPL) using precision spot-scanning and corresponding/confirming tilted-knife-edge MTF characterization. Lomheim described the imager’s electronics: “This CMOS imager has a spacing between adjacent photodiode pixels of 9 microns. Its photodiodes are formed between an n well and p epitaxial layer, characterized by a lower doping level and hence a much deeper depletion depth than would prevail for typical cell phone camera CMOS imagers. For this device, the photodiode area is inscribed toward the center of a pixel pitch and surrounded by pixel electronics and an opaque contact along one direction and pairs of overlying metal lines along the orthogonal direction.”

This type of detailed pixel-level examination of the relationship between the device manufacturing layer parametrics and the imager’s EO imaging capability is essential to improving this technology and guiding it toward the future goals and requirements of Aerospace customers.

Lasers for Space Applications

In 1971, Aerospace performed its first illumination of a Defense Support Program satellite in orbit to calibrate the sensor on board. The illumination from the ground was accomplished with a hydrogen fluoride (HF) laser, which emits light near 3 microns. For the next 25 years, this laser was used for all Aerospace satellite illuminations and became the cornerstone of Aerospace’s laser beacon effort. Aerospace’s success led to an increasing demand for this capability, as well as the desire to illuminate satellites from multiple ground sites. This prompted the need to develop a more reliable, transportable, and user-friendly replacement for the HF laser. By the mid-1990s, an Aerospace Mission Oriented Investigation and Experimentation (MOIE) effort began for this purpose, and led to the development and implementation of two solid-state 3-micron sources—an Er:YAG laser and an optical parametric oscillator (OPO). World-record output power and efficiencies were achieved with both devices.

Ongoing research involves the evaluation and development of new laser technologies for improving defense capabilities in remote sensing and satellite sensor calibration.

“Our most recent laser development efforts have focused on a 3-micron wavelength-agile source for remote detection of toxic chemical species; a narrowband eye-safe 1.6-micron laser for various light detection and ranging (LIDAR) applications, including cloud, wind, and plume detection; and a 4.5-micron laser source for national security space applications,” said Todd Rose, principal investigator of the project and laboratory manager in the Photonics Technology Department (PTD). Coinvestigators from PTD include DaWun Chen, senior scientist, and Steven Beck, department director.

“Frequency-agile laser sources are useful for remote sensing applications that use differential absorption LIDAR, or DIAL, to detect trace chemical species in the gas phase. DIAL can be used to track plumes of toxic industrial chemical vapor formed by accidental or adversary-caused release near populated areas or other areas of interest. Detection of multiple species in a timely manner requires laser systems whose frequency (color) can be tuned quickly and accurately to select spectral absorption features of target gas species,” Rose said. The team is working on a rapidly tunable 3-micron DIAL source, which is based on a nonlinear optical approach called difference frequency generation and optical amplification. “The goal of this effort is to demonstrate a 10-watt wavelength-agile system using an available high-power 37-kilohertz-repetition-rate Nd:YAG laser pump and commercially available telecom tunable laser diodes,” Rose said.
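The DIAL principle itself reduces to a simple ratio: with returns P_on and P_off at the absorbed and reference wavelengths, a differential absorption cross section, and a two-way path of length R, the path-averaged number density follows from the Beer-Lambert law. A minimal sketch, with illustrative values:

```python
import math

# Differential absorption LIDAR (DIAL) retrieval:
# N = ln(P_off / P_on) / (2 * delta_sigma * R)
# for a two-way path of length R.
def dial_concentration(p_on, p_off, delta_sigma_cm2, range_cm):
    """Path-averaged number density (molecules/cm^3)."""
    return math.log(p_off / p_on) / (2.0 * delta_sigma_cm2 * range_cm)

n = dial_concentration(p_on=0.8, p_off=1.0,       # illustrative returns
                       delta_sigma_cm2=1e-18,     # assumed differential cross section
                       range_cm=1e5)              # 1 km path
print(f"retrieved density ~ {n:.2e} molecules/cm^3")
```

Rapid wavelength agility matters because each additional target species requires a fresh on/off wavelength pair tuned to that species' absorption feature.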

For defense applications, a tunable OPO is being developed to provide output near 4.5 microns. This device will be pumped with a 20-watt, 1.9-micron thulium fiber laser and will generate midwave-infrared output via a nonlinear optical process similar to difference frequency generation. A second approach using a pulsed holmium YAG laser to pump an OPO is also being pursued. Other LIDAR applications, such as the characterization of winds in the vicinity of aircraft or target identification on a battlefield, necessitate the use of eye-safe sources. A compact narrowband Q-switched (pulsed) Er:YAG laser is being constructed for this purpose. A key component of this laser system is a new Aerospace-developed nonplanar ring oscillator (NPRO) that provides narrowband seed light at 1.6 microns. This is the first demonstration of an NPRO operating in this important eye-safe wavelength region.

The effect of plasma-treatment duration on the formation of surface carboxyl content and the resultant shear strength.

The improvement in adhesive strength achieved through various treatments relative to the strength achieved through abrasion.

Plasma Treatment of Composite Adhesive Bonds

The low density of fiber-reinforced composites—along with their adjustable high stiffness and strength—makes them the material of choice for many space applications; however, these materials are susceptible to bond failures caused by deficiencies in surface-preparation techniques. The most common preparation technique relies on mechanical roughening (often sanding), which uses abrasion to remove surface contaminants and increase roughness. However, the abrasion can damage the reinforcement plies of advanced composites, reducing effective bond strength. In addition, contamination and inconsistencies in surface preparation are problematic. Therefore, the spacecraft community needs a more consistent and reliable process for creating high-performance bonds.

The Aerospace Corporation has been evaluating plasma treatment for surface preparation of composite hardware and has found that this process can address the lack of consistency and reliability in current industry practices. “The atmospheric plasma treatment is noncontacting, requires minimal operator intervention, and can be applied to complex shapes while significantly reducing the risk of physical damage to the composite since the process affects only the outer few nanometers of the treated surface,” said Rafael Zaldivar of the Materials Science Department, lead investigator for the project.

The plasma is generated by a capacitive discharge at atmospheric pressure to produce a uniform high-density mix of ions, electrons, and free radicals. These reactive species are then directed onto a surface. A number of physical processes can occur during plasma treatment: ablation (cleaning by removing low-molecular-weight organic contaminants); etching (affecting the surface morphology of the substrate); crosslinking (interconnection of long-chain molecules); and surface activation (chemical bonding of reactive molecules with the substrate).

“Our initial experimental work has shown that plasma treatment not only enhances the consistency of the mechanical performance of bonded hardware, but also increases strength. Strength increases in excess of 50 percent have been realized. In addition, the fracture toughness of these bonded joints, critical to long-term durability, has also been shown to improve by more than 100 percent compared with conventional preparation methods,” said Zaldivar. Structural-design limits are currently determined by applying statistical margins to test results determined from the materials and processes intended for use in flight production. Process changes that deliver a smaller variation of results or higher bond strengths will allow an increase in structural design limits and therefore increase the trade space for vehicle design (size, mass, and power).
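The effect of reduced scatter on design limits can be illustrated with a simplified "mean minus k-sigma" allowable. The k factor and strength data below are invented for illustration; they are not actual basis values, knockdown factors, or test results from this work.

```python
import statistics

# Simplified design allowable: mean strength minus k sample standard
# deviations. Lower scatter or higher mean strength raises the
# allowable, widening the vehicle-design trade space.
def design_allowable(strengths_mpa, k=3.0):
    return statistics.mean(strengths_mpa) - k * statistics.stdev(strengths_mpa)

abraded = [20, 26, 18, 25, 21, 24]   # scattered, lower-strength bonds (invented)
plasma = [32, 33, 31, 34, 32, 33]    # stronger, more consistent bonds (invented)
print(f"abraded allowable: {design_allowable(abraded):.1f} MPa")
print(f"plasma  allowable: {design_allowable(plasma):.1f} MPa")
```

Real structural allowables use formal statistical basis methods, but the direction of the effect is the same: consistency is worth margin.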

A critical aspect of this work has been identifying the mechanism responsible for many of the improvements. Using x-ray photoelectron spectroscopy (XPS), the researchers have identified the distribution of activated groups that are formed as a function of treatment conditions and have correlated them to mechanical performance. In the case of epoxy bonded graphite/epoxy composites, an increase in the concentration of surface carboxyl species translates to an increase in adhesive bond strength.

An atmospheric plasma wand used to treat composite parts prior to bonding.

“These dramatic improvements are a result of enhanced chemical bonding that is now possible at the interfaces between these newly formed carboxyl groups on the surface of the composite and the epoxide groups within the adhesive,” said Zaldivar. “However, not all resin systems used in composites develop the necessary type of functional groups for improvements in strength when treated with plasma. Understanding how the chemical structure of the initial composite material and the plasma treatment conditions combine to result in the necessary type of functional groups is paramount for tailoring our interfacial reactions. Tailoring of composite interfaces not only potentially increases the capabilities of current systems, but also opens a wide array of possibilities for new materials systems to be used in composite hardware.”

Many resin systems available for composite manufacture today have not been used for space applications, primarily because of drawbacks associated with their poor bonding behavior. Aerospace has recently developed a process to modify bonding surfaces of these materials to make them more susceptible to plasma treatment improvements. For example, polycyanurate-based composites, which are commonly used in national security space structures, do not show the magnitude of improvement that some of the epoxy matrices do after plasma treatment. Zaldivar said that by modifying the critical bonding interfaces, the concentration of the active species responsible for bond strength can be controlled locally and increased by more than 300 percent from that of an unmodified system.

“Plasma surface preparation techniques may be able to lower costs and improve reliability, average strength, and consistency,” said Zaldivar. “Contractors are likely to embrace the technology when it becomes widely available, but if historical precedent holds, they may do so without an understanding of the underlying chemistry and physics of the adhesive bond enhancement mechanisms. The level of understanding that will result from this work is important if the space industry is to move to any sort of qualified atmospheric plasma process.”

Preventing Radio-Frequency Breakdown in Satellite Components

Both military and commercial satellites rely on radio-frequency (RF) systems for communication and navigation payloads. The RF power demand for these systems has continued to grow with increasing user needs and higher available satellite power. Global Positioning System (GPS) III and the Mobile User Objective System (MUOS) are just two examples of satellites with unparalleled RF power requirements at multiple frequencies.

RF-breakdown team members (from left to right) Richard Afoakwa (University of Maryland), Timothy Graves, Abhishek Pathak (UCLA), and William Cox.

With increasing power levels comes increasing risk for RF breakdown within high-power components. RF breakdown is an electrical discharge—such as a plasma or multipactor discharge—that can degrade high-fidelity communication signals and cause physical damage to susceptible components. These discharges can lead to complete loss of essential communication or navigation signals and prevent proper satellite operation. As such, preventing RF breakdown is essential.

In response to this growing risk, The Aerospace Corporation is leading new research into plasma and multipactor breakdown. This program, led by Timothy Graves, Space Materials Laboratory, is pursuing basic research into the underlying phenomenology while helping contractors develop better hardware and testing requirements.

“Aerospace has a unique window into the real-world issues experienced by RF component manufacturers. This allows us to tailor our research programs to solve problems of today and tomorrow through a physics-based understanding of these concerns,” said Graves. “Our goal is to decrease risk through an improved understanding throughout the satellite process. From component design, through extensive ground testing, to on-orbit operation, we depend on the success of these RF components. Our research is providing new ways to improve success in each of these areas.”

Multipactor breakdown is one of the highest concerns for high-power RF component designers today. Also referred to as multipaction, this discharge type can occur when electrons impact material surfaces in resonance with the RF electric field. This resonance depends primarily on three parameters: the RF voltage (how fast the electron is accelerated), the RF frequency (how long before the electric field changes direction), and the geometry (how far the electron travels before hitting a surface).
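The interplay of these three parameters can be checked to order of magnitude with a constant-field transit-time estimate for a parallel-plate gap: breakdown is favored when the electron transit time is near an odd multiple of half the RF period. The gap, voltage, and frequency below are illustrative, and real susceptibility analyses are considerably more detailed.

```python
import math

# Crude multipactor resonance check for a parallel-plate gap.
E_CHARGE, E_MASS = 1.602e-19, 9.109e-31

def transit_time_s(gap_m, voltage_v):
    """Electron starting at rest in a uniform field: t = sqrt(2*d/a)."""
    accel = E_CHARGE * voltage_v / (E_MASS * gap_m)   # a = eV/(m*d)
    return math.sqrt(2.0 * gap_m / accel)

gap, volts, freq = 1e-3, 100.0, 1e9   # 1 mm gap, 100 V, 1 GHz (illustrative)
t = transit_time_s(gap, volts)
half_period = 1.0 / (2.0 * freq)
print(f"transit time / half period = {t / half_period:.2f}")
# values near an odd integer (1, 3, 5, ...) indicate resonance risk
```

Raising the voltage, lowering the frequency, or shrinking the gap all shift this ratio, which is why susceptibility is usually plotted against the frequency-gap product.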

As electrons resonantly impact electrode surfaces, the electron density grows through secondary electron emission. The secondary electron yield, defined as the number of emitted electrons per incoming electron, is a fourth parameter for multipactor breakdown: the yield must exceed 1 for the discharge to develop. When these conditions are met, the formation of a large electron density can perturb the RF system and substantially increase the risk of plasma breakdown and component damage.
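The yield condition can be sketched in a few lines: if each resonant impact multiplies the electron population by the secondary electron yield delta, the density after n impacts is N0 · delta^n, which grows only for delta > 1. All values below are illustrative.

```python
# Multipactor electron-density growth: N = N0 * delta**n after n
# resonant impacts. The discharge develops only when the secondary
# electron yield delta exceeds 1.
def density_after_impacts(n0, delta, impacts):
    return n0 * delta**impacts

for delta in (0.9, 1.0, 1.3):
    n = density_after_impacts(n0=100, delta=delta, impacts=50)
    trend = "grows" if delta > 1 else "decays or holds"
    print(f"delta={delta}: N after 50 impacts = {n:.3g} ({trend})")
```

Because the growth is exponential, even a yield modestly above unity builds a disruptive electron cloud within tens of RF cycles.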

Richard Afoakwa and William Cox investigate the performance of a new, software-based phase-null system for multipactor detection.

Detecting multipaction in complex devices can be difficult, yet early detection in product development is critical for satellite cost and schedule. In some cases, devices with undetected multipaction in unit-level tests may experience catastrophic failures after integration into the satellite system. To prevent this, Aerospace has characterized various breakdown signatures and developed new diagnostics for improved detection sensitivity.

New software-based phase-nulling diagnostics for multipactor detection have been recently developed at Aerospace using fast analog-to-digital processing to analyze the relationship between forward and reflected power signals. With software, the system monitors for any complex impedance change caused by multipactor or plasma formation. These software-based systems have significant advantages over manually controlled analog devices, including higher stability, improved interpretation, and greater sensitivity.
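The principle behind such a phase-null can be sketched as follows: digitize the forward and reflected signals, calibrate out the baseline complex reflection coefficient, and alarm on any residual change. The synthetic signals and threshold below are invented for illustration and do not represent the actual Aerospace diagnostic.

```python
import cmath

# Software phase-null sketch: the complex ratio of reflected to forward
# signal is nulled in the quiescent state; a multipactor- or
# plasma-induced impedance change appears as a residual.
def reflection_coefficient(forward, reflected):
    return reflected / forward

baseline = reflection_coefficient(1.0 + 0j, 0.02 * cmath.exp(1j * 0.3))

def nulled_residual(forward, reflected):
    """Deviation from the calibrated baseline reflection coefficient."""
    return abs(reflection_coefficient(forward, reflected) - baseline)

quiet = nulled_residual(1.0 + 0j, 0.02 * cmath.exp(1j * 0.3))   # no event
event = nulled_residual(1.0 + 0j, 0.05 * cmath.exp(1j * 1.1))   # discharge
THRESHOLD = 0.01
print(f"quiet residual: {quiet:.4f} -> {'ALARM' if quiet > THRESHOLD else 'ok'}")
print(f"event residual: {event:.4f} -> {'ALARM' if event > THRESHOLD else 'ok'}")
```

Doing the nulling in software rather than with analog hybrids is what enables the stability and sensitivity gains described above, since the baseline can be recalibrated continuously.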

Multipaction depends on the material surface, which can vary strongly with contamination. These discharges also dynamically change as multipacting electrons impact surfaces, desorb contaminants, and/or form new surface thin films, a process known as multipactor conditioning. Aerospace has performed extensive research into multipactor contamination and multipactor conditioning on various materials, specifically characterizing a new multipactor mode referred to as transient-mode multipactor discharge.

“The transient-mode multipactor discharge forms similarly to a conventional discharge, yet as the electrons remove contaminants and change the secondary electron yield, the multipactor is extinguished,” Graves said. “This transient process can repeat indefinitely under continued contamination until the device is damaged.” Several Aerospace surface science studies are investigating dynamic surface changes with multipactor exposure. Initial results have shown the formation of thin films that can initially improve the voltage threshold for multipaction. Further studies are planned with potential application to surface science and nanotechnology research areas.

Graves credits the success of this program to the diverse scientific backgrounds available at Aerospace. “Our multidisciplinary team includes researchers in RF engineering, plasma physics, materials science, and systems engineering. With experts in each of these areas, we have made strong and unique contributions toward mitigation of RF breakdown.” The program’s team includes participants across many departments, including William Cox, Tom Curtiss, Rostislav Spektor, and Jason Young, all of Electric Propulsion and Plasma Science; Gouri Radhakrishnan and David Witkin, of Materials Science; Jerry Michaelson, of Communication Systems Implementation; and Frank Villegas, of Antenna Systems.

The program has also had strong participation from students at UCLA, Loyola Marymount University, Embry-Riddle Aeronautical University, Purdue University, and the University of Maryland. Graves also cites an “excellent collaboration with many government contractors to work together toward the common goal of better device performance and reliability.”

“Our research will continue to adapt to meet our customer needs. We hope to pave the way for improved computational prediction capability in complex structures, improved device testing with enhanced diagnostics, and, lastly, improved understanding of breakdown phenomenology—toward our main goal of ensuring space mission success.”

Beyond-Next-Generation Access to Space

What will launch systems look like beyond the next generation of spacelift systems? What technology is needed to enable such systems? What are the risks, and how will they be met? What missions will these vehicles perform, and at what operating costs?

Prompted by such questions, The Aerospace Corporation in 2009 began a research program to identify possibilities for the generation of launch system architectures beyond those currently planned or under development, and to identify the technologies that would enable such systems. Options for these beyond-next-generation spacelift systems are being examined for satellite and human spaceflight applications.

The Air Force Spacelift Development Plan provides the architectural blueprint for launch following the EELV program. This plan recommends development of a reusable booster with expendable upper stages to significantly lower the cost of next-generation launch vehicles. Aerospace has helped the Air Force develop a detailed technology road map for operational deployment of this concept. The beyond-next-generation effort builds upon the investment in reusability and operable designs from the Air Force’s plan to evaluate fully reusable two-stage-to-orbit systems and single-stage-to-orbit systems. Today’s technological advances offer the potential to expand the missions and markets for beyond-next-generation systems while reducing the cost per flight and improving turnaround time between flights. The vision for future missions includes routine, highly responsive space access (also referred to as satellite launch on demand), space tourism, and point-to-point passenger/cargo delivery between major cities (e.g., Los Angeles to Tokyo in less than 2 hours).

A significantly wider set of air-breathing concepts was explored in phase two. Metrics were expanded to include criteria unique to horizontal-takeoff vehicles: basing flexibility and runway operations, high-speed hypersonic cruise, and boost-glide point-to-point responsive global reach, with flight durations of less than 2 hours. Comparison with the phase-one evaluation shows that achieving horizontal takeoff capabilities and associated benefits does pay a price in other metric areas.

“The initial phase of the study found that the success of future systems is closely tied to achieving operational efficiencies more characteristic of aircraft than of today’s rockets—for example, turn time, maintenance effort/hours, and mission abort options. Success was not so dependent on dramatic improvements in performance,” said Jay Penn of the Launch Systems Division and principal investigator of the study. Aerospace co-investigators include Greg Richardson, Greg Meholic, Bob Hickman, Joe Tomei, Glenn Law, and Fred Peinemann.

“Success will likely be driven by the flight rate of reliable, reusable launch vehicles that meet the demands and price markets of future missions. Two-stage-to-orbit vehicle architectures and modern engines could satisfy future performance needs, but launch vehicles based on today’s technology would become extremely cost prohibitive in new markets,” said Penn. The costs of maintaining today’s technology—rather than investing in future technologies—would be prohibitive because of low flight rates, major refurbishment needs between flights, and significant failure costs. Most critical to the future is investment in technologies focused on operability that would dramatically alter launch vehicle design approaches and yield fully reusable, low-cost, highly operable space-access platforms. Investments in novel air-breathing propulsion concepts and supporting propulsion technologies offer opportunities to increase system robustness and performance. However, these concepts introduce a new set of design challenges because of their highly integrated engine cycles.

Research has concentrated on three primary areas: the relationship between future spacelift markets and missions; advanced launch-vehicle architectures and performance and operability metrics outside of conventional approaches; and game-changing and emerging technologies. The range of emerging technologies explored includes lightweight structures that use advances in carbon nanotube-based materials; space elevators (a concept in which payloads are lofted to orbit on a carriage that ascends a long, exotic tether with greatly varying properties along its length); nuclear thermal propulsion; and constant-volume combustion devices/pulse-detonation engines, which rely on an inherently simpler and more thermodynamically efficient cycle than those used in existing engines.

Ten single-stage-to-orbit vehicle designs were modeled; these encompassed vertical and horizontal launch options using various propellant combinations and diverse rocket and air-breathing propulsion systems. The long list of technologies required for these designs includes passive and active highly operable high-temperature and lightweight thermal protection systems; high-temperature seals and actuators; highly integrated propulsion systems, aerodynamics, and control; advanced vehicle health monitoring; and a range of technologies that foster rapid vehicle turnaround for the subsequent flight. Even with innovative propulsion and material technologies, the single-stage-to-orbit designs either did not meet the performance criteria or resulted in vehicles with large gross liftoff weights and large dry weights.

“The single-stage-to-orbit system performance design was approximately twice the dry weight and gross liftoff weight of a similar capability two-stage-to-orbit design. Thus, even if a single-stage-to-orbit vehicle is successful, it will not yield a better beyond-next-generation solution than a two-stage-to-orbit design using far less aggressive technologies. Based on these findings, our efforts turned to two-stage-to-orbit design solutions,” Penn said.
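The staging advantage Penn describes follows directly from the ideal (Tsiolkovsky) rocket equation. The sketch below uses round illustrative numbers (about 9.4 km/s of total delta-v to low Earth orbit and a 450-second specific impulse, both assumptions and not figures from the study) to show why a single stage demands a much harsher mass ratio than two stages splitting the same delta-v:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v, isp):
    """Initial-to-final mass ratio from the ideal rocket equation."""
    return math.exp(delta_v / (isp * G0))

# Illustrative assumptions: ~9.4 km/s to low Earth orbit (including
# gravity and drag losses) and a vacuum Isp of 450 s (LOX/LH2-class).
DV_TOTAL = 9400.0  # m/s
ISP = 450.0        # s

# Single stage: one airframe must supply all the delta-v.
ssto = mass_ratio(DV_TOTAL, ISP)

# Two stages, delta-v split evenly: each stage sheds its dry mass
# before the next burn, so each needs a far smaller mass ratio.
tsto_stage = mass_ratio(DV_TOTAL / 2, ISP)

print(f"SSTO mass ratio:           {ssto:.1f}")
print(f"TSTO per-stage mass ratio: {tsto_stage:.1f}")
```

Because each stage of the two-stage vehicle needs a mass ratio near 2.9 rather than 8.4, far more dry mass is left over for structure, thermal protection, and the operability features the study emphasizes.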

As with the single-stage designs, the nine two-stage-to-orbit designs studied were sized to deliver a 25,000-pound payload to low Earth orbit. The designs and systems evaluated included rocket-based and turbine-based combined-cycle boosters, pulse-detonation engines, an air collection and enrichment system, and magnetohydrodynamic-augmented propulsion, which allows more total heat to be added to the flow and increases propulsion cycle and power efficiency. Key assessments of the two-stage designs included defining performance parameters and vehicle characteristics for each concept: dry weight, gross liftoff weight, length, ground-to-orbit equivalent specific impulse, and propellant density. Also studied were spacelift and high-speed global-reach needs such as payload-to-dry-weight ratio, orbiter wetted area, orbit flexibility, manufacturing complexity, volumetric sensitivity, and basing flexibility.

“No attempt was made to apply individual weightings because these are highly dependent upon specific mission applications and objectives,” Penn said. “When customer and stakeholder preferences are known, weighting can be easily applied and cumulative scores determined for each concept. Assuming there is a development program for innovative propulsion and materials technologies, all two-stage-to-orbit concepts achieved reasonable gross liftoff weights and sizes. The relative merits of each concept are mission and application dependent,” Penn said.

For example, if flexible access to low Earth orbit is determined to be the most critical future need, then vertical-takeoff, horizontal-landing two-stage-to-orbit solutions appear best. In these designs, the booster stage could be based on the reusable booster system design, and the orbiter stage could be based on either a fully reusable rocket or a higher-performing but more technically advanced rocket-based, combined-cycle powered stage. If hypersonic cruise or dual use as an atmospheric transport or bomber becomes most important, then the horizontal-takeoff combined-cycle and pulse-detonation engine solutions seem most promising. If integration with traditional airport runway operations and air-traffic control is needed, then concepts employing air collection and enrichment systems are most attractive because they have acceptable payload-to-gross-liftoff-weight ratios and can accommodate existing runway limits. These air-collection designs also avoid explosive quantity/distance restrictions because they take off and land with no onboard oxidizer, a hazard the other concepts cannot eliminate: in an air collection and enrichment system, the oxidizer is extracted from the air, so there is no large quantity of stored liquid oxygen at takeoff, and therefore no explosive hazard. If a nearer-term air-breathing solution is most appealing, then air collection and enrichment systems also show merit.

The study found that two technologies in particular, combined into a single vehicle concept, showed the most promise when compared with the rocket-based and turbine-based combined-cycle concepts. The first is an air collection and enrichment system that uses a refrigeration-based cycle to extract the oxidizer from the atmosphere during subsonic flight for later use in the trajectory; the second is a pulse-detonation engine that has a higher cycle efficiency than existing engines and is expected to yield improved installed thrust-to-weight ratio. The pulse-detonation engine is at a lower state of development than the combined-cycle engines, but is believed to be feasible because of its inherently simpler and more efficient design. Pulse-detonation engines have been developed by hobbyists and are routinely run in academia. Penn believes that this type of vehicle concept—combining the air collection and enrichment technologies with pulse-detonation engines—merits further evaluation.

The study results allow The Aerospace Corporation to offer its customers feedback as they determine future development investments. Penn’s team is working to improve the modeling of pulse-detonation engine systems for spacelift missions and volumetrically efficient “wave-rider” hypersonic aircraft for point-to-point transport. The team will also design reference missions for study—for example, transporting 50 passengers from Los Angeles to Tokyo in less than two hours, or launching a replacement spacecraft to orbit in less than one week after an on-orbit failure. The team will evaluate concepts and system architectures against the requirements of these types of missions. The researchers will also study the environmental effects of nuclear thermal rockets, the near- and far-term benefits of carbon nanotube materials applied to launch vehicles for their unique electrical and structural characteristics, and the use of orbital propellant depots for refueling.

Lasers Probe Atmosphere for Aerosol Characterization

Aerosols are the fine particulates suspended in the air that produce hazy conditions. These small particles play a critical role in climate and weather, and directly affect how much solar energy is retained or reflected by Earth’s atmosphere. Their primary influence is through scattering of solar radiation, but they also can absorb a significant amount of energy, depending on their composition. Aerosols also play a secondary role as the nuclei for condensation of water and other atmospheric species to form fog and clouds.

While the critical role of aerosols in climate and weather is acknowledged, their exact contribution is poorly understood, said Pavel Ionov of Aerospace’s Photonics Technology Department. In fact, he explained, aerosols represent one of the largest sources of uncertainty in climate models because they are incredibly complex and diverse, as are the mechanisms of their creation, transformation, and removal from the atmosphere. In addition, their relatively short lifetimes of one to two weeks lead to incomplete mixing and very complex spatial (especially vertical) distributions.

“Aerosol effects are not limited to weather and climate,” Ionov said. “For instance, they play an important role in atmospheric chemistry and public health. Also of interest to Aerospace’s primary customers is the role aerosols play in degrading space-based imaging—especially hyper-spectral—as well as affecting laser propagation through the atmosphere.”

The Photonics Technology Department has been developing laser-based remote sensing of atmospheric aerosols. This technique, known as lidar (light detection and ranging), probes the vertical distribution of aerosols in the atmosphere. The system sends a short pulse of laser light vertically into the atmosphere, and some of the laser light scatters back off of the aerosols and air molecules. The time of arrival of the scattered light provides distance information, and the intensity of the backscattered signal reveals how much aerosol is present.
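The ranging principle described above reduces to one line of arithmetic: the pulse traverses the path twice, so range is half the round-trip time multiplied by the speed of light. A minimal sketch (a generic illustration, not Aerospace's processing code):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(t_round_trip_s):
    """Range to the scatterer; the pulse covers the distance twice."""
    return C * t_round_trip_s / 2.0

# A photon scattered at 10 km altitude in a vertically pointed lidar
# returns after roughly 67 microseconds.
delay = 2 * 10_000.0 / C
print(f"round trip: {delay * 1e6:.1f} us -> "
      f"range: {range_from_delay(delay) / 1000:.1f} km")
```

Binning returns by arrival time in this way is what gives lidar its vertical profile, while the signal strength in each bin indicates the aerosol loading at that altitude.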

The primary focus of the Aerospace aerosol lidar program has been calibration and validation of satellite sensors. Ionov, together with Steven Beck of the Electronics and Photonics Laboratory, Leslie Belsma of Environmental Satellite Systems, and Christopher Woods of Radar and Signal Systems, has been working on a research project to improve aerosol optical thickness measurements from space using lidar ground-truth data. “Because of our poor understanding of aerosol mechanisms, direct monitoring from space is the only viable way to assess their global weather and climate effects,” explained Beck, who has been working with lidar since its inception.

All Earth-sensing satellite instruments require ground-truth validation once on orbit: satellite data must be compared with known and verifiable data taken at the same time and location by ground- or aircraft-based sensors. The primary goal of the Aerospace aerosol program is to develop reliable remote sensing techniques that provide such a calibration standard while providing as much information about the aerosols as possible. One approach is to use multiple instruments for greater accuracy and the complementary information they can provide. The Aerospace project also operates a collocated sun photometer, an instrument that derives height-integrated aerosol parameters from solar and sky radiance measurements. This combination of active and passive aerosol instruments creates a more reliable and more comprehensive data set than any one of the instruments can provide alone. The combined data is continuously collected, checked for consistency, and archived. It is then further compared with the aerosol data products of space-based sensors such as MODIS.

In addition to the lidar and sun photometer data, other meteorological data is combined into a comprehensive database. This database serves as a unique resource for exploring the relationships between aerosols and local meteorological conditions. The research team hopes it will provide insight into aerosol production and transport mechanisms. This knowledge will also improve the accuracy of space-based remote sensing of aerosols. Because of the complexity and variety of aerosols, and because satellite measurements of them are indirect, analysis of satellite data is complex and relies on assumptions about likely aerosol compositions.

Improving the scientific understanding of aerosol mechanisms will improve these assumptions and the algorithm accuracy of space-based sensors such as MODIS and VIIRS.

A unique feature of lidar is its remarkable spatial and temporal resolution. The lidar data shown in the accompanying graphic reveals complex local atmospheric dynamics. “It is the study of the mechanisms underlying the aerosol dynamics that is of interest to the research community,” said Beck. The research team hopes its measurements will find application in areas beyond space sensor calibration. For example, aerosols serve as a convenient marker for visualizing atmospheric boundary layer dynamics, which, in turn, is critical for accurate weather modeling and for the study of pollution transport.

Building Plasma Specifications for Highly Elliptical Orbits

Space plasma contributes to two distinct spacecraft hazards: surface charging and surface dose. All satellites, in all orbits, have surfaces that charge in response to the space plasma environment. Because differential charging carries an associated discharge risk, satellites must mitigate surface charging. In addition, sensitive satellite surfaces can degrade as a result of the dose accumulated from that same space plasma environment. The environment responsible for surface charging and dose is only well observed near geosynchronous Earth orbit (GEO).

A team of Aerospace scientists, led by Timothy Guild of the Space Science Department, is working to broaden the knowledge of this space plasma environment beyond GEO by using unique instruments in highly elliptical orbits (HEO). The team will develop two plasma specifications specifically tailored to HEO. One will characterize space plasmas that contribute to surface dose, while the other will determine the plasma environment most conducive to surface charging. “These specifications will feed directly into ongoing spacecraft development efforts and aid in evaluating on-orbit anomalies related to surface charging,” Guild said.

An interval of surface charging measured by an Aerospace plasma analyzer in HEO. The charging signatures are the annotated bright lines in the electron and ion spectrograms. The energies of these lines over time correspond to the potential of a nearby surface relative to the spacecraft frame (electron line) and of the spacecraft frame relative to the space plasma (proton line).

Surface Charging

The electrostatic potential of spacecraft surfaces is a complex function of the net current to those surfaces from the space environment. Low-energy ions and electrons impact the surface and impart their charge, or, depending on the surface and incident species, eject one or more secondary electrons. Ultraviolet photons remove charge from sunlit surfaces through photoemission. Any surface that charges to a large potential relative to neighboring surfaces poses a potential discharge risk and requires mitigation.

“One shortcoming of existing surface charging specifications is that they were largely derived from measurements at GEO,” Guild said. “The process of surface charging can be highly localized, even within a few hours of local time at GEO. Previous HEO observations of charging intervals show a strong radial, local time, and geomagnetic activity dependence to the charging likelihood.”

Guild and other members of the team—Joseph Fennell, James Roeder, James Clemmons, and Margaret Chen, also from the Space Science Department—are using these plasma observations in HEO, as well as observations from the Aerospace-developed surface-charging monitors on the NASA TWINS (Two Wide-angle Imaging Neutral-atom Spectrometers) mission to investigate charging intervals in HEO. The charged particle motion in space allows these HEO observations to be mapped along magnetic field lines to other orbits, contributing to charging specifications for orbits from medium Earth orbit out beyond GEO.

Tedlar, a white fluoropolymer film, before (left) and after exposure to the equivalent of 1 year (middle) or 3 years (right) of the GEO space environment.

Surface Dose

The impinging ions and electrons deposit all their energy in the first few mils (1 mil = 0.001 inch) of the spacecraft surface, causing intense radiation damage to thin films and coatings. The surface radiation dose caused by the low-energy plasma dominates for thicknesses below about 1 mil.

“Existing satellite specifications for surface dose are also largely GEO-centric,” Guild said. “In our project, we are developing a surface dose specification for vehicles that traverse HEO magnetic field lines, sometimes flying through a very different plasma environment.” Guild noted that previous Aerospace research showed order-of-magnitude differences between the average omnidirectional hydrogen fluxes at GEO and GPS orbits. “By combining these three specialized HEO plasma datasets, we will drastically improve our knowledge of the environment in time and space, leading to a more robust and more widely applicable plasma specification for surface dose,” he said.

Current understanding of surface dose and charging, as well as the state-of-the-art specifications of these hazards, has been developed by The Aerospace Corporation, NASA, and Los Alamos National Laboratory, among other institutions. Aerospace personnel are widely recognized in the fields of spacecraft surface dose and surface charging, and have contributed many of the plasma and surface charging specifications used for spacecraft design.

“Aerospace has designed, built, and operates three plasma analyzers in HEO uniquely suited to providing plasma and surface charging specifications,” Guild said. “Aerospace personnel have the expertise to appropriately interpret these observations and their differences with other empirical models. After developing the plasma and surface charging specifications, Aerospace is also well positioned to incorporate these results into the next-generation radiation specification models and effectively communicate the results to relevant customers via our close involvement with many national space programs,” he said.

Securing Computer and Data Networks

Internet and military data networks are under constant assault by a wide range of cyberattacks. One such attack on the networks of a large U.S. defense contractor demonstrates the magnitude of this problem: its networks were compromised after adversaries replicated the rotating passwords of RSA SecurID hardware tokens used by its employees. An earlier attack on—and data exfiltration from—the RSA corporate network made the network compromise possible. These sophisticated attacks pose a risk to the networks and computers of The Aerospace Corporation and those of its customers.

Discovering malicious network traffic using novel discriminators.

In many of these types of cyberattacks, adversaries (and their malicious software, also known as malware) will linger in infected computers or networks for long periods of time. By installing a command and control network connection back to their operations center, the adversaries can monitor the activity of victims and plan future malicious activity. These types of attacks have been aptly named advanced persistent threats. Many research organizations and commercial companies, including The Aerospace Corporation, have been investing resources into threat detection, reverse engineering of malware to understand its effects, and techniques to remediate infected computers and networks.

“At Aerospace we are developing improved network traffic analysis techniques to be used in defense of our computers and networks,” said Bob Lindell, senior project leader in the Computer Science and Technology Subdivision. “Rather than working with synthetic datasets, like many security researchers do, we are developing techniques by analyzing real-world Internet traffic generated by Aerospace. We daily process approximately 100 gigabytes of network traffic and compute a set of discriminators that can be used to differentiate network traffic, such as e-mail, Web transactions, plain text, and cipher text. Our ultimate goal is to refine and use these discriminators to separate malicious cyberattack traffic from normal traffic.”

Lindell is the principal investigator of a research team that is developing methodologies to discover and prevent advanced persistent threats. The team includes Joe Bannister and Jim Gillis, also of Computer Science and Technology, and Eric Coe and Nehal Desai of the Computer Systems Research Department; it focuses on finding low-profile, stealthy traffic entering and leaving the Aerospace network. Lindell explained that this could be control messages using an interactive backdoor exploited by an adversary to monitor, administer, or stage data files to be processed and later sent to a remote destination. Or, it could be the explicit exfiltration of information itself. “The team does not generally look for gross perturbations of network traffic that might occur during a denial of service because many ways already exist to identify such attacks,” Lindell said. “The fact that these attacks cause users to experience loss of service often suffices to alert system administrators to the problem. Many Aerospace users have been affected by slow network response time during denial-of-service attacks.”

On any given day, Aerospace users make approximately nine million distinct network connections to other computers on the Internet. One idea researchers have is to filter out well-known servers, such as Google, Yahoo, and Facebook, and analyze what remains. However, there are thousands or millions of connections to those servers alone, and so this method does not work well when trying to analyze the data. Approximately 25 percent of all network connections communicate with a unique computer each day. Viewed as a probability distribution, this “long tail” in the distribution of unique Internet destinations makes it nearly impossible to find bad connections to obscure destinations.

The Aerospace team realizes the challenge involved in discovering subtle behavioral differences among encrypted traffic that is sent and received from Internet sources. The team developed a set of discriminators that can measure differences between network traffic that is used for secure online purchases (e.g., Amazon) and Skype, a peer-to-peer protocol used for Internet phone calls (VoIP) and file transfers. Skype is an undocumented commercial protocol that uses encryption, but has subtle characteristics that differ from other secure Web traffic. While not permitted on the Aerospace network, Skype does appear in the network traces from time to time. “Given that it rarely appears in the data, and that its network behavior and protocol are not well understood, it was particularly interesting that a subset of discriminators we have been investigating provided some anomaly detection capability for Skype. We believe additional combinations of discriminators will ultimately detect other types of stealthy anomalous traffic in the datasets we are analyzing,” Lindell said.

In the financial industry, fraud detection is a well-known anomaly detection problem. When people try to embezzle money, their modifications to bookkeeping records are intentionally subtle, with the goal of going undetected. One method used to detect this behavior is Benford’s Law, which is an observation that the leading digits from a sampling of numbers derived from real-world sources of data have a nonintuitive distribution. For example, if the heights of buildings in a city are collected, the leading digit of “1” will occur about 30 percent of the time. Naively, one might have expected each of the digits 1 through 9 to appear about 11 percent of the time, as in a uniform distribution. In the real world, Benford’s Law is usually indicative of an underlying distribution that is log-normal. When people intentionally modify bookkeeping records, most are unaware of this nonintuitive distribution of the leading digit of transaction values. These falsified entries tend to skew the distribution away from a log-normal distribution and can be detected efficiently through the use of Benford’s Law.
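Benford's prediction for leading digit d is log10(1 + 1/d), which gives about 30.1 percent for the digit 1. The sketch below (an illustration of the principle, not the team's detector) checks that synthetic log-normal samples, standing in for real-world values such as file sizes or transaction amounts, follow that distribution:

```python
import math
import random

def benford_p(d):
    """Benford's predicted frequency for leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    # Scientific notation puts the leading digit first, e.g. "2.3e+05".
    return int(f"{x:.6e}"[0])

# Stand-in for real-world data: log-normal samples spanning
# several orders of magnitude, for which Benford's Law holds well.
random.seed(0)
samples = [random.lognormvariate(mu=8.0, sigma=2.5) for _ in range(100_000)]

counts = [0] * 10
for x in samples:
    counts[leading_digit(x)] += 1

for d in range(1, 10):
    print(f"digit {d}: observed {counts[d] / len(samples):.3f}  "
          f"Benford {benford_p(d):.3f}")
```

Falsified or artificially constructed values tend not to respect this leading-digit distribution, which is why a simple frequency comparison like the one above can flag them efficiently.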

“At Aerospace, we attempted a novel application of Benford’s Law to analyze network traffic. Based on our analysis, the amount of data downloaded from the Internet, from each inbound network connection, agrees closely with Benford’s Law. Intuitively and in retrospect this is not surprising, but in over 35 years of network experimental research it had never before been documented. Similar to other real-world data, such as the heights of buildings, or the lengths of rivers, the file sizes of information stored in the Internet also display a log-normal distribution. While we continue to refine this approach, a Benford’s Law-based network detector thus far does not appear to be sensitive enough to detect low-profile, stealthy attacks, such as remote backdoors,” Lindell said.

Beginning this year, the team started exploring how machine learning algorithms can be used to cluster and discriminate between different types of network flows. “Our goal for next year is to further develop this technique to detect unwanted interactive traffic that may be traversing our network path to the Internet. While other researchers are focused on signature-based methods, or are looking at detecting the next worm spread, we will continue to focus our research efforts in the area of detecting the advanced persistent threat of stealthy backdoor exfiltration methods,” Lindell said.

Flight-Cyber Defense

Most of the subsystems on spacecraft function via some degree of software or computer control, and thus are susceptible to cyberattack. In an effort to understand and combat such threats, researchers at The Aerospace Corporation and the Air Force Space and Missile Systems Center have been developing a spaceflight processing testbed. The goal is to investigate cyber threats to national security space systems, identify vulnerabilities, and develop defensive techniques.

The research team, led by Todd Kaiser of the Software Systems Analysis Department with coinvestigators Daniel Balderston of Software Systems Analysis and John Nilles of Cyber Engineering, faces a set of unprecedented challenges in developing ways to safeguard spacecraft from cyber-threats along with the terrestrial systems they rely upon. “The number of potential susceptibilities is exceptionally large, almost without limit,” Balderston said.

The flight cyber defense testbed.

For example, national security spacecraft are often built with common bus designs or architectures. Because these common designs are used on multiple missions, the discovery and exploitation of a vulnerability could jeopardize an entire spacecraft fleet. “We are exploring inherent vulnerabilities in these common elements, such as LEON and RAD750 processors, operating systems, MIL-STD-1553 buses, and SpaceWire data links,” Balderston said.

The trustworthiness of commercial components is another issue. “As the number of commercial-off-the-shelf flight components and vendor suppliers—particularly lower-tier foreign suppliers—increases, there are concerns of malicious code being embedded in processing boards, software, or digital electronic components, which could become the source of a cyberattack,” Balderston said. “The research team is assessing common spacecraft and payload components to understand the risks and countermeasures or other mitigations to address these risks. These risks could become widespread, and potentially pose a fleet-wide vulnerability.”

Spacecraft fault management is a significant area of concern. Researchers are exploring the possibility of implementing affordable autonomous response capabilities that would not require major changes to the system architecture. “We are exploring how to react to a space system anomaly. A key challenge is differentiating whether the anomaly is due to a natural fault or a cyber event. This may drive requirements for additional telemetry or enhanced ground procedures to support cyber anomaly resolution,” Balderston said.

The flight cyber testbed consists of two main segments: the test unit and the testbed infrastructure. Tests have so far focused on generic real-time operating systems, processors, and common flight data buses, including a single onboard computer system with a MIL-STD-1553 bus and a generic field-programmable gate array (FPGA). Other items tested include an Air Force Research Laboratory plug-and-play avionics computer, a Boeing Colony II CubeSat, and an Air Force space-vehicle emulator that includes bus-flight software.

The testbed infrastructure offers overall test monitoring and control. Together with the test unit, it functions as a tabletop satellite, with sensors and actuator models, an optional payload model, and a communications subsystem. It provides a common environment for testing and analysis that supports elements of all units being tested, thereby reducing costs when new units are added.

The team is also investigating software tools and techniques for static and dynamic attack detection, remediation, and countermeasures to determine their efficacy at the time of attack as well as during the various phases of system development, operation, and maintenance. A selected set of onboard mitigations and countermeasures is being prototyped to research the trade-offs of implementing operational cyber-resilient elements on the spacecraft. “One key premise of the test effort is the assumption that unauthorized access to the spacecraft has occurred,” Balderston said. “We are researching how the attack could have been detected and what affordable countermeasures, mitigations, or responses could be taken to operate through an attack in the future, or how to otherwise maintain mission effectiveness.”

The research also involves prototyping an onboard sensor system for space situational awareness that could be applied to the current fleet of space vehicles. The intent is to provide an initial capability for space systems to sense cyber-related events and either take simple autonomous action or report the events to the ground for further assessment. “We want to explore elements and trade-offs of a more cyber-resilient space vehicle architecture that enhances overall mission success and will continuously adapt to evolving threats,” Balderston said.

This flight-cyber initiative has created an important capability for space segment vulnerability testing as well as a platform to develop affordable mitigations to cyberattacks. It provides empirical vulnerability and countermeasure data to programs and organizations that develop and support national security space systems. “Although flight cyber defense is still in its infancy, immediate contributions to national security space systems in all phases of acquisition are expected from this research effort, including on spacecraft already launched and those in operations. We hope this effort will stimulate awareness and collaboration among subject matter experts and stakeholders in related technology domains, and ultimately lead to development of affordable cyber resilient flight systems that ensure mission success,” Balderston said.

Cyberspace Command and Control and Battle Management

Fast and reliable situational awareness is vital to support command decisions. This is true in space as well as in the cyber domain. One challenge for cyber operations is to generate timely situational awareness across a geographically distributed enterprise. Considerable research has been done in distributed computing—but applying these advances to cyberspace command and control has proved difficult for national space systems.

The distributed cyber command and control battle management testbed architecture supporting attack-speed situational awareness. The commander’s domain receives updates from all local domains and correlates those inputs to generate a global situational awareness picture.

Information assurance efforts have typically focused on developing technologies to detect and analyze a cyberattack after it has occurred, with less consideration given to performance in a distributed computing environment. Meanwhile, cyberattacks have exposed the shortcomings of relying exclusively on information assurance (a defense-in-depth strategy) and a single defensive computer network for many space systems. Defense-in-depth has been somewhat effective in securing systems, but it has not adequately addressed the operational realities of cyberattacks, which must be confronted in real time. The current strategy lacks a battlespace view, which must encompass the full spectrum of cyber operations.

John Sarkesain, senior engineering specialist in the Cyber Engineering Department, is leading a research project at The Aerospace Corporation exploring cyberspace situational awareness to support cyber command and control and battle management (C2/BM) for space systems. “One critical need in this area is better correlation and distribution of cyber-sensor data in near real time to generate accurate distributed situational awareness across an enterprise,” he said. Sarkesain is working with team members Jandria Alexander and Meredith Hennan of the Cyber Engineering Department and Donald Lanzinger of Network Systems to develop a testbed that would allow investigation of promising methods and technologies.

Initial research has focused on developing DOD architectural framework operational and system models. The team modeled five geographically distributed domains, each analogous to a distributed area of responsibility or networked computing environment. One domain serves as the resident area of responsibility for the cyber commander, and the other four are peers. They perform similar and related kinetic missions at a tactical level, or they represent different combatant commanders at different global operational levels. The battlespace for cyber operations is almost always globally defined, so near real-time partnering and information sharing are critical C2/BM requirements.

The framework is modeled after a simple distributed correlation architecture that detects an intrusion and correlates it locally to generate a local situational awareness picture. It shares each local picture with the commander’s domain as it is generated. Key architectural patterns employed include peer processing, distributed data-space sharing, and publish-and-subscribe messaging.
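The local-to-global correlation flow described above can be sketched with an in-process publish-and-subscribe broker. This is a minimal illustration under assumed interfaces (the `Broker`, `LocalDomain`, and `CommanderDomain` names and the alert fields are invented for the sketch), not the testbed's actual design:

```python
# Sketch of the distributed-correlation pattern: each local domain correlates
# its own sensor alerts into a local situational awareness (SA) picture and
# publishes it; the commander's domain subscribes and fuses the local pictures
# into a global picture. All names and message structures are illustrative.
from collections import defaultdict
from typing import Callable


class Broker:
    """In-process publish-and-subscribe message broker."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: dict) -> None:
        for handler in self._subs[topic]:
            handler(msg)


class LocalDomain:
    """A peer domain that correlates its own alerts into a local SA picture."""

    def __init__(self, name: str, broker: Broker) -> None:
        self.name, self.broker, self.alerts = name, broker, []

    def ingest_alert(self, alert: dict) -> None:
        self.alerts.append(alert)
        # Trivial local correlation: count alerts by source address.
        by_source: dict[str, int] = defaultdict(int)
        for a in self.alerts:
            by_source[a["src"]] += 1
        # Share the refreshed local picture as soon as it is generated.
        self.broker.publish("local_sa",
                            {"domain": self.name, "by_source": dict(by_source)})


class CommanderDomain:
    """Receives every local picture and maintains the global SA picture."""

    def __init__(self, broker: Broker) -> None:
        self.global_picture: dict[str, dict] = {}
        broker.subscribe("local_sa", self.on_local_picture)

    def on_local_picture(self, picture: dict) -> None:
        self.global_picture[picture["domain"]] = picture["by_source"]


broker = Broker()
commander = CommanderDomain(broker)
domains = [LocalDomain(f"domain-{i}", broker) for i in range(4)]

domains[0].ingest_alert({"src": "10.0.0.5", "type": "port-scan"})
domains[0].ingest_alert({"src": "10.0.0.5", "type": "login-failure"})
domains[2].ingest_alert({"src": "192.0.2.7", "type": "port-scan"})
print(commander.global_picture)
# {'domain-0': {'10.0.0.5': 2}, 'domain-2': {'192.0.2.7': 1}}
```

The broker stands in for the distributed data-space sharing layer; in a real deployment the publish call would cross the network, which is exactly where the timing questions studied by the metrics model arise.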

To analyze the performance of the distributed and local correlation, the team has developed a metrics model that measures the time it takes to generate distributed near real-time cyberspace situational awareness pictures during an attack. Specifically, it measures the time it takes to generate local situational pictures at local nodes while simultaneously measuring the time it takes to generate a global situational awareness picture from inputs from the local nodes.
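The core of that metrics model can be expressed in a few lines: stamp the moment an attack event is injected, then record how long the local picture and the global picture each take to reflect it. The pipeline stages below are stand-ins (the real stages would be sensor, correlator, and broker), so only the measurement pattern is meaningful:

```python
# Hedged sketch of the timing-metrics idea described above. The three
# callables are placeholders for real pipeline stages; the measured
# quantities mirror the article's local-vs-global latency comparison.
import time


def measure_latency(inject, local_done, global_done):
    """Return (local_latency, global_latency) in seconds for one attack event."""
    t0 = time.perf_counter()
    inject()       # attack event enters a local node
    local_done()   # local SA picture generated at that node
    t_local = time.perf_counter() - t0
    global_done()  # commander's global picture updated from local inputs
    t_global = time.perf_counter() - t0
    return t_local, t_global


# Stand-in stages with token delays, purely to exercise the measurement.
local_latency, global_latency = measure_latency(
    inject=lambda: time.sleep(0.001),
    local_done=lambda: time.sleep(0.002),
    global_done=lambda: time.sleep(0.003),
)
# The global picture can only lag the local one it is built from.
assert local_latency <= global_latency
```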

Various attack sequences and scenarios can be applied to assess system response to different types of intrusions. These might include a series of attacks at a particular domain or a simultaneous attack at several domains. Standard open-source traffic sensors and packet sniffers are used to identify attacks and attack sequences.
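One simple way to model such an attack sequence is as an ordered list of event types that must appear, in order but not necessarily contiguously, in a domain's sniffed event stream. The event names below are invented for illustration; real detection would key off sensor signatures rather than labels:

```python
# Illustrative subsequence matcher for attack scenarios: a scenario matches
# if its steps appear in order anywhere within the observed event stream.
def matches_sequence(events, scenario):
    """True if `scenario` occurs as an in-order subsequence of `events`."""
    it = iter(events)
    # `step in it` advances the iterator, so each step must be found
    # after the previous one.
    return all(step in it for step in scenario)


sniffed = ["port-scan", "dns-query", "brute-force-login", "privilege-escalation"]
print(matches_sequence(
    sniffed, ["port-scan", "brute-force-login", "privilege-escalation"]))  # True
print(matches_sequence(sniffed, ["privilege-escalation", "port-scan"]))    # False
```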

Other project work includes specifying critical properties of the testbed using Temporal Logic of Behaviors (TLB), a construct of mathematical logic that can be used to formally specify distributed systems, reason about them, and prove properties of their paradoxical behaviors.
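To give a flavor of the kind of temporal property involved, a toy specification in standard linear-temporal-logic notation (not the team's actual TLB specifications) might require that every detected intrusion in a domain eventually appears in that domain's local picture, and every local picture is eventually reflected in the commander's global picture:

```latex
\Box\,\bigl(\mathit{Intrusion}_d \Rightarrow \Diamond\,\mathit{LocalPicture}_d\bigr)
\qquad
\Box\,\bigl(\mathit{LocalPicture}_d \Rightarrow \Diamond\,\mathit{GlobalPicture}\bigr)
```

Here $\Box$ reads "always" and $\Diamond$ reads "eventually"; proving such liveness properties is exactly where the uncertainty of global states in a distributed system makes formal reasoning necessary.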

“We believe the paradoxical and nondeterministic behaviors of distributed systems have significant implications for distributed cyber C2/BM, information assurance, and distributed cyber situational awareness. We plan to use TLB to specify critical properties for distributed correlation, for which we have defined informal specifications using the DOD architectural framework. We are trying to better understand distributed behaviors and correctness,” Sarkesain said. For example, TLB can be used to explain the paradoxical behaviors of distributed systems caused by the uncertainty of global states. “If we can better understand these distributed system paradoxes as they apply to global states, we may be able to generate more timely and reliable cyber situational awareness and improved cyber C2/BM,” Sarkesain said.

“We are at a disadvantage in trying to design and implement cyber solutions in a distributed environment where we may not fully understand its behaviors, so we are trying to learn how C2/BM applications behave in the battlefield. We also believe information assurance, which, ideally, should be operationally managed through the cyber C2/BM processes, must be designed and implemented to support distributed real-time cyber operations. Current information assurance implementations appear to fall short of this requirement,” Sarkesain said.

Eventually, a cyberspace C2/BM system will be required to manage and conduct full-spectrum operational testing. Distributed real-time operational and systems architectures that have complementary relationships may offer the best path forward.

“We have observed that a globally distributed C2/BM system architecture that must meet high-performance requirements may be best complemented or congruent with specific distributed operational architectures. We are exploring cyber C2/BM operational-system architecture combinations to better understand their relationships, so as to develop better C2/BM solutions for cyber operations,” Sarkesain said.