Trident Scholar Abstracts 2009

Actual in-flight atmospheric radiation exposures of commercial air travelers and naval flight personnel were compared to the radiation exposures predicted by commercially available radiation codes for similar circumstances. Data for the study were collected with a unique portable Tissue Equivalent Proportional Counter (TEPC) system called HAWK. The HAWK system was carried on various commercial and military aircraft flights, where it measured lineal radiation energy, absorbed dose, and dose equivalent based on ICRP-60 recommendations. Post-flight, the data produced by HAWK were analyzed to recover both the dose rate and the total dose for the flight’s duration.

Over 20 hours of flight data were obtained on commercial aircraft, in addition to several experiments on military and private aircraft. Flights were conducted across the continental United States, ranging from Massachusetts to California. Military flights were conducted on an EA-6B Prowler from the Navy’s VX-23 Squadron, located in Patuxent River, Maryland. Commercial aircraft altitudes reached a ceiling of approximately 38,000 feet; typical military operations were flown at approximately 25,000 feet. Data collected in flight included dose (in grays), dose equivalent rates (in sieverts per minute), and corresponding GPS location data (geomagnetic latitude, longitude, and altitude). Recorded GPS data were then entered into several commercially available radiation codes for assessing atmospheric radiation risk. These codes included CARI-6M, developed by the U.S. Federal Aviation Administration (FAA); EPCARD, developed by Germany’s GSF – National Research Center for Environment and Health; and EXPACS, developed by the Japan Atomic Energy Agency. Analysis of the radiation prediction code outputs produced several conclusions. First, radiation doses on the EA-6B Prowler and commercial aircraft at similar altitudes relate closely to each other, suggesting little effect from aircraft shielding and no influence from electronic equipment. Second, pilots, aircrew, and frequent fliers may exceed the recommended one millisievert per year limit depending on their destinations and duration at high altitudes. Finally, commercial radiation risk codes provide a conservative and accurate method to predict and estimate the radiation risk of naval pilots and aircrew.
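Recovering a total dose from the recorded rate data amounts to integrating the dose-equivalent rate over the flight. The sketch below illustrates that step with made-up sample values, not actual HAWK measurements:

```python
def total_dose_equivalent(times_min, rates_uSv_per_min):
    """Trapezoidal integration of dose-equivalent rate (uSv/min) over time (min)."""
    total = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        total += 0.5 * (rates_uSv_per_min[i] + rates_uSv_per_min[i - 1]) * dt
    return total

# Illustrative cruise segment: 60 minutes at roughly 0.05 uSv/min
times = [0, 20, 40, 60]
rates = [0.04, 0.05, 0.05, 0.04]
print(total_dose_equivalent(times, rates))  # total uSv for the segment
```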

Astronomical observations show that there is an area of excess stars located towards and to the left of the Galactic center, as we see it from Annapolis, indicating an asymmetry in the thick disk of the Milky Way called the Hercules Thick Disk Cloud. This project examined the types and motions of stars in this excess area and attempted to explain the cause of their existence.

Initial observations in 1996 above the galactic plane showed a 30% excess in the number of faint blue stars in Quadrant 1 (to the left of the Sun-Center line) versus Quadrant 4 (to the right). This project required a research trip to the SMARTS 1.0 m telescope near La Serena, Chile, in order to acquire observations of the center of the Galaxy. These new observations were combined with previously observed data, then were reduced into a stellar catalog organized by magnitude and color. Data mining techniques were used to identify stars that appear to be part of the excess. Color and magnitude indicated the type and distance to the excess stars. The data were compared to a galactic model in order to better study the galactic asymmetry and the ranges of color, magnitude, and spatial motion of the excess stars.
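The quadrant comparison behind the 30% excess figure can be illustrated with a toy star-count calculation. The catalog below is invented, and the real analysis also applied color and magnitude cuts:

```python
def quadrant_excess(longitudes_deg):
    """Fractional excess of Quadrant 1 (0-90 deg galactic longitude,
    left of the Sun-Center line) over Quadrant 4 (270-360 deg)."""
    q1 = sum(1 for l in longitudes_deg if 0 < l < 90)
    q4 = sum(1 for l in longitudes_deg if 270 < l < 360)
    return (q1 - q4) / q4

# Invented mini-catalog: 13 stars in Quadrant 1, 10 in Quadrant 4
stars = [20.0] * 13 + [340.0] * 10
print(quadrant_excess(stars))  # 0.3
```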

At the beginning of the project, the Hercules Thick Disk Cloud had three possible origin theories: 1) the fossil remnant of a galactic merger, 2) the interaction of the thick disk or inner halo stars with the stellar bar, or 3) a previously unidentified triaxial distribution in the thick disk or inner halo. Final examination of the data showed that the excess is confined to the upper left quadrant, suggesting that a galactic merger is the most likely cause of the asymmetry.

There is still a great deal more to study concerning the types and motions of the excess stars, as well as further improvements and refinements to be made to the galactic model as understanding of the Milky Way grows.

Performance and Analysis of Vortex Oxidizer Injection in a Hybrid Rocket Motor

A hybrid rocket motor is a type of rocket motor in which fuel is placed in a combustion chamber as a solid, then a gaseous or liquid oxidizer is injected into the chamber during ignition. When the solid fuel and oxidizer come in contact and are ignited, the surface of the fuel burns and the gases produced by the combustion develop thrust. Hybrid motor development is a valuable area of research because hybrid motors are more efficient than solid motors, simpler and cheaper than liquid motors, and much safer and more environmentally friendly than both.

Hybrid rocket motor performance is dictated by the rate at which the fuel burns. Fuel burn rate can be increased by increasing oxidizer flow speed over the burning fuel surface, because flow over the burning surface creates shear stress, which facilitates fuel and oxidizer mixing. One method for improving shear stress, and thus burn rate, is to induce an oxidizer vortex in the combustion chamber. The subject of this research is a method for inducing vortical flow that combines vortex and axial oxidizer injection within a cylindrical interior-burning fuel grain. A hybrid motor test stand has been developed to test both axial and vortex oxidizer flow configurations as well as any combination of the two. The apparatus is capable of measuring thrust, oxidizer flow rate, and chamber pressure. This, along with physical measurements of the fuel grains, allows the determination of fuel burn rate, combustion efficiency, and specific impulse, all key rocket performance parameters. The apparatus is also equipped with millisecond-scale combustion analyzers to measure the gases in the combustion products, including CO, CO2, NOx, and unburned hydrocarbons. The high sample rate of these analyzers sheds light on vortex hybrid combustion processes as well as the phenomena that could lead to combustion instability. Overall, this research is focused on identifying a possible way to increase hybrid rocket performance in order to bring this very safe and efficient type of propulsion to maturity.
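As a rough illustration of how the stand's measurements yield the performance parameters named above, the average fuel burn rate follows from pre- and post-burn grain mass, and specific impulse from thrust and total propellant flow. All numbers below are illustrative, not test data:

```python
G0 = 9.80665  # standard gravity, m/s^2

def fuel_mass_flow(mass_before_kg, mass_after_kg, burn_time_s):
    """Average fuel burn rate from grain mass loss over the burn."""
    return (mass_before_kg - mass_after_kg) / burn_time_s

def specific_impulse(thrust_N, total_mass_flow_kg_s):
    """Isp (seconds) = thrust / (total propellant mass flow * g0)."""
    return thrust_N / (total_mass_flow_kg_s * G0)

fuel_flow = fuel_mass_flow(2.0, 1.4, 10.0)       # illustrative grain masses
isp = specific_impulse(500.0, 0.25 + fuel_flow)  # assumed oxidizer flow 0.25 kg/s
print(round(isp, 1))
```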

Development of an Integrated Robotic Radioisotope Identification and Location System

This project has integrated a commercially available high purity germanium (HPGe) detection system with a robotic base in order to detect the presence of radioisotopes in an enclosed space. The robotic base system operates autonomously. It calculates the bearing of a radioisotope source by slowly rotating at a location while measuring for the presence of a particular isotope by assessing the statistical confidence of detector data. Isotope identification is based on both a gamma spectroscopy library and an isotope identification program. Using a MATLAB script, the system determines the best bearing (direction) to a particular source by finding the bearing at which the statistical confidence parameter is maximized. The robot then moves to another spot in the room to repeat the process. With multiple bearings to the source, the robot triangulates the position of the source. The search algorithm developed is able to track multiple sources at once, allowing a whole room to be searched efficiently from only a few readings. During the process, some unexpected wave patterns in the system’s statistical confidence as a function of bearing angle were observed. In addition, an extensive characterization of the lower limits of detection of the system was performed.
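The bearings-to-position step can be sketched as a least-squares intersection of bearing rays. This is a generic triangulation sketch, not the project's MATLAB script, and the positions and bearings below are invented:

```python
import math

def triangulate(positions, bearings_deg):
    """Least-squares intersection of bearing rays.
    positions: (x, y) robot locations; bearings_deg: bearing to the source
    measured at each location (degrees, standard math convention)."""
    # Normal equations: sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) a_i,
    # where d_i is the unit bearing direction and a_i the robot location.
    S = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (ax, ay), th in zip(positions, bearings_deg):
        dx, dy = math.cos(math.radians(th)), math.sin(math.radians(th))
        # I - d d^T projects onto the normal of the bearing direction
        m = [[1 - dx * dx, -dx * dy], [-dx * dy, 1 - dy * dy]]
        S[0][0] += m[0][0]; S[0][1] += m[0][1]
        S[1][0] += m[1][0]; S[1][1] += m[1][1]
        b[0] += m[0][0] * ax + m[0][1] * ay
        b[1] += m[1][0] * ax + m[1][1] * ay
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return ((S[1][1] * b[0] - S[0][1] * b[1]) / det,
            (S[0][0] * b[1] - S[1][0] * b[0]) / det)

# Two exact bearings to a source at (5, 5):
print(triangulate([(0, 0), (10, 0)], [45.0, 135.0]))  # ~ (5.0, 5.0)
```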

Design, Synthesis and Testing of Novel Antimalarial Compounds Based Upon a Novel Chemical Lead

Killing over one million people around the world each year, malaria remains a powerful and deadly mosquito-borne disease which cripples many regions around the world. Most of the people who are infected with the disease are young children who do not have the resources to obtain treatment. Additionally, members of the American Armed Forces are deployed to many of these regions and are therefore vulnerable to exposure to the disease. While there are many drugs available for treating this disease, the parasite has quickly built up resistance to almost all of them, rendering them ineffective. There is therefore a need to develop new drugs which will be able to kill even those parasites resistant to existing drugs. To succeed in this, the drug development process should begin from a chemical structure, a “lead”, which is distinct from those in current treatments.

This project began with the selection of such a “lead” compound. From an initial group of over 10,000 compounds, 20 possessing promising antimalarial properties were selected. From these, one was selected as the starting point for new drug development. Selection criteria used in the second round included the extent of anti-malarial activity, the ease of synthesis of the compound, and the likelihood that the compound would be toxic or prone to metabolic transformation.

A series of compounds structurally related to the lead was then designed in order to explore the relationship between chemical structure and anti-malarial activity in this compound class. Each new compound was carefully synthesized using organic reaction techniques followed by purification and then confirmation of structure using modern spectroscopic techniques.

Compounds were then tested by U.S. Army researchers at the Walter Reed Army Institute of Research in Washington, D.C., in order to determine how effective they are at killing the malaria parasite. Compounds that showed promise will be explored further in order to establish whether they have the potential to continue the journey toward becoming a cheap and efficient therapy for people suffering from malaria.

Using Biomechanical Optimization to Interpret Dancers' Pose Selection for a Partnered Spin

Our goal was to determine whether and how expert swing dancers physically optimize their pose for a partnered spin. In a partnered spin, two dancers connect hands and spin as a unit around a single vertical axis. For our purposes, the pose of a couple is determined by the angles of their joints in a two-dimensional plane. These angles were outputs of the optimization and told us the ideal pose for a couple. Analysis included a biomechanical model built in Mathematica and comparisons to live dancers with the use of a motion capture system.

The optimization objective was to maximize angular acceleration by minimizing the resistance to spin while still producing torque. Inputs to this problem were the sizes of the dancers. The model considers only external forces and neglects internal forces. It consists of equations derived from physical principles, such as Newton’s laws and moments of inertia, that govern how people move. Using numerical non-linear optimization, we found the specific pose for each couple that maximizes their angular acceleration. Because size parameters for each couple are inputs to the model, every couple will have a different optimal pose. Each couple’s optimal pose can then be compared to the pose they actually assumed for the spin.
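The trade-off (reach produces torque but also raises the resistance to spin) can be illustrated with a one-parameter toy model solved by grid search. The physics here is a heavily simplified stand-in for the full Mathematica model, and every number is made up:

```python
def angular_acceleration(reach, hand_force=50.0, body_inertia=2.0, arm_mass=4.0):
    """Toy model: torque comes from the partner's hand force applied at the
    couple's reach, while reach also raises the moment of inertia.
    alpha = F*r / (I0 + 2*m*r^2).  All parameter values are invented."""
    torque = hand_force * reach
    inertia = body_inertia + 2 * arm_mass * reach ** 2
    return torque / inertia

# Grid search over reach (0 to 0.6 m), standing in for the numerical
# non-linear optimization used with the real model
best_alpha, best_reach = max((angular_acceleration(r / 100), r / 100)
                             for r in range(61))
print(round(best_reach, 2), round(best_alpha, 2))  # optimum near r = 0.5 m
```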

To obtain motion capture data we used a system that consisted of four video cameras, reflective balls that could be tracked, and software to integrate the different angles of the cameras. The captured data consisted of the three-dimensional location of each of the marked body joints. We used these data to determine the joint angles and thus the pose the couple assumed. The couple’s actual pose was then entered into the model to calculate a predicted angular acceleration. This predicted acceleration was then compared to the optimal acceleration to determine a fraction of optimal for each couple. We hypothesized that expert swing dancers would achieve a higher percentage of their optimal acceleration than beginners.
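The joint-angle step can be sketched as follows. This is an illustrative computation rather than the project's actual pipeline, and the marker names and coordinates are made up:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    using a two-dimensional projection of marker positions."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_angle))

# Hypothetical shoulder-elbow-wrist markers forming a right angle:
print(joint_angle((0, 1), (0, 0), (1, 0)))  # ~90 degrees
```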

How will we continue to keep our troops secure in a hostile and unpredictable environment? Many believe that the future lies in the application of Unmanned Aerial Vehicle (UAV) surveillance to military convoys. This project, in conjunction with the Naval Research Laboratory (NRL), evaluated and modified a current UAV control algorithm to perform a security role for military convoys in urban terrain. The desired end state was to provide the simulated military convoy with constant UAV sensor coverage as the convoy navigated an urban environment.

While the UAV control algorithm had already been shown to be successful in basic simulations, this research used an NRL multi-vehicle simulator to assess the behavior of the same control algorithm under real-world conditions. This included using improved vehicle dynamics and real-world GPS tracks for convoy routes. The control algorithm was evaluated using performance metrics including the distance between UAVs, the distance from each UAV to the convoy, and UAV fuel consumption.
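Two of the named metrics can be computed directly from vehicle positions. A sketch with invented coordinates, not simulator data:

```python
import math

def swarm_metrics(uav_positions, convoy_position):
    """Minimum pairwise UAV separation and each UAV's distance to the convoy.
    Positions are (x, y) in meters; requires at least two UAVs."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    separations = [dist(uav_positions[i], uav_positions[j])
                   for i in range(len(uav_positions))
                   for j in range(i + 1, len(uav_positions))]
    to_convoy = [dist(p, convoy_position) for p in uav_positions]
    return min(separations), to_convoy

# Invented snapshot: two UAVs flanking a convoy
print(swarm_metrics([(0, 0), (300, 400)], (150, 200)))
```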

The NRL control algorithm was tested in the simulation of three operational scenarios involving a UAV swarm following a military ground convoy. A Basic Navigation scenario assumed that the military convoy was mechanized and moved at a constant and fast pace, while a Foot Patrol scenario simulated soldiers walking in an urban environment. Lastly, an Obstacles en Route scenario simulated a real-world military convoy that varied its speed constantly as it encountered roadblocks or other obstructions. For this scenario, a blending method was devised to control the UAV swarm with a combination of rectilinear and loitering forms of the algorithm.

Based on the data taken from the simulations, the UAV control algorithm was modified to provide effective sensor coverage of the convoy in the scenarios. In addition, several blending strategies were created. One new strategy, involving the bearing rate of the convoy relative to the UAVs, provided a more secure, lower-tech form of accurate control than traditional methods. This research identified the limitations of the provided control algorithm, provided vital data necessary for further development of the controller for field tests, and developed a cumulative design process for future NRL control algorithm investigations.

Iris Recognition using Parallel and Sequential Logic in a Reconfigurable Logic Device

Within the last decade, biometrics has opened several new avenues into the automatic verification of personal identity and automatic operator authentication in security systems. Iris recognition, an increasingly popular biometric that measures the colored portion of the eye, demonstrates greater than 99.9% reliability in positively identifying individuals, making it highly desirable for these systems. Current iris recognition algorithms, however, rely on extensive and computationally expensive image processing to segment out the iris from a bitmap image, filter it into information called a template, and compare the resulting template to those of previously enrolled individuals. Thus, most iris recognition algorithms, implemented on general-purpose machines, are not readily portable and have proven difficult to deploy. This project seeks to implement a Ridge Energy Direction (RED) algorithm for iris recognition as an independent system that optimizes image processing computations through the use of field programmable logic hardware to reduce computer overhead and dramatically speed up algorithm execution.
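Template comparison in iris systems is commonly a fractional Hamming distance between binary templates. The sketch below is a generic software illustration under that assumption, not the project's hardware implementation, and the templates are invented:

```python
def hamming_distance(t1, t2):
    """Fractional Hamming distance between two equal-length binary iris
    templates (sequences of 0/1). Lower distance suggests the same iris."""
    assert len(t1) == len(t2)
    diff = sum(b1 != b2 for b1, b2 in zip(t1, t2))
    return diff / len(t1)

# Invented 8-bit templates differing in 2 of 8 positions:
enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 1]
print(hamming_distance(enrolled, probe))  # 0.25
```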

A modern field programmable gate-array (FPGA) has the capability to assume nearly any logic function, from simple binary logic to fully functional microprocessors. Building from the ground up, a specialized system was developed around an FPGA to completely support an iris recognition algorithm. Although not initially any faster than a general-purpose system such as a desktop or laptop computer, this new system is entirely contained on a single chip, giving it a distinct advantage in deploying hardware for iris recognition.

The primary goal of this research is to introduce the RED algorithm, and biometric systems in general, to the new development and evaluation environment of an FPGA. Having a fully contained iris recognition system on a single chip will allow exceptional portability. For example, an iris recognition system could be implemented in a small digital camera as a co-processor that could identify people in pictures taken by that camera. Furthermore, since an FPGA-based system can be reconfigured for any algorithm, it opens up the possibility of further development in parallel hardware architectures for biometric algorithms at large.