Autonomous robots are increasingly working alongside humans in a variety of environments. While simple applications in controlled environments work well with fully autonomous robots and little interaction between human and robot, mission-critical applications in unstructured and uncertain environments require stronger collaboration between human and robot. One such instance occurs in dismounted military operations, in which one or more autonomous robots act as part of a team of soldiers. The performance of the human-robot team depends largely on the interaction between human and robot, more specifically on the communication interfaces between the two. Furthermore, due to the complex and unstructured environments in which dismounted military missions take place, robots need a diverse skill set; therefore, a variety of sensors, robot platform types (e.g., wheeled vs. legged), and other capabilities are needed. The goal of this research was to understand how robot platform type and the visual complexity of the human-robot interface, in particular a Mixed Reality interface, affect cooperative human-robot teaming in dismounted military operations. More specifically, the research objectives were to understand how robot platform type (wheeled vs. legged) impacts the human's perception of robot capability and performance, and to assess how the visual complexity of a Mixed Reality interface affects accuracy and response time for an information reporting task and a signal detection task. The results of this study revealed that increased visual complexity of the Mixed Reality-based human-robot interface improved response time and accuracy for an information reporting task and resulted in a more usable interface. Furthermore, the results indicated that response time and accuracy for a signal detection task did not differ between the high and low visual complexity modes of the interface, likely due to a low task load. Users of the interface in high visual complexity mode reported lower perceived workload and better perceived performance than users of the interface in low visual complexity mode. Moreover, the findings demonstrated that the unique appearance of a biologically inspired legged robot was not enough to produce a difference in perceived performance and trust compared to a more traditional-looking wheeled robot; there was therefore no basis to conclude that the legged robot's unique appearance led users to anthropomorphize it more than the wheeled robot. Additionally, free-response feedback from users revealed that Mixed Reality-based head-mounted displays have the potential to overcome the shortcomings of Augmented Reality-based head-mounted displays and offer a suitable alternative to hand-held displays in dismounted military operations. Finally, this study demonstrated that an increase in the visual complexity of a Mixed Reality-based human-robot interface improves the effectiveness of human-robot interaction, and ultimately human-robot team performance, as long as the additional complexity supports the tasks of the human.

Practically all engineering applications require knowledge of uncertainty. Accurately quantifying uncertainty within engineering problems supports model development, potentially leading to the identification of key risk factors or to cost reductions. Often the full problem requires modeling the behavior of materials or structures from the quantum scale all the way up to the macroscopic scale. Predicting such behavior can be extremely complex, and uncertainty in modeling is often increased by necessary assumptions. We plan to demonstrate the benefits of performing uncertainty analysis on engineering problems, specifically in the development of constitutive relations and the structural analysis of smart materials and adaptive structures. This will be highlighted by a discussion of ferroelectric materials and their domain structure interaction, as well as the viscoelastic and electrostrictive properties of dielectric elastomers.
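As a minimal illustration of the kind of uncertainty analysis described above, the following sketch propagates assumed input uncertainties through a simple structural model by Monte Carlo sampling. The cantilever-beam formula and all distribution parameters are illustrative choices, not values from the work itself.

```python
import random
import statistics

def beam_tip_deflection(force_n, length_m, youngs_pa, inertia_m4):
    """Tip deflection of a cantilever under an end load: d = F L^3 / (3 E I)."""
    return force_n * length_m**3 / (3.0 * youngs_pa * inertia_m4)

def propagate_uncertainty(n_samples=20000, seed=42):
    """Sample the uncertain inputs and summarize the output distribution."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        force = rng.gauss(1000.0, 50.0)    # load: mean 1 kN, 5% std dev (assumed)
        modulus = rng.gauss(200e9, 10e9)   # steel E: mean 200 GPa, 5% std dev (assumed)
        samples.append(beam_tip_deflection(force, 1.0, modulus, 8.33e-6))
    return statistics.mean(samples), statistics.stdev(samples)
```

The spread of the sampled deflections gives a direct, assumption-explicit estimate of how input uncertainty flows through the model, which is the basic mechanism behind identifying the dominant risk factors mentioned above.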

Date Issued

2017

Identifier

FSU_SUMMER2017_Miles_fsu_0071E_14033

Format

Thesis

Title

Driver Behavior in Mixed Connected-Automated and Conventional Vehicle Traffic at a Freeway Merge.

Wireless communication through automated and connected vehicles is an evolving technology. It improves driving conditions, reduces time spent in traffic, and curtails crash occurrences. One of the most challenging areas where these interactions can be most useful is freeway merge ramps. Both the drivers on the mainline and the merging drivers can be uncertain about their decisions at this location. Drivers who want to merge onto the freeway mainline seek an appropriate gap to enter. While the technology of connected and automated vehicles is being promoted, the reality is that for the foreseeable future, traffic will not comprise 100% connected and automated vehicles. In other words, there will be mixed traffic of manually driven and connected/automated vehicles, with various levels of automation in the latter. Capturing driver behavior at merge locations into a freeway with such mixed traffic is useful for understanding and improving safety on the roadways. The driving simulator is a useful device for capturing driver behavior. In this study, scenarios were developed in the driving simulator that allow mixed traffic on the mainline and permit observation of driver behavior while merging from the ramp. Overall, there were three variations in the mixed traffic flow on the mainline freeway: 0%, 50%, and 75% penetration rates.
The freeway traffic was generated by first developing a mixed probability distribution that assumes exponential distributions for the inter-arrival times of manually driven vehicles and a constant headway (uniform distribution) between connected vehicles. The mixed distribution was then used to randomly generate vehicles through Monte Carlo simulation, with assigned headways in the driving simulator for the various connected-vehicle penetration rates. The subject driver's speed along the ramp was monitored, as well as the speeds of the vehicles on the freeway. The gaps between freeway vehicles that were accepted by the subject driver were recorded for the various situations and scenarios. There were a total of 41 participants: 29 young drivers (younger than 65 years) and 12 elderly drivers (65 years and older, among which 2 were between 55 and 65 years old). Three scenarios were presented to the drivers. The first task was to determine headway gap acceptance for the three penetration rates based on the perception of the subject drivers (without driving). The second test involved the subjects actually driving on the ramp and selecting a suitable gap to merge into the freeway traffic at each ramp. From the data collected, the critical gaps were estimated based on perception; the gaps accepted while driving were also tabulated and analyzed. The critical gaps for the young drivers at the 0%, 50%, and 75% penetration rates were 2.9 sec, 1.8 sec, and 1.7 sec, respectively. The critical gaps observed for elderly drivers aged over 65 were 3.5 sec, 2.0 sec, and 1.9 sec, respectively. Based on an Analysis of Variance (ANOVA), there was no evidence of a difference in mean gaps among groups classified by age, gender, and driving experience, in both the perception and actual driving conditions, for the 0% and 50% penetration rates.
The headway gaps accepted by young drivers, by perception and while driving at the 0% penetration rate, were 2.39 sec and 2.35 sec, respectively. The headway gaps accepted by elderly drivers, by perception and while driving at the 0% penetration rate, were 2.4 sec and 2.72 sec, respectively. When the ANOVA was performed between the 0% and 50% penetration rates under driving conditions, substantial variation in the mean accepted headway gaps was observed. The average headway gaps accepted by young drivers were estimated as 2.36 sec and 1.53 sec at the 0% and 50% penetration rates, respectively. For the elderly drivers, the average headway gaps observed were 2.72 sec and 1.55 sec at the 0% and 50% penetration rates, respectively. The results also captured the subject drivers' acceleration and deceleration behavior at the merge ramp. They showed that when aggressive drivers accelerated to match the velocity of the mainline traffic and merged between connected-automated vehicles with the shortest gap, effects were noticed on the mainline traffic, which had to decelerate rapidly. Overall, the subject drivers accepted shorter headway gaps as the penetration rate increased.
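The headway-generation procedure described above (exponential inter-arrival times for manually driven vehicles, a constant headway for connected vehicles, mixed by penetration rate) can be sketched as follows. The specific mean and constant-headway values are assumed for illustration and are not the ones used in the study.

```python
import random

def generate_headways(n_vehicles, penetration, mean_manual_gap=3.0,
                      connected_gap=1.0, seed=7):
    """Draw a sequence of time headways (seconds) for mixed mainline traffic.

    Each vehicle is a connected vehicle with probability `penetration` and
    follows its leader at a fixed headway; otherwise it is manually driven
    and its headway is drawn from an exponential distribution.
    """
    rng = random.Random(seed)
    headways = []
    for _ in range(n_vehicles):
        if rng.random() < penetration:
            headways.append(connected_gap)               # constant (uniform) headway
        else:
            headways.append(rng.expovariate(1.0 / mean_manual_gap))
    return headways
```

Running this for penetration rates of 0, 0.5, and 0.75 yields the Monte Carlo vehicle streams that a driving-simulator scenario could then assign to the mainline.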

Date Issued

2017

Identifier

FSU_SUMMER2017_Chityala_fsu_0071N_14121

Format

Thesis

Title

Development of a Simple Microfluidic Device for Characterizing Chemotaxis of Macrophage in Response to Myelin Basic Protein.

Microfluidic devices are widely used for cell-based analysis, and there is an ongoing need for simpler, more effective, and/or less costly devices than the existing ones for this application. In this study, a simple microfluidic device was fabricated and tested for studying chemotaxis of macrophages. The device was made of polydimethylsiloxane bound to a cell culture dish. It consisted of a millimeter-sized cavum and two arrays of straight channels, 5 µm in width, 6 µm in height, and about two millimeters in length. The channels connected the cavum, in which a chemoattractant was loaded, with the surrounding environment, in which the macrophages were cultured. The device was first tested with a known chemoattractant, fetal bovine serum, and the chemoattractive property of myelin basic protein (MBP) was then studied using the device. The macrophages were found to migrate toward the MBP-loaded cavum in larger quantity and over greater distance than those in the control samples. The results prove the usefulness of the microfluidic device for chemotaxis assays and indicate that MBP is a chemoattractant for the macrophages.

Azobenzene is a photoresponsive polymer that undergoes molecular change under exposure to certain wavelengths of light. This molecular shape change can cause an overall macroscopic shape change in an azobenzene polymer network. This promising photostrictive behavior has a broad range of applications in flow control, robotics, and energy harvesting. The conversion of solar energy directly into mechanical work provides unique capabilities in adaptive structures. In this thesis, stress measurements show that irradiated azo-LCNs experience photochemical and thermomechanical stress. Experimental results show that the stress response depends highly on the range of pre-stress applied and that the threshold pre-stress differs for different polarization directions.

Grid codes impose immunity requirements on the generation systems that are connected to transmission lines. Immunity refers to the generator's capability to ride through abnormal grid conditions. One of the requirements is to remain connected for a certain time when a fault, such as a voltage sag, occurs. During the fault scenario, a generator unit should remain connected for a pre-determined amount of time and also provide reactive power to support the grid voltage. This is called low-voltage ride-through (LVRT). Initially, LVRT requirements were imposed on large generator units, such as wind farms connected to the transmission network; however, due to the increased penetration of distributed generation (DG) in the distribution system, new grid codes extend this requirement to generator units connected to the distribution grid. Due to matured photovoltaic (PV) technology and the decreased price of PV panels, grid-tied PV installations are proliferating in utility grids, which is creating new challenges related to voltage control. In the past, DG such as PV was allowed to trip from the grid when a fault or unbalance occurred and to reconnect within several seconds (sometimes minutes) once the fault had been cleared. With today's high PV penetration, however, the same method cannot be used, because it would further deteriorate power quality and could potentially end in a power blackout.
Different approaches have been considered to fulfill the LVRT requirement in PV systems. A large amount of the literature focuses on the control of the grid-side converter of the PV installation rather than the control of PV operation during the fault, and most control designs applied to the grid side follow classical control methods. Moreover, the effects of the grid fault on the generator side impose a challenge for controlling PV systems, since the quality of the synthesized converter voltages and currents depends on the dc-link power/voltage control. This document proposes a Model-based Predictive Control (MPC) scheme for controlling a two-stage PV system to fulfill LVRT requirements. MPC offers important advantages over traditional linear control strategies, since the MPC cost function can include constraints that are difficult to enforce in classical control. Special attention is given to the implementation of the proposed control algorithms. Simplified MPC algorithms that do not compromise converter performance or the immunity requirement are discussed.
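As a rough sketch of the finite-control-set flavor of MPC commonly applied to power converters, the following toy example predicts the inductor current one step ahead for each candidate converter voltage level and picks the level minimizing a cost that includes an over-current constraint. The single-leg converter model, parameter values, and cost weighting are all illustrative assumptions, not the controller proposed in the thesis.

```python
def predict_current(i_now, v_conv, v_grid, L=10e-3, R=0.5, Ts=100e-6):
    """One-step Euler prediction of inductor current: L di/dt = v_conv - v_grid - R i."""
    return i_now + (Ts / L) * (v_conv - v_grid - R * i_now)

def mpc_step(i_now, i_ref, v_grid, v_dc=400.0, i_max=20.0):
    """Finite-control-set MPC: evaluate each converter voltage level and keep
    the one minimizing tracking error, with over-current as a large penalty."""
    best_v, best_cost = None, float("inf")
    for v_conv in (-v_dc, 0.0, v_dc):            # two-level bridge output states
        i_pred = predict_current(i_now, v_conv, v_grid)
        cost = (i_ref - i_pred) ** 2
        if abs(i_pred) > i_max:                   # constraint folded into the cost
            cost += 1e6
        if cost < best_cost:
            best_v, best_cost = v_conv, cost
    return best_v

def simulate(i_ref=10.0, v_grid=100.0, steps=50):
    """Closed loop: apply the chosen voltage and let the current evolve."""
    i = 0.0
    for _ in range(steps):
        v = mpc_step(i, i_ref, v_grid)
        i = predict_current(i, v, v_grid)
    return i
```

The point of the sketch is the structural one made above: the constraint (here a current limit) sits directly in the cost function being minimized at every step, which is awkward to express in a classical linear controller.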

Date Issued

2017

Identifier

FSU_SUMMER2017_DiazFranco_fsu_0071E_14045

Format

Thesis

Title

Modeling and Application of Effective Channel Utilization in Wireless Networks.

As a naturally scarce resource in wireless networks, radio spectrum is a major investment in network deployment. How to improve the channel utilization (CU) of the spectrum is a challenging topic in recent research. In a network environment, the utilization of a channel is measured by the effective CU (ECU), i.e., the time the channel is effectively used for transmission or sensed busy, over its total operation time. However, existing work does not provide a valid model for ECU. We investigate the relationship between ECU and the interference from other transmitting nodes in a wireless network, as well as from potentially malicious interfering sources. By examining the transmission times and co-transmission time ratios between two or more interferers, we propose a new model based on the channel occupation time of all nodes in a network. The model finds its mathematical foundation in set theory. By eliminating the overlapping transmission time intervals instead of simply adding the transmission times of all interferers together, the model obtains the expected total interference time by properly combining the transmission times of all individual nodes along with the times when two or more nodes transmit simultaneously.
By dividing the interferers into groups according to the strength of their received interference power at the node of interest, less significant interfering signals can be ignored to reduce complexity when investigating real scenarios. The model leads to a new detection method for jamming attacks in wireless networks, based on a criterion that combines ECU and CU. In the experiments, we find a strong connection between ECU and the received interference power and time. In many cases, strong and frequent interference is accompanied by a decline in ECU, though the descending slope may be steep or flat. When the decrease in ECU is not significant, a sharp drop in CU can be observed instead. Therefore, the two metrics ECU and CU, when properly combined, constitute an effective measurement for detecting strong interference. In addition, relating our approach to other jamming detection methods in the literature, we build a mathematical connection between the new jamming detection conditions and the Packet Delivery Ratio (PDR), which has been proven effective by previous researchers. Thus, the correlation between the new criteria and PDR supports the validity of the former by relating it to a tested mechanism. Both the ECU model and the jamming detection method are thoroughly verified with OPNET through simulation scenarios. The experiment scenarios are described with configuration data and collected statistical results. In particular, the radio jamming detection experiments simulate a dynamic radio channel allocation (RCA) module with a user-friendly graphical interface, through which the interference, the jamming state, and the channel switching process can be monitored.
The model can be further applied to other problems, such as global performance optimization based on the total ECU of all nodes in a wireless communications environment, because ECU treats one node's transmission as interference for other nodes using the same channel; this is planned as future work. We would also like to compare the method's effectiveness with other jamming detection methods through more extensive experiments.
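The core set-union idea of the ECU model, counting overlapping transmission intervals only once rather than summing them, can be sketched as follows. The interval representation and the ECU ratio shown here are a simplified reading of the definition above, not the thesis's exact formulation.

```python
def union_length(intervals):
    """Total length covered by (start, end) intervals, counting overlapping
    transmissions only once (set-union of the busy time)."""
    total = 0.0
    last_end = float("-inf")
    for start, end in sorted(intervals):
        if start > last_end:          # disjoint interval: count it whole
            total += end - start
            last_end = end
        elif end > last_end:          # overlap: count only the new portion
            total += end - last_end
            last_end = end
    return total

def effective_channel_utilization(own_tx, interferer_tx, horizon):
    """ECU = (time transmitting or sensing the medium busy) / total time,
    where busy time is the union of all nodes' transmission intervals."""
    return union_length(own_tx + interferer_tx) / horizon
```

Naively summing the two intervals (0, 2) and (1, 3) would give 4 seconds of interference out of a 3-second span; the union correctly yields 3, which is the over-counting the model eliminates.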

Date Issued

2017

Identifier

FSU_SUMMER2017_Ng_fsu_0071E_14083

Format

Thesis

Title

Manipulation of Potential Energy Surfaces of Binuclear Platinum Complexes and Their Application as Viscosity Sensor.

Photoinduced structural change (PSC) is a fundamental excited-state dynamic process in chemical and biological systems, e.g., the photoinduced flattening distortion of Cu(II) complexes [1] and the PSCs of binuclear Pt(II) complexes [2,3]. This process is highly dependent on the configuration of molecular excited-state potential energy surfaces (PESs). However, due to the lack of guidelines and approaches for designing excited-state PESs, precise manipulation of PSC processes is still very challenging. In this project, a series of rationally designed butterfly-like phosphorescent binuclear platinum complexes were synthesized with well-controlled PESs and tunable dual emission at room temperature. We demonstrated our capability to manipulate PESs in two ways. First, we introduced the steric bulkiness of both the cyclometalated ligands and the pyrazolate bridging ligands to control the energy barrier of the PSC transition. Based on the Bell-Evans-Polanyi principle, which describes a chemical reaction between two energy minima on the first triplet excited-state PES, we reveal a simple method to engineer the dual emission of molecular systems by manipulating the PES, and therefore the PSC, to achieve desired molecular properties. Second, we synthetically controlled the electronic structure of the cyclometalating ligand and the steric bulkiness of the pyrazolate bridging ligand at the same time to realize precise manipulation of the PESs.
Color tuning of the dual emission from blue/red to green/red and red/deep red has been achieved for these phosphorescent molecular butterflies, which have two well-controlled energy minima on their PESs. The environmentally dependent photoluminescence of these molecular butterflies enabled their application as self-referenced luminescent viscosity sensors.

Date Issued

2017

Identifier

FSU_SUMMER2017_Zhou_fsu_0071N_13904

Format

Thesis

Title

Leg Specialization Control: Deriving Control from the Perspective of Limb Function.

Many leg controllers and gaits have been designed directly with lower-level parameters. This approach can lead to very high-performance gaits, but can also lead to platforms highly tuned for one particular application with drastically reduced performance elsewhere. The Leg Specialization Control (LSC) gait strategy presented here demonstrates an alternative approach. Designing controllers from the perspective of limb function allows for adaptation to various environments, and here has produced a high-performing gait capable of running on a variety of surfaces.

Internet of Things (IoT) systems are becoming a popular component of every smart system. Many people intend to develop various IoT systems, which could be a smart socket that can be controlled remotely and tracks electricity consumption to save energy, or a home security system that combines several sensors and covers a large area. The goal of this thesis was to introduce a method to construct an IoT system that can monitor different parameters. The design of this project also focused on wireless interaction in order to make the system more perceptual. The design of the system was modified several times, including a change from Ethernet to Wi-Fi. Ultimately, it provides an effective method for monitoring a building system, covering temperature, humidity, photo intensity, the movement of objects, etc. The final design fulfills the fundamental goals, and the IoT system includes a visualization web page with both real-time data monitoring and real-time charting. This thesis gives a thorough overview of how to build one's own IoT system.

Water conservation, wastewater treatment regulations, and the use of reclaimed/reuse water supplies have been on a collision course since society's demand began outstripping the supply of fresh water. As potable water demand has risen, engineers have looked toward Waste Water Treatment Plants (WWTPs) to alleviate the stress placed upon aquifers and surface water sources. Direct Potable Reuse (DPR), Indirect Potable Reuse (IPR), and reuse/reclaimed systems all conserve water; however, they also unintentionally conserve pollutants. The widespread use of WWTP effluent for conservation requires additional treatment options, such as activated carbon treatment, to further treat plant effluent. Powdered Activated Carbon (PAC) has shown promise as a treatment method to reduce pollutants, but challenges remain in effectively applying PAC to a wastewater stream. Of particular concern is the application of PAC to existing facilities in which the existing hydraulic profile does not allow the use of the large sedimentation tanks normally associated with PAC use in potable water applications. Cloth Media Filtration (CMF) is an existing treatment process that has seen significant penetration of the WWTP market in the United States since being introduced in 1991. While mostly targeted at tertiary treatment, alternate processes such as primary filtration and stormwater treatment are now being pursued.
It is suspected that CMF will capture and retain PAC, so the two processes could be combined to produce an energy-friendly and cost-competitive approach to pollutant reduction. This research examines the feasibility of applying PAC within existing hydraulic profiles by using inline injection, followed by its quick removal with CMF. One of the most challenging aspects of PAC usage is its removal, which can be facilitated by a commercial CMF. A bench-sized cloth media filter was constructed and then operated in a side-stream manner with a real-world wastewater treatment train. The results show excellent performance of the designed CMF: the removal of two commercially available PACs was more than 70% within a short time under the existing hydraulic conditions of the plant. Additionally, using the backwash rates and solids removal rates, it was determined that CMF performs as an acceptable means of PAC removal in a WWTP.

Date Issued

2017

Identifier

FSU_SUMMER2017_Madden_fsu_0071N_14114

Format

Thesis

Title

Experimental Study of Controlled Surface Imperfection Effects on Vortex Asymmetry of Conical Bodies at High Angles of Incidence.

At high angles of attack, asymmetric vortices form on the leeward side of flight vehicles with pointed forebodies due to random surface imperfections near the forebody apex. These vortices induce adverse side forces and yaw moments. The forces generated are too large to be controlled using conventional control surfaces and can result in flight instability and loss of control. Although many studies have reported that random surface imperfections trigger vortex asymmetry, there is a lack of understanding of how these imperfections directly correlate to the variation of side force with roll orientation. The present study is aimed at gaining better insight into the underlying flow physics of vortex asymmetry. This is accomplished by performing flow field measurements using Particle Image Velocimetry and force measurements using a six-component strain gage balance on an unpolished and a highly polished 12° semi-apex-angle cone at subsonic speeds. Measurements were carried out with and without the implementation of controlled surface imperfections. All experiments were performed at a fixed Reynolds number of 0.3 × 10^6 based on the base diameter of the cone model. The force measurements indicate that the vortices caused by the random surface imperfections are highly dependent on the magnitude of surface roughness. The results show that the side force was significantly reduced, and was relatively less dependent on roll orientation, for the polished cone.
Flow field results show that the ratio of imperfection height to the local cross-flow boundary layer thickness was critical in influencing vortex location and growth. Furthermore, the region of incipient boundary layer separation was highly sensitive to the controlled imperfections.

One of the main goals of robotics research is to give physical platforms intelligence, allowing the platforms to act autonomously with minimal direction from humans. Motion planning is the process by which a mobile robot plans a trajectory that moves the robot from one state to another. While there are many motion planning algorithms, this research focuses on Sampling Based Model Predictive Optimization (SBMPO), a motion planning algorithm that allows for the generation of trajectories that are not only dynamically feasible, but also efficient in terms of a user-defined cost function (specifically in this research, distance traveled or energy consumed). To accomplish this, SBMPO uses the kinematic, dynamic, and power models of the robot. The kinematic, dynamic, and power models of a skid-steered robot depend on the type and inclination of the terrain over which the robot is traversing. Previous research has successfully used SBMPO to plan trajectories on different inclinations and terrain types, but with the terrain type and inclination held constant over the trajectory. This research extends the prior work to plan trajectories where the terrain type changes over the trajectory and where the robot has the option to go over or around hills, situations extremely common in the real-world environments encountered in military and search and rescue operations. Furthermore, this research documents the design and implementation of a 3D visualization environment which allows for the visualization of the trajectory generated by the planner without having a robot follow the trajectory in a physical environment.
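SBMPO, as described above, combines graph search over sampled control inputs with model propagation and a user-defined cost (distance or energy). The following is a minimal sketch of that idea, not the authors' implementation: the point-robot model, the fixed sampling set, and the goal tolerance below are illustrative assumptions.

```python
import heapq
import math

def sbmpo_sketch(start, goal, model, sample_inputs, cost_fn, heuristic,
                 goal_tol=0.5, max_iters=5000):
    """A*-style search whose successors come from sampling control inputs
    and propagating them through a (kinematic/dynamic) robot model."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    visited = set()
    for _ in range(max_iters):
        if not frontier:
            break
        _, g, state, path = heapq.heappop(frontier)
        if math.dist(state, goal) < goal_tol:
            return path, g                       # reached the goal region
        key = tuple(round(s, 1) for s in state)  # coarse grid prunes revisits
        if key in visited:
            continue
        visited.add(key)
        for u in sample_inputs():                # sampled control inputs
            nxt = model(state, u)                # dynamically feasible successor
            g2 = g + cost_fn(state, nxt, u)      # e.g. distance or energy
            heapq.heappush(frontier,
                           (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt]))
    return None, math.inf

# Toy point-robot example: inputs are unit displacements, cost is distance.
inputs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
path, cost = sbmpo_sketch(
    start=(0.0, 0.0), goal=(3.0, 3.0),
    model=lambda s, u: (s[0] + u[0], s[1] + u[1]),
    sample_inputs=lambda: inputs,
    cost_fn=lambda s, n, u: math.dist(s, n),
    heuristic=lambda s, g: math.dist(s, g))
```

Swapping the `model` and `cost_fn` for terrain-dependent dynamic and power models is what lets the same search trade off going over versus around a hill.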

High-speed impinging jets are often generated by the propulsive systems of aerospace launch vehicles and tactical aircraft. In many instances, the presence of these impinging jets creates a hazard for flight operations personnel due to the extremely high noise levels and unsteady loads produced by fluid-surface interaction. In order to effectively combat these issues, a fundamental understanding of the flow physics and dominant acoustic behavior is essential. There are inherent challenges in performing such investigations, especially with the need to simulate the flowfield under realistic operational conditions (temperature, Mach number, etc.) and in configurations that are relevant to full-scale application. A state-of-the-art high-temperature flow facility at Florida State University has provided a unique opportunity to experimentally investigate the high-speed impinging jet flowfield at application-relevant conditions. Accordingly, this manuscript reports the findings of several experimental studies on high-temperature supersonic impinging jets in multiple configurations. The overall objective of these studies is to characterize the complex relationship between the hydrodynamic and acoustic fields. A fundamental parametric investigation has been performed to document the flowfield and acoustic characteristics of an ideally-expanded supersonic air jet impinging onto a semi-infinite flat plate at ambient and heated jet conditions. The experimental program has been designed to span a widely-applicable geometric parameter space, and as such, an extensive database of the flow and acoustic fields has been developed for impingement distances in the range 1d to 12d, impingement angles in the range 45 degrees to 90 degrees, and jet stagnation temperatures from 289 K to 811 K (TTR = 1.0 to 2.8). Measurements include point-wise mean and unsteady pressure on the impingement surface, time-resolved shadowgraphy of the flowfield, and fully three-dimensional near-field acoustics. Aside from detailed documentation of the flow and acoustic fields, this work aims to develop a physical understanding of the noise sources generated by impingement. Correlation techniques are employed to localize and quantify the spatial extent of broadband noise sources in the near-impingement region and to characterize their frequency content. Additionally, discrete impingement tones are documented for normal and oblique incidence angles, and an empirical model of the tone frequencies has been developed using velocity data extracted from time-resolved shadowgraphy together with a simple modification to the conventional feedback formula to account for non-normal incidence. Two application-based studies have also been undertaken. In simulating a vertical take-off and landing aircraft in hover, the first study of a normally-impinging jet outfitted with a lift plate characterizes the flow-acoustic interaction between the high-temperature jet and the underside of an aircraft and documents the effectiveness of an active flow control technique known as 'steady microjet injection' to mitigate high noise levels and unsteady phenomena. The second study is a detailed investigation of the jet blast deflector/carrier deck configuration aimed at gaining a better understanding of the noise field generated by a jet operating on a flight deck. The acoustic directionality and spectral characteristics are documented for a model-scale carrier deck with particular focus on locations that are pertinent to flight operations personnel.
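The conventional feedback formula mentioned above sets the tone period equal to one trip around the feedback loop: downstream convection of a shear-layer disturbance plus upstream travel of the acoustic wave. A minimal sketch of the standard normal-incidence form (the distance and velocities in the example are illustrative assumptions, not values from the study):

```python
def impingement_tone_freq(mode, h, u_c, a):
    """Conventional impingement-tone feedback formula:
    mode / f = h / u_c + h / a,
    where mode is the integer feedback mode number, h the nozzle-to-plate
    distance, u_c the convective speed of shear-layer structures, and a
    the ambient speed of sound."""
    return mode / (h / u_c + h / a)

# Illustrative numbers: h = 0.05 m, u_c = 180 m/s, a = 340 m/s.
f1 = impingement_tone_freq(1, 0.05, 180.0, 340.0)
f2 = impingement_tone_freq(2, 0.05, 180.0, 340.0)
```

The study's modification for oblique incidence adjusts the path lengths in this loop; the mode frequencies remain integer multiples of the fundamental, which is what makes the formula easy to check against measured tone spacing.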

Date Issued

2017

Identifier

FSU_SUMMER2017_Worden_fsu_0071E_13997

Format

Thesis

Title

Characterization of the Flow-Field for Dual Normally Impinging Axi-Symmetric Jets.

In this study, the flow and acoustic field characteristics of dual high-speed axi-symmetric impinging jets are examined. Initially, the short takeoff and vertical landing (STOVL) facility was redesigned by adding a second jet to the existing model, thereby achieving a dual jet configuration. This modified facility was designed to simulate aircraft hover in proximity to the ground. Emphasis is placed on the complex behavior of the jets as the nozzle pressure ratio (NPR) is varied to produce over-expanded, ideally-expanded and under-expanded jet flows. Two nozzle configurations were chosen to simulate dual impinging jets: 1) two converging nozzles (design Mach number, Md = 1.00) and 2) a converging nozzle (Md = 1.00) and a converging-diverging (CD) nozzle (Md = 1.50). The experimental results described in this thesis include shadowgraph flow visualization, surface pressure measurements, and near-field acoustic measurements. Shadowgraph flow visualization was used to observe the acoustic field and the coupling between the dual jets for various NPR combinations. Mean surface pressure measurements were obtained for impinging jet configurations to analyze the jet behavior for ground plane separations ranging from x/D = 2 to 10. These measurements provided information regarding the footprint of the flow-field, particularly the fountain flow behavior. It was found that there is a shift in the fountain flow region which occurs when the NPR of one jet is substantially higher than that of the other jet. Unsteady pressure measurements and near-field acoustic measurements were used to investigate the presence of a feedback loop that occurs, under certain conditions, for both free and impinging jets. The presence of tones, either screech or impingement, was clearly evident from the spectral peaks in the near-field noise spectra. When such tones are present, the corresponding flow-field images show strong acoustic waves.

Date Issued

2017

Identifier

FSU_SUMMER2017_Harmon_fsu_0071N_14049

Format

Thesis

Title

Analysis of Optimization Processes for Solid State Fabrication of Olivine Cathode Materials.

The lithium-ion battery, developed since the 1980s, has become pivotal to our energy needs. With the need for a shift to renewable energy and increased use of portable devices, energy storage has become a very important aspect of modern-day life and technology. In this thesis, optimization techniques for solid state calcination of lithium olivine battery materials are characterized and analyzed. A brief introduction to lithium-ion batteries is given, and the chemistry and physics of the materials are studied in detail. Emphasis is placed on the olivine structure; industrially utilized synthesis methods and the performance of olivine lithium-ion batteries are also discussed in detail. Olivine-structure LiFePO₄ (LFP) was synthesized via solid state processes, using Li₂CO₃, NH₄H₂PO₄, FeC₂O₄·H₂O and C₁₂H₂₂O₁₁ as precursor materials. The effects of calendering in terms of charge/discharge capacity, cycle life performance, surface morphology, and AC impedance were analyzed. The resulting LFP electrode was divided in two parts: Part A was left as is and Part B was calendered. The calendered electrode exhibited lower impedance under electrochemical impedance testing. The calendered electrode also exhibited a higher discharge capacity of about 130 mAh/g at 0.1C compared to the as-is electrode with a discharge capacity of about 120 mAh/g. Olivine-structure LiMnPO₄ (LMP) was also synthesized via solid state processes, using Li₂CO₃, NH₄H₂PO₄, MnCO₃ and C₁₂H₂₂O₁₁ as precursor materials. A comparison of the carbon addition process was made by adding sucrose to the initial precursor mix and carbon black at the later stages of fabrication. The three-step carbon addition exhibited the highest specific capacity of about 72 mAh/g, the one-step carbon addition possessed the lowest capacity of about 45 mAh/g, while the two-step process had a capacity of about 65 mAh/g.

Carbon nanotubes (CNTs) are known to exhibit outstanding mechanical, electrical, thermal, and coupled electromechanical properties. CNTs can be employed towards the design of an innovative strain sensor with enhanced multifunctionality due to their load carrying capability, sensing properties, high thermal stability, and outstanding electrical conductivity. All these features indicate the prospect of using CNTs in a very wide range of applications, for instance, highly sensitive resistance-type strain/force sensors, wearable electronics, flexible microelectronic devices, robotic skins, and in-situ structural health monitoring. CNT-based strain sensors can be divided into two different types: individual CNT-based strain sensors and ensemble CNT-based strain sensors, e.g. CNT/polymer nanocomposites and CNT thin films. In contrast to individual CNT-based strain sensors with a very high gauge factor (GF), e.g. ~3000, ensemble CNT-based strain sensors exhibit very low GFs; e.g. for a SWCNT thin film strain sensor, the GF is ~1. This research discusses the mechanisms and the optimizing principles of a SWCNT thin film piezoresistive sensor, and provides an experimental validation of the numerical/analytical investigations. The dependence of the piezoresistivity on key parameters like alignment, network density, bundle diameter (effective tunneling area), and SWCNT length is studied. The tunneling effect is significant in SWCNT thin films showing higher degrees of alignment, due to greater inter-tube distances between the SWCNTs as compared to randomly oriented SWCNT thin films. It can be concluded that SWCNT thin films featuring higher alignment would have a higher GF. On the other hand, the use of a sparse network density comprising aligned SWCNTs can likewise intensify the tunneling effect, which can result in a further increase in the GF. In addition, it is well known that percolation is greatly influenced by the geometry of the nanotubes, e.g. bundle diameter and length. A study on the influence of the bundle diameter of SWCNTs on the piezoresistive behavior of mechanically drawn SWCNT thin films showed the best performance with an improved GF of ~10 when compared to randomly oriented SWCNT thin films with a GF of ~1. The non-linear piezoresistivity of the mechanically drawn SWCNT thin films is considered to be the main mechanism behind the high strain sensitivity. Furthermore, information about the average length and length distribution is essential when examining the influence of individual nanotube length on the strain sensitivity. With that in mind, we use our previously developed preparative ultracentrifuge method (PUM), and our newly developed gel electrophoresis and simultaneous Raman and photoluminescence spectroscopy (GEP-SRSPL), to characterize the average length and length distribution of SWCNTs, respectively.
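The gauge factors quoted above (GF ~1 for random films, ~10 for drawn films, ~3000 for individual CNTs) all follow the standard definition GF = (ΔR/R₀)/ε. A minimal sketch of that definition; the resistance and strain values in the example are illustrative:

```python
def gauge_factor(r0, r, strain):
    """Piezoresistive gauge factor: GF = (delta_R / R0) / strain,
    where r0 is the unstrained resistance, r the strained resistance,
    and strain the applied mechanical strain (dimensionless)."""
    return ((r - r0) / r0) / strain

# A 1% resistance change at 0.1% strain corresponds to GF = 10,
# comparable to the mechanically drawn SWCNT films discussed above.
gf = gauge_factor(100.0, 101.0, 0.001)
```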

A significant responsibility of officials involved in transportation planning is ensuring people's accessibility to critical facilities such as multi-modal terminals and emergency shelters. This challenging task depends on the available transportation infrastructure as well as the overall population, traffic, roadway and regional characteristics. Such planning takes on additional complexity when aging populations are considered, because any extra time they incur reaching these facilities can be especially confounding in light of their potential health and other safety concerns. As such, there is a need for state/federal transportation plans to have a transportation assessment component that specifically focuses on the accessibility of aging people ('the aging population' can be thought of as those people aged 65+ in this study) to critical facilities. To accomplish this goal, this study first describes a Geographical Information Systems (GIS)-based methodology for measuring the aging population-focused accessibility to multi-modal facilities in Florida. Spatially detailed population block- and county-based accessibility scores are calculated with respect to key intermodal facility types (airports, bus stations, and railway and ferry stations), and visually assessed via GIS maps. Second, a spatial optimization model is presented which focuses on maximizing the accessibility of aging populations to emergency shelters. For this purpose, a p-median optimization model is proposed in order to minimize the transportation cost (travel time or roadway network distance between the origins, i.e. centroids of population blocks, and the destinations, i.e. emergency shelters) in the transportation network, thereby providing maximum accessibility for aging adults to the emergency shelters. In this context, different transportation costs are used: (a) roadway network distance, (b) free flow travel time, and (c) congested travel time. This model is also extended to a capacitated p-median model with hubs, which makes it possible to conduct an extensive evaluation of possible intermediate hub locations that can have a significant effect on the accessibility of those shelters. The knowledge obtained from this accessibility analysis can contribute to the development of more reliable aging population-focused transportation plans, as the analysis points to specific areas where accessibility could be improved as well as candidate locations that can serve as additional emergency shelters and intermediate hubs.
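The p-median model above picks p facility sites so that the total cost of assigning every demand point to its nearest chosen site is minimized. A brute-force sketch for tiny instances (the cost matrix is an illustrative stand-in for the travel-time or network-distance costs; realistic instances require an integer-programming or heuristic solver, not enumeration):

```python
from itertools import combinations

def p_median(cost, p):
    """Brute-force p-median: choose p candidate sites (columns of `cost`)
    minimizing the summed cost of serving each demand point (row) from
    its nearest chosen site. cost[i][j] = travel cost from demand point
    i to candidate site j. Returns (chosen site indices, total cost)."""
    n_sites = len(cost[0])
    best_sites, best_total = None, float("inf")
    for sites in combinations(range(n_sites), p):
        total = sum(min(row[j] for j in sites) for row in cost)
        if total < best_total:
            best_sites, best_total = sites, total
    return best_sites, best_total
```

Capacities and intermediate hubs, as in the extended model, add constraints on how many demand points a site may serve and allow demand to route through a hub before reaching a shelter.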

The State of Florida acquires over 300 cutaway buses every year. The increasing popularity of such buses has raised concerns about passenger safety and the overall crashworthiness of this transportation mode. The dimensions of cutaway buses and their two-stage manufacturing process have exempted them from safety standards which were developed for smaller passenger cars as well as for large coaches. To fill this gap, cutaway bus manufacturers try to demonstrate the strength of their bus roof structures by using the FMVSS 220 standard, which follows conservative quasi-static load tests for school buses in the US. However, a more advanced, dynamics-based safety standard, Regulation 66, was developed in Europe. It is based on a dynamic rollover test which more closely resembles an actual rollover accident. A cutaway bus is placed on a tilt table 800 mm above a concrete slab. The bus is tilted until it falls and impacts the concrete deck, and the deformation of the sidewalls is measured in order to check if there is any intrusion into a so-called 'survival space'. This standard was endorsed by 44 countries through a United Nations resolution. However, the Regulation 66 standard does not specify all the parameters regarding the rollover test. From multiple tests it can be observed that the friction between the vehicle and the concrete slab being impacted by the bus has an influence on the outcomes of the experiment and contributes greatly to either a positive or negative assessment of the crashworthiness of a tested vehicle. This Master's thesis focuses on the friction parameters between the impacting cutaway bus and a concrete slab used in the Regulation 66 standard. Due to the dynamic nature of the experiment, the impact of the bus exerts a high normal force on the concrete slab. Together with the uneven and non-standard geometry of the elements in contact with the concrete deck, this means the standard coefficient of friction found in the literature or obtained using standard tests may not hold. The proper assessment of this coefficient is important since many rollover tests are carried out numerically using Finite Element Methods. The use of numerical analysis reduces the cost of an expensive full-scale rollover test. However, it requires verified and validated parameters in order to consider the results trustworthy. The experimental part of this thesis consists of designing and carrying out experiments to evaluate the coefficient of friction between an impacting cutaway bus and a concrete slab. The results from the experiments are incorporated into the explicit computer code LS-DYNA, which is used for numerical analysis of the cutaway buses. The final outcome of this thesis is a validated coefficient of friction for use in Finite Element Analysis, which will lead to improvement of the Finite Element models and will be used to check the influence of the coefficient of friction on vehicle structure deformation (Deformation Index) during rollover accidents.

Underwater vehicles suffer from reduced maneuverability with conventional lifting appendages due to the low velocity of operation. Circulation control offers a method to increase maneuverability independent of vehicle speed. However, with circulation control come additional noise sources, which are not well understood. To better understand these noise sources, a modal-based prediction method is developed, potentially offering a quantitative connection between flow structures and far-field noise. This method involves estimation of the velocity field, surface pressure field, and far-field noise, using only non-time-resolved velocity fields and time-resolved probe measurements. Proper orthogonal decomposition, linear stochastic estimation and Kalman smoothing are employed to estimate time-resolved velocity fields. Poisson's equation is used to calculate time-resolved pressure fields from velocity. Curle's analogy is then used to propagate the surface pressure forces to the far field. This method is developed on a direct numerical simulation of a two-dimensional cylinder at a low Reynolds number (150). Since each of the fields to be estimated is also known from the simulation, a means of obtaining the error from using the methodology is provided. The velocity estimation and the simulated velocity match well when the simulated additive measurement noise is low. The pressure field suffers due to a small domain size; however, the surface pressure estimates fare much better. The far-field estimation contains similar frequency content with reduced magnitudes, attributed to the exclusion of the viscous forces in Curle's analogy. In the absence of added noise, the estimation procedure performs quite well for this model problem. The method is tested experimentally on a 650,000 chord-Reynolds-number flow over a 2-D, 20% thick, elliptic circulation control airfoil. Slot jet momentum coefficients of 0 and 0.10 are investigated. Particle image velocimetry, unsteady pressure and phased-acoustic-array data are acquired simultaneously in an aeroacoustic wind-tunnel facility. The velocity field estimation suffers due to poor correlation with the unsteady pressure data, especially in the 0.10 momentum coefficient case. The prediction without slot jet blowing matches single microphone measurements within 0-10 dB over the frequency range of interest, while the prediction with the jet active is quite poor and differs from measurements by as much as 35 dB. Suggestions for improvement of the proposed method are offered. Data from the acoustic array are then investigated. Single microphone spectra are obtained, and it is shown that background noise is significant. In order to circumvent this problem, beamforming is employed. The primary sources of background noise are the tunnel collector and jet/sidewall interaction. DAMAS is employed to remove the effects of the array point spread function. Spectra are acquired by integrating the DAMAS result over the source region. The resulting DAMAS spectral levels are significantly below single microphone levels. A scaling analysis is performed on the processed array data. With a constant free-stream velocity and a varying jet velocity, the data scale as M⁶. If the momentum coefficient is held constant and the free-stream velocity is varied, the data scale as M⁷.
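The estimation chain described above (POD + linear stochastic estimation + Kalman smoothing) hinges on mapping a time-resolved probe signal to POD modal coefficients, which are otherwise known only at the non-time-resolved PIV snapshots. A heavily reduced, single-probe, single-mode sketch of the linear stochastic estimation step (illustrative only; the study uses multiple probes, many modes, and a Kalman smoother on top):

```python
def lse_gain(probe_train, coeff_train):
    """Least-squares gain c minimizing sum (a_i - c * p_i)^2, fitted on
    synchronized training snapshots, so a POD coefficient a(t) can later
    be estimated from a time-resolved probe signal p(t) as a_hat = c * p."""
    num = sum(p * a for p, a in zip(probe_train, coeff_train))
    den = sum(p * p for p in probe_train)
    return num / den

def estimate_snapshot(p, mode_shape, c):
    """One estimated velocity snapshot: u(x, t) ~ a_hat(t) * phi(x)."""
    return [c * p * phi for phi in mode_shape]
```

In the full method, the estimated time-resolved velocity then feeds Poisson's equation for pressure, and the resulting surface pressures feed Curle's analogy.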

This dissertation describes the propagation of near-atmospheric nitrogen gas that rushes into a liquid-helium-cooled vacuum tube after the tube suddenly loses vacuum. The loss-of-vacuum scenario resembles accidental venting of atmospheric air to the beam-line of a superconducting radio frequency particle accelerator and is investigated to understand how, in the presence of condensation, the in-flowing air will propagate in such a geometry. In a series of controlled experiments, room temperature nitrogen gas (a substitute for air) at a variety of mass flow rates was vented to a high vacuum tube immersed in a bath of liquid helium. Pressure probes and thermometers installed along the length of the tube measured, respectively, the tube pressure and the tube wall temperature rise due to gas flooding and condensation. At high mass in-flow rates a gas front propagated down the vacuum tube, but with a continuously decreasing speed. Regression analysis of the measured front arrival times indicates that the speed decreases nearly exponentially with the travel length. At low enough mass in-flow rates, no front propagated in the vacuum tube. Instead, the in-flowing gas steadily condensed over a short section of the tube near its entrance and the front appeared to 'freeze out'. An analytical expression is derived for the gas front propagation speed in a vacuum tube in the presence of condensation. The analytical model qualitatively explains the front deceleration and flow freeze-out. The model is then simplified and supplemented with condensation heat/mass transfer data, again finding that the front decelerates exponentially with distance from the tube entrance. Within the experimental and procedural uncertainty, the exponential decay length-scales obtained from the front arrival time regression and from the simplified model agree.
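The regression mentioned above, in which the front speed decays nearly exponentially with travel length, can be posed as an ordinary least-squares fit of ln v against x, with the decay length read off the slope. A minimal sketch on synthetic data (the speeds and decay length below are illustrative, not the measured values):

```python
import math

def fit_exponential_decay(x, v):
    """Fit v(x) = v0 * exp(-x / L) by least squares on ln v vs x.
    Returns (L, v0): the decay length and the extrapolated entrance speed."""
    n = len(x)
    y = [math.log(vi) for vi in v]                 # linearize the model
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx                              # slope of ln v vs x
    return -1.0 / slope, math.exp(ybar - slope * xbar)

# Synthetic front-speed data with v0 = 100 m/s and decay length L = 2 m.
xs = [0.0, 1.0, 2.0, 3.0]
vs = [100.0 * math.exp(-xi / 2.0) for xi in xs]
L, v0 = fit_exponential_decay(xs, vs)
```

In practice the speeds come from differentiating measured front arrival times at the sensor stations, so the fit also absorbs measurement noise that this clean example lacks.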

Traditional light emitting diodes (LEDs) involve a complicated device structure with multiple layers stacked over one another. Such a complex, multilayered architecture restricts the application of diverse fabrication techniques. Earth-abundant organometal halide perovskites (Pero) have attracted wide attention for their appealing optoelectronic properties, low cost and solution processability, which make them ideal candidates for large-size photovoltaic and LED applications. The objective of this thesis work is to fabricate a Pero LED with uniform surface morphology, eliminating the multilayers with the help of a Pero/polyethylene oxide (PEO) composite thin film. Because of the simplicity of the device architecture, this novel approach has the potential to avoid many of the difficulties involved in the fabrication of Pero LEDs. Preliminary results show a working device achieved by spin coating a thin film of the Pero/PEO composite on ITO/glass, serving as a bottom electrode, with In/Ga as the top electrode. Furthermore, fully printable and flexible Pero LEDs can be developed from this approach, which can be scaled to large commercial roll-to-roll manufacturing.

Lithium-ion capacitor (LIC) performance is greatly influenced by the operating temperature. Many cell design factors, such as electrolyte formulation and electrode material composition, can determine such performance. The standards for today's commercial LICs do not cover the temperatures needed for extreme-temperature applications. Research was completed to develop new electrolytes for wide-temperature-range applications, and along the way the side effects of lithium plating and stripping in anode materials were explored. The performance metrics used for the LICs were capacity, capacitance and ESR, cycle life retention, and electrochemical impedance spectroscopy (EIS). Wide-temperature-range electrolytes were developed for operation from -40°C to 70°C, and lithium plating in different anode materials was mitigated.

Date Issued

2016

Identifier

FSU_2016SU_Cappetto_fsu_0071N_13438

Format

Thesis

Title

Performance Analysis of Distributed Control Algorithms Using a Hardware in the Loop Testbed.

The benefits of a smart grid system depend greatly on the efficient implementation of the power delivery system utilizing the data communication infrastructure. This makes it necessary to have a co-simulation platform to test the enabling technology. The hardware-in-the-loop testbed (HIL-TB) at the Center for Advanced Power Systems (CAPS) is a cyber-physical testbed that provides a real-time co-simulation platform for testing smart grid operations and control. Due to the inherent complexity involved in initializing and running the individual components of the HIL-TB, the testbed is typically inaccessible and is mostly used for demonstrating only a single test scenario. As the test setup involves manual intervention, the idea of repeatability is lost. The aim of this thesis is to address the above concerns related to the HIL-TB. The objective is to develop a methodology to perform comprehensive testing and analysis of distributed control algorithms that are developed for smart grid power systems. In order to quantify the effect of an algorithm on the underlying power system, it is necessary to develop metrics. This also allows various algorithms to be compared and their effects on different feeder configurations to be assessed. To verify the system-level functionality for different operating conditions, the factors affecting system performance are determined. The values for these factors need to be chosen intelligently to maximize accuracy and minimize the number of experiments. The HIL-TB validation framework presented in this thesis is built on the principles of design of experiments. The framework provides a platform for the assessment of control algorithms that would help de-risk the effects of new techniques on the power system.
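The design-of-experiments approach above chooses factor levels so a test campaign covers the operating space with a manageable number of runs. A minimal full-factorial sketch (the factor names and levels are illustrative, not the testbed's actual parameters; fractional-factorial designs would cut the run count further):

```python
from itertools import product

def full_factorial(factors):
    """Full-factorial design: one test run per combination of factor
    levels. `factors` maps factor name -> list of levels; returns a
    list of run dictionaries, one per combination."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]

# Illustrative factors for a HIL co-simulation campaign.
runs = full_factorial({"comm_latency_ms": [10, 100],
                       "feeder_config": ["A", "B", "C"]})
```

Each run dictionary can then drive an automated testbed initialization, which is what restores the repeatability the manual setup lacked.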

Asymmetric jets are becoming more prevalent and may offer significant advantages over traditional axisymmetric nozzles for propulsion as well as fluidic mixing applications. The purpose of this work is two fold: 1) to investigate the effect nozzle exit geometry has on jet development and far field radiated noise of M = 0:9 jets and 2) to study the effect various levels of screech tone self excitation has on jet evolution and the production of streamwise vorticity. Three converging nozzles of... Show moreAsymmetric jets are becoming more prevalent and may offer significant advantages over traditional axisymmetric nozzles for propulsion as well as fluidic mixing applications. The purpose of this work is two fold: 1) to investigate the effect nozzle exit geometry has on jet development and far field radiated noise of M = 0:9 jets and 2) to study the effect various levels of screech tone self excitation has on jet evolution and the production of streamwise vorticity. Three converging nozzles of various exit geometry (rectangular, elliptic, and round) were utilized to perform the first study, while a supersonic rectangular nozzle was employed to complete the second. All asymmetric nozzles in this work had an aspect ratio of 4:1. To study the flow field features, two dimensional streamwise particle image velocimetry (PIV) as well as three component PIV at select cross planes was performed. Far field acoustic measurements were acquired for the converging nozzles to determine the differences exhibited in the radiated exhaust noise from the major and minor axes of the asymmetric jets compared to the round jet. In comparing the effect exit geometry has on the development of a M = 0:9 jet, it was determined that the shear layers in the major and minor axes developed at similar rates, however, the jet half width in the minor axis exhibited a larger growth rate than the major axis. 
It was also determined that neither of the asymmetric sonic jets exhibited the axis-switching phenomenon within the measurement domain. Significant streamwise vorticity is noted on the low speed side of the shear layer for the asymmetric jets in the corner regions and areas of small curvature. Moreover, this streamwise vorticity was observed to significantly affect the jet half width in the major axis of the elliptic jet. Acoustic results reveal that there is a strong dependence on frequency range concerning the amount of energy propagated to the far field for each different jet and axis. At low frequencies, the round jet is louder than both axes of the asymmetric jets at polar angles larger than 110°. As the investigated range of frequencies is increased, the primary direction of propagation of noise shifts towards sideline angles for all jets and axes. At the highest range of frequencies investigated, the minor axis of the asymmetric jets produced more noise compared to the equivalent round jet, while considerably less noise is produced at polar angles of about 120°–130° in the major axis direction. Overall sound pressure levels (OASPL) show that the OASPL from the rectangular jet in the plane containing the major axis is lower than that of the equivalent round jet for aft quadrant angles; the main contributor to the overall reduction is the highest frequency components. In order to determine the impact of screech tone amplitude on jet development, flow field characteristics of a moderate aspect ratio supersonic rectangular jet were examined at two overexpanded conditions, a perfectly expanded condition, and an underexpanded condition. The underexpanded and one overexpanded operating condition were of maximum screech, while the second overexpanded condition was of minimum screech intensity. The results show that streamwise vortices present at the nozzle corners, along with vortices excited by screech tones, play a major role in the jet evolution.
The location of streamwise vortex amplification in cases of screech is strongly tied to the downstream shock cell number and the traditional source of the screech tone. All cases except for the perfectly expanded operating condition exhibited axis switching at streamwise locations ranging from 11 to 16 nozzle heights, h, downstream of the exit. The overexpanded condition of maximum screech showed the most upstream switch-over, while the underexpanded case showed the farthest downstream. Both of the maximum screeching cases developed into a diamond cross-sectional profile far downstream of the exit, while the ideally expanded case maintained a rectangular shape. The overexpanded minimum screeching case eventually decayed into an oblong profile.

Wind energy has become one of the most important and thriving renewable energy resources in the world. Transforming the kinetic energy of wind into electric power is more environmentally friendly than traditional processes such as the combustion of fossil fuels. It provides independence from limited fossil fuel reserves by using an unlimited resource. In order to develop a wind power facility, it is important to perform an initial wind resource assessment to guarantee the selected site will be profitable in terms of electric energy output. Several countries lack developed wind atlases that indicate a rough estimate of the wind resource in their territories, which is an obstacle to inexpensive wind resource evaluations. To perform site evaluations, an anemometer generally must be put in place to take wind measurements. This process is costly and time-consuming, since at least a year of data must be observed. The quality of a wind resource depends on several geographic and atmospheric characteristics, such as air density, site location, site topography, and wind speed and direction. This study was conducted to provide an initial wind resource assessment of three locations in Venezuela which do not have previous evaluations: Cerro Copey, Punta de Piedras and Los Roques. The assessment was done remotely based on the national meteorological service's observations; wind resource and turbine power output uncertainties were taken into account.
The wind assessment was done through Monte Carlo simulations, mathematically considering several uncertainties with emphasis on surface roughness for vertical extrapolation. The results exhibit the wind energy potential of the three sites and a thorough wind resource characterization of the site with the most potential: Cerro Copey.
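The central computation here, log-law vertical extrapolation of measured wind speed with surface roughness treated as an uncertain input, can be sketched as follows. The reference height, hub height, and the uniform roughness range in the example are illustrative assumptions, not the thesis's actual site inputs.

```python
import math
import random

def extrapolate_speed(v_ref, h_ref, h_hub, z0):
    """Log-law vertical extrapolation: v(h_hub) = v(h_ref) * ln(h_hub/z0) / ln(h_ref/z0)."""
    return v_ref * math.log(h_hub / z0) / math.log(h_ref / z0)

def monte_carlo_speed(v_ref, h_ref, h_hub, z0_low, z0_high, n=20_000, seed=42):
    """Propagate surface-roughness uncertainty by sampling z0 ~ Uniform(z0_low, z0_high)
    and extrapolating for each draw; returns (mean, std) of the hub-height speed."""
    rng = random.Random(seed)
    samples = [extrapolate_speed(v_ref, h_ref, h_hub, rng.uniform(z0_low, z0_high))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)
```

With, say, a 7 m/s anemometer reading at 10 m and a hub height of 80 m, the spread of the Monte Carlo output directly quantifies how much the roughness uncertainty widens the estimate of hub-height wind speed, and hence of energy yield.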

Date Issued

2016

Identifier

FSU_2016SP_VasquezMaldonado_fsu_0071N_13137

Format

Thesis

Title

The Modular Multilevel Converter and Fault Current Management in Medium Voltage DC System of an Electric Ship.

The Modular Multilevel Converter (MMC) is a potential candidate for power conversion in a Medium Voltage DC System (MVDC) based electric ship. One of the major advantages of utilizing the MMC in an MVDC environment is the capability of limiting DC-side fault current and a fast restart process, because re-energizing the MMC cells is not necessary. The MMC cells have various configurations, e.g. half-bridge and full-bridge. The full-bridge MMC is more suitable for the MVDC system and fault current handling. However, the modeling, control, coordination in a multi-MMC system, and fault handling of a full-bridge MMC based MVDC system are still not fully investigated and understood. This thesis focused on the key issues of full-bridge MMC control and modeling in an MVDC environment and on fault current limiting using multiple MMCs. The fundamental characteristics of the MMC topology are also discussed. Following the single-MMC control design, the MMC control scheme for the MVDC system is designed to provide a fast and controllable DC voltage and current. To decrease the complexity of the MMC circuit, two simple averaged models of the MMC are proposed. To verify the accuracy of the averaged models, the simulation results are compared with results from Controller Hardware in the Loop (CHIL). The comparison shows that the two proposed averaged models predict the steady-state values with very good accuracy.
For studying the behavior of a multi-MMC based MVDC system under DC-side fault scenarios, an MVDC test system is proposed in this work. For comparison purposes, a real-time system model and an off-line model are developed. The off-line MMC model uses individual IGBT components from the MATLAB/Simulink/SimPowerSystems software package, whereas the real-time model is built using the library provided by OPAL-RT. The multi-cell circuit, which has many nodes, is simplified as a two-node voltage source and an equivalent resistance in series connection by applying the Thévenin equivalent. This thesis also discusses the challenges of determining the sampling time and how to group the MVDC system component models so that they are able to run in a multi-core real-time simulator. Besides the modeling of the MVDC system components (e.g. the MMCs and the loads), a fault current limiting strategy is also proposed in this work. This thesis puts forward an operation mode for the multi-MMC system in which only one MMC is allowed to run in voltage-controlled mode and the other MMCs are required to run in power-controlled mode. By employing this operation mode, the fault current can be limited in the case of a DC-side fault, and no operation mode switching is needed, as this mode also works for normal operation. The proposed fault current limiting strategy also contains the sequence of converter actions. Five simulation cases are designed to test the proposed fault handling strategy. The simulation results show that the peak fault current is related to the operating conditions, e.g. the pre-fault load current carried by the MMCs, and that the MMC control has some effect on mitigating the peak fault current. The proposed fault current limiting strategy is able to limit the fault current to a certain level in an MVDC system made up of a single MMC, two MMCs, or four MMCs under different load conditions.
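The Thévenin reduction of the many-node multi-cell circuit to a two-node source plus series resistance can be illustrated with a minimal sketch. The cell voltage and resistance values below are hypothetical; a real MMC model would also account for switching states, arm inductance, and capacitor dynamics.

```python
def thevenin_series(cells):
    """Collapse N series-connected submodule models, each a voltage source v_i
    with internal resistance r_i, into one two-node Thevenin equivalent:
    V_th = sum(v_i), R_th = sum(r_i)."""
    v_th = sum(v for v, _ in cells)
    r_th = sum(r for _, r in cells)
    return v_th, r_th

def terminal_voltage(v_th, r_th, i_load):
    """Voltage seen across the two equivalent nodes under a load current."""
    return v_th - r_th * i_load

# Ten hypothetical 100 V cells at 10 mOhm each -> 1000 V behind 0.1 Ohm
v_th, r_th = thevenin_series([(100.0, 0.01)] * 10)
```

Replacing the per-cell nodes with this single equivalent is what makes the real-time model small enough to run at a fixed sampling step on a multi-core simulator.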

A major challenge to the study of the structure-property relationship of carbon nanotube (CNT) networks is characterizing the complex nanostructure, with its complicated nanoscale contacts and pore structures. An image-based characterization methodology was proposed to extract CNT network information directly from scanning electron microscope (SEM) images of various CNT thin films and to characterize critical topological factors, including bundle size, diameter, and orientation, from the CNT networks. This approach provided high-fidelity and fast analysis of CNT network structures, with a low false positive rate (FPR) of ~3% and ~90% accuracy in most of our case studies. We applied the new approach to study different networks of multi-walled carbon nanotubes (MWNT), single-walled carbon nanotubes (SWNT), MWNT-SWNT mixtures, and stretched MWNTs with different CNT alignments, which revealed the electrical conductivity-structure relationships of MWNT networks. On the other hand, controlling the transfer of the electrical and mechanical properties of nanotubes into nanocomposites remains one of the major challenges, due to the lack of adequate measurement systems to quantify the variations in bulk properties when nanotubes are used as the reinforcement material. One-way analysis of variance (ANOVA) on thickness and conductivity measurements was conducted.
By analyzing the data collected from both experienced and inexperienced operators, we found operation details users might overlook that resulted in variations, since conductivity measurements of CNT thin films are very sensitive to thickness measurements. In addition, we demonstrated how measurement issues damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity measurement results. Based on this study, we proposed a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skill.
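A one-way ANOVA of the kind applied here reduces to a ratio of between-group to within-group variance. The sketch below, with made-up thickness readings from three hypothetical operators, shows the F-statistic computation.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for k groups of measurements:
    F = MS_between / MS_within, with k-1 and n-k degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: scatter of each reading about its group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution's critical value) indicates that operator identity explains more of the measurement variation than within-operator scatter alone, which is the signal the study used to flag operator-dependent steps.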

Date Issued

2015

Identifier

FSU_2016SU_Li_fsu_0071E_13311

Format

Thesis

Title

Evaluation of Florida Vehicle Classification Table.

Creator

Masaki, Jaqueline Eliabu, Moses, Ren, Ozguven, Eren Erman, Sobanjo, John Olusegun, Florida State University, College of Engineering, Department of Civil and Environmental Engineering

Abstract/Description

Accurate vehicle classification data is fundamental to pavement design and road safety analysis. In addition, vehicle classification data is important for the Florida Department of Transportation's Transportation Statistics Office's clients, including consultants, researchers, designers, and planners who use the data to perform various analyses. In the mid-1980s, the Federal Highway Administration (FHWA) developed a standardized vehicle classification system designed to meet the needs of many traffic data users. This resulted in the FHWA 13-category classification rule set presently used for most Federal reporting requirements. Furthermore, this serves as the foundation for most State vehicle classification reporting efforts. The Florida Department of Transportation (FDOT) uses the FHWA F-Scheme to classify vehicles throughout the state highway system. This scheme relies mainly on the number of axles and the axle spacing, but at some Weigh-in-Motion (WIM) sites, vehicle weights are also used to improve classification. This thesis evaluates the performance of the Florida vehicle classification table at non-WIM sites using video data as the ground truth. This thesis has two main parts. Part I compares the performance of different data recorders that use the FDOT vehicle classification table for WIM and non-WIM sites in classifying vehicles and evaluates the misclassification rates for each recorder.
Part II evaluates the accuracy of the Florida vehicle classification table, determines the sources of misclassification, describes the changes recommended in the classification table to improve classification accuracy, and proposes and validates the improved vehicle classification table. This report will be of interest to the Florida Department of Transportation and to consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure.

Cracking is a primary distress on flexible pavements in Florida. Therefore, it is necessary to evaluate the crack resistance of proposed asphalt mixtures on Florida Department of Transportation (FDOT) projects. A comprehensive literature review was conducted on the evaluation of reflective cracking of HMA mixtures. Mechanisms of reflective cracking, crack models, and crack resistance evaluation were reviewed based on research studies performed by other researchers. The ability of the overlay test to characterize the cracking-resistance performance of asphalt mixtures was methodically reviewed. Available information, such as test procedures, results, and findings, was collected and examined. The variability of the overlay test and the effects of different factors on it, such as test setup, opening width, sample thickness, asphalt binder, and Reclaimed Asphalt Pavement (RAP) materials, were also evaluated. The cracking performance of common Florida asphalt mixtures was evaluated using the laboratory Overlay Test (OT). A test procedure based on the Tex-248-F test method was developed to accommodate the Florida test methods for asphalt mixtures. Nine standard mix designs for traffic levels C and E, which included SP-12.5, SP-9.5, and SP-4.75 mix designs, were selected for the Overlay Test. Granites from different sources were used as the aggregate in the mixtures.
In addition, the mixtures were prepared using both virgin asphalt binder (PG 67-22) and polymer modified asphalt (PMA) binder (PG 76-22). The effects of material characteristics, polymer modified binder, and RAP on the crack resistance of Florida asphalt mixtures were investigated. Additionally, a lower maximum opening displacement, 0.0125 inch, was tried on one type of mixture (SP-12.5 with 20% RAP) to determine the significance of displacement rate on the crack resistance of the Florida asphalt mixture. Three replicate samples were tested for each type of mixture. The test results showed good agreement across the three replicate samples; the coefficients of variation (COV) were less than 20%. It was found that granite from different aggregate sources did not have a strong influence on the test results, while the aggregate size did have a significant effect. SP-9.5 mixtures had the best cracking performance compared to SP-12.5 and SP-4.75 mixtures. Considerable effects were found for the asphalt binder and RAP. The crack resistance of Florida asphalt mixtures was significantly improved when PG 76-22 PMA binder was used instead of PG 67-22 virgin asphalt binder. However, crack resistance was reduced when 20% RAP was included in the mix designs. Fracture mechanics analysis was conducted on the overlay test results based on Paris' Law. In addition to the fracture properties A and n, crack indexes A' and n', which can be easily obtained from the overlay test load reduction curve, were introduced to evaluate the crack resistance of asphalt mixtures. Correlation relationships between the crack indexes and the fracture properties were developed. It was found that asphalt mixtures with greater n'/n values had better crack resistance than asphalt mixtures with lower n'/n. The computed fracture properties can be compared to the results from other tests, such as IDT.
The laboratory test results can also be compared to field observations to better predict the cracking performance of asphalt mixtures in the field. Criteria based on the laboratory test results can be adopted into the design guide to evaluate the cracking performance of asphalt mixtures.
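Paris' Law, as used in this kind of fracture analysis, relates crack growth per load cycle to the stress intensity factor range: da/dN = A(ΔK)^n. A minimal numerical integration of the fatigue life it implies looks like the sketch below; the A, n, and ΔK values in the example are hypothetical, not fitted overlay test results.

```python
def paris_cycles(a0, af, A, n, delta_K, steps=10_000):
    """Cycles to grow a crack from length a0 to af under Paris' law
    da/dN = A * (dK)^n, integrating N = integral of da / (A * dK(a)^n)
    with the midpoint rule. delta_K is a function of crack length a."""
    da = (af - a0) / steps
    N = 0.0
    a = a0
    for _ in range(steps):
        N += da / (A * delta_K(a + da / 2.0) ** n)
        a += da
    return N
```

With a constant ΔK the integral collapses to the closed form N = (af − a0) / (A ΔK^n), which is a convenient sanity check; in practice ΔK grows with crack length, so the integration matters.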

There has been a growing interest in electrochemical storage devices such as batteries, fuel cells, and supercapacitors in recent years. This interest is due to our increasing dependence on portable electronic devices and to the high demand for energy storage from the electric transport vehicle and electrical power grid industries. As we transition towards cleaner renewable sources such as solar, wind, and tidal, our dependence on energy storage devices will continue to grow. Li-air offers a much higher energy density than all other batteries based on electrochemical storage. However, these batteries currently suffer from a number of issues, such as low cyclability and a practical energy density well below the theoretical energy density. The deposition of lithium peroxide on the surface of the cathode is one of the main causes of the low practical specific capacity of lithium-air batteries with organic electrolyte. Electrochemical impedance spectroscopy (EIS) has been used in the past to extract physical parameters such as the chemical diffusion coefficient, effective diffusion coefficient, Faradaic reaction rate, and the degradation and stability of an electrochemical device. In this dissertation, a physics-based analytical model is developed to study the EIS of Li-air batteries, in which the mass transport inside the cathode is limited by oxygen diffusion, during charge and discharge.
The model takes into consideration the effects of the double layer, Faradaic processes, and oxygen diffusion in the cathode, but neglects the effects of the anode, separator, conductivity of the deposit layer, and Li-ion transport. The analytical model predicts that the effects of the Faradaic impedance can be hidden by the double layer capacitance. Therefore, the dissertation focuses separately on two cases: 1) the case when the Faradaic process and the double layer capacitance are separate and can be observed as two different semicircles on the Nyquist plot, and 2) the case when the Faradaic process is shadowed by the double layer capacitance and shows up as only one large semicircle on the Nyquist plot. A simple expression is developed to extract physical parameters, such as the oxygen diffusion coefficient and the Faradaic reaction rate, from the experimental impedance spectrum for each of the two cases. The diffusion coefficient can be determined by using the resistances (real impedance intercepts on the Nyquist plot) of both semicircles in the first case and by using the combined resistance in the second case. Once the effective oxygen diffusion coefficient is estimated, it can be used to estimate the value of the reaction constant. This method of extracting the values of the diffusion coefficient and reaction constant can serve as a tool for identifying an effective electrolyte or cathode material. It can also serve as a noninvasive technique to identify and quantify the use of a catalyst to improve the reaction kinetics in an electrochemical system. Finally, finite element simulations are used to validate the analytical models and to study the effects of discharge products on the impedance spectra of Li-air batteries with organic electrolyte.
The finite element simulations are based on the theory of concentrated solutions, and the complex impedance spectra are computed by linearizing the partial differential equations that describe the mass and charge transport in Li-air batteries. These equations include the oxygen diffusion equation, the Li drift-diffusion equation, and the electron conduction equation. The reactions at the anode and cathode are described by Butler-Volmer kinetics. The total impedance of a Li-air battery increases by more than 200% when the response is measured near the end of the discharge cycle as compared to a fresh battery. The resistivity of the deposition layer significantly affects the deposition profile and the total impedance. Using electrolytes with high oxygen solubility and concentrated O2 gas at high pressures will reduce the total impedance of Li-air batteries.
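The general idea of reading kinetic parameters off Nyquist-plot features can be illustrated with a textbook Randles-type element (series resistance plus a charge-transfer resistance in parallel with a double-layer capacitance). This is a generic sketch, not the dissertation's oxygen-diffusion expressions, and the R_s, R_ct, and C_dl values are hypothetical.

```python
def rc_impedance(omega, R_s, R_ct, C_dl):
    """Series resistance plus a parallel R_ct || C_dl branch: traces one
    semicircle of diameter R_ct on the Nyquist plot."""
    return R_s + R_ct / (1 + 1j * omega * R_ct * C_dl)

def extract_parameters(omegas, Z):
    """High-frequency real-axis intercept -> R_s; semicircle diameter ->
    R_ct; apex frequency (max of -Im Z) -> C_dl via w_max = 1/(R_ct*C_dl)."""
    R_s = min(z.real for z in Z)           # high-frequency intercept
    R_ct = max(z.real for z in Z) - R_s    # low-frequency intercept minus R_s
    w_max = max(zip(omegas, Z), key=lambda wz: -wz[1].imag)[0]
    C_dl = 1.0 / (w_max * R_ct)
    return R_s, R_ct, C_dl

# Synthetic spectrum with hypothetical values R_s=5, R_ct=100, C_dl=1e-3
omegas = [10.0 ** (i / 10.0) for i in range(-30, 41)]
Z = [rc_impedance(w, 5.0, 100.0, 1e-3) for w in omegas]
```

Recovering the generating parameters from the synthetic spectrum mirrors, in miniature, how the dissertation's expressions back out the oxygen diffusion coefficient and reaction constant from measured semicircle resistances.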

In numerous applications involving high dimensional data, subspace techniques such as principal components analysis (PCA) may be utilized in feature extraction. Often, PCA can reduce the dimensionality while retaining most of the significant information of the original data. This can be beneficial not only for representing the data more compactly (compression), but also for transforming the data into a more useful form for applications involving feature extraction and classification. Relatively recent developments extend conventional principal components analysis to newer variants of PCA which appear particularly useful in computer vision and image applications: (1) two-dimensional PCA ("2D PCA"), and (2) bidirectional or bilateral two-dimensional PCA ("B2DPCA", "Bi2DPCA", or "(2D)² PCA"). The latter category includes an iterative version, which is an example of coupled subspace analysis or "CSA"; the non-iterative version is known as projective Bi2DPCA. In this thesis, these PCA variants are considered as special cases of the more general CSA.
Theoretical advantages of 2D PCA and bidirectional PCA over conventional PCA should arise from the fact that significant information about the spatial relationship between image pixels may be discarded in conventional PCA, as the image is represented by a large column vector, whereas 2D PCA and bidirectional PCA techniques can preserve more of this information by representing the image as a matrix rather than a long vector. The problems of small sample size and the curse of dimensionality are also alleviated to some extent, particularly in the cases of B2DPCA and iterated CSA. Some of these PCA variants have been proposed for various image recognition applications recently, including biometric identification using iris texture, face images, and palm prints, and categorization of wood species based on wood grain texture, to name a few examples. So, while much focus has been placed on feature extraction methods such as Gabor wavelets or similar techniques for applications such as iris recognition, some subspace techniques, including some of these PCA variants, have shown promise in conjunction with image preprocessing techniques for removal of uneven background illumination and contrast enhancement. In this thesis, the image application of biometric iris recognition is chosen as the means of evaluating potential advantages of these newer PCA variants, including CSA, in the context of feature extraction and classification. The rich texture information of these images, and the utilization of effective image registration techniques, yields images which are well suited for this purpose. As the primary focus of this thesis, these PCA variants are evaluated using the closed-set identification test mode and are compared using a Euclidean-distance single nearest neighbor classifier; images are preprocessed using top-hat filtering and contrast-limited adaptive histogram equalization (CLAHE).
Use of multiple test (probe) images is considered, and the impact on performance is also assessed for training image sets with 2, 3, and 4 sample images per class. Concurrently, the application of iris image recognition is addressed in detail. Other applications for which these PCA variants and preprocessing techniques may be beneficial are discussed in the concluding section.
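The core 2D PCA step, projecting each image matrix onto eigenvectors of an image-level covariance instead of vectorizing the image, can be sketched as follows. The data in the example is synthetic; this is not the thesis's iris pipeline, and the bidirectional/CSA variants add a second projection on the row side.

```python
import numpy as np

def twod_pca(images, k):
    """2D PCA: eigen-decompose the image covariance
    G = mean over images of (A - Abar).T @ (A - Abar)  (shape w x w),
    then project each h x w image onto the top-k eigenvectors, A @ X,
    keeping the image's row structure intact (features are h x k)."""
    A = np.asarray(images, dtype=float)          # shape (M, h, w)
    D = A - A.mean(axis=0)
    G = np.einsum('mij,mik->jk', D, D) / len(A)  # sum over images and rows
    vals, vecs = np.linalg.eigh(G)               # ascending eigenvalues
    X = vecs[:, ::-1][:, :k]                     # top-k eigenvectors
    return np.array([a @ X for a in A]), X
```

Because G is only w × w (rather than hw × hw for vectorized PCA), the eigenproblem stays small even with few training images per class, which is the small-sample-size advantage the text describes.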

To transfer the incredible properties of an individual carbon nanotube (CNT), including ultrahigh tensile strength, Young's modulus, and electrical conductivity, into composite applications, the constituent nanotubes need to possess adequate alignment, interfacial bonding, and a high CNT volume fraction. Direct incorporation of CNT film, or buckypaper, materials into carbon fiber laminated structures to manufacture hybrid composites is an effective approach to utilizing the lightweight, conductive, and nanostructured nature of dense CNT networks for multifunctional applications of structural carbon fiber composites. This work studied the microstructure-property relationships of CNT networks when orientation is induced. The mechanical stretching method is shown to be scalable and effective for ultra-high alignment. A manufacturing technique of applying a viscous resin treatment before the stretching procedure is shown to allow up to 80% stretching strain and a resultant alignment fraction of 0.93. The resin acts as an effective load transfer medium that substantially enhances the ductility needed for high stretching strain. The alignment characterization is carried out through Raman spectroscopy and X-ray diffraction methods that reveal the graphitic crystal structure of the film. The load transfer mechanisms and failure modes of aligned CNT composites are explored through high-concentration CNT reinforced nanocomposites.
Atomic resolution transmission electron microscopy (TEM) analysis reveals unusual CNT crystal packing and permits the observation of interesting structural features of the CNTs and their assemblages, including collapse, flattened packing, preferred stacking, folding, and twisting phenomena, as well as CNT pullouts from bundles and the resin matrix. The intimate surface-to-surface contact areas between aligned and flattened nanotubes, driven by van der Waals interactions, give rise to a high-density packing of the flattened CNTs in the nanocomposite, resembling a graphitic crystal material. Molecular dynamics (MD) simulations were performed through collaboration to model the packing structure and understand the dependence of density on the relative content of flattened nanotubes and void space. Macroscopic modeling predictions illustrate how the alignment and volume fraction of the encompassed CNTs affect the stiffness of the overall composite. CNT thin films were integrated into carbon fiber (CF) prepreg composites to create hybrid composite materials with high CNT content through industry standard autoclave fabrication processing. Resin bleeding along the through-thickness direction was inhibited due to the extra-low permeability, nano/micro dual-scale flow characteristics, and high resin absorbing capacity of the CNT thin film in hybrid composites. CNT swelling effects and resin starvation phenomena are studied in relation to the amount and orientation of the CNT laminates. The flexural three-point bending results of the random and aligned CNT/CF hybrids exhibit an increased resistance to catastrophic failure, even under repeated loading, as compared to the CF control samples. The dramatic improvements in both in-plane and through-thickness electrical conductivities demonstrate potential for both structural and multifunctional applications of the resultant hybrid composites.

With the advent of nanotechnology, nanomaterials have drastically improved our lives in a very short span of time. The more we can tap into this resource, the more we can change our lives for the better. All applications of nanomaterials depend on how well we can synthesize nanoparticles with the desired shape and size, as these determine the properties and thereby the functionality of the nanomaterials. Therefore, this report focuses on how to extract the shape of nanoparticles from electron microscope images using image segmentation, more accurately and more efficiently. By developing an automated image segmentation procedure, we can systematically determine the contours of an assortment of nanoparticles from electron microscope images, reducing data examination and interpretation time substantially. As a result, defects in the nanomaterials can be reduced drastically by providing an automated update to the parameters controlling the production of nanomaterials. The report proposes new image segmentation techniques that work very effectively in extracting nanoparticles from electron microscope images. These techniques are manifested by imparting new features to the Sliding Band Filter (SBF) method, called the Gradient Band Filter (GBF), and by amalgamating the GBF with the Active Contour Without Edges method, followed by fine tuning of μ (a positive parameter in the Mumford-Shah functional).
The incremental improvement in performance (in terms of computation time, accuracy and false positives) when extracting nanoparticles is demonstrated by comparing image segmentation by SBF versus GBF, followed by comparing Active Contour Without Edges versus Active Contour Without Edges fused with the Gradient Band Filter (ACGBF). In addition, we compare the performance of a new technique, the Variance Method, for fine tuning the value of μ against fine tuning of μ based on ground truth, followed by gauging the improvement in segmentation performance of ACGBF with a fine-tuned value of μ over ACGBF with an arbitrary value of μ.
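None of the thesis's SBF/GBF/ACGBF code is reproduced here. As a loose, hypothetical analogue of tuning a segmentation parameter from image statistics alone (the spirit of a variance-based method, with no ground truth used), Otsu's between-class variance maximization can be sketched on a synthetic two-phase "micrograph":

```python
# Otsu-style threshold selection: choose the gray level that maximizes
# between-class variance -- an example of tuning a segmentation parameter
# from image statistics rather than ground truth. (This is an analogue
# for illustration, not the thesis's Variance Method.)
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_b += hist[t]                 # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean gray level
        m_f = (sum_all - sum_b) / w_f  # foreground mean gray level
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic image: dark matrix (~40) with bright particle pixels (~200).
import random
random.seed(0)
img = [int(random.gauss(40, 8)) for _ in range(900)] + \
      [int(random.gauss(200, 8)) for _ in range(100)]
img = [min(255, max(0, p)) for p in img]
t = otsu_threshold(img)
print("threshold:", t)  # falls between the two intensity modes
```

The selected threshold separates the two intensity populations without any labeled data, which is the appeal of variance-driven parameter selection.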

Date Issued

2015

Identifier

FSU_2015fall_Allada_fsu_0071N_12975

Format

Thesis

Title

Characterization of Sapphire: for Its Material Properties at High Temperatures.

There are numerous needs for sensing, one of which is pressure sensing for high-temperature applications such as combustion-related processes and sensors embedded in the aircraft wings of reusable space vehicles. Currently, silicon-based MEMS technology is used for pressure sensing. However, due to material properties, these sensors have a limited range of approximately 600°C, which can be pushed toward 1000°C with active cooling. This can introduce reliability issues, since additional parts and high flow rates are needed to remove large amounts of heat. To overcome this challenge, sapphire is investigated for optically based pressure transducers at temperatures approaching 1400°C. Due to its hardness and chemical inertness, the traditional cutting and etching methods used in MEMS technology are not applicable. A method being investigated as a possible alternative is laser machining using a picosecond laser. In this research, we study the material property changes that occur from laser machining and quantify those changes with experimental results obtained by testing sapphire at high temperature with a standard 4-point bending set-up. Keywords: Sapphire, Bayesian analysis, thermomechanics, alumina
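The 4-point bending characterization reduces a fracture load to a flexural strength via the standard beam formula σ = 3FL/(4bd²) for an ASTM-style fixture whose loading span equals half the support span. A minimal sketch with invented specimen dimensions and load (not data from the study):

```python
# Peak flexural stress in a four-point bend test with the inner (loading)
# span equal to half the outer (support) span: sigma = 3*F*L / (4*b*d^2).
# Specimen dimensions and load are illustrative, not from the study.
def four_point_flexural_stress(F, L, b, d):
    """F: total load (N), L: support span (m), b: width (m), d: thickness (m)."""
    return 3.0 * F * L / (4.0 * b * d ** 2)

F = 120.0          # N, assumed fracture load
L = 40e-3          # m, support span
b, d = 4e-3, 3e-3  # m, specimen cross-section
sigma = four_point_flexural_stress(F, L, b, d)
print(f"flexural strength ~ {sigma / 1e6:.0f} MPa")
```

Comparing such strengths for as-received versus laser-machined specimens is how the material property changes would be quantified.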

The Navy has proposed to use a shipboard power system operating at medium-voltage direct current to distribute power for their all-electric ship. The power is generated by electric machines as alternating current and requires power electronic rectifiers to output direct current. Power electronic converters are then needed to convert the direct current back to alternating current for ship propulsion and service loads. An increase in the use of fast-switching power electronics is expected in future ships. The increased voltage edge rates of these switches are known to produce unwanted high-frequency content with wavelengths of the same order of magnitude as the length of the ship hull. These high-frequency transients can cause the ship system to couple with the surrounding ship hull, causing adverse effects. The amount of high-frequency content and the impact it has on ship system performance are difficult to calculate with current models. Increased voltage and performance requirements for power electronics have led to advancements in switching frequencies into the tens to hundreds of kilohertz and increased voltage edge rates. The faster switching corresponds to higher-frequency responses from the shipboard power system. Research has shown that high-frequency content in electrical power systems is responsible for parasitic coupling and, ultimately, damage to the equipment.
Electric machines, for instance, experience increased winding and iron losses, overvoltages at the terminals, and even bearing currents via shaft voltages. The Navy is interested in simulating ship systems to test their electromagnetic compatibility before implementing or committing to a specific design. There are numerous techniques used to acquire machine parameters that have proven useful in modeling electric machine behavior. The approaches were evaluated by the amount of proprietary information needed to acquire accurate results, the complexity of the modeling methods, and the overall time required for implementation. A majority of system simulations gravitate toward simple solutions for machine behavior, which require assumptions that deviate from the actual machine behavior. Exact inner dimensions, winding layouts, end-winding dimensions, insulation thickness, and other such information are proprietary, and are often not accurate representations of the physical machine once built. It is time consuming to obtain an accurate working model when assumptions are made or when detailed computer-aided design models are needed to calculate machine response quantities. The modeling approach put forth in this work is not aimed at capturing the steady-state behavior of the machine. It is shown that a detailed understanding of the motor may not be necessary to accurately model the high-frequency effects. It is the transient behavior at non-operating frequencies that needs to be modeled correctly to develop new models of shipboard power systems for grounding research. The frequency-dependent information is most useful for determining frequencies of interest that other modeling techniques are less likely to capture. Previously suggested measurement techniques have been considered useful in determining machine parameters, but they are not always accurately implemented without in-depth knowledge of the motor that may be proprietary.
Lumped-parameter models are based on extracting information at transitional frequencies or examining the slope of a variable over a frequency range. These models tend to be oversimplified representations of the component because they average the parameters over given ranges. In reality, a machine's impedance varies continuously with frequency. Lumped-parameter models typically oversimplify the grounding behavior of the machine by not varying the impedance as a function of frequency. The technique used in this research is based on scattering parameters, a way of determining the terminal behavior of the machine without knowledge of its actual inner workings. The inverse scattering technique uses steady-state stimuli to calculate reflection and transmission coefficients of system components, allowing the device to be treated as a black box. This can be understood as a set of electrical snapshots of how the machine would respond when subjected to a range of spectral content. The approach could have a significant impact on the modeling of ground interactions with machines: the machine can now be measured and characterized with no prior knowledge of its construction. The measurements are placed in simulation software in the typical measurement configurations used in other approaches to extract parametric data. It was discovered that these different configuration setups could now be measured in software without the need to physically reconfigure the machine's wiring for each measurement. This modeling approach was coined 'virtual measurement modeling.' To the best of the author's knowledge, there are no existing techniques for fast model prototyping of electric machines that cover a broad range of frequencies with high accuracy. This thesis presents a possible solution for consideration in future models developed for grounding studies. The approach outlines a promising technique that can be easily implemented with high accuracy and reproducibility.
The technique was derived from inverse scattering theory and was implemented on electric machines to characterize high-frequency behaviors.
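The terminal "black box" view behind the scattering-parameter technique can be illustrated with a one-port reflection coefficient, Γ(jω) = (Z − Z₀)/(Z + Z₀). The series R-L "winding" model and its values below are assumptions for illustration, not measured machine data:

```python
# One-port scattering view of a machine terminal: reflection coefficient
# Gamma(jw) = (Z - Z0) / (Z + Z0) against a reference impedance Z0.
# The series R-L "winding" model and its values are illustrative assumptions.
import math

def winding_impedance(f, R=0.5, L=2e-3):
    """Assumed series R-L terminal impedance at frequency f (Hz)."""
    return complex(R, 2 * math.pi * f * L)

def reflection_coefficient(Z, Z0=50.0):
    return (Z - Z0) / (Z + Z0)

for f in (10, 1e3, 100e3):
    g = reflection_coefficient(winding_impedance(f))
    print(f"f = {f:>8.0f} Hz  |Gamma| = {abs(g):.3f}")
```

Sweeping such coefficients over frequency is, in spirit, the "electrical snapshot" the abstract describes: the terminal response is characterized without opening the machine.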

Date Issued

2015

Identifier

FSU_2015fall_Breslend_fsu_0071N_12834

Format

Thesis

Title

A Statistical Analysis of Effects of Test Methods on Spun Carbon Nanotube Yarn.

Carbon nanotube (CNT) fibers are very promising materials for many applications. Strong interactions among individual CNTs can produce a dense yarn that exhibits exceptional properties. These properties are exploited in high-performance reinforcement for composites, where the primary function of the reinforcement is to provide outstanding load-bearing capability. The current literature uses a variety of measurement techniques and gauge lengths that have not been uniform across CNT yarn tests. A standardized testing method for characterization is necessary to generate reproducible and comparable data for CNT yarn or fiber materials. In this work, the strength of CNT fibers was characterized using three different tensile test methods: the film and fiber test fixtures from dynamic mechanical analysis (DMA), and the TS 600 tensile fixture. Samples tested with the film and TS 600 tensile fixtures were attached with a thick-paper tabbing methodology based on ASTM standard D3379. The fiber fixture test was performed with the test material attached directly to the fixture, following the fiber test instructions from TA Instruments. The results of the three different methods showed distinct variance in stress, strain, and modulus. A design of experiments (DoE) was established and performed on the DMA film fixture, as determined from the preliminary experiment.
The DoE was successful in quantifying the ranges of the critical parameters that contributed to the standard deviation of average stress. These parameters were then tested on 30 more samples with an improved additively manufactured tab. The results significantly decreased the standard deviations of all mechanical testing parameters. Most importantly, the results show that the probability of a valid gauge break increased by more than 400%.
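As context for the stress/strain/modulus comparisons above, the tensile modulus is typically taken as the slope of the initial linear region of the stress-strain record. A minimal least-squares sketch on invented data (the modulus value is hypothetical, not a measured CNT-yarn result):

```python
# Young's modulus as the least-squares slope of a synthetic stress-strain
# record -- a sketch of how a tensile test yields modulus; data invented.
def linear_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

strain = [i * 1e-4 for i in range(11)]   # 0 .. 0.1 % strain
E_true = 80e9                            # Pa, assumed yarn modulus
stress = [E_true * e for e in strain]    # ideal linear response
E_fit = linear_slope(strain, stress)
print(f"fitted modulus: {E_fit / 1e9:.1f} GPa")
```

Because strain depends on gauge length, inconsistent gauge lengths across fixtures shift this slope, which is one reason the abstract argues for a standardized method.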

This dissertation is mainly focused on the investigation of the cathode in Li-air batteries using organic electrolyte and the development of high-rate rechargeable Li-air flow batteries. A Li-air battery using organic electrolyte with an air electrode made from a mixture of carbon nanotubes (CNT) and carbon nanofibers (CNF) is utilized to investigate the capacity-limiting effects of the cathode using a multiple-discharge method. Scanning electron microscopy (SEM) images show that the discharge product mainly forms at the air side of the cathode due to the low oxygen solubility and diffusivity in the organic electrolyte. This inhomogeneous distribution of discharge product indicates that the Li-air cell falls short of the maximum capacity of the air electrode. Electrochemical impedance spectra (EIS) demonstrate that during discharge at high current density (1 mA/cm²) pore blocking is the major factor that limits capacity; however, during discharge at low current density (0.2 mA/cm²) both pore blocking and impedance rise contribute to the capacity limitation. It is confirmed that the cathode is the dominant limitation on discharge capacity. Also, a gradient-porosity cathode structure is able to increase the capacity based on the weight of carbon, but the electrolyte loading needs to be optimized to achieve high cell energy density. A novel rechargeable Li-air flow battery is also demonstrated.
It consists of a lithium-ion-conducting glass-ceramic membrane sandwiched between a Li-metal anode in organic electrolyte and a carbon nanofoam cathode through which oxygen-saturated aqueous electrolyte flows. It features a flow cell design in which the aqueous electrolyte is bubbled with compressed air and continuously circulated between the cell and a storage reservoir to supply sufficient oxygen for high power output. It shows high rate capability (5 mA/cm²) and delivers a power density of 7.64 mW/cm² at a constant discharge current density of 4 mA/cm². With RuO2 added as a catalyst in the cathode, the battery showed a high round-trip efficiency (ca. 83%), with an overpotential of 0.67 V between charge and discharge at a current density of 1 mA/cm². A Li-air flow battery using graphite as the anode is also demonstrated for several cycles.

Date Issued

2014

Identifier

FSU_migr_etd-9156

Format

Thesis

Title

Consensus-Based Distributed Control for Economic Dispatch Problem with Comprehensive Constraints in a Smart Grid.

Over the past few decades, smart grid technology has developed rapidly due to its main features of greater customer involvement and the ability to accommodate renewable energy and distributed storage. More importantly, it offers improved reliability, power quality and self-healing capability. However, there are many problems and challenges associated with the development of the smart grid. For example, the economic dispatch problem (EDP) has become more complex and challenging due to special characteristics of the smart grid, one of the major ones being plug-and-play operation arising from its accommodation of distributed energy. Economic dispatch is the short-term determination of the optimal output of a number of electricity generation facilities to meet the system load at the lowest possible cost, subject to transmission line losses and generation constraints. In short, the EDP is an optimization problem whose aim is to minimize the total operation cost. Various mathematical and optimization methods have been developed to solve the EDP in power systems. Most of the conventional methods collect global information and process commands in a centralized controller. In a smart grid, it is expensive and unreliable for these conventional centralized methods to achieve minimum cost when generating a certain amount of power within given power constraints.
There are several reasons why centralized methods are not suitable for the EDP in a smart grid. First, the centralized controller requires a high level of connectivity to collect all the information from the power generators, and a failure or error may impair its effectiveness. Second, the topologies of the smart grid and the communication network are likely to vary, so a small change in the smart grid may require reconfiguration of the centralized algorithm. Third, the centralized controller is not able to accommodate the plug-and-play characteristic of the smart grid. In this work, we propose a distributed controller based on a consensus algorithm to solve the EDP in a smart grid. The consensus algorithm is rooted in graph theory and the study of communication networks. Compared with the centralized method, the distributed algorithm features the advantages of lower information requirements, robustness, and scalability. In order to present a more practical scenario of the EDP, a quadratic cost function and comprehensive constraints are assumed in the problem definition; the valve-point effect of the generation units is assumed negligible. Different from the centralized approach, the proposed algorithm enables each generator to collect the mismatch between power demand and power generation in a distributed manner. The mismatch power is used as feedback for each generator to adjust its power generation. In order to implement the consensus algorithm, the incremental cost of each generator is selected as the consensus quantity and eventually converges to a common value. Simulation results from different case studies are provided to show the effectiveness of the proposed algorithm. The effects of power constraints, communication topology and generator dynamics on the convergence and iteration speed of the proposed algorithm are also examined. These case studies are simulated and analyzed in Matlab/Simulink.
The convergence speed and total generation cost of the proposed algorithm are also compared with those of conventional algorithms such as the lambda iteration method and particle swarm optimization; the consensus algorithm shows a better combined performance of convergence and total generation cost than both. In order to validate the consensus algorithm, an IEEE 14-bus system with the proposed algorithm was established in PSCAD/EMTDC and verified by comparison with the analytical results.
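The incremental-cost consensus mechanism described above can be sketched in a few lines. The quadratic cost coefficients, demand, averaging weights, and feedback gain below are invented, and generator limits and line losses are omitted for brevity:

```python
# Sketch of incremental-cost consensus for economic dispatch:
# each generator i has cost C_i(P) = a_i P^2 + b_i P, so its incremental
# cost is lam_i = 2 a_i P_i + b_i. Generators average their neighbors'
# incremental costs and nudge lam by the power mismatch (demand - supply).
# Coefficients, demand, and gains are illustrative, not from the thesis.
a = [0.010, 0.012, 0.008]        # $/MW^2
b = [2.0, 1.8, 2.2]              # $/MW
demand = 300.0                   # MW
# Row-stochastic averaging weights for three fully connected generators.
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
eps = 0.0005                     # mismatch feedback gain

lam = [2.5, 3.0, 3.5]            # initial incremental-cost guesses
for _ in range(2000):
    P = [(lam[i] - b[i]) / (2 * a[i]) for i in range(3)]
    mismatch = demand - sum(P)
    lam = [sum(W[i][j] * lam[j] for j in range(3)) + eps * mismatch
           for i in range(3)]

P = [(lam[i] - b[i]) / (2 * a[i]) for i in range(3)]
print("lambda:", [round(x, 3) for x in lam])
print("P:", [round(x, 1) for x in P], " total:", round(sum(P), 1))
```

At convergence all incremental costs agree and the mismatch vanishes, so total generation meets demand at equal marginal cost, which is the equal-incremental-cost optimality condition for the unconstrained quadratic EDP.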

Date Issued

2014

Identifier

FSU_migr_etd-9153

Format

Thesis

Title

On the Properties and Mechanisms of Microjet Arrays in Crossflow for the Control of Flow Separation.

By utilizing passive and active methods of flow control, the aerodynamic performance of external and internal components can be greatly improved. Recently, however, the benefits of applying active flow control methods to turbomachinery components for improved fuel efficiency, reduced engine size, and a greater operational envelope have sparked renewed interest in some of these flow control techniques. Among the more attractive of these is active control in the form of jets in crossflow. With their ability to be turned on and off, as well as their negligible effect on drag when not actuated, they are well suited for applications such as compressor and turbine blades, engine inlet diffusers, internal engine passages, and general external aerodynamics. This study consists of two parts. The first is the application of active control on a low-pressure turbine (LPT) cascade to determine the effectiveness of microjet actuators on flow separation at relatively low speeds. The second study, motivated by the first, involves a parametric study on a more canonical model to examine the effects of various microjet parameters on the efficacy of separation control and to provide a better understanding of the relevant flow physics governing this control approach. With data obtained from velocity measurements across the wide parametric range, correlations for the growth of the counter-rotating vortex pairs generated by these actuators are deduced.
From the information and models obtained throughout the study, basic suggestions for microjet actuator design are presented.

Power Electronic-based Distribution Systems (PEDS) can provide excellent features such as load regulation, high power factor, and good transient performance, especially in large-scale grids with high penetration of renewable energy resources and innovative Power Electronic-based Components (PECs) such as Solid State Transformers (SSTs), Fault Isolation Devices (FIDs), machine drives, and inverters. Conversely, they are prone to negative-impedance instabilities due to the regulated output voltage, high power factor and constant-power nature of the individual components in the system. Therefore, small-signal and large-signal stability assessments of the PEDS play a prominent role in the different stages of system analysis: the pre-operational (design), operational, and post-operational stages. Herein, various stability analysis techniques, along with their pros and cons, are described. This work proposes a novel "real-time" stability analysis criterion and technique to assess the small-signal stability of PECs in contemporary distribution systems. This consists of a new small-signal stability criterion as well as an appropriate technique to assess the small-signal stability of PECs based on the proposed criterion. The proposed criterion is developed from the d-q impedance measurement technique and the Nyquist criterion.
The advantages of the proposed criterion and technique include suitability for real-time applications, simplicity of development in software and hardware, and the use of a powerful algorithm to address the small-signal stability of the PEDS. The primary contribution of this work is the real-time stability analysis methodology; more specifically, the capability of the proposed criterion and technique to be implemented on a real-time platform. The parallel perturbation of source and load is one of the key features of the proposed method that enables this real-time capability. In addition, the proposed stability criterion, based on impedance measurement and the Nyquist stability criterion, achieves higher accuracy in small-signal stability assessments by providing a complete Nyquist contour of the system's return-ratio matrix. Ultimately, this yields lighter computational loads, faster computation times, and a more accurate evaluation of system stability in a way that enables assessment of both the relative and absolute stability of the PEDS. Another advantage of the proposed technique is that it takes part of the system's nonlinearities into account by perturbing the system with a chirp signal over a range of frequencies, instead of exclusively at the fundamental frequency. Hardware development and experimental implementation are also presented in this work. In the experimental section, an Impedance Measurement Unit (IMU) is developed via a Power Hardware-in-the-Loop (PHIL) experiment and measures source and load impedances in real time. Subsequently, the proposed stability criterion is implemented on a real-time digital simulator (RTDS) and, by utilizing information from the developed IMU, the small-signal stability of the test bed is investigated in real time.
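The impedance-based idea can be illustrated on a one-dimensional case: form the minor loop gain L(jω) = Z_source(jω)/Z_load(jω) and inspect its Nyquist locus. The sketch below uses an assumed R-L source and the negative incremental resistance of a constant-power load; it is a simplified scalar illustration, not the proposed d-q criterion:

```python
# Impedance-ratio (minor loop gain) check sketched numerically:
# L(jw) = Z_source(jw) / Z_load(jw). For a constant-power load the
# small-signal input impedance is the negative resistance -V^2/P.
# The simplified check below looks at the Nyquist locus of L(jw);
# all component values are illustrative assumptions.
import math

def minor_loop_gain(w, R=0.2, Ls=1e-4, V=380.0, P=3000.0):
    z_source = complex(R, w * Ls)
    z_load = complex(-V * V / P, 0.0)   # CPL negative incremental resistance
    return z_source / z_load

freqs = [10 ** (k / 20) for k in range(-40, 121)]   # 0.01 Hz .. 1 MHz
locus = [minor_loop_gain(2 * math.pi * f) for f in freqs]

# For this model the locus crosses the real axis only at w -> 0:
dc_gain = minor_loop_gain(0.0).real
print(f"L(0) = {dc_gain:.4f}  (no encirclement of -1 if L(0) > -1)")
```

In this scalar model the real-axis crossing condition reduces to RP/V² < 1, the familiar resistive-damping bound for constant-power loads; the thesis's criterion generalizes this kind of check to the measured d-q return-ratio matrix.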

The market for solar energy has been expanding rapidly worldwide. However, due to weather conditions, photovoltaic (PV) systems generally exhibit considerable power variations, which include voltage fluctuations and frequency variations. As a result, the connected power systems may experience adverse effects from the fluctuating power generated by the PV system. The intermittent power generation of a solar farm can perturb the supply and demand balance of the whole power system. For stability, a power network requires a spinning reserve, which increases with the growth of PV installations and inevitably degrades the efficiency of power generation. Therefore, mitigating the adverse grid effects of an intermittent PV source is expected to be essential for increasing the penetration level of PV systems. Recently, the battery energy storage system (BESS) has been seen as a promising solution to help PV integration, due to the flexible real power control of the batteries. Unfortunately, this technique has not been applied extensively due to the high cost of batteries. If chosen, battery storage needs to be designed methodically, which is critical for the owners of PV. First, this dissertation proposes an original sizing strategy for a dispersed BESS in distribution feeders with distributed PV systems. The main functions of the dispersed BESS are overvoltage reduction and peak-load shaving.
A benefit and cost analysis of the installed dispersed BESS is conducted. Under a high penetration level of PV systems, to assess the effect of the dispersed BESS on overvoltage reduction, the proposed cost-benefit analysis uses the work stress of voltage regulation devices as a reference. The factors of load shifting, peaking power generation, dispersed BESS costs and an estimation of lifetime are considered in the annual cost calculation. In particular, lithium iron phosphate (LiFePO4) batteries and lead-acid batteries were selected to demonstrate the proposed method on the modified GE distribution feeders. The economic analysis of these two battery types can determine the lower-cost battery and the cost-effective size for the dispersed BESS at different locations in the distribution system under a high PV penetration level. Second, this dissertation proposes a method to optimize the design of a centralized BESS capacity and the energy management system (EMS) based on a utility revenue analysis for a large-scale PV plant application. The battery storage, controlled by the EMS, aims to enhance the grid integration of a large PV plant by shaping the fluctuating plant output into a relatively constant power and by supporting the peak load. LiFePO4 and lead-acid batteries are used to demonstrate the proposed method in a utility model, and the lifetime and systematic performance of the two battery types are compared. Furthermore, the change in utility revenue caused by the installed battery storage can be calculated and maximized based on the proposed method to determine the optimal design of BESS capacity and EMS for a large PV power plant application. These two proposed methods can offer insights into the performance and economic analysis of a BESS in PV applications for project designers and business stakeholders.
With the help of the developed methods, BESS designs can be optimized for any PV application with the necessary changes according to the practical application. Finally, the scope of future work is discussed.
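The peak-shaving function of a BESS can be sketched with a toy dispatch rule. The hourly profile and battery parameters below are invented, and losses, degradation, and pricing are ignored:

```python
# Minimal peak-shaving sketch: a battery discharges whenever feeder load
# exceeds a threshold and recharges off-peak, flattening the peak seen by
# the grid. The hourly load profile and battery parameters are invented.
load = [300, 280, 270, 265, 270, 300, 360, 420,   # hourly load, MW
        460, 480, 500, 520, 530, 520, 500, 480,
        470, 490, 540, 560, 520, 450, 380, 320]
threshold = 480.0      # MW: shave everything above this
capacity = 300.0       # MWh usable battery energy
power_max = 80.0       # MW charge/discharge limit

soc = capacity / 2     # start half charged
grid = []
for L_h in load:
    if L_h > threshold:                       # discharge to shave the peak
        p = min(L_h - threshold, power_max, soc)
        soc -= p
        grid.append(L_h - p)
    else:                                     # recharge toward full off-peak
        p = min(threshold - L_h, power_max, capacity - soc)
        soc += p
        grid.append(L_h + p)

print("original peak:", max(load), " shaved peak:", round(max(grid), 1))
```

The sizing problem the dissertation addresses is exactly the tension visible here: too small a battery runs out of energy before the evening peak ends, while too large a battery raises annualized cost without further peak reduction.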

Magnetic resonance imaging (MRI) provides an effective approach to track labeled pluripotent stem cell (PSC)-derived neural progenitor cells (NPCs) for neural transplantation and neurological disorder treatments. However, labeling thawed cells after cryopreservation can be limited by inefficient intracellular labeling and variations in labeling efficiency. Therefore, cryopreservation of pre-labeled cells can provide a uniform cell population and operational convenience for subsequent in vitro and in vivo investigations. In this study, the feasibility of cryopreserving PSC-derived NPC aggregates labeled with micron-sized particles of iron oxide (MPIO) was investigated. The NPC aggregates derived from embryoid body formation were labeled in suspension with different concentrations of MPIO in the range of 0-100 μg Fe per mL. The results indicated that intracellular MPIO incorporation was retained after cryopreservation (70-80% labeling efficiency) and did not significantly affect cell recovery, proliferation, cytotoxicity or neural lineage commitment. MRI analysis was performed in a phantom tissue environment using cell layers with different MPIO exposures separated by agarose gels. The results showed comparable detectability for the MPIO-labeled cells before and after cryopreservation, as indicated by T2 and T2* relaxation rates.
These findings indicate the feasibility of cryopreserving MPIO-labeled PSC-derived NPC aggregates for potential cell banking toward various in vitro and in vivo cell tracking studies.

Power system designers have more creative flexibility than ever before due to improvements in power electronics technology. The invention of the silicon (Si) insulated gate bipolar transistor (IGBT) in the 1980s was a major improvement over the Si MOSFETs and thyristors commonly used for higher-power applications, because it provided faster switching capabilities. New developments in silicon carbide (SiC) semiconductors are causing a similarly disruptive effect on the industry because of their higher possible switching frequencies than Si IGBTs and the ability to create 10 kV devices with switching frequencies beyond 20 kHz. Higher breakdown voltage and faster switching enable converter designs with higher power densities (watts per cubic meter) that interface with higher-voltage systems. These two factors, along with the decreasing costs of Si IGBTs and low-voltage SiC MOSFETs, make the increased use of power converters throughout a distributed power system possible. Power converters with a regulated output draw a constant input power from a distribution system. While constant power loads have a nonlinear relationship between input voltage and load current, linear systems theory has historically dominated their analysis. The negative admittance model is often used with input filter parameters to create linear models of constant power loads suitable for small-signal stability analysis.
However, systems with limited generation capacity and large constant power loads are susceptible to large-signal instability. Therefore, system stability analysis must include nonlinear models of system components to form an analytical, large-signal stability metric. We used the Volterra series to model nonlinear responses of constant power loads through Volterra kernel measurement. A switch-mode power converter was designed to synthesize large-signal perturbations to measure frequency-domain Volterra kernels of 380 VDC loads up to 5 kW. We measured the first- and second-order kernels of a 3 kW, 380 VDC constant power load from 0.1 Hz to 1000 Hz and verified the significance of the second-order kernel.
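The nonlinearity that kernel measurement captures can be illustrated with an idealized constant power load, i = P/v. The following is a minimal sketch under strong assumptions (an ideal load and a single-tone perturbation; all numeric values are hypothetical, not the dissertation's converter hardware): the current response at the perturbation frequency estimates the first-order kernel magnitude, while the response at twice that frequency scales with the amplitude squared and estimates the diagonal of the second-order kernel.

```python
import numpy as np

def cpl_current(v, power=3000.0):
    """Ideal constant power load: i = P / v (nonlinear in v)."""
    return power / v

def measure_kernels(f_hz, v_dc=380.0, amp=5.0, power=3000.0,
                    fs=100_000, duration=1.0):
    """Estimate |H1(f)| and the diagonal |H2(f, f)| of the load via a
    single-tone voltage perturbation: the current response at f scales
    with the perturbation amplitude, the response at 2f with its square."""
    t = np.arange(0.0, duration, 1.0 / fs)
    v = v_dc + amp * np.sin(2 * np.pi * f_hz * t)
    i = cpl_current(v, power)

    def tone_amplitude(sig, f):
        # single-bin DFT (coherent demodulation) at frequency f
        return abs(2.0 / len(t) * np.sum(sig * np.exp(-2j * np.pi * f * t)))

    h1 = tone_amplitude(i, f_hz) / amp          # A per V
    h2 = tone_amplitude(i, 2 * f_hz) / amp**2   # A per V^2
    return h1, h2

# For an ideal CPL, a series expansion of P/(V0 + dv) predicts
# |H1| = P/V0^2 and |H2(f, f)| = P/(2*V0^3), independent of frequency.
h1, h2 = measure_kernels(10.0)
```

In practice the full (off-diagonal) second-order kernel requires multi-tone perturbations, which is why a dedicated perturbation-synthesizing converter is needed.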

Wind induced damage is observed in different types of civil engineering structures. There are several methods in use to predict damage. Researchers and stakeholders rely on these methods to quantify damage, which helps to schedule maintenance and to estimate financial loss. These damage prediction methods utilize knowledge of the properties of the wind or of the wind load resistance of the material that constitutes the structure. However, recently, researchers have pointed out several shortcomings in these approaches. One such shortcoming is the inability of these methods to address the uncertainty in the data. A typical damage prediction method relies on the accuracy of the statistics of the wind load or the material property used in the analysis. If uncertainty exists in the data, then the statistics obtained from the data will give overconfident inferences. As a result, the final predicted damage will be biased and will not reflect the uncertainties involved in the actual data. In this research, an approach is proposed to enhance the damage prediction model. In order to address the uncertainties in damage prediction, the approach integrates monitored data and existing knowledge, which gives probabilities of damage rather than a single number. Advances in sensor and wireless technologies have enabled much easier access to high-quality monitored data. The monitored data can be used to enhance the accuracy of damage prediction.
While using monitored data, the proposed approach also seeks to fully utilize existing damage prediction models. These models provide a strong framework based on theories of mechanics and knowledge gained from many years of research. In order to integrate existing damage models and additional monitored data, a Bayesian approach is adopted. The Bayesian approach provides a sound framework for integrating the existing model and the additional data. In the Bayesian approach, the existing model is termed the prior. The prior is systematically updated using additional monitored data in order to provide the posterior. In this research, two case studies are considered. These are complete sealant failure of three-tab asphalt shingles under wind load, and fatigue damage of slender structures due to turbulence and wind-structure interaction. In the case of asphalt shingles, wind vulnerability is determined using sensor-based strength monitoring and integrating the existing data. The sealant in the shingle helps to resist the wind load acting on the shingle. After installation of the asphalt shingle, the sealant deteriorates over time and loses bond with the shingle. Consequently, the wind uplift capacity is reduced and a larger area of the shingle is exposed to higher wind load. A complete failure of the sealant due to the wind load acting on it is defined as the failure of the shingle. A sensor mechanism is proposed to monitor the deterioration of the sealant and the wind vulnerability of the asphalt shingle. Existing knowledge and monitored data are integrated to estimate the uplift capacity and the wind load acting on the shingle. The vulnerability of the shingle at each wind speed is expressed in terms of the sensor reading. Monte Carlo (MC) simulation is carried out to determine the failure contour on the roof and fragility curves of the roof at different ages.
It is observed that the fragility curve for a 2% area of roof failure at 100 mph for a 10 year old roof from this study compares well with the roof cover fragility results from Cope (2014). In the case of long span bridges, existing and monitored wind data are integrated to determine the possible statistics of the wind, and damage is predicted using these data. Accuracy of fatigue damage prediction depends on the accuracy of the wind speed and direction statistics. Conventional approaches rely on initial wind statistics only, which results in a single fatigue damage value. The proposed approach systematically updates the prior wind statistics using one year of monitored wind data. This is used to determine the possible values of the wind speed and direction statistics at the location. Fatigue analysis then provides the probability distribution of fatigue damage values. A long span bridge and a long span beam were studied using the conventional and proposed approaches. For the long span bridge, the fatigue damage from the conventional approach is 0.002, and the mean fatigue damage from the proposed analysis is 0.002. For the long span beam, the values are 0.392 and 0.397, respectively. The results from the proposed approach give designers and retrofitters a comprehensive view of the possible values of damage at any location on the bridge, thus helping in planning maintenance tasks.
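The prior-to-posterior updating of a wind statistic can be sketched with a conjugate normal model. This is an illustrative simplification, not the study's actual wind or fatigue model: the prior parameters, the synthetic "monitored" data, and the power-law damage function below are all hypothetical placeholders chosen only to show how a damage distribution, rather than a single value, emerges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior from the existing model: mean wind speed ~ N(mu0, tau0^2).
mu0, tau0 = 12.0, 2.0   # hypothetical prior mean and std (m/s)
sigma = 3.0             # assumed known std of individual monitored readings

# One year of synthetic "monitored" daily mean wind speeds
data = rng.normal(11.0, sigma, size=365)

# Conjugate normal-normal update: precisions add, and the posterior mean
# is a precision-weighted average of the prior mean and the sample mean.
n, xbar = len(data), data.mean()
post_prec = 1.0 / tau0**2 + n / sigma**2
mu_n = (mu0 / tau0**2 + n * xbar / sigma**2) / post_prec
tau_n = post_prec ** -0.5

# Propagate the posterior uncertainty through a hypothetical power-law
# damage model D(v) = k * v^m: the output is a damage *distribution*
# rather than the single number a conventional analysis would produce.
k, m = 1e-5, 3.0
v_samples = rng.normal(mu_n, tau_n, size=10_000)
damage = k * v_samples**m
```

Note how a year of data shrinks the posterior spread well below the prior's, which is exactly why the monitored data sharpen the predicted damage distribution.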

All modern-day landfills contain a series of perforated pipes installed beneath the waste whose purpose is to collect all liquid which drains through the cell. This system is called the leachate collection system (LCS), and its primary purpose is to drain any liquid toward a central location where it is pumped and then treated, discharged, or recirculated. It has been discovered that certain landfills see a buildup of precipitates within the system, which leads to clogged pipes and buildup of leachate head on top of the landfill. The formation of the precipitates is linked to the chemical and biological make-up of the leachate generated within the landfill. In order to better understand this clogging process, and thus be able to prevent it in future landfills, the chemical and biological characteristics of leachate as well as landfill design must be examined. It is now known that ash content within the waste will lead to greater clogging. This is because ash contributes greater amounts of the calcium necessary for biofilm to grow within the drainage media. While one solution to this problem is the monofilling of ash residue in separate landfills, many operators still choose to combine municipal solid waste (MSW) and ash. Since no law exists prohibiting the latter, it is the goal of this research to design a model which may be used by landfill operators to foresee the clogging potential of their landfill and thus prevent it.
The main objective of this study is to use a "film growth approach" to simulate clogging in Florida landfills. The change in hydraulic properties and porosities of leachate drainage materials due to calcium carbonate buildup will be predicted using Florida-specific leachate composition data and leachate generation data for typical landfills operated in different micro-climates of the state. The results of this investigation will be used to examine the adequacy of the current design methodology of leachate collection systems in the state of Florida. The findings of this study will then be used to estimate the service life of LCSs in different regions of the state. The study was conducted in four stages. The first stage consisted of a literature review of previous laboratory and field tests of LCSs. It also took into account all available FDEP databases of leachate quality and quantity. The second stage aimed at modeling calcium carbonate growth within an LCS based on results obtained in the first stage. The third stage consisted of an analysis of LCS clogging results as applied to model landfills which represented typical landfills throughout the state of Florida. The performance of these model landfills and LCSs was evaluated to see what kinds of changes are noticeable in the leachate quality and quantity over the lifetime of the landfill. Clogging of drainage media was the main focus of this stage because this clogging is the biggest contributor to LCS failure. Finally, the adequacy of design of LCSs in model landfills was examined and adjusted as needed based on results obtained in stages 1-3. It was also possible to estimate the service life of existing and future LCSs to make sure that no leachate ever escapes the landfill and contaminates the groundwater.
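The film growth idea can be sketched as a simple time-stepping loop: precipitate accumulates in proportion to the calcium flux through the drainage layer, porosity shrinks, and a Kozeny-Carman relation maps the reduced porosity to a reduced hydraulic conductivity. This is a rough illustration only; the deposition efficiency and every other parameter value below are hypothetical, not the study's calibrated model.

```python
def simulate_clogging(n0=0.40, k0=1e-2, ca_mg_per_l=800.0,
                      flow_m_per_yr=1.5, horizon_yr=50.0, dt_yr=0.5):
    """Time-step the drainage layer: precipitate volume grows with the
    calcium flux, porosity n shrinks, and hydraulic conductivity k follows
    a Kozeny-Carman scaling k ~ n^3 / (1 - n)^2. Returns (t, n, k) tuples."""
    deposit_eff = 1e-2  # hypothetical fraction of Ca flux retained as solid
    n, t, history = n0, 0.0, []
    while t < horizon_yr and n > 0.05:
        # porosity lost this step (very rough volumetric bookkeeping)
        dn = deposit_eff * ca_mg_per_l * flow_m_per_yr * dt_yr * 1e-3
        n = max(n - dn, 0.05)
        k = k0 * (n**3 / (1 - n)**2) / (n0**3 / (1 - n0)**2)
        t += dt_yr
        history.append((t, n, k))
    return history

history = simulate_clogging()
```

Under these made-up parameters the conductivity collapses within a few decades; in the study, trajectories of this kind, driven by measured Florida leachate composition and generation data, are what determine the estimated LCS service life.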

Electrical discharge is a commonly used method to produce ions and radicals that can be used for degrading compounds as well as for chemical synthesis. Previously, the application of electrical discharges has been studied in liquids such as water and alcohols to produce hydrogen peroxide and hydrogen and to destroy organic compounds in the water and gas phases. Recently, low-power gas-liquid electric discharges have been employed to increase efficiency for hydrogen peroxide and oxidized products for synthesis or degradation. The determination and analysis of the intermediate radicals produced in the plasma have not been studied intensively for discharges at the gas-liquid interface, such as aerosol sprays and thin liquid films. According to theoretical models based on reaction kinetics in plasma, radicals such as hydroxyl play an important role in the formation of hydrogen peroxide. However, there may be excess hydroxyl radicals formed that are not involved in the formation of hydrogen peroxide. The main goal of this work is to characterize and identify key intermediate radicals and their reaction pathways in the liquid phase, the gas phase, and at the interface of aerosol droplets and thin film surfaces, using various gases and liquid feeds.

The study of user behavior and decision-making dynamics in transportation networks is vital in the modeling and simulation of user interactions. Different users access transportation networks in order to accomplish different activities. Such activities can be regular commuting, transit services, commercial taxicabs, deliveries, long distance trips, logistics or fleet services, etc. As the world becomes increasingly urbanized, reliable and cost-effective movement of people and goods is important for productivity and economic growth at large. Urbanization and population growth have created a shift in how travel activities are tied to the economy. In today's economy, businesses and individuals are looking for ways of making their fiscal resources and workforce more efficient. However, traffic congestion dampens efficiency and prosperity by imposing additional operating costs, slowing mobility, wasting time, and hindering efficient metropolitan services such as deliveries, public safety, and maintenance. Traffic congestion in the United States in 2011, for instance, caused urban commuters to travel 5.5 billion hours more and to purchase an extra 2.9 billion gallons of fuel (enough to fill the Superdome in New Orleans twice) for a congestion cost of $121 billion. In larger cities and on busy expressways, traffic infrastructures are already operating at or near full capacity.
With today's shrinking budgets, often no funding is available to rebuild or expand an aging public transportation infrastructure, making it crucial to devise ways to optimize the performance of existing transportation assets. Since recurring congestion in large metropolitan areas is mainly due to predictable behavioral activity scheduling, traffic management efforts should be geared towards behavior analysis and modeling. Modeling behavior and decisions pertinent to route choice and activity scheduling dynamics is crucial for capturing the microscopic and mesoscopic nature of traffic flow patterns. In this research, the focus is placed on the development of a multi-agent transportation demand estimation and simulation framework to be used by public entities for performance optimization of the existing transportation network and scenario evaluation of new investments. The framework employs several mathematical and statistical methods for the derivation of sampling distributions of users' (i.e., agents') behavior and travel characteristics for the initial network demand generation. The process of deriving sampling distributions of agents' behavior and travel characteristics largely relies on the quantity, quality, and resolution of the available data for the region under study. Travel characteristics/travel survey data from the South East Florida Regional Planning Model (SERPM) region and the National Household Travel Survey (NHTS) data contained individuals' travel characteristics such as origin, destination, departure and arrival time, and the chain of activities and tours within the trip. This is the micro-level information needed for the derivation of household and individual agents' travel behavior. The data were processed to develop probability distributions for groups of agents with similar travel behavior, given the agents' household characteristics.
In a similar fashion, with agents' household characteristics given, logit models for agents' activity and location choices were developed. Besides behavior simulation and demand estimation, the developed framework included an add-on module for lane choice and pricing approaches applicable to dynamic high occupancy toll (HOT) lane pricing. A reinforcement learning (RL) approach was used for updating the optimal pricing strategy in a given traffic condition. The pricing controller was configured to start with a predefined base price at a given traffic level; then, in the process of learning, it varies the price in accordance with the acceptable price levels at a given level of service (LOS). In this way, the pricing controller learns the states in which a higher price is more beneficial and those in which a lower price is more beneficial, and then adjusts the parameters of the pricing function to minimize the difference between the current computed price and the posted price. The framework was tested and validated for a scenario based on data from the SERPM region. The scenario was simulated in Multi-Agent Transport Simulation (MATSim). In MATSim, the simulation is constructed around the notion of agents that make independent decisions about their actions. Each traveler of the real system is modeled as an individual agent. Generally, the observation of network traffic evolution from the simulation showed the expected traffic patterns for both morning peak and afternoon peak traffic. One of the most important aspects of travel behavior is the characterization of travel activities by trip duration. The distribution of travel activities by trip duration is a reflection of user behavior in the study area. It determines the expected users departing, en route, stuck, and arriving at their destinations in a particular time interval.
In this research, the simulation results show that network users in our case consist mainly of regular commuters (≥ 20%) whose trips take about 15 minutes. As with any other research study, this work has some limitations. Due to a lack of relevant data, transit and modes other than the personal vehicle were not considered. Future directions for this research include the inclusion of other data sources and optimization of the demand estimation framework in order to scale down the computational cost. In addition to the reduction of computational cost, focus will be on the development and implementation of modules for simulating dynamic toll pricing on high occupancy toll lanes and assessing the effects of social media information exchange among the agents on mobility.
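The RL-based pricing loop described above can be sketched with tabular Q-learning: states are discretized LOS levels, actions are candidate toll levels, and the reward trades toll revenue against keeping the toll lane below a target utilization. This is an illustrative stand-in, not the dissertation's controller; the logit demand model, reward weights, and demand-drift transitions are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

PRICES = np.arange(0.50, 5.01, 0.50)  # candidate toll levels ($)
N_LOS = 4                             # discretized level-of-service states

def demand_share(price, los):
    """Hypothetical logit share of travelers choosing the toll lane:
    worse congestion raises willingness to pay, a higher toll lowers it."""
    u = 0.8 * los - 1.2 * price
    return 1.0 / (1.0 + np.exp(-u))

Q = np.zeros((N_LOS, len(PRICES)))
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20_000):
    # epsilon-greedy action selection over candidate prices
    a = rng.integers(len(PRICES)) if rng.random() < eps else int(Q[state].argmax())
    share = demand_share(PRICES[a], state)
    # reward: toll revenue, penalized when the toll lane itself congests
    reward = PRICES[a] * share - 2.0 * max(share - 0.7, 0.0)
    # demand drifts between LOS states independently of the action (simplification)
    next_state = min(N_LOS - 1, max(0, state + int(rng.integers(-1, 2))))
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

# learned price per LOS state: congested states should command higher tolls
best = {s: float(PRICES[int(Q[s].argmax())]) for s in range(N_LOS)}
```

The learned policy posts higher tolls in the congested states, mirroring the controller's goal of keeping the HOT lane at an acceptable LOS while collecting revenue.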

The prospect of the smart grid (SG) is green, power-efficient, and economical for its customers. Many emerging innovations have reached a consensus that traditional power grids need to be combined with modern data networks in order to establish a new platform that supports distributed renewable energy devices, electrical measuring sensors, intelligent energy management and control systems, etc. For example, an energy management system has been proposed to connect data aggregators with renewable energy devices in the network area. A wireless sensor network is used to provide the communications between SG data centers and consumers, and to manage residential energy with an optimization-based scheme. In the SG, the stability of an energy management scheme becomes heavily dependent on accurate real-time communications among intelligent energy management agencies in residential homes, micro-grids, and main grids. Within a large-scale distributed (or centralized) SG, the communication network is designed to connect multiple power management systems and collect data from hundreds or thousands of power sensors over a wide geographical area. One dominant feature of the new SG communication network is that each power device is coupled with a single Ethernet or non-Ethernet communication agent to exchange control state or management information with others.
Generally speaking, an intelligent agent helps its corresponding power device negotiate with other peers to dynamically form an ad-hoc group through the data network infrastructure. Many meaningful power management algorithms then operate within this logical group. The grouping topology recognized by a specific agent needs to reform when the participating group members cannot satisfy the demand from the operating power management algorithms. From this ad-hoc design, the main problem arises: in a changed group, the networking size, traffic load, queueing effect, and security requirement vary, so an agent experiences different communication costs across group reforming. We define this inevitable difference as the communication inconsistency of SG ad-hoc grouping. If the timeout parameters of communication control are set statically in grouping procedures, the inconsistency frequently triggers timeouts, crashes the group, and aborts the running cycle of power management algorithms. Thus, in this work, an adaptive timing solution is developed for connecting distributed intelligent agents in an ad-hoc manner to greatly enhance the flexibility and performance of grouping algorithms in the SG communication network. A timing adaptive grouping (TAG) protocol is proposed to make every distributed agent capable of adjusting its operational timing configurations (OTCs) in pace with the changing of ad-hoc groups, thereby preventing communication inconsistency from harming the stability of grouping procedures. More specifically, we first develop a set of queueing models to describe the network traffic of various power management applications among distributed agents in connection with different scales of ad-hoc grouping topologies. Second, the security cost of SG communications is modeled, estimated, and validated for various grouping agents' characteristics.
Third, based on the network grouping model including both queueing and security cost, we analyze the ad-hoc delay performance and show that the model can be used to predict the average operating delays of networking agents. Fourth, based on the delay parameters derived from the modeling, the TAG protocol is developed with our Smart Timing Adaptive (STA) algorithm to enable each distributed agent to dynamically judge varying ad-hoc grouping conditions. Finally, we have implemented a validation testbed with the capabilities of integrated real-time communication and power exchange to demonstrate the ad-hoc grouping operation of SG power management applications. Due to the ripple effect of inconsistent communication delays among ad-hoc SG groups with dynamically changing topology, network performance becomes a major concern in supporting power management applications. To deal with this, in a large NSF project, Future Renewable Electric Energy Delivery and Management (FREEDM), we implemented an SG prototype called the FREEDM Hardware-in-the-Loop (HIL) testbed. So-called Distributed Grid Intelligences (DGIs) act as the distributed intelligent agents in the SG prototype, which can group specific peers to exchange power load between power demands and supplies. Many other existing works contribute a variety of platforms to integrate power and communication systems; in our FREEDM project, however, we built an SG testbed, a new platform that combines an HIL power system and a real-time communication system. The power system devices are managed by the DGIs, which are connected into the communication networks. The DGIs act as intelligent energy management agencies for the power system and as information nodes for the communication networks. The DGI instances run on embedded computer boards with processing and communication capabilities. A DGI represents its power device in communicating with other DGI instances or DGI nodes.
DGIs connected in a LAN and WAN may be grouped together to meet power demand and supply requirements. A DGI group may cover a LAN, or a LAN and WAN simultaneously, depending on the location of the DGI nodes. When electrical faults isolate a section from the power system, in a communication sense the section is still connected and can exchange information about grid states with other sections in the power system. The real-time and HIL features of the testbed are reflected in the design of both the power and communication systems. To implement the concept of HIL in the power system, some power devices are implemented with real-world electrical hardware, while other devices are simulated on the Real Time Digital Simulator (RTDS) platform. To implement the concept of HIL in the communication system for the DGIs, the DGI LANs are implemented with Ethernet switches, while the DGI WAN is simulated in real time by OPNET, a network simulator program. Within OPNET, there is a system-in-the-loop (SITL) interface that translates DGI traffic between real and simulated packet formats.
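The link between the queueing model and the adaptive timeout can be sketched as follows. Assuming, purely for illustration, that an agent's message queue behaves as an M/M/1 queue whose arrival rate scales with group size, the sojourn time is exponentially distributed with rate (mu - lambda), so a high quantile of it yields a timeout that adapts as the group reforms. This is not the dissertation's STA algorithm, only the underlying idea; the security-overhead value and all rates are hypothetical.

```python
import math

def adaptive_timeout(group_size, msg_rate_per_agent, service_rate,
                     quantile=0.99, security_overhead_s=0.002):
    """STA-flavored sketch: model an agent's message queue as M/M/1 with an
    arrival rate proportional to group size. The M/M/1 sojourn time is
    exponential with rate (mu - lambda), so its q-quantile yields a timeout
    that tracks group size; a fixed per-message security-processing
    overhead (hypothetical value) is added on top."""
    lam = group_size * msg_rate_per_agent      # aggregate arrivals (msgs/s)
    if lam >= service_rate:
        raise ValueError("queue unstable: arrival rate exceeds service rate")
    return -math.log(1.0 - quantile) / (service_rate - lam) + security_overhead_s

# When the group reforms from 5 to 20 agents, the timeout grows with it,
# which is exactly the adjustment a statically configured timeout misses.
t_small = adaptive_timeout(5, 10.0, 500.0)
t_large = adaptive_timeout(20, 10.0, 500.0)
```

A statically configured timeout tuned for the small group would sit below the large group's delay quantile and fire spuriously, which is the communication inconsistency the TAG protocol is designed to absorb.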