▼ Machine learning (ML) has become one of the most powerful classes of tools for artificial intelligence, personalized web services, and data science problems across fields. Within the field of machine learning itself, there have been a number of paradigm shifts caused by the explosion of data size, computing power, and modeling tools, and by the new ways people collect, share, and make use of data sets. Data privacy, for instance, was much less of a problem before the availability of personal information online that could be used to identify users in anonymized data sets. Images, videos, and observations generated over social networks often have highly localized structures that cannot be captured by standard nonparametric models. Moreover, the “common task framework” adopted by many subdisciplines of AI has made it possible for many people to work collaboratively and repeatedly on the same data set, leading to implicit overfitting on public benchmarks. In addition, data collected in many internet services, e.g., web search and targeted ads, are not i.i.d., but rather feedback specific to the deployed algorithm. This thesis presents technical contributions under a number of new mathematical frameworks that are designed to partially address these new paradigms.
• Firstly, we consider the problem of statistical learning with privacy constraints. Under Vapnik’s general learning setting and the formalism of differential privacy (DP), we establish simple conditions that characterize private learnability, revealing a mixture of positive and negative insights. We then identify generic methods that reuse existing randomness to effectively solve private learning in practice, and discuss weaker notions of privacy that allow for a more favorable privacy-utility trade-off.
• Secondly, we develop several generalizations of trend filtering, a locally adaptive nonparametric regression technique that is minimax optimal in 1D, to the multivariate setting and to graphs. We also study specific instances of these problems more closely, e.g., total variation denoising on d-dimensional grids, and the results reveal interesting statistical-computational trade-offs.
• Thirdly, we investigate two problems in sequential interactive learning: (a) off-policy evaluation in contextual bandits, which aims to use data collected from one algorithm to evaluate the performance of a different algorithm; and (b) adaptive data analysis, which uses randomization to prevent adversarial data analysts from a form of “p-hacking” through multiple steps of sequential data access.
For the above problems, we provide not only performance guarantees for the algorithms but also certain notions of optimality. Whenever applicable, careful empirical studies on synthetic and real data are also included.
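The 1D total variation denoising problem underlying the trend filtering work above has a compact form that is easy to sketch. The toy solver below (mine, not code from the thesis) minimizes 0.5·||y − x||² + λ·Σᵢ|xᵢ₊₁ − xᵢ| by projected gradient ascent on the dual; the step size 0.25 comes from the spectral norm bound ||DDᵀ|| ≤ 4 for the 1D difference operator D.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=2000):
    """Fused-lasso signal approximator: min_x 0.5||y-x||^2 + lam*TV(x)."""
    u = np.zeros(len(y) - 1)                       # one dual variable per difference
    for _ in range(n_iter):
        x = y + np.diff(u, prepend=0, append=0)    # primal iterate x = y - D^T u
        u = np.clip(u + 0.25 * np.diff(x), -lam, lam)  # projected dual ascent
    return y + np.diff(u, prepend=0, append=0)

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0, 1.0], 100)            # piecewise-constant signal
x_hat = tv_denoise_1d(truth + 0.3 * rng.standard_normal(truth.size), lam=5.0)
```

The output is itself piecewise constant, which is the local adaptivity that the thesis generalizes to higher-order trend filtering, multivariate grids, and graphs.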

▼ To better understand why machine learning works, we cast learning problems as searches and characterize what makes searches successful. We prove that any search algorithm can only perform well on a narrow subset of problems, and show how dependence raises the probability of success for searches. We examine two popular ways of understanding what makes machine learning work, empirical risk minimization and compression, and show how they fit within our search framework. Leveraging the “dependence-first” view of learning, we apply this knowledge to the areas of unsupervised time-series segmentation and automated hyperparameter optimization, developing new algorithms with strong empirical performance on real-world problem classes.

▼ Active search studies algorithms that can find all positive examples in an unknown environment by collecting and learning from labels that are costly to obtain. They start with a pool of unlabeled data, act to design queries, and are rewarded by the number of positive examples found over a long-term horizon. Active search is connected to active learning, multi-armed bandits, and Bayesian optimization. To date, most active search methods are limited by the assumption that query actions and rewards are based on single data points in a low-dimensional Euclidean space. Many applications, however, define actions and rewards in a more complex way. For example, active search may be used to recommend items that are connected by a network graph, where the edges indicate item (node) similarity. The active search reward in environmental monitoring is defined by regions, because pollution is only identified by finding an entire region with consistently large measurement outcomes. On the other hand, to efficiently search for sparse signal hotspots in a large area, aerial robots may act to query at high altitudes, taking the average value over an entire region. Finally, active search usually ignores the computational complexity of designing actions, which is infeasible in large problems. We develop methods to address these disparate issues in the new problems. In a graph environment, the exploratory queries that reveal the most information about the user models are different from those in Euclidean space. We used a new exploration criterion called Σ-optimality, which is motivated by a different objective, active surveying, yet empirically performed better due to a tendency to query cluster centers. We also showed submodularity-based guarantees that justify the greedy application of various heuristics including Σ-optimality, and we performed regret analysis for active search with results comparable to the existing literature. For active area search with region rewards, we designed an algorithm called APPS, which optimizes the one-step look-ahead expected reward for finding positive regions with high probability. APPS was initially solved by Monte Carlo estimates, but for simple objectives, e.g., finding regions with large average pollution concentrations, APPS has a closed-form solution called AAS that connects to Bayesian quadrature. For active needle search with region queries using aerial robots, we pick queries to maximize the information gain about possible signal hotspot locations. Our method, called RSI, reduces to bisection search if the measurements are noiseless and the signal hotspot is unique. Turning to noisy measurements, we showed that RSI needs a near-optimal expected number of measurements, comparable to results from compressive sensing (CS). On the other hand, CS relies on weighted averages, which are harder to realize than our use of plain averages. Finally, to address the scalability challenge, we borrow ideas from Thompson sampling, which approximates near-optimal decisions by drawing from the model uncertainty and…
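To make that last point concrete, here is a minimal sketch of the Thompson-sampling idea for scalable active search: draw one sample from the posterior over item values and query the unlabeled item the sample ranks highest. The independent Gaussian posterior is an illustrative assumption, standing in for the thesis's graph and region models.

```python
import numpy as np

def thompson_query(mu, var, labeled):
    """Pick the next query index from posterior means `mu` and variances `var`."""
    sample = np.random.normal(mu, np.sqrt(var))   # one draw from model uncertainty
    sample[list(labeled)] = -np.inf               # never re-query known labels
    return int(np.argmax(sample))                 # near-optimal decision, O(n) work
```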

▼ Modern machine learning systems pose several new statistical, scalability, privacy, and ethical challenges. With the advent of massive datasets and increasingly complex tasks, scalability has especially become a critical issue in these systems. In this thesis, we focus on fundamental challenges related to scalability, such as computational and communication efficiency, in modern machine learning applications. The underlying central message of this thesis is that classical statistical thinking leads to highly effective optimization methods for modern big data applications. The first part of the thesis investigates optimization methods for solving large-scale nonconvex Empirical Risk Minimization (ERM) problems. Such problems have surged into prominence, notably through deep learning, and have led to exciting progress. However, our understanding of optimization methods suitable for these problems is still very limited. We develop and analyze a new line of optimization methods for nonconvex ERM problems, based on the principle of variance reduction. We show that our methods exhibit fast convergence to stationary points and improve the state-of-the-art in several nonconvex ERM settings, including nonsmooth and constrained ERM. Using similar principles, we also develop novel optimization methods that provably converge to second-order stationary points. Finally, we show that the key principles behind our methods can be generalized to overcome challenges in other important problems such as Bayesian inference. The second part of the thesis studies two critical aspects of modern distributed machine learning systems — asynchronicity and communication efficiency of optimization methods. We study various asynchronous stochastic algorithms with fast convergence for convex ERM problems and show that these methods achieve near-linear speedups in sparse settings common to machine learning. Another key factor governing the overall performance of a distributed system is its communication efficiency. Traditional optimization algorithms used in machine learning are often ill-suited for distributed environments with high communication cost. To address this issue, we discuss two different paradigms to achieve communication efficiency of algorithms in distributed environments and explore new algorithms with better communication complexity.
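A minimal sketch of the variance-reduction principle behind the first part (an SVRG-style loop, for illustration only; `grad_f(w, i)` is an assumed per-example gradient oracle). The full gradient at a periodic snapshot anchors each stochastic gradient, keeping the estimator unbiased while shrinking its variance as the iterate approaches the snapshot:

```python
import numpy as np

def svrg(w, grad_f, n, lr=0.1, epochs=10):
    for _ in range(epochs):
        w_snap = w.copy()                          # snapshot point
        full_grad = np.mean([grad_f(w_snap, i) for i in range(n)], axis=0)
        for _ in range(n):                         # inner stochastic loop
            i = np.random.randint(n)
            g = grad_f(w, i) - grad_f(w_snap, i) + full_grad  # variance-reduced
            w = w - lr * g
    return w
```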

▼ As data become more pervasive and computing power increases, the opportunity for transformative use of data grows. Collecting data from individuals can be useful to the individuals (by providing them with personalized predictions) and the data collectors (by providing them with information about populations). However, collecting these data is costly: answering survey items, collecting sensed data, and computing values of interest deplete finite resources of time, battery life, money, etc. Dynamically ordering the items to be collected, based on already known information (such as previously collected items or paradata), can lower the costs of data collection by tailoring the information-acquisition process to the individual. This thesis presents a framework for an iterative dynamic item ordering process that trades off item utility against item cost at data collection time. The exact metrics for utility and cost are application-dependent, and this framework can apply to many domains. The two main scenarios we consider are (1) data collection for personalized predictions and (2) data collection in surveys. We illustrate applications of this framework to multiple problems ranging from personalized prediction to questionnaire scoring to government survey collection. We compare the data quality and acquisition costs of our method to fixed-order approaches and show that our adaptive process obtains results of similar quality at lower cost. For the personalized prediction setting, the goal of data collection is to make a prediction based on information provided by a respondent. Since it is possible to give a reasonable prediction with only a subset of items, we are not concerned with collecting all items. Instead, we want to order the items so that the user provides the information that most increases prediction quality while not being too costly to provide. One metric for quality is prediction certainty, which reflects how likely the true value is to coincide with the estimated value. Depending on whether the prediction problem is continuous or discrete, we use prediction interval width or predicted class probability to measure the certainty of a prediction. We illustrate the results of our dynamic item ordering framework on tasks of predicting energy costs, student stress levels, and device identification in photographs, and show that our adaptive process achieves error rates equivalent to a fixed-order baseline with cost savings of up to 45%. For the survey setting, the goal of data collection is often to gather information from a population, and it is desirable to have complete responses from all samples. In this case, we want to maximize survey completion (and the quality of necessary imputations), and so we focus on ordering items to engage the respondent and collect, hopefully, all the information we seek, or at least the information that most characterizes the respondent, so that imputed values will be accurate. One item utility metric for this problem is the information gained toward a “representative” set of answers from the respondent.…
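The core loop of such a framework can be sketched as greedy utility-per-cost selection (a simplification under assumed callbacks; `utility`, `cost`, `answer`, and `stop` are hypothetical application-specific hooks, and costs are assumed positive):

```python
def collect(items, utility, cost, answer, stop):
    """Dynamically order items: always ask the best utility-to-cost item next."""
    known = {}
    while not stop(known) and len(known) < len(items):
        best = max((i for i in items if i not in known),
                   key=lambda i: utility(i, known) / cost(i, known))
        known[best] = answer(best)        # collect from respondent or sensor
    return known
```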

▼ Robots have become increasingly adept at performing a wide variety of tasks in the world. However, many of these tasks can benefit tremendously from having more than a single robot simultaneously working on the problem. Multiple robots can aid in a search and rescue mission, each scouting a subsection of the entire area in order to cover it more quickly than a single robot can. Alternatively, robots with different abilities can collaborate in order to achieve goals that would individually be more difficult, if not impossible, to achieve. In these cases, multi-robot collaboration can provide benefits in terms of shortening search times, providing a larger mix of sensing, computing, and manipulation capabilities, or providing redundancy to the system for communications or mission accomplishment. One principal difficulty of multi-robot systems is how to efficiently and effectively generate plans that use each of the team members to their fullest extent, particularly with a heterogeneous mix of capabilities. Toward this goal, I have developed a series of planning algorithms that incorporate this collaboration into the planning process. Starting with systems that use collaboration in an exploration task, I show teams of homogeneous ground robots planning to efficiently explore an initially unknown space. These robots share map information and, in a centralized fashion, determine the best goal location for each, taking into account the information gained by other robots as they move. This work is followed up with a similar exploration scheme, but this time expanded to a heterogeneous air-ground robot team operating in a full 3-dimensional environment. The extra dimension adds the requirement for the robots to reason about what portions of the environment they can sense during the planning process. With an air-ground team, there are portions of the environment that can only be sensed by one of the two robots, and that information informs the algorithm during the planning process. Finally, I extend the air-ground robot team beyond merely collaboratively constructing the map to actually using the other robots to provide pose information for the sensor- and computationally-limited team members. By explicitly reasoning about when and where the robots must collaborate during the planning process, this approach can generate trajectories that are not feasible to execute if planning occurs on an individual-robot basis. An additional contribution of this thesis is the development of the State Lattice Planning with Controller-based Motion Primitives (SLC) framework. While SLC was developed to support the collaborative localization of multiple robots, it can also be used by a single robot to provide a more robust means of planning. For example, using the SLC algorithm to plan with a combination of vision-based and metric-based motion primitives allows a robot to traverse a GPS-denied region.

Butzke, J. M. (2017). Planning for a Small Team of Heterogeneous Robots: from Collaborative Exploration to Collaborative Localization. (Thesis). Carnegie Mellon University. Retrieved from http://repository.cmu.edu/dissertations/1119


▼ Data-driven approaches to modeling time series are important in a variety of applications, from market prediction in economics to the simulation of robotic systems. However, traditional supervised machine learning techniques designed for i.i.d. data often perform poorly on these sequential problems. This thesis proposes that time-series and sequential prediction, whether for forecasting, filtering, or reinforcement learning, can be effectively achieved by directly training recurrent prediction procedures rather than building generative probabilistic models. To this end, we introduce a new training algorithm for learned time-series models, Data as Demonstrator (DaD), that theoretically and empirically improves multi-step prediction performance on model classes such as recurrent neural networks, kernel regressors, and random forests. Additionally, experimental results indicate that DaD can accelerate model-based reinforcement learning. We next show that latent-state time-series models, where a sufficient state parametrization may be unknown, can be learned effectively in a supervised way using predictive representations derived from observations alone. Our approach, Predictive State Inference Machines (PSIMs), directly optimizes, through a DaD-style training procedure, the inference performance without local optima by identifying the recurrent hidden state as a predictive belief over statistics of future observations. Finally, we experimentally demonstrate that augmenting recurrent neural network architectures with Predictive-State Decoders (PSDs), derived using the same objective optimized by PSIMs, improves both the performance and convergence of recurrent networks on probabilistic filtering, imitation learning, and reinforcement learning tasks. Fundamental to our learning framework is that the prediction of observable quantities is a lingua franca for building AI systems.
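A minimal sketch of the DaD idea described above, assuming `model` exposes scikit-learn-style `fit`/`predict` for one-step prediction: the learned model is rolled forward along each training trajectory, and each predicted state is added as a new input paired with the true successor observation, teaching the model to recover from its own drift.

```python
import numpy as np

def dad_train(model, trajectories, n_rounds=5):
    # Start from ordinary one-step supervised pairs (x_t -> x_{t+1}).
    X = [t[i] for t in trajectories for i in range(len(t) - 1)]
    Y = [t[i + 1] for t in trajectories for i in range(len(t) - 1)]
    model.fit(np.array(X), np.array(Y))
    for _ in range(n_rounds):
        for t in trajectories:
            pred = t[0]
            for i in range(len(t) - 2):
                pred = model.predict(pred[None])[0]   # rollout: estimate of t[i+1]
                X.append(pred)                        # off-distribution input ...
                Y.append(t[i + 2])                    # ... paired with true successor
        model.fit(np.array(X), np.array(Y))           # aggregate and retrain
    return model
```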

▼ While much work in human-robot interaction has focused on leader-follower teamwork models, the recent advancement of robotic systems that have access to vast amounts of information suggests the need for robots that take into account the quality of human decision making and actively guide people toward better ways of doing their task. This thesis proposes an equal-partners model, where human and robot engage in a dance of inference and action, and focuses on one particular instance of this dance: the robot adapts its own actions by estimating the probability of the human adapting to the robot. We start with a bounded-memory model of human adaptation parameterized by the human adaptability, i.e., the probability of the human switching toward a strategy newly demonstrated by the robot. We then examine more subtle forms of adaptation, where the human teammate adapts to the robot without replicating the robot’s policy. We model the interaction as a repeated game, and present an optimal policy computation algorithm whose complexity is linear in the number of robot actions. Integrating these models into robot action selection allows for human-robot mutual adaptation. Human subject experiments in a variety of collaboration and shared-autonomy settings show that mutual adaptation significantly improves human-robot team performance, compared to one-way robot adaptation to the human.
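As a small illustration of the adaptability parameter, the sketch below keeps a Beta posterior over the probability that the human switches to the robot's newly demonstrated strategy, treating each observed switch as a Bernoulli draw. The Beta-Bernoulli choice is an assumption for illustration, not the thesis's bounded-memory inference:

```python
class AdaptabilityBelief:
    """Posterior over the human adaptability parameter alpha in [0, 1]."""

    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b            # Beta(a, b) prior, uniform by default

    def update(self, switched: bool):
        self.a += switched               # human adopted the demonstrated strategy
        self.b += not switched           # human kept their own strategy

    @property
    def mean(self):
        return self.a / (self.a + self.b)   # estimated P(human adapts next round)
```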

▼ Granular media continue to be among the most manipulated materials found in various industries. Particle interactions in granular flow have fundamental importance in analyzing the performance of a wide range of key engineering applications such as hoppers, tumblers, and mixers. In spite of such ubiquitous presence, our understanding of granular flow remains, to date, very limited. This restricts our ability to design efficient and optimal granular processing equipment. Additionally, existing design abilities are constrained by the number of particles to be analyzed, where a typical industrial application involves millions of particles. This motivated the current research, in which the above limitations are investigated from three different angles: experimental, theoretical, and simulation. More specifically, this work aims to study particle-wall interaction and to develop a computationally efficient cellular automata simulation framework for industrial granular applications. Toward this end, the current research is divided into two parts: (I) energy dissipation during particle-wall interaction and (II) cellular automata modeling. In Part I, detailed experiments are performed on various sphere-thin plate combinations to measure the coefficient of restitution (COR), which is a measure of energy dissipation and one of the most important input parameters in any granular simulation. The energy dissipation measure is also used to evaluate the elastic impact performance of the superelastic Nitinol 60 material. Explicit finite element simulations are performed to gain a detailed understanding of the contact process and underlying parameters such as contact forces, stress-strain fields, and energy dissipation modes. A parametric study reveals a critical value of plate thickness above which the effect of plate thickness on the energy dissipation can be eliminated in the equipment design. It is found that the existing analytical expressions have limited applicability in predicting the above experimental and numerical results. Therefore, a new theoretical model for the coefficient of restitution is proposed which combines the effects of plastic deformation and plate thickness (i.e., flexural vibrations). In Part II, in order to advance the existing granular flow modeling capabilities for industry (dry and slurry flows), a cellular automata (CA) modeling framework is developed which can supplement the physically rigorous but computationally demanding discrete element method (DEM). This includes a three-dimensional model which takes into account particle friction and spin during collision processing, providing the ability to handle flows beyond solely the kinetic regime, and a multiphase framework which combines computational fluid dynamics (CFD) with CA to model multi-million-particle-count applications such as particle-laden flows and slurry flows.
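To make the COR concrete, here is a toy simulation (illustrative parameters; the thesis's model additionally accounts for plastic deformation and plate flexural vibration) showing how a restitution coefficient below one emerges from a linear spring-dashpot particle-wall contact law of the kind granular simulations consume:

```python
def cor_spring_dashpot(m=1e-3, k=1e5, c=0.5, v_in=1.0, dt=1e-7):
    """COR of a sphere hitting a rigid wall under a linear spring-dashpot law."""
    x, v = 0.0, v_in                 # x: penetration depth, v: penetration rate
    while x >= 0.0:                  # in contact while the overlap is nonnegative
        a = (-k * x - c * v) / m     # Hookean spring plus viscous dashpot
        v += a * dt                  # semi-implicit Euler integration
        x += v * dt
    return -v / v_in                 # rebound speed over impact speed (< 1)

print(cor_spring_dashpot())          # ~0.92 here: the dashpot dissipates energy
```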

▼ Over the past several years, rapid advances in the field of integrated photonics coupled with nanofabrication capabilities have enabled studies of the interaction of light with the mechanics of a variety of physical structures. Concurrently, mechanical resonators have been extensively studied in the MEMS community due to their high quality factors, and have been implemented in a variety of RF filters and oscillators. The combination of MEMS with integrated optomechanical structures can generate a variety of novel devices that can be used for applications in RF-Photonics, timing and optical switching. While there are several demonstrations of electrostatic devices integrated with optomechanical structures, fewer examples exist in the piezoelectric domain. In particular, photonic integration in a piezoelectric material can benefit from some of the traditional strengths associated with this type of actuation, such as the ability to easily scale to higher frequencies of operation by patterning lateral features, the ability to interface with 50 Ω electronics and strong electromechanical coupling. In addition, it enables a platform to produce new architectures for photonic-based electronic frequency reference oscillators that incorporate multiple degrees of freedom. This thesis presents the development of a piezoelectrically-actuated acousto-optic modulator in the aluminum nitride (AlN) material system. The process of implementing this device is carried out in five principal stages. First, light coupling from optical fibers to the AlN thin film is demonstrated with the use of on-chip grating couplers, exhibiting a peak insertion loss of -6.6 dB and a high 1 dB bandwidth of 60 nm for operation in the telecommunications C- and L-bands. This is followed by characterization of photonic whispering gallery mode microdisk and microring resonators with optical quality factors on the order of 10⁴. Next, a robust fabrication method combining optical and electron-beam lithography is developed to produce a fully-integrated device preserving the critical features for acoustic and photonic resonators to be colocalized in the same platform. Acousto-optic modulation is demonstrated with the use of a contour mode resonator which drives displacements in the photonic resonator at 653 MHz, corresponding to the mechanical resonance of the composite structure. The modulator is then implemented in an opto-acoustic oscillator loop, for which an initial phase noise of -72 dBc/Hz at 10 kHz offset from the carrier is recorded with a large contribution from thermal noise at the photodetector. Finally, some possibilities to improve the modulator efficiency and oscillator phase noise are provided along with prospects for future work in this area.

▼ While tribology involves the study of friction, wear, and lubrication of interacting surfaces, tribosurfaces are the pair of surfaces in sliding contact with a fluid (or particulate) medium between them. The ubiquitous nature of tribology is evident from the usage of its principles in all aspects of life, from the friction-promoting behavior of shoes on slippery water-lubricated walkways and tires on roadways to the wear of fingernails during filing or engine walls during operation. These tribosurface interfaces, due to the small length scales, are difficult to model for contact mechanics, fluid mechanics, and particle dynamics, be it via theory, experiments, or computations. Also, there is no simple constitutive law for a tribosurface with a particulate medium. Thus, when trying to model such a tribosurface, there is a need to calibrate the particulate medium against one or more property-characterizing experiments. Such a calibrated medium, which is the “virtual avatar” of the real particulate medium, can then be used to provide predictions about its behavior in engineering applications. This thesis proposes and attempts to validate an approach that leverages experiments and modeling, comprising physics-based modeling and machine-learning-enabled surrogate modeling, to study particulate media in two key particle-matrix industries: metal powder-bed additive manufacturing (in Part II) and energy resource rock drilling (in Part III). The physics-based modeling framework developed in this thesis is called the Particle-Surface Tribology Analysis Code (P-STAC) and incorporates the physics of particle dynamics, fluid mechanics, and particle-fluid-structure interaction. The Computational Particle Dynamics (CPD) is solved by using the industry-standard Discrete Element Method (DEM), and the Computational Fluid Dynamics (CFD) is solved by using a finite difference discretization scheme based on Chorin's projection method and staggered grids. Particle-structure interactions are accounted for by using a state-of-the-art particle tessellated surface interaction scheme, and the fluid-structure interaction is accounted for by using the Immersed Boundary Method (IBM). Surrogate modeling is carried out using a backpropagation neural network. The tribosurface interactions encountered during the spreading step of the powder-bed additive manufacturing (AM) process, which involve a sliding spreader (rolling and sliding for a roller) and a particulate medium consisting of metal AM powder, have been studied in Part II. To understand the constitutive behavior of metal AM powders, detailed rheometry experiments have been conducted in Chapter 5. The CPD module of P-STAC is used to simulate the rheometry of an industry-grade AM powder (100-250 μm Ti-6Al-4V) to determine a calibrated virtual avatar of the real AM powder (Chapter 6). This monodispersed virtual avatar is used to perform virtual spreading on smooth and rough substrates in Chapter 7. The effect of polydispersity in DEM modeling is studied in Chapter 8. A polydispersed virtual avatar of…
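The "virtual avatar" calibration loop can be sketched as a simple parameter sweep; `simulate_rheometry` stands in for a P-STAC/DEM rheometry run and `measured` for the experimental flow curve, both assumptions for illustration:

```python
import numpy as np

def calibrate(simulate_rheometry, measured, mu_grid=np.linspace(0.1, 0.9, 9)):
    """Pick the DEM friction coefficient whose simulated flow curve fits best."""
    errors = [np.mean((simulate_rheometry(mu) - measured) ** 2) for mu in mu_grid]
    return mu_grid[int(np.argmin(errors))]
```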

▼ In recent years, soft materials have seen increased prevalence in the design of robotic systems and wearables capable of addressing the needs of individuals living with disabilities. In particular, pneumatic artificial muscles (PAMs) have readily been employed in place of electromagnetic actuators due to their ability to produce large forces and motions while remaining lightweight, compact, and flexible. Due to the inherent nonlinearity of PAMs, however, additional external or embedded sensors must be utilized in order to effectively control the overall system. In the case of external sensors, the bulkiness of the overall system is increased, which places limits on the system’s design. Meanwhile, the traditional cylindrical form factor of PAMs limits their ability to remain compact and results in overly complex fabrication processes when embedded fibers and/or sensing elements are required to provide efficient actuation and control. To overcome these limitations, this thesis proposes the design of flat pneumatic artificial muscles (FPAMs) capable of being fabricated using a simple layered manufacturing process, in which water-soluble masks are utilized to create collapsed air chambers. Furthermore, hyperelastic deformation models were developed to approximate the mechanical performance of the FPAMs and were verified through experimental characterization. The feasibility of these design techniques to meet the requirements of human-centered applications, including the suppression of hand tremors and catheter ablation procedures, was explored, and the potential for these soft actuation systems to act as solutions in other real-world applications was demonstrated. We expect the design, fabrication, and modeling techniques developed in this thesis to aid in the development of future wearable devices and motivate new methods for researchers to employ soft pneumatic systems as solutions in human-centered applications.

▼ Since the 1970s, the percentage of the US population that is overweight or obese has increased significantly, with nearly 70% of American adults now overweight or obese (National Center for Health Statistics, 2013). The American Medical Association officially recognized obesity as a disease (American Medical Association, 2013) that afflicts approximately one out of every three adults in the US (National Center for Health Statistics, 2013). While the health implications of being overweight or obese are well established, the environmental impacts have not received equal attention. In light of this inattention, this dissertation analyzes the effects of the overweight and obese population on energy use, water withdrawals, greenhouse gas (GHG) emissions, and fuel costs through the US food supply system and transportation system. The first empirical chapter investigates the impacts of current US food consumption on energy use, water withdrawals, and GHG emissions. The purpose of this analysis is twofold: first, two top-down approaches are used to establish a range of life-cycle industrial energy use, water withdrawals, and GHG emissions in the US food supply system that are attributed to the total food consumed by the US adult population. The two methods utilized are (1) economic input-output life-cycle assessment (EIO-LCA) and (2) process-based analysis. Second, the additional industrial energy use, water withdrawals, and GHG emissions required to support the extra Caloric intake of the US overweight and obese adult population are estimated. Extra Caloric intake estimates are developed using anthropometric data from the Centers for Disease Control (CDC) National Health and Nutrition Examination Survey. In 2012, 6.1-6.2 million TJ of cumulative energy use, 100-105 billion m³ of water withdrawals, and 600 million metric tons (MMT) CO2-eq were needed to provide food to the US adult population. Furthermore, extra Calories consumed through overeating by overweight and obese adults accounted for 8-10% of the total Caloric intake of US adults. Providing these additional Calories resulted in 440,000-610,000 TJ of energy use, 7-10 billion m³ of water withdrawals, and 43-59 MMT CO2-eq. The second empirical chapter uses a bottom-up approach to measure the changes in energy use, water withdrawals, and GHG emissions associated with shifting from current US food consumption patterns to three dietary scenarios, which are based, in part, on the 2010 USDA Dietary Guidelines (US Department of Agriculture and US Department of Health and Human Services, 2010). Amidst the current overweight and obesity epidemic in the US, the Dietary Guidelines provide food and beverage recommendations that are intended to help individuals achieve and maintain a healthy weight. The three dietary scenarios examined include (1) reducing Caloric intake levels to achieve “normal” weight without shifting food mix, (2) switching the current food mix to USDA-recommended food patterns without reducing Caloric intake, and (3) reducing Caloric intake levels and shifting the current food mix to USDA…
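As a rough plausibility check (mine, not a calculation from the dissertation), applying the 8-10% extra-Calorie share directly to the reported population totals lands near the published ranges; the remaining gap reflects that impacts are not uniform per Calorie across foods:

```python
for share in (0.08, 0.10):
    print(f"share={share:.0%}: "
          f"energy ~{share * 6.15e6:,.0f} TJ, "        # total: 6.1-6.2 million TJ
          f"water ~{share * 102.5:.1f} billion m^3, "  # total: 100-105 billion m^3
          f"GHG ~{share * 600:.0f} MMT CO2-eq")        # total: ~600 MMT CO2-eq
```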

Tom, M. S. (2015). Impacts of the Overweight and Obese on the US Food Supply and Transportation Systems. (Thesis). Carnegie Mellon University. Retrieved from http://repository.cmu.edu/dissertations/603

▼ Nondiffusive thermal transport occurs when length or time scales of a system are on the order of the mean free paths (MFPs) or lifetimes of the energy carriers. As a result, a local equilibrium temperature cannot be defined, and the thermal transport properties of the system can no longer be taken as the bulk values. When system boundaries are decreased below energy carrier MFPs, nondiffusive transport can be described with a reduced, effective thermal conductivity. Heat dissipation in light emitting diodes and transistors is adversely impacted by reductions in thermal conductivity, while thermoelectric energy conversion devices benefit. In my PhD, I studied the physics governing nondiffusive thermal transport. In this dissertation I describe my contributions in nondiffusive thermal transport to the scientific community. First, I describe the development of broadband frequency domain thermoreflectance (BB-FDTR), an experimental technique used to observe nondiffusive thermal transport in materials by creating length scales comparable to energy carrier MFPs. I use BB-FDTR to induce nondiffusive thermal transport in Si-based materials at device operating temperatures. I relate my measurements to the thermal conductivity accumulation function, a fundamental physical quantity that describes cumulative contributions to thermal conductivity from energy carriers with different MFPs. Using a first-order interpretation of my data, I show that 40±5% of the thermal conductivity of crystalline silicon at a temperature of 311 K comes from phonons with MFP > 1 μm. Additional BB-FDTR measurements on a 500 nm thick amorphous silicon film indicate propagating phonon-like modes that contribute more than 35±7% to thermal conductivity at a temperature of 306 K, despite atomic disorder. I also describe the development of multiple models that are used to refine the interpretation of BB-FDTR measurements and better understand nondiffusive thermal transport measurements. First, I solve the Boltzmann transport equation (BTE) analytically to explain how and why measurements of thermal conductivity change as a function of experimental length scales in BB-FDTR. My solution incorporates two experimentally defined length scales: thermal penetration depth and heating laser spot radius. Comparison of the BTE result with that from conventional heat diffusion theory enables a mapping of MFP-specific contributions to the measured thermal conductivity based on the experimental length scales. The result is used to re-interpret nondiffusive thermal conductivity measurements of silicon with first-principles-based calculations of its thermal conductivity accumulation function. Next, I develop a solution to the two-temperature diffusion equation in axisymmetric cylindrical coordinates to model heat transport in thermoreflectance experiments. The solution builds upon prior solutions that account for two-channel diffusion in each layer of an N-layered geometry, but adds the ability to deposit heat at any location within each layer. I use this solution to account for…
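The accumulation function itself is simple to compute from per-mode data: sort (MFP, contribution) pairs and accumulate. The spectrum below is a toy assumption, not the silicon calculations from the dissertation:

```python
import numpy as np

def accumulation(mfps, contribs):
    """Fraction of thermal conductivity carried by phonons with MFP <= L."""
    order = np.argsort(mfps)
    k_cum = np.cumsum(contribs[order])
    return mfps[order], k_cum / k_cum[-1]

mfps = np.logspace(-8, -4, 200)          # 10 nm .. 100 um, toy phonon spectrum
contribs = np.ones_like(mfps)            # equal per-mode contributions (toy)
L, frac = accumulation(mfps, contribs)
print("fraction of k from MFPs > 1 um:", 1 - frac[np.searchsorted(L, 1e-6)])
```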

▼ Personal data is everywhere. The widespread adoption of the Internet, fueled by the proliferation of smartphones and data plans, has resulted in an amazing amount of digital information about each individual. Social interactions (e.g. email, SMS, phone, Skype, Facebook), planning and coordination (e.g. calendars, TripIt, Basecamp, online to do lists), entertainment (e.g. YouTube, iTunes, Netflix, Spotify), and commerce (e.g. online banking, credit card purchases, Amazon, Zappos, eBay) all generate personal data. This data holds promise for a breadth of new service opportunities to improve people’s lives through deep personalization, through tools to manage aspects of their personal wellbeing, and through services that support identity construction. However, there is a broad gap between this vision of leveraging personal data to benefit individuals and the state of personal data today. This thesis proposes unified personal data as a new framing for engaging with personal data. Through this framing, it synthesizes previous research on personal data and describes a generalized framework for developing applications that depend on personal data, exposing current challenges and issues. Next, it defines a set of design goals to improve the state of personal data systems today. Finally, it contributes Phenom, a software service designed to address the challenges of developing applications that rely on personal data.

▼ Constructing spline models for isogeometric analysis is important in integrating design and analysis. Converting designed CAD (Computer Aided Design) models with B-reps to analysis-suitable volumetric T-splines is fundamental for the integration. In this thesis, we work in two directions to achieve this: (a) using Boolean operations and skeletons to build polycubes for feature-preserving high-genus volumetric T-spline construction; and (b) developing weighted T-splines with arbitrary degree for T-spline surface and volume modeling which can be used for analysis. In this thesis, we first develop novel algorithms to build feature-preserving polycubes for volumetric T-spline construction. Then a new type of T-spline, named the weighted T-spline with arbitrary degree, is defined. It is further used in converting CAD models to analysis-suitable volumetric T-splines. An algorithm is first developed to use Boolean operations in CSG (Constructive Solid Geometry) to generate polycubes robustly; then the polycubes are used to generate volumetric rational solid T-splines. By solving a harmonic field with proper boundary conditions, the input surface is automatically decomposed into regions that are classified topologically as either a cube or a torus. Two Boolean operations, union and difference, are performed with the primitives, and polycubes are generated by parametric mapping. With polycubes, octree subdivision is carried out to obtain a volumetric T-mesh. The obtained T-spline surface is C²-continuous everywhere except the local region surrounding irregular nodes, where the surface continuity is elevated from C⁰ to G¹. Bézier elements are extracted from the constructed solid T-spline models, which are further used in isogeometric analysis. The Boolean operations preserve the topology of the models inherited from design and can generate volumetric T-spline models with better quality. Furthermore, another algorithm is developed which uses the skeleton as a guide for polycube construction. From the skeleton of the input model, initial cubes in the interior are first constructed. By projecting corners of interior cubes onto the surface and generating a new layer of boundary cubes, the entire interior domain is split into different cubic regions. With the splitting result, octree subdivision is performed to obtain a T-spline control mesh or T-mesh. Surface features are classified into three groups: open curves, closed curves, and singularity features. For features that do not introduce new singularities, like open or closed curves, we preserve them by aligning to the parametric lines during subdivision, performing volumetric parameterization from a frame field, or modifying the skeleton. For features introducing new singularities, we design templates to handle them. With a valid T-mesh, we calculate rational trivariate T-splines and extract Bézier elements for isogeometric analysis. Weighted T-spline basis functions are designed to satisfy partition of unity and linear independence. The weighted T-spline is proved to be…

▼ In this thesis, several blood-related problems are studied: 1. the removal of malaria-infected, parasitized red blood cells (pRBCs) using a magnetic force; 2. a new mathematical model for thrombus growth, which incorporates thrombus-blood interaction, shear-induced platelet activation, shear-induced platelet embolization, and deposited-platelet stabilization, is developed, and a successful direct numerical prediction of thrombus formation in an axial blood pump is obtained; to our knowledge, this is the first time such a study has been performed; 3. based on the application of Mixture Theory (or the Theory of Interacting Continua), a multiphase model for blood flow is derived, and a new viscosity term, which considers the effect of shear stress and the volume fraction of RBCs, is introduced. First, a blood filter system, the mPharesis™ system, that will allow the removal of toxic malaria-infected, parasitized RBCs (pRBCs or i-RBCs) from circulation using magnetic force is studied. The problem is modeled as a multi-component flow system using the CFD-DEM method, where plasma is treated as a Newtonian fluid, and the RBCs and pRBCs are modeled as soft-sphere solid particles which move under the influence of the plasma, other RBCs, and the magnetic field. The simulation results show that for a channel with a nominal height of 100 microns, the addition of an upstream constriction of 80% improved the stratification by 111% (from 28% to 139%), and a downstream diffuser reduced remixing, hence improving the efficiency of stratification to 260%. Second, based on Sorenson's model of thrombus formation [1, 2], a new mathematical model describing the process of thrombus growth is developed. In this model the blood is treated as a Newtonian fluid, and the transport and reactions of the chemical and biological species are modeled using CRD (convection-reaction-diffusion) equations. A computational fluid dynamics (CFD) solver for the mathematical model is developed using the libraries of OpenFOAM®. Applying the CFD solver, several representative benchmark problems are studied: rapid thrombus growth in vivo by injecting adenosine diphosphate (ADP) using an iontophoretic method, and thrombus growth in a rectangular microchannel with crevices. Very good agreement between the numerical and the experimental results validates the model and indicates its potential to study a host of complex and practical problems in the future. Then, applying the model, thrombus growth in an axial blood pump is studied. First, the flow field in the blood pump is analyzed using visualization and numerical simulations. Then, applying the thrombus model, a direct prediction of the thrombus growth is performed. The simulation shows very good agreement with clinical observations. For reducing the computational cost, a dimensionally-reduced model is also developed, based on the complete thrombus model. The dimensionally-reduced model also shows good capability to predict thrombus deposition in the blood pump. And finally, for describing the multiphase characteristics of…
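For reference, here is one explicit finite-difference step of a 1D convection-reaction-diffusion equation, c_t + u c_x = D c_xx + R(c), the type of species-transport equation the thrombus model solves (the thesis uses OpenFOAM; this periodic toy stencil is only illustrative):

```python
import numpy as np

def crd_step(c, u, D, R, dx, dt):
    adv = -u * (c - np.roll(c, 1)) / dx                       # upwind, u > 0
    diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + diff + R(c))                       # periodic domain

c = np.exp(-np.linspace(-5, 5, 200) ** 2)                     # initial species pulse
for _ in range(100):                                          # CFL-stable steps
    c = crd_step(c, u=1.0, D=0.01, R=lambda s: 0.1 * s * (1 - s), dx=0.05, dt=0.01)
```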

Recent simulations have indicated that the traditional Helfrich-Canham model for topographical fluctuations in fluid phase biomembranes should be enriched to include molecular tilt. Experimental evidence supporting the aforementioned enrichment is reported.

▼ The development of active and inexpensive catalysts is vital for progress in technologies related to efficient energy generation, storage, and utilization. Transition metal oxides (TMOs) make up a significant fraction of current state-of-the-art catalysts for these technologies. Density functional theory (DFT), the workhorse of computational chemistry and catalysis, can calculate the activity of catalysts, provide synthesis targets, and accelerate the discovery of active and cheap TMO catalysts. This dissertation develops DFT methods for accurately calculating and understanding the catalytic activity of TMOs. Known electron self-interaction errors in TMO bulk oxidation energies imply that reaction energies on TMO surfaces should contain similar errors. The linear response U, proposed to correct self-interaction error, was evaluated as a method for obtaining more accurate TMO reaction energies. Application of the linear response U gave unprecedented improvement in TMO oxidation energies, mixed improvement in TMO formation energies, and improved trends in TMO surface reactivity. These results motivate the continued development of the linear response U for bulk and surface calculations. The calculated electronic structure of a catalyst can be used to relate its structure and composition to its activity. The physical and chemical complexities of TMOs hinder the development of useful and elucidative electronic structure models. Using the understanding of adsorption on metals as a foundation, a number of correlations between the calculated electronic structure and adsorption energy were found on TMO surfaces. These correlations led to structure-function relationships of binary, ternary, and polymorph TMOs. The methods and results provide research directions for the continued search for new transition metal compound catalysts.

▼ The operation of our society depends heavily on infrastructure systems. To prevent failures and to reduce maintenance costs, structural health monitoring (SHM) systems have been implemented on an increasing number of infrastructure systems. SHM systems have the potential to give reliable predictions of structural deterioration with less human safety risk and labor cost, and without interruption of normal operations. In the field of SHM, many techniques have been proposed in recent decades. Among these techniques, ultrasonic testing has been widely used for damage characterization in structures and materials. However, there remain many challenges in real-world SHM applications. For example, temperature variations can cause a significant decrease in the performance of ultrasonic testing. Although there exist some temperature compensation techniques to improve the performance of ultrasonic testing under temperature variations, these techniques have their own limitations. This dissertation will focus on novel ultrasonic signal processing techniques for damage detection, quantification, and temperature compensation. In Chapter 2, I will propose a modified optimal signal stretching (OSS) method and a singular value decomposition (SVD) method to solve the temperature compensation problem, where the OSS method (in its original form) failed to perform well for damage detection. In Chapter 3, I will study the statistical orthogonal relationship between temperature-induced and damage-induced ultrasonic change signals. The orthogonal relationship can be used to explain why SVD performs well under varying temperature conditions and why it also has the potential (under some conditions) to be directly used for damage detection and quantification. In Chapter 4, I will study the ultrasonic time-of-flight diffraction technique, which is used to quantify wall thickness loss of thick-walled aluminum tubes, because the conventional pulse-echo method did not perform well in my target application. In Chapter 5, I will propose a novel ultrasonic passband technique to quantify the cracking damage caused by alkali-silica reaction (ASR) in concrete structures. This technique is based on the ultrasonic wave filtering effects of cracks in concrete. With the progress of ASR-caused cracking damage in concrete, more high-frequency components of ultrasonic waves are filtered out than low-frequency components. The research work in this dissertation has the potential to help advance ultrasonic SHM techniques, to improve the real-world performance of ultrasonic SHM, to prevent failures of infrastructure systems, and to reduce maintenance costs, if the proposed ultrasonic techniques can be implemented in real infrastructure systems in the future. However, some future work still needs to be done in order to implement the techniques studied in this dissertation in real-world applications.
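A minimal sketch of the SVD idea from Chapters 2-3: stack change signals recorded under temperature variation only, take their leading singular vectors as the temperature-induced subspace, and score a new signal by the residual left after projecting that subspace out. The rank and data layout are illustrative assumptions:

```python
import numpy as np

def damage_indicator(baselines, signal, rank=2):
    """Rows of `baselines`: change signals driven by temperature alone."""
    _, _, Vt = np.linalg.svd(baselines, full_matrices=False)
    V = Vt[:rank].T                          # temperature-induced subspace
    residual = signal - V @ (V.T @ signal)   # remove temperature component
    return np.linalg.norm(residual)          # large value suggests damage
```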

▼ In this thesis, we present three image processing tools inspired by and designed for histology image analysis. Histology, which is the examination of biological tissue under a microscope, is a critical technique in biomedical research and clinical practice. While slide preparation and imaging are increasingly becoming automated, the analysis of the resulting histology images is not: even routine analyses still require the trained eyes of a pathologist. In our work, we aim to understand histology images as a class of signals and develop tools to aid in the automated analysis of these signals. Our first contribution is in the area of histology image normalization, where the goal is to digitally remove the variation in staining between histology images, an important preprocessing step in many histology image analysis systems. To this end, we created a new benchmark dataset with which to compare normalization methods and proposed our own. Our second contribution is a tissue segmentation method, which delineates single-tissue regions in histology images. Along with this method, we propose a new mathematical model for histology images. Our final contribution is a method for describing distributions of angles, which is useful for segmentation as well as a variety of other image processing tasks.
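A common baseline for the stain normalization task (not necessarily the method proposed in the thesis) is Reinhard-style statistics matching: shift each channel of a source image toward the target image's mean and standard deviation. Working directly in RGB here is a simplifying assumption; LAB space is typical in practice.

```python
import numpy as np

def match_stats(source, target):
    """Match per-channel mean/std of `source` to `target` (uint8 RGB arrays)."""
    src, tgt = source.astype(float), target.astype(float)
    out = np.empty_like(src)
    for ch in range(src.shape[-1]):
        s, t = src[..., ch], tgt[..., ch]
        out[..., ch] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```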

▼ Morphogenesis, the process by which the tissues and organs of the embryo are properly shaped, is a fundamental feature of development. In the sea urchin, the formation of the calcified endoskeleton is a major morphogenetic event. Differentiation of the skeletogenic primary mesenchyme cells (PMCs) has been considered to occur in two phases: the autonomous specification of PMCs followed by signal-dependent patterning of PMCs and the embryonic skeleton they produce. Autonomous specification creates a homogeneous population of PMCs, but the later differentiation of these cells is influenced by extrinsic signals that provide essential positional information. Recent studies showed that ectodermal growth factors are critically involved in the guidance of PMC migration and skeletal differentiation. However, a better understanding of the various signaling pathways that regulate skeletogenesis and their role in PMC gene expression remains to be established. This study examines the regulation of morphogenesis by signaling pathways, using skeletogenesis in the sea urchin embryo as a model. The aim of this study was to identify and study the roles of extrinsic signals in regulating PMC gene expression, focusing on the later, signal-dependent phase of PMC differentiation. By analyzing and classifying the spatial expression patterns of 39 genes preferentially expressed in PMCs, I find that: 1) these genes are expressed non-uniformly within the PMC syncytium, reflecting a widespread influence of locally activated signals; 2) regions with elevated gene expression correlate with sites of rapid biomineral deposition at each stage; and 3) non-uniform expression of genes within the PMC syncytium is controlled by multiple signals in a precise temporal sequence. I also provide evidence that ectoderm-derived VEGF signaling regulates gene expression in PMCs via the MAPK pathway on the ventral side of the embryo. Additionally, my work has identified an essential role for TGF-β signaling in skeletogenesis. Previous studies indicate that a complete repertoire of TGF-β signaling components is present in the sea urchin genome and that TgfbrII mRNA is preferentially expressed in PMCs at the early gastrula stage. In this study, I show that TgfbrII mRNA is specifically expressed in the PMC lineage from the hatched blastula to the mid-gastrula stage. Perturbation experiments indicate that TgfbrII is activated by the single, sensu stricto TGF-β ligand in sea urchins and is required for skeletogenesis in the sea urchin embryo. I also show that the late activity of Alk4/5/7, the putative Type I receptor, regulates skeletogenesis in a dose-dependent manner. Isolation and in vitro culture of PMCs demonstrate that both Alk4/5/7 and TgfbrII function cell-autonomously in these cells. I provide evidence that TGF-β-TgfbrII signaling is not involved in dorsal-ventral axis patterning or PMC specification; instead, this pathway plays a selective role in later skeletal patterning. Taken as a whole, my research demonstrates that skeletogenesis is regulated by a much more…

▼ Proteins and protein-based materials are used for a wide range of therapeutic, diagnostic, and biotechnological applications. Still, the inherent instability of proteins in non-native environments greatly limits the applications in which they are effective. In order to increase their utility, proteins are often modified, either biologically or chemically, to manipulate their bioactivity and stability profiles. In this work, covalent attachment of polymers to the enzyme chymotrypsin was used to predictably tailor protein bioactivity and stability. Specifically, atom transfer radical polymerization (ATRP) based polymer-based protein engineering (PBPE) was used to grow polymers directly from the surface of chymotrypsin. First, the temperature-responsive polymers poly(N-isopropyl acrylamide) (pNIPAM), which has a lower critical solution temperature (LCST), and poly(dimethylamino propane sulfonate) (pDMAPS), which has an upper critical solution temperature (UCST), were separately grown from chymotrypsin. The temperature-responsive properties of the polymers were conserved in the protein-polymer conjugates, and chymotrypsin bioactivity, productivity, and substrate specificity were predictably tailored at different temperatures depending on the structural organization of the polymers. Next, a dual-block polymer-chymotrypsin conjugate was synthesized by growing poly(sulfobetaine methacrylamide) (pSBAm)-block-pNIPAM from the surface of chymotrypsin. The CT-pSBAm-b-pNIPAM conjugates showed temperature-dependent kinetics, due to UCST- or LCST-driven polymer collapse at high and low temperature. Most interestingly, the dual-block conjugates were dramatically more stable than native chymotrypsin at low pH. In order to further investigate the effect of polymer conjugation on chymotrypsin stability at low pH, four distinct and uniquely charged polymers were grown from the surface of chymotrypsin. With these new conjugates, we confirmed that chymotrypsin low-pH stability was dependent on the chemical structure of the polymers covalently attached to chymotrypsin. Indeed, positively charged polymers stabilized chymotrypsin to low pH, but negatively charged and amphiphilic polymers destabilized the enzyme. Lastly, after developing strategies for low-pH stabilization, new protein-polymer conjugates with the chemical permeation enhancer 1-phenylpiperazine were designed to enable protein transport across the intestinal epithelium. Bovine serum albumin-poly(oligoethylene methacrylate)-block-poly(phenylpiperazine acrylamide) (BSA-pOEGMA-b-pPPZ) conjugates induced dose-dependent increases in Caco-2 monolayer permeability and were transported across an in vitro intestinal monolayer model with low cell toxicity.

▼ Natural gas is a growing energy source in the US for various end-uses, and its potential future as a transportation fuel has been the focus of recent policy discussions. Nationally, ethanol is blended with gasoline at up to 10% for conventional vehicles, and up to 85% (E85) for use in Flexible Fuel Vehicles (FFVs). Federal mandates require increasing ethanol use in the transportation sector. Meeting the mandates could mean increasing the blend in conventional gasoline, or increasing the use of E85 in FFVs. This dissertation explores the economic, environmental, and policy effects of producing ethanol from natural gas, and of expanding access to ethanol as a transportation fuel generally (feedstock agnostic). Three processes are considered for producing ethanol from natural gas: (1) autothermal reforming (ATR) with catalytic conversion, (2) TCX, a process developed by Celanese Corp. that produces the intermediate products methanol and acetic acid, and (3) a fermentation process developed by Coskata Inc. I first estimate the cost of producing ethanol from natural gas to power light-duty FFVs in Pennsylvania (PA). Relying on production cost estimates provided by developers and assuming recent natural gas and gasoline prices are good proxies for future prices, I conclude that ethanol produced with either the Coskata or ATR process would more likely than not be cheaper than gasoline and corn-based ethanol. However, capital costs for these emerging processes and future natural gas and gasoline prices are highly uncertain. The NGLF ethanol must also have acceptable greenhouse gas (GHG) emissions, for which an estimate is not currently available in the literature. I find that the average life cycle GHG emissions for a 100-yr global warming potential (GWP) are 137 g CO2-equiv/MJ (ATR catalytic), 119 g CO2-equiv/MJ (Celanese TCX), and 156 g CO2-equiv/MJ (Coskata fermentation); given the uncertainty in some parameters, these estimates could be somewhat higher or lower. All processes have life cycle emissions well above those of gasoline, and thus well above the 20% reduction from gasoline required by the Renewable Fuel Standard (RFS2). Even in the unlikely scenario of zero emissions from the upstream processes, NGLF ethanol process and combustion emissions are still larger than those of gasoline, although with more overlap in the error bars. More detailed life cycle assessments with process modeling could refine the emissions estimates. Existing policies incentivize ethanol produced from renewable sources, but no current policy provisions specifically incentivize the production or use of ethanol from natural gas. I conclude the dissertation with estimates of additional refueling costs for an FFV driver and infrastructure costs for expanding E85 access in Pennsylvania. The state recently received government grants for biofuels infrastructure. I find that even with a subsidy to cover average infrastructure costs of $0.03 to $1.48 per gasoline gallon equivalent (gge) for the retailer, the consumer would still incur additional costs for refueling more…
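As a quick plausibility check on the RFS2 comparison above, the following minimal Python sketch compares the reported life cycle GHG intensities against a 20%-below-gasoline threshold. The gasoline baseline of roughly 94 g CO2-equiv/MJ is an assumed illustrative value, not a number taken from this dissertation.

# Plausibility check: compare the reported life cycle GHG intensities of
# natural-gas-based ethanol with the RFS2 requirement of a 20% reduction
# relative to gasoline.
GASOLINE_BASELINE = 94.0  # g CO2-equiv/MJ (assumed for illustration)
RFS2_THRESHOLD = 0.80 * GASOLINE_BASELINE  # = 75.2 g CO2-equiv/MJ

pathways = {
    "ATR catalytic": 137.0,        # g CO2-equiv/MJ, values from the abstract above
    "Celanese TCX": 119.0,
    "Coskata fermentation": 156.0,
}

for name, ghg in pathways.items():
    change_vs_gasoline = ghg / GASOLINE_BASELINE - 1.0
    compliant = ghg <= RFS2_THRESHOLD
    print(f"{name}: {ghg:.0f} g/MJ "
          f"({change_vs_gasoline:+.0%} vs gasoline; RFS2 compliant: {compliant})")

Under this assumed baseline, all three pathways come out 25% to 65% above gasoline, consistent with the abstract's conclusion that none would meet the RFS2 reduction requirement.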

▼ Sigma 3 grain boundaries play a large role in the microstructure of fcc materials in general, and particularly so in grain-boundary-engineered materials. A recent survey of grain boundary properties revealed that many of these boundaries possess very large mobilities, and that these mobilities increase at lower temperature, contrary to typical models of thermally-activated grain boundary motion. Such boundaries would have a tremendous mobility advantage over other boundaries at low temperature, which may explain some observed instances of abnormal grain growth at low temperature. This work explains the boundary structure and motion mechanism that allow for such mobilities, and explores several of the unique factors that must be considered when simulating the motion of these boundaries. The mobilities of a number of boundaries, both thermally-activated and antithermal, were then calculated over a wide temperature range, and several trends were identified that relate boundary crystallography to thermal behavior and mobility. An explanation of the difference in thermal behavior observed in sigma 3 boundaries is proposed based on differences in their dislocation structure.
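To make the contrast concrete, the short sketch below compares a conventional thermally-activated (Arrhenius) mobility, M(T) = M0 * exp(-Q / (kB * T)), with a toy antithermal trend in which mobility falls as temperature rises. The prefactor, activation energy, and the antithermal functional form are illustrative assumptions, not parameters from this work.

import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def activated_mobility(T, M0=1.0, Q=0.5):
    """Thermally-activated (Arrhenius) mobility: increases with temperature."""
    return M0 * math.exp(-Q / (KB * T))

def antithermal_mobility(T, M_ref=1.0, T_ref=300.0):
    """Toy antithermal trend: mobility decreases as temperature increases."""
    return M_ref * T_ref / T

for T in (300, 600, 900, 1200):  # temperatures in kelvin
    print(f"T = {T:4d} K   activated: {activated_mobility(T):.3e}   "
          f"antithermal: {antithermal_mobility(T):.3e}")

At low temperature the antithermal boundary dominates by many orders of magnitude, which is the mobility advantage invoked above to explain abnormal grain growth.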

▼ With the emergence of multi-standard and cognitive radios, the need for reconfigurable RF circuits has increased. Such circuits require wide-band quadrature voltage-controlled oscillators (QVCOs) to provide the local oscillator (LO) signal for up- and down-conversion. The performance of wide-band QVCOs has lagged behind that of their narrowband VCO counterparts, and numerous circuit techniques have been introduced to bridge the gap. This dissertation presents techniques that have been used to implement wide-band reconfigurable QVCOs, with a focus on dual-resonance-based circuits. System- and circuit-level analyses are performed to understand the tuning-range, phase-noise, and power tradeoffs and to account for quadrature phase errors. An 8.8-15.0 GHz actively coupled QVCO and a 13.8-20 GHz passively coupled QVCO are presented. Both oscillators employ dual resonance to achieve extended tuning ranges. Impulse sensitivity functions were used to study the impact of different passive and active device noises on the overall phase noise performance of the dual-resonance oscillator and of the actively and passively coupled quadrature oscillators. The quadrature phase errors due to different architecture parameters were investigated for the actively and passively coupled quadrature oscillators. The advantages of using switched-capacitor tuning as a major part of passive tuning are identified, and the advantage of employing switches with large bandwidths, such as those associated with phase-change materials, is mathematically quantified. Furthermore, a novel method for accurate off-chip phase error measurement using discrete components and phase shifters, which does not require calibration, is introduced.
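For context, impulse sensitivity function (ISF) analysis of the kind mentioned above typically follows the standard Hajimiri-Lee formulation, in which the 1/f^2-region phase noise contributed by a white noise source is approximately (this is the textbook form, not a result specific to this dissertation):

\mathcal{L}(\Delta\omega) \;\approx\; 10\log_{10}\!\left( \frac{\Gamma_{\mathrm{rms}}^{2}}{q_{\mathrm{max}}^{2}} \cdot \frac{\overline{i_{n}^{2}}/\Delta f}{2\,\Delta\omega^{2}} \right)

where \Gamma_{\mathrm{rms}} is the RMS value of the noise source's ISF over one oscillation period, q_{\mathrm{max}} is the maximum charge swing on the resonator node, \overline{i_{n}^{2}}/\Delta f is the source's noise current density, and \Delta\omega is the offset from the carrier. Evaluating this source by source is what allows the passive and active device noise contributions of the dual-resonance and quadrature-coupled oscillators to be compared.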

▼ Increasing the percentage of wind power in the United States electricity generation mix would facilitate the transition towards a more sustainable, low-pollution, and environmentally conscious electricity grid. However, this effort is not without cost. Wind power generation is time-variable and typically not synchronized with electricity demand (i.e., load). In addition, the highest-output wind resources are often located in remote locations, necessitating transmission investment between generation sites and load. Furthermore, negative public perceptions of wind projects could prevent widespread wind development, especially for projects close to densely populated communities. The work presented in my dissertation seeks to understand where it is best to locate wind energy projects in light of these various factors. First, in Chapter 2, I examine whether energy storage technologies, such as grid-scale batteries, could help reduce the transmission upgrade costs incurred when siting wind projects in distant locations. For a case study of a hypothetical 200 MW wind project in North Dakota that delivers power to Illinois, I present an optimization model that estimates the transmission and energy storage capacities that yield the lowest average cost of generation and transmission ($/MWh). I find that for this application of storage to be economical, energy storage costs would have to be $100/kWh or lower, which is well below current costs for available technologies. I conclude that there are likely better uses for energy storage than accessing distant wind projects. Building on this work, in Chapter 3, I present an optimization model to estimate the economics of accessing high-quality wind resources in remote areas to comply with renewable energy policy targets. I include temporal aspects of wind power (variability costs and correlation with market prices) as well as the total wind power produced by different farms. I assess the goal of providing 40 TWh of new wind generation in the Midwestern transmission system (MISO) while minimizing system costs. Results show that building wind farms in North/South Dakota (the windiest states) rather than in Illinois (less windy, but close to population centers) would only be economical if the incremental transmission costs to access them were below $360/kW of wind capacity (the break-even value). Historically, the incremental transmission costs for wind development in North/South Dakota compared to Illinois have been about twice this value. However, the break-even incremental transmission cost for wind farms in Minnesota/Iowa (also windy states) is $250/kW, which is consistent with historical costs. I conclude that for the case in MISO, building wind projects in more distant locations (i.e., Minnesota/Iowa) is most economical. My two final chapters use semi-structured interviews (Chapter 4) and conjoint-based surveys (Chapter 5) to understand public perceptions and preferences for different wind project siting characteristics, such as the distance between the project and a…
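As an illustration of the Chapter 2 sizing problem, the minimal sketch below jointly sizes transmission and storage for a toy 200 MW project by grid search, minimizing the average cost of delivered energy in $/MWh. The random wind profile, cost figures, round-trip efficiency, and greedy dispatch rule are all illustrative placeholders, not the dissertation's data or optimization model.

import random

random.seed(0)
HOURS = 8760
WIND_MW = 200.0
wind = [WIND_MW * random.random() for _ in range(HOURS)]  # toy hourly output, MW

WIND_COST = 30_000_000.0          # assumed annualized project cost, $/yr
TX_COST_PER_MW = 120_000.0        # assumed annualized transmission cost, $/MW-yr
STORAGE_COST_PER_MWH = 20_000.0   # assumed annualized storage cost, $/MWh-yr

def avg_cost(tx_mw, store_mwh, eff=0.85):
    """Greedy dispatch: ship up to the line limit, store surplus, spill the rest.
    Returns average cost of delivered energy in $/MWh."""
    soc, delivered = 0.0, 0.0
    for w in wind:
        sent = min(w, tx_mw)              # direct delivery, capped by the line
        surplus = w - sent                # wind that cannot be shipped now
        charge = min(surplus, store_mwh - soc)
        soc += charge * eff               # charge losses applied up front
        discharge = min(soc, tx_mw - sent)  # fill spare line capacity from storage
        soc -= discharge
        delivered += sent + discharge
    total = WIND_COST + tx_mw * TX_COST_PER_MW + store_mwh * STORAGE_COST_PER_MWH
    return total / delivered if delivered else float("inf")

candidates = [(tx, s) for tx in (50.0, 100.0, 150.0, 200.0)
              for s in (0.0, 100.0, 400.0, 800.0)]
best = min(candidates, key=lambda p: avg_cost(*p))
print(f"best (tx MW, storage MWh): {best} -> {avg_cost(*best):.2f} $/MWh")

Rerunning such a search while varying STORAGE_COST_PER_MWH shows how a break-even storage cost (the $100/kWh figure reported above) can be extracted: it is the cost below which the optimizer first prefers a nonzero storage size over a larger line.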