In this work we discuss an approach for uncertainty propagation
through computationally expensive physics simulation
codes. Our approach incorporates gradient information
to provide a higher-quality surrogate with fewer simulation
results than derivative-free approaches.

We use this information in two ways: fitting a polynomial model of the system response, and fitting a Gaussian process model ("surrogate"). In a third, hybrid approach, we fit a Gaussian process with a polynomial mean, improving on both techniques. The surrogate, coupled with input uncertainty information, provides a complete uncertainty approach when
the physics simulation code can be run only a small number
of times. We discuss various algorithmic choices, such as the polynomial basis and covariance kernel. We demonstrate our findings on synthetic
functions as well as nuclear reactor models.
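The value of gradient information can be sketched in one dimension. The snippet below is a minimal illustration, not the authors' implementation: it fits a cubic polynomial surrogate by least squares using both values and derivatives of f(x) = sin(x) at only three sample points.

```python
import numpy as np

def fit_gradient_enhanced_poly(x, y, dy, degree=3):
    """Least-squares polynomial fit using function values AND gradients."""
    # Rows for function values: [1, x, x^2, x^3]
    V = np.vander(x, degree + 1, increasing=True)
    # Rows for derivatives: [0, 1, 2x, 3x^2]
    D = np.zeros_like(V)
    for k in range(1, degree + 1):
        D[:, k] = k * x ** (k - 1)
    A = np.vstack([V, D])            # stacked value + gradient conditions
    b = np.concatenate([y, dy])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

x = np.array([0.0, 1.0, 2.0])        # three "simulation runs"
coef = fit_gradient_enhanced_poly(x, np.sin(x), np.cos(x))
# Evaluate the surrogate at an unsampled point
x_new = 1.5
pred = sum(c * x_new ** k for k, c in enumerate(coef))
```

Each sample contributes two rows (a value and a derivative condition), so three runs yield six conditions for the four coefficients; a derivative-free fit would need twice as many runs for the same information.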

Ordinary differential equations with uncertain parameters are a vast field of research.
Monte Carlo simulation techniques are widely used to approximate quantities
of interest of the solution of random ordinary differential equations. Nevertheless,
over the last decades, methods based on spectral expansions of the solution
process have drawn great interest. They are promising methods for efficiently
approximating the solution of random ordinary differential equations. Although global
approaches on the parameter domain prove to be very inaccurate in many
cases, an element-wise approach can be proven to converge. This poster presents
an algorithm based on the stochastic Galerkin Runge-Kutta method.
It incorporates adaptive stepsize control in time and adaptive partitioning of
the parameter domain.
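For contrast with the spectral methods the poster advocates, the baseline Monte Carlo approach it mentions can be sketched as follows (a toy example, not the stochastic Galerkin Runge-Kutta method itself): an uncertain decay rate lam ~ U(0.5, 1.5) is propagated through y' = -lam*y, y(0) = 1, using classical RK4 in time.

```python
import numpy as np

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta with n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

rng = np.random.default_rng(0)
lams = rng.uniform(0.5, 1.5, size=2000)          # uncertain parameter samples
samples = [rk4(lambda t, y: -lam * y, 1.0, 0.0, 1.0, 50) for lam in lams]
mc_mean = np.mean(samples)
# Exact mean of y(1) = exp(-lam) over lam ~ U(0.5, 1.5):
exact = np.exp(-0.5) - np.exp(-1.5)
```

The slow O(N^-1/2) convergence of this estimator is precisely what motivates the spectral (and adaptively partitioned) alternatives.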

This talk will describe experiences and challenges at Boeing with Uncertainty Quantification (UQ) and Optimization Under Uncertainty (OUU) in conceptual design problems that use complex computer simulations. The talk will describe tools and methods that have been developed and used by the Applied Math group at Boeing and their perceived strengths and limitations. Application of the tools and methods will be illustrated with an example in conceptual design of a hypersonic vehicle. Finally, I will discuss future development plans and needs in UQ and OUU.

The curse of dimensionality is a ubiquitous challenge in uncertainty quantification. It usually arises because the complexity of an analysis is controlled by the complexity of its input parameters. In most cases of practical relevance, the output quantity of interest (QoI) is some integral of the input quantities and can thus be described in a much lower-dimensional setting. This talk will describe novel procedures for honoring the low-dimensional character of the QoI without any loss of information. The talk will also describe the range of QoI that can be addressed using this formalism.
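The core observation can be illustrated with a toy example (my own, not the talk's formalism): a QoI that is an integral of many inputs, here a weighted sum, depends on them only through a single scalar, so its distribution can be sampled in that one reduced dimension with no loss of information.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100                              # nominal input dimension
w = np.full(d, 1.0 / d)              # quadrature-like weights (assumed)
x = rng.normal(size=(5000, d))       # high-dimensional random inputs
qoi = x @ w                          # QoI = "integral" of the inputs

# The same statistics come from sampling the single projected variable,
# here N(0, w.w): the 100-dimensional problem collapses to one dimension.
reduced = rng.normal(0.0, np.sqrt(w @ w), size=5000)
```

Both `qoi` and `reduced` are draws from the same one-dimensional distribution, even though one was generated by a 100-dimensional sampling exercise.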

The role of UQ as the engine behind model validation puts a burden of rigor on UQ formulations. The ability to explore the effect of particular probabilistic choices on model validity is paramount for practical applications in general, and data-poor applications in particular. The talk will also address achievable and meaningful definitions of the validation process and demonstrate their relevance in the context of industrial problems.

Deterministic design optimization approaches are no longer satisfactory for industrial high technology products. Product and process designs often exploit physical limits to improve performance. In this regime uncertainty originating from fluctuations during fabrication and small disturbances in system operations severely impacts product performance and quality. Design robustness becomes a key issue in optimizing industrial designs.
We present challenges and solution approaches implemented in our robust design tool RoDeO, applied to turbocharger design. In addition to the challenges for electricity-generating turbines, turbochargers have to work efficiently over a wide range of rotation frequencies. Time-consuming aerodynamic (CFD) and mechanical (FEM) computations for large sets of frequencies became a severely limiting factor even for deterministic optimization. Furthermore, constrained deterministic optimization could not guarantee critical design limits under the impact of uncertainty during fabrication. In particular, the treatment of design constraints in terms of thresholds for von Mises stress or modal frequencies became crucial. We introduce an efficient approach for the numerical treatment of such chance constraints that does not require additional CFD and FEM calculations in our robust design tool set.
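One common moment-based treatment of such chance constraints (assumed here for illustration; the RoDeO formulation may differ) replaces P(stress <= s_max) >= 95% by the condition mean + z*std <= s_max, with the moments taken from an inexpensive surrogate rather than new CFD/FEM runs:

```python
import numpy as np

def chance_constraint_ok(stress_samples, s_max, z=1.645):
    """Approximate P(stress <= s_max) >= 95% under a Gaussian assumption.

    z = 1.645 is the standard-normal 95% quantile; the samples are assumed
    to come from a cheap surrogate, not from additional FEM evaluations.
    """
    mu = np.mean(stress_samples)
    sigma = np.std(stress_samples, ddof=1)
    return bool(mu + z * sigma <= s_max)

rng = np.random.default_rng(2)
# Hypothetical surrogate-predicted von Mises stresses under
# fabrication scatter (MPa)
stresses = rng.normal(450.0, 20.0, size=1000)
feasible = chance_constraint_ok(stresses, s_max=500.0)
```

Because only surrogate evaluations enter the moment estimates, the probabilistic constraint adds essentially no simulation cost on top of the deterministic optimization loop.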
An outlook for further design challenges concludes the presentation.
Contents of this presentation are joint work of U. Wever, M. Klaus, M. Paffrath and A. Gilg.

The problem of estimating uncertainties in climate prediction is not well defined. While one can express its solution within a Bayesian statistical framework, the solution is not necessarily correct. One must confront the scientific issues of how observational data are used to test various hypotheses for the physics of climate. Moreover, one also must confront the computational challenges of estimating the posterior distribution without the help of a statistical emulator of the forward model. I will present results of a recently completed estimate of the uncertainty in specifying 15 parameters important to clouds, convection, and radiation in the Community Atmosphere Model. I learned that the maximum posterior probability is not in the same region of parameter space as the minimum log-likelihood. I attribute these differences to the existence of model biases and to the potential that minima of the log-likelihood, which are often the desired solutions to data inversion problems, over-fit the data. Such a result highlights the need for a combination of scientific and computational thinking to begin to address uncertainties for complex multi-physics phenomena.
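For readers unfamiliar with the computational machinery behind such posterior estimates, a generic random-walk Metropolis sampler is sketched below. This is a minimal illustration on a toy target; the actual CAM calibration runs the expensive forward model inside the log-posterior with no emulator.

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis sampling of a 1-D log-posterior."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.normal()       # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

# Toy target: a standard normal posterior (each log_post call would be
# one full forward-model run in the climate setting)
chain = metropolis(lambda x: -0.5 * x ** 2, 0.0, 20000)
```

Every accepted or rejected step costs one forward-model evaluation, which is why emulator-free posterior estimation for a climate model is a serious computational undertaking.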

A team at the Lawrence Livermore National Laboratory is
currently undertaking an uncertainty analysis of the Community Earth
System Model (CESM), as part of a larger effort to advance the
science of Uncertainty Quantification (UQ). The Climate UQ effort has
three major phases: UQ of the Community Atmosphere Model (CAM)
component of CESM, UQ of CAM coupled to a simple slab ocean model, and
UQ of the fully coupled CESM (CAM + 3D ocean). In this poster we
describe the first phase of the Climate UQ effort: the generation of an
ensemble of CAM simulations for sensitivity and uncertainty analysis.
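One standard way such an ensemble is designed (assumed here for illustration; the poster's actual design may differ) is Latin hypercube sampling of the normalized model parameters, which stratifies each parameter's range so even modest ensembles cover it evenly:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, seed=0):
    """Latin hypercube design on the unit hypercube [0, 1)^n_params."""
    rng = np.random.default_rng(seed)
    # One stratified point per interval [i/n, (i+1)/n) in each dimension,
    # with the interval ordering shuffled independently per column.
    strata = np.tile(np.arange(n_samples), (n_params, 1))
    shuffled = rng.permuted(strata, axis=1).T        # (n_samples, n_params)
    jitter = rng.uniform(size=(n_samples, n_params))
    return (shuffled + jitter) / n_samples

# e.g. a 100-member design over 15 uncertain parameters
design = latin_hypercube(100, 15)
```

Each column of `design` contains exactly one point per stratum, so every one-dimensional margin is sampled uniformly even at small ensemble sizes.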

Kriging response surfaces are now widely used to optimize design parameters in industrial applications where assessing a design's performance requires long computer simulations. The typical approach starts by running the computer simulations at points in an experiment design and then fitting kriging surfaces to the resulting data. One then proceeds iteratively: calculations are made on the surfaces to select new point(s); the simulations are run at these points; and the surfaces are updated to reflect the results. The most advanced approaches for selecting new points balance sampling where the kriging predictor is good (local search) with sampling where the kriging mean squared error is high (global search). Putting some emphasis on searching where the error is high ensures that the accuracy of the surfaces improves between iterations and also makes the search global.
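The point-selection step can be sketched in one dimension (a minimal simple-kriging example with a fixed Gaussian kernel and unit process variance, all assumptions of this illustration rather than the paper's setup): a lower-confidence-bound style criterion rewards both a small predicted value (local search) and a large predictive error (global search).

```python
import numpy as np

def kriging(X, y, Xnew, theta=10.0):
    """Simple kriging predictor and MSE with a fixed Gaussian kernel."""
    k = lambda a, b: np.exp(-theta * (a[:, None] - b[None, :]) ** 2)
    K = k(X, X) + 1e-10 * np.eye(len(X))     # tiny nugget for stability
    Kinv = np.linalg.inv(K)
    kx = k(X, Xnew)
    mean = kx.T @ Kinv @ y
    # Classic plug-in MSE formula (process variance assumed known = 1)
    mse = 1.0 - np.einsum('ij,ik,kj->j', kx, Kinv, kx)
    return mean, np.maximum(mse, 0.0)

X = np.array([0.0, 0.4, 0.7, 1.0])           # points already simulated
y = np.sin(6 * X)                            # toy "expensive simulation"
grid = np.linspace(0.0, 1.0, 201)
mean, mse = kriging(X, y, grid)
# Trade off predicted value against predictive uncertainty
x_next = grid[np.argmin(mean - 2.0 * np.sqrt(mse))]
```

The weight on sqrt(mse) tunes the local/global balance: zero reduces to pure exploitation of the predictor, while a large weight drives sampling toward the most uncertain regions.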

A potential problem with these approaches, however, is that the classic formula for the kriging mean squared error underestimates the true error, especially in small samples. The reason is that the formula is derived under the assumption that the parameters of the underlying stochastic process are known, but in reality they are estimated. In this paper, we show how to fix this underestimation problem and explore how doing so affects the performance of kriging-based optimization methods.
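The underestimation phenomenon is easy to demonstrate in a simple regression analogue (an illustration of the mechanism, not the paper's kriging correction): a plug-in error estimate that treats the fitted parameters as known is biased low in small samples, here by exactly the factor (n - p)/n.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, true_var = 10, 3, 4.0                  # small sample, 3 parameters
est = []
for _ in range(5000):
    X = rng.normal(size=(n, p))
    y = X @ np.ones(p) + rng.normal(0.0, np.sqrt(true_var), size=n)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    est.append(np.mean(resid ** 2))          # plug-in (MLE-style) estimate

plugin = np.mean(est)                        # biased: ~ true_var * (n - p) / n
corrected = plugin * n / (n - p)             # unbiased rescaling
```

In the kriging setting the estimated covariance parameters play the role of `beta`, and the plug-in MSE formula inherits the same optimism; correcting it widens the error bars and, in turn, shifts where a kriging-based optimizer chooses to sample.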