In computationally driven materials discovery, DFT has been successful in eliminating poor candidate materials, but good candidates are often lost due to uncertainty. A robust quantification of uncertainty is therefore important to increase the success of descriptor-based screening [1]. Using the built-in Bayesian error estimation capabilities of the BEEF-vdW exchange-correlation functional, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, magnetic ground states and catalytic activity.

To quantify uncertainty in mechanical properties, which depend on derivatives of the energy, we calculate energies at several strains around the equilibrium cell volume and fit an energy-strain relationship. At each strain, we use an ensemble of energies to obtain an ensemble of fits and thereby an ensemble of mechanical properties, whose spread quantifies the uncertainty [2]. The importance of this method will be discussed in the context of designing solid electrolytes with the desired mechanical properties for Li metal anodes.

Uncertainty in magnetic ground states is predicted by calculating the energy of a single material for various magnetic configurations. Each magnetic configuration has an ensemble of calculated energies, with each energy corresponding to a specific exchange-correlation functional. We then compare the relative ordering of the energies of all possible magnetic states functional by functional to determine the consistency of the prediction. We define the c-value [3] as the proportion of functionals that agree with the best-fit prediction and will discuss how this metric can be used to aid high-throughput materials discovery.

Estimating uncertainty in catalytic activity begins with determining the precise catalyst surface speciation. It is therefore important to quantify uncertainty in the surface Pourbaix diagram, and we develop a probabilistic Pourbaix diagram using the uncertainty in the free energies of adsorbed species. Using this probabilistic surface speciation diagram, we then determine the adsorption free energies of reaction intermediates under relevant reaction conditions. We use this framework to quantify uncertainty in activity for the oxygen reduction reaction [4], the oxygen evolution reaction and the hydrogen evolution reaction.

We believe uncertainty quantification will emerge as a crucial enabler as computational methods move from providing robust qualitative insights to robust quantitative predictions.

[1] J. Ling, M. Hutchinson, E. Antono, S. Paradiso, B. Meredig, arXiv:1704.07423.
[2] Z. Ahmad and V. Viswanathan, Phys. Rev. B 94, 064105 (2016).
[3] G. Houchins and V. Viswanathan, arXiv:1706.00416.
[4] S. Deshpande, J. R. Kitchin, and V. Viswanathan, ACS Catal. 6, 5251 (2016).
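As a minimal sketch of the ensemble approach for mechanical properties (the energy data below are hypothetical stand-ins; in practice each row would come from the non-self-consistent BEEF-vdW ensemble), an energy-volume curve is fit for each ensemble member and the resulting spread in bulk modulus is reported:

```python
import numpy as np

# Hypothetical inputs: volumes (A^3) and an ensemble of energy curves (eV),
# one row per BEEF-vdW ensemble functional (mocked here with random noise).
volumes = np.linspace(60.0, 70.0, 11)
e_mean = 0.05 * (volumes - 65.0) ** 2 - 3.0          # stand-in mean E(V)
rng = np.random.default_rng(0)
energy_ensemble = e_mean + 0.01 * rng.standard_normal((2000, volumes.size))

bulk_moduli = []
for energies in energy_ensemble:
    # Quadratic fit E(V) = a V^2 + b V + c around the minimum
    a, b, _ = np.polyfit(volumes, energies, 2)
    v0 = -b / (2.0 * a)                    # equilibrium volume of this member
    b0 = 2.0 * a * v0                      # B = V0 * d2E/dV2, in eV/A^3
    bulk_moduli.append(b0 * 160.2177)      # convert eV/A^3 -> GPa

bulk_moduli = np.array(bulk_moduli)
print(f"B = {bulk_moduli.mean():.1f} +/- {bulk_moduli.std():.1f} GPa")
```

The standard deviation of the ensemble plays the role of the uncertainty estimate on the mechanical property.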

Density functional theory (DFT) is largely accepted as a reliable and computationally efficient theoretical technique and is routinely used for describing properties of relevance to electrochemical systems. However, the approximations associated with the use of an exchange-correlation functional at the generalized gradient approximation (GGA) level lead to uncertainties in the calculated properties. A key question emerges: how incorrect is the first-principles approximation of DFT? Given that uncertainty quantification in large-scale computational materials searches may lead to more efficient discovery of candidates [1], uncertainty quantification in DFT remains an important topic for any system that DFT is used to understand. We specifically discuss a way to quantify the uncertainty associated with magnetic ground state prediction, a particularly relevant topic given the abundance of transition metal ions in electrochemical devices.

We use the Bayesian Error Estimation Functional (BEEF), which includes an empirically fit exchange-correlation functional as well as an estimation of error based on this empirical fit. In addition to the error estimate, BEEF provides a way to non-self-consistently calculate the energy of a system with thousands of functionals within the GGA. Using this, we formulate a way to systematically test the confidence of a magnetic-state prediction over a range of GGA functionals. We will demonstrate the applicability of this approach using a broad range of material classes and different kinds of magnetic systems [2].

We also include in this formulation of confidence a way to simultaneously compare multiple antiferromagnetic configurations that can occur when there are multiple length scales of interaction. This is in contrast to conventional DFT predictions of magnetic structure which only show the energy difference between the ferromagnetic state and a single antiferromagnetic state.
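A minimal sketch of how such a confidence value can be computed, assuming a hypothetical array of non-self-consistent BEEF ensemble energies (rows are ensemble functionals, columns are candidate magnetic configurations): the confidence is the fraction of ensemble members whose lowest-energy configuration matches the best-fit prediction.

```python
import numpy as np

# Hypothetical BEEF ensemble energies (eV): shape (n_functionals, n_configs).
# Columns might correspond to, e.g., FM, AFM-1 and AFM-2 orderings.
rng = np.random.default_rng(1)
best_fit = np.array([0.00, -0.05, 0.03])               # best-fit energies per configuration
ensemble = best_fit + 0.04 * rng.standard_normal((2000, best_fit.size))

best_fit_ground_state = int(np.argmin(best_fit))        # prediction of the main functional
ensemble_ground_states = np.argmin(ensemble, axis=1)    # prediction of each ensemble member

# Confidence: fraction of ensemble functionals agreeing with the best-fit prediction
c_value = np.mean(ensemble_ground_states == best_fit_ground_state)
print(f"predicted ground state: configuration {best_fit_ground_state}, c = {c_value:.2f}")
```

Because every candidate configuration enters the argmin on an equal footing, comparing multiple antiferromagnetic orderings simultaneously requires no change beyond adding columns.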

Overall, a way to systematically assess the prediction confidence of DFT may be used to identify cases where there is disagreement between GGA functionals, signaling that more accurate methods are needed to fully understand the system. It is then possible that the strategic application of more accurate theories could reveal new emergent phenomena.

We also believe this method will be crucial to enable high-throughput material discovery especially in the context of Li-ion cathode materials where these issues become exceptionally important.

A major challenge in multiscale materials simulation is the ab initio prediction of phase stabilities in multi-phase materials. Since it involves complex simulation protocols, the uncertainty of the ab initio input and its propagation to the desired free energies, transition temperatures and entropy changes is a critical issue. At this level, a combination of model uncertainties and numerical, convergence and statistical errors is present. Already the determination of the equilibrium lattice constant and bulk modulus requires a careful analysis of the fitting of energy-volume curves, going beyond the consideration of standard convergence parameters such as the cutoff energy and k-points.

In order to handle this delicate interplay of uncertainties, we introduce the concept of uncertainty phase diagrams. Based on these diagrams, we model the convergence gradients of the contributing errors to automate the convergence process, not only with respect to the error in energy. The modelling of uncertainties in relation to the corresponding ab initio calculation is enabled by our recently developed Python-based workbench pyiron. In particular, the generic interfaces to simulation codes at different time and length scales and the in-process data management model are used to reduce the technological complexity of our uncertainty propagation model. Our investigations revealed that commonly used rules of thumb for fitting ground state materials properties become invalid for high-precision calculations, as the dominating sources of error change. We will demonstrate the suitability of this automated approach to simulate phase stabilities in Mg alloys at temperatures up to the melting point.

Modeling complex systems such as liquids or solid-liquid interfaces at finite temperature requires sampling of an appropriate statistical ensemble. One traditional approach to achieving this goal relies on molecular dynamics (MD) simulations. We consider here the case of First-Principles Molecular Dynamics (FPMD), which builds on a quantum description of interatomic forces, and is commonly used in situations where accurate model potentials are not available. Due to the statistical variability of MD trajectories and their sensitivity to initial conditions, validation and verification for FPMD simulations is more challenging than for static electronic structure calculations. We discuss various aspects of V&V activities for FPMD simulations of liquids, nanoparticles, and solid-liquid interfaces, including the validation of pseudopotentials, and testing of multiple exchange-correlation functionals. Examples of generation and archival of complete trajectories for later analysis are presented. Choices of formats and metadata for archival of FPMD samples and trajectories will also be discussed.

This work is supported by the US Department of Energy through the Midwest Integrated Center for Computational Materials (MICCoM).

The first-principles prediction of non-equilibrium processes such as oxidation and phase transformations remains a significant challenge in materials science. The evolution of a solid out of equilibrium is affected by intrinsic thermodynamic, mechanical and kinetic properties that are often difficult if not impossible to measure accurately in isolation. Comparisons between predictions made at the electronic structure scale to experimental measurements must, therefore, rely on a multi-scale, statistical mechanics approach that connects all the relevant length and time scales. Unfortunately, errors and uncertainties accumulate along the statistical mechanics ladder that links the electronic structure to the phenomenological descriptions at the macroscopic scale at finite temperature. Furthermore, many non-equilibrium processes remain poorly understood and do not yet have an accurate phenomenological description at the macroscopic scale. Even if they did, a rigorous link between coefficients that appear in continuum models and the electronic structure of the underlying phases is often lacking. Sensitivity analysis tools and uncertainty quantification methods will increasingly serve as an essential component in the validation of first-principles multi-scale predictions using non-equilibrium experimental measurements. Furthermore, such methods are enabling systematic approaches to discover models of complex multi-scale phenomena. In this talk we will illustrate the validation of first-principles predictions of high temperature, non-equilibrium processes drawing on examples that involve magnetic alloys, the oxidation of early transition metals, and the electrochemical response of intercalation compounds for Li-ion and Na-ion batteries.

10:30 AM - *TC05.01.06

Uncertainty Quantification for Solute Transport Modeling

Dallas Trinkle 1 1 Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, United States

Solute transport controls a wide variety of both material properties and processing, and computational prediction of transport coefficients plays an increasingly important role in materials design. First-principles methods can routinely compute both defect energies and transition states to provide the atomic-scale information for transport, but scaling up to mesoscale solute mobility requires the solution of the master equation. Kinetic Monte Carlo provides one route to computing transport coefficients, but the stochastic solution to the master equation can make uncertainty quantification difficult. The use of Green functions to compute solute mobilities offers an alternate approach; in addition to being accurate and computationally efficient, the deterministic solution permits the use of a Bayesian framework for uncertainty quantification. In this case, uncertainties in first-principles energies and energy barriers can be propagated forward into uncertainties in mobilities. Furthermore, sensitivity analysis is possible to identify which energies and barriers are most important for modeling mobilities.
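As a minimal sketch of such forward propagation (the numbers below are hypothetical, and the actual Green-function mobility calculation is considerably more involved), migration barriers can be sampled from their first-principles uncertainty distribution and pushed through an Arrhenius expression, yielding an uncertainty band on the resulting rate or mobility:

```python
import numpy as np

KB = 8.617333e-5          # Boltzmann constant, eV/K
T = 600.0                 # temperature, K

# Hypothetical first-principles inputs: migration barrier and attempt frequency
# with Gaussian uncertainties (mean, standard deviation).
barrier = (0.65, 0.05)        # eV
prefactor = (1.0e13, 1.0e12)  # 1/s

rng = np.random.default_rng(2)
n = 100_000
E = rng.normal(*barrier, n)
nu = rng.normal(*prefactor, n)

# Propagate through an Arrhenius hop rate; a lattice geometry factor would
# convert this into a diffusivity or mobility in a full treatment.
rate = nu * np.exp(-E / (KB * T))
lo, hi = np.percentile(rate, [2.5, 97.5])
print(f"hop rate at {T:.0f} K: median {np.median(rate):.2e} 1/s, "
      f"95% interval [{lo:.2e}, {hi:.2e}] 1/s")
```

A deterministic Green-function solution makes the same propagation possible analytically or with far fewer samples, and also exposes the sensitivities of the mobility to individual barriers.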

The topic description of this symposium identifies four aspects under the general umbrella of uncertainty quantification. This contribution focuses on two of them, with ramifications for the other two: verification and validation for density functional theory. Verification of DFT codes at the PBE level has been performed through an extensive pair-wise comparison of mainstream codes by a community-wide consortium of code developers and expert users [1]. The main conclusions will be highlighted, and follow-up work towards the ultimate test bed for pseudopotentials will be presented. As far as validation is concerned, we will review how a regression analysis gives quantitative information about the ability of DFT or DFT-based models to predict equilibrium volumes, bulk moduli and elastic constants, thermal expansion and melting temperatures, surface energies and work functions [2-4].

In this talk we discuss recent work on modeling Density Functional Theory (DFT) data and error uncertainty in complex, high-dimensional chemical reaction networks. We present related information-based mathematical tools for sensitivity analysis and uncertainty quantification (UQ) that are capable of handling a large number of model parameters and high-dimensional state spaces. We also discuss the impact of electronic structure-induced parameter correlations and related uncertainties on model predictions. Finally, we introduce the concept of a UQ Index, based on new tight information inequalities for model bias, that allows us to assess the impact of different sources of uncertainty and/or error on chemical kinetics predictions. This is joint work with Luc Rey-Bellet (UMass Amherst) and Dion Vlachos (U. of Delaware).

Parameterization of an empirical potential can be performed through an estimation of the Pareto surface to produce an ensemble of potentials for predictive fidelity and robustness. This process produces a wealth of information which can be used to explore the relationship between parametric uncertainty and the accuracy of material properties predictions. An analysis of the parameterization of a Buckingham potential for MgO is presented. Analysis of the Pareto surface is used to assess the efficacy of different representations for interatomic potential functional forms. An analysis of the different embedding functions for the Ni EAM potential is discussed.
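As a minimal illustration of turning an ensemble of fitted parameter sets into a property uncertainty, the sketch below evaluates a Buckingham pair energy, V(r) = A exp(-r/rho) - C/r^6, for a handful of hypothetical Pareto-optimal parameter vectors and reports the spread in the predicted nearest-neighbour pair energy (the actual MgO parameterization and property set are far richer than this):

```python
import numpy as np

def buckingham(r, A, rho, C):
    """Buckingham pair potential V(r) = A exp(-r/rho) - C/r^6 (eV, Angstrom)."""
    return A * np.exp(-r / rho) - C / r**6

# Hypothetical ensemble of Pareto-optimal (A, rho, C) parameter sets.
pareto_ensemble = [
    (1250.0, 0.300, 25.0),
    (1320.0, 0.295, 27.5),
    (1180.0, 0.308, 22.0),
    (1275.0, 0.298, 26.0),
]

r_nn = 2.10  # hypothetical cation-anion nearest-neighbour distance, Angstrom
energies = np.array([buckingham(r_nn, *p) for p in pareto_ensemble])
print(f"pair energy at {r_nn} A: {energies.mean():.3f} +/- {energies.std():.3f} eV")
```

The same pattern extends to lattice constants, elastic constants or defect energies computed with each member of the Pareto ensemble.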

This work was supported by Sandia Laboratory Directed Research and Development and by funding from the University of Florida Division of Sponsored Programs. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

2:00 PM - TC05.02.02

Addressing Uncertainty in Machine-Learning Representations of the Potential-Energy Surface

Machine-learning models have demonstrated an impressive ability to reproduce the energy and force predictions of higher-accuracy electronic structure calculations. However, because little physics is built into these models, they can make wildly inaccurate predictions outside the realm of the training data. Further, because of the model's abstraction, it can be difficult to know when the model has left regions of high data density. In this talk, we outline some recent techniques that we have developed to help address uncertainty in machine learning, and suggest that rigorous uncertainty predictions are not possible because of the highly curved nature the potential energy surface can exhibit in specific regions.

The combination of high fidelity, first-principles quantum mechanical modeling schemes, such as density functional theory (DFT), and machine learning (ML) has recently emerged as a powerful tool to develop accurate interatomic potentials, also known as force fields, by 'learning' atomic energy and force data [1,3,4]. In our previous work, we proposed a novel ML-based methodology that 'learns' from reference quantum mechanical data to predict forces directly, given only the atomic configuration [1,2]. This 'learning', however, is not true machine learning by definition, as any adaptive improvement of the force field is performed through 'post-processing' techniques. This is because these ML techniques are unable to identify, on-the-fly, atomic environments that lie outside their domain of predictability [1,2,5]. In this work we present a novel methodology to quantify uncertainty in terms of inherent properties of the feature space. Based on these uncertainties, we present a framework that can determine the domain in which our model is applicable, and also establish an on-the-fly learning scheme that continuously 'mines' atomic environments with high uncertainty and appends them to the original training dataset. We demonstrate this methodology using a platinum force field as a case study. This work demonstrates that our regression-based machine learning approach can be adaptively improved on-the-fly using new methods to quantify uncertainty, paving the way for a machine learning scheme that truly learns.
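One simple realization of a feature-space uncertainty gate, sketched below under the assumption of a generic fingerprint vector per atomic environment (not the specific descriptors or criterion of the cited force field): the distance of a new environment to its nearest neighbour in the training set is compared against a threshold, and environments that exceed it are flagged for on-the-fly learning.

```python
import numpy as np

def nearest_train_distance(x, X_train):
    """Euclidean distance from fingerprint x to its nearest training fingerprint."""
    return np.min(np.linalg.norm(X_train - x, axis=1))

# Hypothetical fingerprints: rows are atomic environments in feature space.
rng = np.random.default_rng(3)
X_train = rng.random((500, 8))          # current training set
X_new = rng.random((50, 8)) * 1.5       # environments seen during a new MD run

threshold = 0.35   # hypothetical cutoff separating "familiar" from "unfamiliar"
to_add = [i for i, x in enumerate(X_new)
          if nearest_train_distance(x, X_train) > threshold]

print(f"{len(to_add)} of {len(X_new)} new environments flagged for on-the-fly learning")
# Flagged environments would be re-evaluated with DFT and appended to the training set.
```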

The accuracy of molecular dynamics (MD) simulations strongly depends on a set of parameters, known as the force field, that defines the intra- and inter-molecular interactions between atoms and molecules. Herein, we have employed the particle swarm optimization (PSO) method to expedite the search for optimized force-field parameters for the development of elastomechanically stable coarse-grained (EM CG) models. This approach is used to develop new non-polarizable and polarizable CG water models, which reproduce the experimentally determined physical, chemical and thermodynamic properties of water. Our work demonstrates the potential of the PSO method to accelerate the search for optimized force-field parameters and thus fast-track the discovery of new hybrid materials.
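A minimal particle swarm optimization loop of the kind described, sketched here for a toy two-parameter objective (in practice the objective would score a coarse-grained water model against target properties; every value below is a hypothetical stand-in):

```python
import numpy as np

def objective(params):
    """Hypothetical stand-in: misfit between model predictions and target properties."""
    return np.sum((params - np.array([1.2, 0.8])) ** 2)

rng = np.random.default_rng(4)
n_particles, n_dims, n_steps = 20, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

pos = rng.uniform(0.0, 2.0, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_steps):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters:", gbest, "misfit:", objective(gbest))
```

For a force field, each objective evaluation would run short MD simulations and compare the predicted properties against their experimental targets.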

Kinetic Monte Carlo (KMC) and other related kinetic network models are frequently used to study the long-timescale dynamical behavior of biomolecular and materials systems. KMC models are often constructed bottom-up using brute-force molecular dynamics (MD) simulations when the model contains a large number of states and kinetic pathways that are not known a priori. However, the resulting network generally encompasses only part of the configurational space, and regardless of any additional MD performed, several states and pathways will still be missing. This implies that the duration for which the KMC model can faithfully capture the true dynamics, which we term the validity time of the model, is always finite and unfortunately much shorter than the MD time invested to construct the model. A general framework that relates the kinetic uncertainty in the model to the validity time, missing states and pathways, network topology, and statistical sampling is presented. Performing additional calculations for frequently sampled states and pathways may not alter the KMC model validity time. A new class of enhanced kinetic sampling techniques is introduced that targets the rare states and pathways contributing most to the uncertainty, so that the validity time is boosted in an effective manner. Examples including simple 1D energy landscapes, lattice models, materials and biomolecular systems are provided to illustrate the application of the method.

For more details see The Journal of Chemical Physics - Special Issue on Reaction Pathways 147, 152702 (2017); doi: http://dx.doi.org/10.1063/1.4984932

Material models, at whatever scale of simulation, are crucial to predicting the correct properties and capturing the right mechanisms in multiscale models. However, the topic of uncertainty at sub-continuum length scales is not discussed nearly as much as at macroscale length scales; in many ways, the challenge of validating models at these scales may have discouraged studies of their uncertainty quantification. In this talk, I will discuss research aimed at sampling the interatomic potential parameter space for modified embedded atom method (MEAM) and ReaxFF potentials, quantifying the parameter sensitivity of various properties, representing the parameter-property relationships using surrogate models (or supervised machine learning), quantifying the correlation (and clustering) between various properties (including some expensive properties), and examining the implications for optimization of the interatomic potential. Understanding uncertainty due to the interatomic potential development process can help to understand how uncertainty propagates to properties at the atomistic scale.

4:00 PM - *TC05.02.07

Tools and Resources for Finding, Selecting and Using Interatomic Potentials

The choice of model is an important consideration in how much confidence a user can place in the results from a simulation, and the choice of interatomic potential may be a primary source of uncertainty in classical atomistic simulations such as molecular dynamics. NIST’s Interatomic Potentials Repository has for the last nine years been a source of developer-approved models for interatomic potentials that have been used for molecular simulation research and, more recently, incorporated into various other projects. However, as new types of models and repositories are continuously being established, it becomes even more important to help users find new resources as well as information and tools to assess each model’s appropriateness for the problem under consideration. To address this need, we are developing computational tools that users can adapt to their own research and teaching environments and performing property calculations that assist users in model selection and benchmarking. We are also putting increased emphasis on providing links and information to connect users with tools and projects maintained by NIST and other institutions. Here we will focus on that effort and how it fits into the larger picture of interatomic potential selection and use.

Validating whether interatomic potentials are suitable for specific applications requires comparing basic property predictions from the potentials to experiments or more robust calculation methods. However, considerable variability may exist in the reported values of these basic properties due to differences in calculation methodologies. As an example, three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented within a computational framework based on Python scripts. High-throughput execution of the calculations was performed across 120 interatomic potentials, 18 crystal prototypes, 5 small strain values, and all possible combinations of unique lattice site and elemental model pairings. Comparing results across potentials, methods, and crystal structures reveals conditions where the resulting property values are sensitive to how the evaluation was performed. This analysis also assists with the verification of potentials, calculation methods, and molecular dynamic software by identifying outliers for further investigation. All results, calculation scripts, and computational infrastructure are openly available to support researchers in performing meaningful simulations.

Superalloys (SA) and high entropy alloys (HEA) have attracted the attention of the materials science and technology communities because of their excellent mechanical performance, such as high mechanical strength and resistance to corrosion, oxidation and creep. To achieve a better understanding of the stacking fault strengthening mechanism in HEA and SA, a phase field dislocation dynamics (PFDD) method is applied. The PFDD model is a very efficient tool to study strengthening mechanisms in such alloys and provides insights to design new alloys with improved mechanical strength. Some key findings of our study are: (i) decorrelation between the leading and trailing partial dislocations in SA and HEA, (ii) trapping of dislocations in low stacking fault energy regions, and (iii) a stress required to move dislocations out of these regions that is proportional to the jump in the stacking fault energy. In HEA, the local stacking fault energy depends on the local composition, which leads to regions with different SFE. We perform a sensitivity analysis on the effect of the average size of these regions and the initial dislocation density on the yield stress. Our work reveals that the yield stress is inversely proportional to this average size.

Robust prediction of the electrical performance of additively manufactured conductors and dielectrics is challenging due to the influence of microscale spatial heterogeneities on device-level properties and the large parameter space of processing variables in 3D printing. Characterization of the electrical performance uncertainty, and of the corresponding sensitivity to microstructure and the fabrication process, is essential for optimizing print protocols of new conductive/dielectric inks and for tuning the performance variation for specific applications. To address this challenge, we characterized the capacitance probability distribution of a model interdigitated capacitor as a function of electrode geometry variability, spatial variability of the dielectric matrix, and the size distribution of dielectric inclusions. Monte Carlo simulations of an analytical model that incorporates permittivity and electrode geometry uncertainties were compared with a high-throughput experimental data set. The experimental system consisted of a large array of capacitors with aerosol-jet-printed, silver particle-based electrodes and a float-coated dielectric layer. In parallel, an in silico capacitance probability distribution was generated using a high fidelity 3D finite element model of the capacitor that included the assignment of different particle sizes and spatial distributions. Discrepancies between the capacitance distributions of the analytical and experimental models were evaluated and reduced by updating the analytical model with experimental data using a Bayesian model updating technique and Gaussian processes. Collectively, the study provides a useful framework to correlate electrical performance with both macro- and microstructural variation sources, and will enable a sensitivity analysis of the processing parameters, which is key to accelerating additive manufacturing materials development.
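The Monte Carlo propagation step might look like the sketch below, which uses a simplified parallel-plate stand-in for the interdigitated-capacitor model (the actual analytical model, geometry parameters and their distributions are assumptions here) to turn uncertain permittivity and geometry into a capacitance distribution:

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

rng = np.random.default_rng(5)
n = 200_000

# Hypothetical input distributions for a simplified capacitor model.
eps_r = rng.normal(3.5, 0.3, n)            # relative permittivity of the dielectric
area = rng.normal(2.0e-6, 0.1e-6, n)       # effective electrode overlap area, m^2
gap = rng.normal(20e-6, 2e-6, n)           # effective electrode gap, m

capacitance = EPS0 * eps_r * area / gap    # parallel-plate stand-in, F
lo, hi = np.percentile(capacitance, [2.5, 97.5])
print(f"C = {capacitance.mean()*1e12:.2f} pF, 95% interval "
      f"[{lo*1e12:.2f}, {hi*1e12:.2f}] pF")
```

The resulting distribution is what gets compared against, and updated with, the measured capacitance histogram from the printed capacitor array.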

8:30 AM - *TC05.03.03

Uncertainty Quantification and Data Science in Materials Applications

Laura Swiler 1 1 , Sandia National Laboratories, Albuquerque, New Mexico, United States

This talk will discuss the differences between uncertainty quantification (UQ) and data science methods, specifically focused on materials applications such as additive manufacturing. Data science and UQ are closely related but their goals are different. Data science seeks to provide analysis and linkages such as process-structure-property mappings, while UQ methods seek to propagate input uncertainties through simulation models to understand resulting output uncertainties.

The talk will review current work in data science, including various data analytic methods and approaches that use higher-order statistics. The talk will also review uncertainty quantification (UQ) methods and highlight areas of multi-fidelity UQ, multi-physics UQ, and multi-scale UQ. Particular topics include polynomial chaos expansions, dimension reduction and active subspace approaches, multi-level Monte Carlo methods, Bayesian calibration, and coupled methods. We will focus on using data science to inform UQ studies and vice-versa. Applications to materials problems and multi-scale materials applications will be presented.

9:00 AM - *TC05.03.04

Model Form Sensitivity and Uncertainty in Multiscale Model Calibration

David McDowell 1 1 Institute for Materials, Woodruff School of Mechanical Engineering, School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States

This talk will first consider the sensitivity of results of computational atomistic modeling to the underlying interatomic potential, based on two distinct strategies. The first is a one-at-a-time sensitivity study of parameters within a modified embedded atom method potential for a given set of properties of bcc uranium and zirconium [1]. The second considers the use of five different fits to the embedded atom method potential for atomistic modeling of dislocation pileup-grain boundary slip transfer reactions in Ni, assessing the sensitivity of the physical details of the responses in terms of absorption-desorption-transmission reactions [2]. Large scale concurrent atomistic-continuum (CAC) simulations are performed to address the slip transfer of mixed character dislocations across GBs in FCC Ni. Two symmetric tilt GBs, a Σ3{111} coherent twin boundary (CTB) and a Σ11{113} symmetric tilt GB (STGB), are investigated using five different fits to the embedded-atom method (EAM) interatomic potential to assess the variability of the predicted dislocation-interface reaction. It is shown that for the Σ3 CTB, two of these potentials predict dislocation transmission while the other three predict dislocation absorption. In contrast, all five fits to the EAM potential predict that dislocations are absorbed by the Σ11 STGB.

We will also discuss reduction of uncertainty in the process of calibrating non-unique parameters of a mesoscopic model via incorporation of additional data, either from bottom-up simulations or from top-down experiments, in the case where data are limited in number and scope (e.g., sparse supportive data). This is often the case in materials research and development. Distinctions are drawn between the nature of atomistic simulation input into a crystal plasticity slip system flow rule for bcc Fe, with kinetics dominated by the formation of coordinated kink pairs, and top-down experimental results. The flow rule is calibrated to experiments using an approach that considers the influence of each data point on the parameter calibration process, using pseudodata generated from the same material model. A point of diminishing returns in the addition of more data points is identified based on the evolution of uncertainty measures.

References:
[1] Moore, A.P., Deo, C., Baskes, M.I., Okuniewski, M., and McDowell, D.L., "Understanding the Uncertainty of Interatomic Potentials' Parameters and Formalism," Computational Materials Science, Vol. 126, 2017, pp. 308-320.
[2] Xu, S., Xiong, L., Chen, Y., and McDowell, D.L., "Comparing EAM Potentials to Model Slip Transfer of Sequential Mixed Character Dislocations Across Two Symmetric Tilt Grain Boundaries in Ni," JOM, 69(5), 814-821.
[3] Tallman, A., Swiler, L.P., Wang, Y., and McDowell, D.L., "Calibration based Model Form Uncertainty Quantification in bcc Fe Crystal Plasticity Modeling of Yield Strength," in preparation, May 2017.

9:30 AM - TC05.03.05

Use of Bayesian Inference in Characterization of Ceramic Materials—An Introduction and Applications in Ferroelectrics

Materials development remains limited by our ability to “see” and characterize newly synthesized materials. Over the past decades, great advancements have been seen in X-ray and neutron characterization instruments. However, the analysis of data from such instruments has progressed slowly, an example being the Rietveld method for refinement of crystallographic structures using least squares (1969). In this talk, I will introduce to the materials researcher the alternative statistical framework of Bayesian statistics and its application to analysis of diffraction data when employed in conjunction with a Markov Chain Monte Carlo (MCMC) algorithm. The talk will include a basic introduction and application to modeling single reflections, doublets from ferroelastic degenerate reflections, and the entire pattern (full profile). The parameters in the new models represent structure using probability distributions, treating solutions probabilistically with improved uncertainty quantification. For ferroelectrics, we demonstrate that these probability distributions can be readily propagated into new calculated parameters related to domain reorientation. The conventional least squares solutions and its confidence intervals will be compared/contrasted to the new approach and its credible intervals. The new approach offers more confident structure-property correlations.
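A minimal Metropolis-Hastings sketch of the idea, fitting a single Gaussian reflection to hypothetical diffraction counts and reporting a credible interval on the peak position (the actual models cover doublets and full profiles with more careful likelihoods and priors):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical single-reflection data: counts vs two-theta.
two_theta = np.linspace(29.0, 31.0, 200)
true = 500 * np.exp(-0.5 * ((two_theta - 30.02) / 0.05) ** 2) + 50
counts = rng.poisson(true)

def log_likelihood(params):
    amp, center, width, bkg = params
    if amp <= 0 or width <= 0 or bkg <= 0:
        return -np.inf
    model = amp * np.exp(-0.5 * ((two_theta - center) / width) ** 2) + bkg
    return np.sum(counts * np.log(model) - model)   # Poisson log-likelihood (up to a constant)

# Metropolis-Hastings sampling with a Gaussian random-walk proposal.
current = np.array([400.0, 30.0, 0.08, 40.0])
step = np.array([10.0, 0.002, 0.002, 2.0])
samples = []
logp = log_likelihood(current)
for _ in range(20_000):
    proposal = current + step * rng.standard_normal(4)
    logp_new = log_likelihood(proposal)
    if np.log(rng.random()) < logp_new - logp:
        current, logp = proposal, logp_new
    samples.append(current.copy())

centers = np.array(samples[5000:])[:, 1]             # discard burn-in
lo, hi = np.percentile(centers, [2.5, 97.5])
print(f"peak position: {centers.mean():.4f} deg, 95% credible interval [{lo:.4f}, {hi:.4f}]")
```

The posterior samples of the structural parameters can then be propagated directly into derived quantities, such as domain reorientation fractions.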

In light of current global challenges, the conversion of CO2 to carbonic acid by hydration is recognized as an important route in reducing CO2 emissions. We investigate the CO2 hydration reaction (CHR) mechanism of small molecule mimics of the carbonic anhydrase (CA) active site, consisting of Zn(II) coordinated by three imidazole groups, using density functional theory (DFT). Molecular catalysts provide an excellent test bed for identifying the extent of errors in approximate DFT for kinetic predictions. We carry out a systematic study at the domain based local pair natural orbital coupled-cluster (DLPNO-CCSD(T)) level of theory for the ΔE values associated with the CO2 hydration pathway to investigate the performance of a set of selected density functionals. In addition, we carry out a first-principles computational screening approach to discover design strategies for the chemistry of three- and four-coordinated sp2 or sp3 nitrogen-ligand motifs to tune the CHR turnover. We leverage multimillion molecule organic libraries to identify diverse nitrogen-ligand candidates (monodentate and bidentate ligands) in conjunction with our recently developed inorganic discovery toolkit, molSimplify code. We discover a strong correlation between the energetic span (δE) of CHR and the pKa of the water coordinated to the Zn, critical to the performance of the catalyst. Our simulations suggest that the electron withdrawing/donating nature of the chemical substituents can lead to varying catalytic reaction rates, and enable us to develop new design strategies for biomimetic Zn catalyst for CO2 hydration.

In practical application, materials typically do not fail subject to uniform, uniaxial loading. Yet, such methods are the basis for predicting ultimate material strength and failure. Realistic loads are often combinatorial and uncertain.

Here, we present the preliminary formulation and implementation of a performance-based materials design methodology, encompassing the development of a computational nanoscale incremental dynamic analysis (NIDA) protocol to determine the failure risk of material systems at different loading thresholds, applied to a simple carbon nanotube (CNT) system subject to stochastic vibrations. However, in general, the proposed methodology can be applied to any random or highly variable load condition.

Rather than a single data point of strength, the NIDA approach produces fragility curves that reflect anticipated damage as a function of load threshold (via scaling of random stochastic loadings). The aim is to link effective performance to relevant statistical loading fields, demonstrating a scientific path and design methodology to optimize materials with specific risk thresholds. The downside of the method is that a vast number of scaled loading states (or histories) are required to provide adequate statistics. However, computational methods are increasing in efficiency, enabling multiple parallel runs of system replicas.

As an illustrative case study, we explore the effect of vibration magnitude and applied pre-strain on the failure of a single CNT. The vibrations can be thought of as indirect loading of the system, caused by surrounding kinetic energy waves and thermal fluctuations, manifesting as mechanical vibration/excitations. Similar to seismic loading of macrostructures, we can subject the system to vibrations by imposing an additional acceleration on each particle in the system (e.g., global acceleration), or, alternatively, at specified boundaries. How does a CNT perform subject to nano-earthquakes?

Embracing and extending concepts from performance-based structural engineering, NIDA can enable the comprehensive assessment of material behavior under stochastic loading conditions, allowing for the first time the evaluation of multiple limit states across a range of load levels/thresholds. A key recognition is that static values of "strength" are insufficient: material assembly and configuration, combined with combinatorial load conditions, dictate ultimate capacity and failure modes. Introducing a paradigm of performance-based materials design, material systems can be interpreted beyond a static set of properties (e.g., ultimate strength, toughness) and designed according to probable risk, loss of functionality, and/or complete failure.
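A sketch of the fragility-curve construction step, assuming a hypothetical set of replica simulations at each load scale factor with recorded failure outcomes; the empirical failure fraction versus intensity is fit to a lognormal CDF, as is common in performance-based engineering:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Hypothetical results: load scale factors and the fraction of replica
# simulations (e.g., CNTs under scaled stochastic vibrations) that failed.
intensity = np.array([0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0])
n_replicas = 50
n_failed = np.array([0, 2, 9, 21, 35, 45, 49])
p_fail = n_failed / n_replicas

def lognormal_cdf(x, median, beta):
    """Lognormal fragility: P(failure | load intensity x)."""
    return norm.cdf(np.log(x / median) / beta)

(median, beta), _ = curve_fit(lognormal_cdf, intensity, p_fail, p0=[1.2, 0.4])
print(f"fragility fit: median capacity = {median:.2f}, dispersion beta = {beta:.2f}")
print(f"P(fail) at unit intensity: {lognormal_cdf(1.0, median, beta):.2f}")
```

Separate curves fit for different limit states (e.g., first bond rupture versus complete fracture) give the multi-threshold risk picture described above.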

Molecular dynamics simulations of simple bicrystal systems have been much used as a tool to explore how the migration of grain boundaries varies with their structure and with experimental conditions. In order to permit the exploration of a large parameter space, many studies are forced to rely on a small number of simulations (often a single simulation) for each configuration. The motion of a grain boundary is inherently statistical and any variability in the measured grain boundary velocity should be taken into account in subsequent analysis of trends in grain boundary mobility.

Here we present the results of large numbers of simulations of equivalent boundaries, which show that this variability can be large, particularly when small systems are simulated. We show how a bootstrap resampling approach can be used to characterise the statistical uncertainty in boundary velocity using the information present in a single simulation. We show that the approach is robust across a variety of system sizes, temperatures and driving force strengths and types, and provides a good order-of-magnitude measure of the population standard deviation across multiple equivalent simulations.
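A minimal bootstrap sketch of the kind of velocity-uncertainty estimate described, assuming a hypothetical boundary-displacement time series from a single simulation; per-frame displacement increments are resampled with replacement and the velocity re-estimated for each resample:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical boundary position vs time from a single MD simulation.
dt = 1.0                                  # ps between recorded frames
n_frames = 500
true_velocity = 0.02                      # nm/ps
position = np.cumsum(rng.normal(true_velocity * dt, 0.05, n_frames))

increments = np.diff(position)            # per-frame displacements
velocities = []
for _ in range(5000):
    resampled = rng.choice(increments, size=increments.size, replace=True)
    velocities.append(resampled.mean() / dt)

velocities = np.array(velocities)
print(f"v = {velocities.mean():.4f} nm/ps, bootstrap std = {velocities.std():.4f} nm/ps")
```

For strongly correlated trajectories, a block variant of the resampling would typically be used so that the temporal correlation of the increments is preserved.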

This work studied the computational details of the Green-Kubo method with molecular dynamics (MD) simulation for thermal conductivity prediction. In MD thermal conductivity calculations, there is little consensus about the inclusion of zero-pressure volume relaxation in the isobaric-isothermal (NpT) ensemble, which determines the simulation lattice parameter. Simulations of fcc-based structures with different lattice parameters were performed to calculate lattice thermal conductivities and phonon properties, and the results were compared to experimental reports and ab initio results. It was concluded that the thermal conductivity is strongly dependent on the choice of lattice parameter, and that NpT volume relaxation is crucial to predicting accurate thermal conductivities.

In addition, the relation between thermal conductivity and the cutoff distance of the interatomic potential was analyzed in a similar context. The results suggest that the calculated thermal conductivity depends strictly on the lattice parameter and is essentially independent of the cutoff distance. By fixing the lattice parameter and reducing the cutoff distance, the thermal conductivity calculation was greatly accelerated without sacrificing accuracy.
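For reference, the Green-Kubo relation used here integrates the heat-flux autocorrelation function, κ = V / (3 k_B T²) ∫⟨J(0)·J(t)⟩ dt. A minimal sketch of this post-processing step on a hypothetical heat-flux time series (the flux data, cell volume and units are assumptions, not output of the actual study) is shown below:

```python
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K

def autocorrelation(x, max_lag):
    """Average autocorrelation <x(0) x(t)> over time origins, up to max_lag frames."""
    n = len(x)
    return np.array([np.mean(x[:n - lag] * x[lag:]) for lag in range(max_lag)])

# Hypothetical inputs from an equilibrium MD run (SI units assumed).
rng = np.random.default_rng(8)
dt = 1.0e-15                               # time between flux samples, s
volume = 4.0e-26                           # simulation cell volume, m^3
T = 300.0                                  # temperature, K
J = rng.normal(0.0, 1.0e10, (100_000, 3))  # heat flux vector J(t), W/m^2 (mock data)

max_lag = 2000
acf = sum(autocorrelation(J[:, k], max_lag) for k in range(3))
kappa = volume / (3.0 * KB * T**2) * np.trapz(acf, dx=dt)
print(f"Green-Kubo thermal conductivity: {kappa:.3e} W/(m K)")
```

Because the prefactor contains the cell volume and the flux depends on the atomic positions, the sensitivity of κ to the chosen lattice parameter enters this expression directly.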

11:30 AM - TC05.03.10

New Insights on Nanoporous Gold by Mining and Analysis of Published Images

One way of expediting materials development is to decrease the need for new experiments by making greater use of published literature. We present an exercise in data mining to gather new insights on nanoporous gold (NPG) without conducting additional experiments or simulations. NPG is a three-dimensional porous network that has found applications in catalysis, sensing, and actuation. Using specially-developed, automated image analysis software, we mine published images from thousands of publications on NPG. These images allow us to determine scaling exponents and thermal activation energies for coarsening in NPGs. Surprisingly, our work also suggests that the sizes and aspect ratios of ligaments in NPG are not correlated, indicating that they may be independently tunable microstructure features in NPG. We also address uncertainty quantification and error estimation arising from analyzing low quality images in the literature.

11:45 AM - TC05.03.11

First-Principles Modeling and Experimental Validation of Lattice Heat Transport in Alkaline Chlorides and Lead Chalcogenides

Lattice thermal conductivity (κL) is a key quantity among the various properties of materials. A minimal κL enables the development of higher-efficiency thermoelectric materials and thermal barrier coatings. The Boltzmann transport equation combined with first-principles-computed phonon scattering rates is nowadays widely used to model κL in bulk crystals and their alloys, calling for careful examination of the methodology and comparison with high-quality experimental data. In this talk, we focus on validating the theoretical methodology against rigorously authenticated experimental data for carefully identified test systems. The two test systems adopted in our study are the alkaline chlorides (LiCl, NaCl, KCl and RbCl) and the lead chalcogenides (PbS, PbSe and PbTe). Despite the fact that both crystallize in the rock-salt structure, lead chalcogenides generally exhibit much stronger lattice anharmonicity than alkaline chlorides, offering the opportunity to test the capability of the methodology for broad application. In the validation process, we first synthesized high-quality single crystals and polycrystals to measure κL, confirming and validating previous experimental results. We then perform a comprehensive comparison between our theoretical calculations (using compressive sensing lattice dynamics) and experiments, as well as other existing software packages such as ShengBTE and Phono3py, to demonstrate the degree of fidelity between model predictions and experimental observations. To investigate the sensitivity and uncertainty in the theoretical modeling, we explore the effect of the lattice parameter, the exchange-correlation functional (particularly, hybrid functionals), higher-order interactions (beyond three-phonon processes) and temperature (phonon renormalization) on the theoretical predictions. We will also comment on the previously identified resonant scattering mechanism [Nat. Commun. 5, 3525 (2014)] for reducing lattice thermal conductivity.

The US Nuclear Energy Advanced Modeling and Simulation (NEAMS) program is developing a science-based, next-generation fuel performance modeling capability as part of its Fuel Product Line, in order to facilitate predictive modeling of nuclear fuel performance and assist the design and analysis of reactor systems. Critical experimental data are needed to validate MARMOT models, particularly on effective thermal transport, fracture mechanisms, grain growth kinetics and fission gas behavior. The fabrication of sintered fuel pellets with well-controlled microstructure is a prerequisite to establishing the correlation between microstructure features and fuel behavior in order to develop high-fidelity fuel performance models. In this talk, we will highlight recent advances in using field-assisted sintering technologies, specifically spark plasma sintering (SPS), to tailor and engineer the fuel matrix as the target system for validating MARMOT physics models. Thermal properties of the sintered fuel pellets were measured, and the effects of key microstructure characteristics, such as grain size, porosity, pore distribution and fuel stoichiometry, on thermal conductivity were probed. The experimental values are compared with the predicted local thermal conductivity across different microstructures in order to verify and validate the MARMOT thermal transport model. The uncertainties were also quantified by analyzing the sensitivity of microstructure variables and model input parameters, and their influence on effective thermal conductivity, using the DAKOTA code. It has been shown that the MARMOT heat transport model is fully capable of predicting the correct physical properties of the microstructure and the thermal conductivity of fuel materials.

2:00 PM - TC05.04.02

Algorithms for Distributed Multiscale Computation with Application to Modeling Energetics

Multiscale models are often extremely computationally demanding due to the repeated evaluation of costly models at each scale. In this presentation, we will discuss recent advancements in a distributed multiscale computational framework to reduce the computational cost of expensive multiscale models through the dynamic replacement of costly at-scale models with cheaper surrogate models constructed using Gaussian process regression. The approach relies on an error estimate provided by the surrogate model at new evaluation points to select between acceptance of the surrogate model prediction and the evaluation of the underlying at-scale model. When the at-scale model is evaluated, the result is used to retrain the surrogate model, thus allowing for improved predictions and fewer future at-scale model evaluations. The dynamic nature of the approach, involving unpredictable evaluation of cheap surrogate models or expensive at-scale models, presents challenges for the efficient utilization of resources on high performance computers. We will discuss strategies for the speculative evaluation of at-scale models on unoccupied computational resources, to reduce overall wall-clock time through prefetching of needed at-scale model results before they are explicitly required. We will demonstrate the application of these algorithms in the context of a challenging multiscale problem: modeling deformation of the energetic material 1,3,5-trinitrohexahydro-s-triazine (RDX). In the model, a continuum finite element solver acquires equation of state of the material through evaluation of a mesoscale dissipative particle dynamics model. Using our surrogate modeling strategies, we demonstrate reduction in overall computational cost of the model by several orders of magnitude, with controllable error. In addition, we will highlight recent work on incorporation of chemical reactivity into our model, a challenge due to its higher dimensionality and time-dependence, requiring statistical techniques.
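A minimal sketch of the surrogate-gating idea using a Gaussian process regressor (scikit-learn is used here for convenience; the framework, models and thresholds of the talk are not reproduced): the surrogate's predictive standard deviation at a new point decides whether to accept the surrogate prediction or to evaluate the expensive at-scale model and retrain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):
    """Stand-in for a costly at-scale model (e.g., a mesoscale DPD evaluation)."""
    return np.sin(3.0 * x) + 0.5 * x

rng = np.random.default_rng(9)
X_train = rng.uniform(0.0, 2.0, (5, 1))
y_train = expensive_model(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

tolerance = 0.05   # hypothetical acceptable predictive standard deviation
for x in rng.uniform(0.0, 2.0, (20, 1)):
    mean, std = gp.predict(x.reshape(1, -1), return_std=True)
    if std[0] > tolerance:
        # Surrogate not trusted here: call the at-scale model and retrain.
        y_new = expensive_model(x)[0]
        X_train = np.vstack([X_train, x.reshape(1, -1)])
        y_train = np.append(y_train, y_new)
        gp.fit(X_train, y_train)
    # else: accept mean[0] as the prediction at this point

print(f"at-scale model evaluations used: {len(y_train)} of {5 + 20} possible")
```

In the distributed setting described above, the decision would additionally trigger speculative prefetching of likely at-scale evaluations on idle compute resources.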

Integrated computational materials engineering (ICME) provides a powerful framework for predicting the properties and performance of new materials. ICME models rely heavily on CALPHAD (Calculation of Phase Diagrams) thermodynamics: models of chemical thermodynamics fit to experimental and ab initio data. Uncertainty in this underlying thermodynamic data can drive uncertainty in the CALPHAD thermodynamic models and in ICME models. Traditionally, however, once a CALPHAD database has been fit, the experimental data and the statistics of the fit are discarded and the CALPHAD database is taken as ground truth. By discarding this information, considerable value to the materials design process is lost.

At QuesTek Innovations, we are developing tools to incorporate Bayesian inference into CALPHAD thermodynamics and ICME models. By applying Bayesian inference to the development of CALPHAD databases, we can fit thermodynamic models to diverse data sources while quantifying the uncertainty of the resulting models due to the experimental data. Furthermore, because this quantified uncertainty comes in the form of probability distributions over thermodynamic model parameters, the uncertainty in CALPHAD thermodynamic models can be propagated through ICME process/structure and structure/property models. Understanding how thermodynamic model uncertainty affects predicted material properties enables us to identify experiments that have the potential to reduce this uncertainty and to design materials that are robust to this model uncertainty.

In this talk we will discuss the development of tools to apply Bayesian inference to materials design and the use of these tools in several case studies. The thermoelectric alloy PbTe-PbS provides a use case where experimental and ab initio thermodynamic data give conflicting views of the phase stability of the system. Using Bayesian inference, we can synthesize these different data sources and quantify how this synthesis drives uncertainty in particular thermodynamic model parameters. A second use case is the design of thermodynamic databases and ICME models for high entropy alloys (HEAs). Typical CALPHAD databases are fit to data for material systems with a main matrix element and a few alloying additions; however, HEAs contain five or more elements in roughly equal proportions, limiting the extent to which existing databases can be applied to them. By synthesizing ab initio data on solid-solution thermodynamics with experimental data on HEA phase stability, we can develop CALPHAD databases suitable for HEAs while maintaining, quantifying, and propagating the uncertainty in the database parameters through to HEA property models.
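As a minimal sketch of the Bayesian step (the thermodynamic model, data and priors below are hypothetical stand-ins, far simpler than a real CALPHAD assessment): a single regular-solution interaction parameter L is inferred from noisy mixing-enthalpy data by Metropolis sampling, and the posterior samples are propagated into a derived quantity, here the critical temperature of the miscibility gap, T_c = L / (2R).

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical mixing-enthalpy measurements H_mix(x) = L x (1 - x) + noise.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
h_obs = np.array([1550, 2780, 3620, 4150, 4300, 4180, 3590, 2750, 1480])  # J/mol
sigma = 150.0   # assumed measurement uncertainty, J/mol

def log_posterior(L):
    # Flat prior on L > 0; Gaussian likelihood on the enthalpy data.
    if L <= 0:
        return -np.inf
    residual = h_obs - L * x * (1.0 - x)
    return -0.5 * np.sum((residual / sigma) ** 2)

rng = np.random.default_rng(10)
L_current, logp = 15_000.0, log_posterior(15_000.0)
samples = []
for _ in range(20_000):
    L_new = L_current + rng.normal(0.0, 300.0)
    logp_new = log_posterior(L_new)
    if np.log(rng.random()) < logp_new - logp:
        L_current, logp = L_new, logp_new
    samples.append(L_current)

L_post = np.array(samples[5000:])
Tc_post = L_post / (2.0 * R)                # propagate to the miscibility-gap critical T
print(f"L = {L_post.mean():.0f} +/- {L_post.std():.0f} J/mol, "
      f"Tc = {Tc_post.mean():.0f} +/- {Tc_post.std():.0f} K")
```

The same pattern, with many more parameters and data types, underlies propagating CALPHAD parameter posteriors into downstream ICME process/structure and structure/property predictions.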

2:30 PM -

BREAK

3:30 PM - *TC05.04.04

Phase Field Models and Interfacial Evolution—A Critical Test of Simulation

Phase field models are widely used computational tools for modeling the evolution of the micro- and nano-structures of materials. It is now possible to employ X-ray tomography to record the temporal evolution of a microstructure in three dimensions. By using an experimentally measured microstructure as an initial condition in a phase field calculation, it is possible to compare the experimentally measured microstructure at a later time with that computed using the phase field method, thus providing a challenging test of the method. This approach will be illustrated using measurements of grain growth in polycrystalline materials. In addition, we shall discuss a method to use such a comparison as a means of determining difficult-to-measure materials parameters, such as the solute diffusivity in a liquid metal. The generalization of this approach to other materials parameters will be discussed.

The phase field method has grown dramatically in popularity over the past few decades as computational power has increased. Phase field models of microstructural evolution are implemented in several software packages intended for community use, as well as in a substantial number of bespoke research codes. This burgeoning collection of tools calls for a set of public benchmark problems to validate and verify new implementations. Inspired by the Micromagnetic Standard Problem suite, the Center for Hierarchical Materials Design (CHiMaD) hosts an ongoing series of workshops devoted to the development and exercise of such benchmarks. The goal is to test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. This talk will discuss our experience with developing tractable and informative benchmark problems. One challenge has been determining what data and metadata need to be collected without being too onerous for participants, while still allowing meaningful comparisons between codes using disparate numerical methods (finite difference, finite element, spectral, adaptive mesh, adaptive timestep, etc.). Benchmark problems examined have included spinodal decomposition, Ostwald ripening, dendritic growth, and heterogeneous elasticity. All problems have been exercised in a variety of geometries and with a variety of boundary conditions to effectively illustrate the capabilities of different codes.

In this presentation, we discuss the use of global sensitivity analysis (GSA) and uncertainty quantification (UQ) techniques to quantify the influence and uncertainty of parameters in a continuum phase-field model for polycrystalline ferroelectric materials. The former is necessary to statistically determine which parameters critically affect the response and hence must be calibrated, using either experimental data or synthetic data constructed from DFT simulations. Due to the inherent correlation between parameters in the energy functionals, standard global sensitivity analysis techniques, which rely on the assumption of mutually independent parameters, are not applicable. We demonstrate the implementation and interpretation of techniques that accommodate the correlated parameter structure. We subsequently discuss the use of Bayesian inference to quantify uncertainties inherent to the influential parameters. Finally, we demonstrate the construction of prediction intervals for responses by propagating parameter uncertainties through the models.

In this work we upscale surface tension, defined at the microscopic scale, to the macroscopic scale of a petroleum reservoir in order to numerically stabilize the coarse-grained equations for flow in porous media. Non-convergence during simulation of fluid flow through extremely heterogeneous porous media remains a significant limiting factor in petroleum reservoir models despite decades of sophisticated improvements to non-linear solvers. Often there is no choice but to significantly reduce the timestep to regain convergence, even when a fully implicit discretization is used. With the help of the Kantorovich theorem, it is shown that the restriction arises from limitations with Newton's method, which is not always guaranteed to converge. Previous efforts to alleviate these restrictions have focused on numerical methods, but here we focus on regularizing the equations themselves with the addition of an energy constraint. This is done with the introduction of a macroscopic surface tension introduced using the phase-field method.

The approach is demonstrated for the Buckley-Leverett equations, which model incompressible, immiscible, two-phase flow with no capillary potential. The equations are recast as a gradient flow using the phase-field method, and a convex energy splitting scheme is applied to enable large timesteps, even for high degrees of heterogeneity in permeability and viscosity. By using the phase-field formulation as a homotopy map, the underlying hyperbolic flow equations can be solved directly with large timesteps. As macroscopic surface tension is progressively taken to zero, the phase-field solution is continuously transformed into the underlying Buckley-Leverett solution. The homotopy method allows the timestep to be increased by more than six orders of magnitude relative to the unmodified equations while maintaining convergence for a 2D test problem. Results of the technique applied to SPE10, a numerically difficult comparative solution problem for petroleum reservoir simulation, will also be presented.