This book serves as a standard reference, making this area accessible not only to researchers in probability and statistics, but also to graduate students and practitioners. The book assumes only a first-year graduate course in probability. Each chapter begins with a brief overview and concludes with a wide range of exercises at varying levels of difficulty. The authors supply detailed hints for the more challenging problems, and cover many advances made in recent years.

This book examines non-Gaussian distributions. It addresses the causes and consequences of non-normality and time dependency in both asset returns and option prices. The book is written for non-mathematicians who want to model financial market prices, so the emphasis throughout is on practice. There are abundant empirical illustrations of the models and techniques described, many of which could equally be applied to other financial time series.

First derived within the context of life-testing, the inverse Gaussian distribution has become one of the most important and widely employed distributions, and is often used to model the lifetimes of components. It is also used as a model in many varied applications, including fatigue analysis, economic prediction analysis, and the analysis of extreme events such as rainfall and flood levels. The interesting features and properties of this distribution make it an important and realistic model in a variety of problems across numerous disciplines. Because of the broad range of applications, this handbook will be useful not only to members of the statistical community but will also appeal to applied scientists, engineers, econometricians, and anyone who desires a thorough evaluation of this important topic.
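As a small, hedged illustration of the kind of lifetime modelling described above (not code from the handbook), the sketch below fits an inverse Gaussian distribution to simulated component lifetimes with SciPy; the parameter values, sample size, and evaluation time are illustrative assumptions.

```python
# Minimal sketch: inverse Gaussian lifetime model with SciPy.
# All numeric values below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate "component lifetimes" from an inverse Gaussian distribution.
lifetimes = stats.invgauss.rvs(mu=0.5, scale=10.0, size=1000, random_state=rng)

# Fit by maximum likelihood; loc is fixed at 0, as is natural for lifetimes.
mu_hat, loc_hat, scale_hat = stats.invgauss.fit(lifetimes, floc=0)
print(f"fitted shape mu={mu_hat:.3f}, scale={scale_hat:.3f}")

# Compare an empirical and a fitted survival probability at a given time.
t = 5.0
empirical = (lifetimes > t).mean()
fitted = stats.invgauss.sf(t, mu_hat, loc=loc_hat, scale=scale_hat)
print(f"P(T > {t}): empirical {empirical:.3f}, fitted {fitted:.3f}")
```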

In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns. Think about the large amounts of data being collected today, by the likes of the NSA, Google, and other organizations. No human could possibly sift through all that data manually. It was reported recently in the Washington Post and Wall Street Journal that the National Security Agency collects so much surveillance data that it is no longer effective. Could automated pattern discovery solve this problem?

Do you ever wonder how we get the data that we use in our supervised machine learning algorithms? Kaggle always seems to provide us with a nice CSV, complete with Xs and corresponding Ys. If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data! A lot of the time this involves manual labor. Sometimes, you don’t have access to the correct information, or it is infeasible or costly to acquire. You still want to have some idea of the structure of the data. This is where unsupervised machine learning comes into play.

In this book we are first going to talk about clustering. This is where, instead of training on labels, we try to create our own labels. We’ll do this by grouping together data that looks alike. The two methods of clustering we’ll talk about are k-means clustering and hierarchical clustering. Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to learn the probability distribution of a set of data. One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! We’ll prove how this is the case. Lastly, we’ll look at the theory behind principal components analysis, or PCA. PCA has many useful applications: visualization, dimensionality reduction, denoising, and de-correlation. You will see how it allows us to take a different perspective on latent variables, which first appear when we talk about k-means clustering and GMMs.

All the algorithms we’ll talk about in this book are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this book is for you. All of the materials required to follow along in this book are free: you just need to be able to download and install Python, NumPy, SciPy, Matplotlib, and scikit-learn.
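As a minimal, hedged sketch of the first topic above (not code from the book), here is k-means clustering and a Gaussian mixture model run on synthetic unlabelled data with scikit-learn; the number of clusters and the toy data are illustrative assumptions.

```python
# Minimal sketch: clustering unlabelled data with scikit-learn.
# The three-blob toy data and the choice of k=3 are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Unlabelled data: three 2-D "blobs", with the true labels thrown away.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# k-means invents its own labels by grouping points that look alike.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("cluster centers:\n", kmeans.cluster_centers_)

# The same data through a Gaussian mixture model; with spherical,
# equal-weight components and vanishing variance, its assignments reduce
# to those of k-means -- the correspondence the book discusses.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print("GMM means:\n", gmm.means_)
```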

Branching Brownian motion (BBM) is a classical object in probability theory with deep connections to partial differential equations. This book highlights the connection to classical extreme value theory and to the theory of mean-field spin glasses in statistical mechanics. Starting with a concise review of classical extreme value statistics and a basic introduction to mean-field spin glasses, the author then focuses on branching Brownian motion. Here, the classical results of Bramson on the asymptotics of solutions of the F-KPP equation are reviewed in detail and applied to the recent construction of the extremal process of BBM. The extension of these results to branching Brownian motion with variable speed is then explained. As a self-contained exposition that is accessible to graduate students with some background in probability theory, this book offers a good introduction for anyone interested in entering this exciting field of mathematics.
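For orientation (a standard fact stated under the usual normalization of standard Brownian particles with binary branching at rate 1, not a quotation from the book), the link between BBM and the F-KPP equation is McKean's representation, and Bramson's result gives the centring of the maximum:

```latex
% McKean's representation: the law of the maximum of BBM solves F-KPP.
% Normalization assumed: standard Brownian motion, binary branching at rate 1.
\[
u(t,x) \;=\; \mathbb{P}\Bigl(\max_{k \le n(t)} x_k(t) > x\Bigr)
\quad\text{solves}\quad
\partial_t u \;=\; \tfrac{1}{2}\,\partial_x^2 u + u(1-u),
\qquad u(0,x) = \mathbf{1}_{\{x<0\}}.
\]
% Bramson's asymptotics: the front (median of the maximum) is centred at
\[
m(t) \;=\; \sqrt{2}\,t \;-\; \frac{3}{2\sqrt{2}}\,\log t \;+\; O(1).
\]
```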

Physical Sciences Data, Volume 16: Gaussian Basis Sets for Molecular Calculations provides information pertinent to Gaussian basis sets, with emphasis on lithium, radon, and important ions. This book discusses the polarization functions prepared for lithium through radon for further improvement of the basis sets. Organized into three chapters, this volume begins with an overview of the basis sets for the most stable negative and positive ions. This text then explores the total atomic energies given by the basis sets. Other chapters consider the distinction between diffuse functions and polarization functions. This book also presents the exponents of the polarization functions. The final chapter deals with the Gaussian basis sets. This book is a valuable resource for chemists, scientists, and research workers.

Gaussian Markov Random Field (GMRF) models are most widely used in spatial statistics - a very active area of research in which few up-to-date reference works are available. This is the first book on the subject that provides a unified framework of GMRFs with particular emphasis on the computational aspects. The book includes extensive case studies and, online, a C library for fast and exact simulation. With chapters contributed by leading researchers in the field, this volume is essential reading for statisticians working in spatial theory and its applications, as well as quantitative researchers in a wide range of scientific fields where spatial data analysis is important.
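As a small, hedged sketch of what "fast and exact simulation" of a GMRF means in practice (not the book's own C library), the following draws an exact sample by factoring the sparse-structured precision matrix; the AR(1)-type precision and its parameter values are illustrative assumptions.

```python
# Minimal sketch: exact GMRF simulation via the Cholesky factor of the
# precision matrix Q. The stationary AR(1) precision and the parameter
# values (kappa, rho) are illustrative assumptions.
import numpy as np

n = 200
kappa, rho = 1.0, 0.9  # illustrative precision scale and dependence

# Tridiagonal (sparse-structured) precision matrix of a stationary AR(1) GMRF:
# diagonal kappa*(1 + rho^2) in the interior, kappa at the two ends,
# off-diagonal entries -kappa*rho.
Q = np.zeros((n, n))
i = np.arange(n)
Q[i, i] = kappa * (1.0 + rho**2)
Q[0, 0] = Q[-1, -1] = kappa
Q[i[:-1], i[:-1] + 1] = Q[i[:-1] + 1, i[:-1]] = -kappa * rho

# Exact simulation: with Q = L L^T and z ~ N(0, I), solving L^T x = z
# gives x ~ N(0, Q^{-1}).
rng = np.random.default_rng(0)
L = np.linalg.cholesky(Q)
z = rng.standard_normal(n)
x = np.linalg.solve(L.T, z)
print("lag-1 correlation (should be close to rho):",
      np.corrcoef(x[:-1], x[1:])[0, 1])
```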

Simulation of materials at the atomistic level is an important tool in studying microscopic structures and processes. The atomic interactions necessary for the simulations are correctly described by quantum mechanics, but the size of systems and the length of processes that can be modelled are still limited. The framework of Gaussian Approximation Potentials developed in this thesis allows us to generate interatomic potentials automatically, based on quantum mechanical data. The resulting potentials offer computations that are several orders of magnitude faster, while maintaining quantum mechanical accuracy. The method has already been successfully applied to semiconductors and metals.

Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables. Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high-dimensional data and variable selection. The remainder of the text explores advanced topics of functional regression analysis, including novel nonparametric statistical methods for curve prediction, curve clustering, functional ANOVA, and functional regression analysis of batch data, repeated curves, and non-Gaussian data. Many flexible models based on Gaussian processes provide efficient ways of learning the model, interpreting model structure, and carrying out inference, particularly when dealing with high-dimensional functional data. This book shows how to use these Gaussian process regression models in the analysis of functional data. Some MATLAB® and C codes are available on the first author’s website.
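As a minimal, generic sketch of the basic technique named above (using scikit-learn rather than the authors' MATLAB/C code), the example below fits a Gaussian process regression to one noisy curve and predicts it on a fine grid; the kernel choice and the toy data are illustrative assumptions.

```python
# Minimal sketch: Gaussian process regression for curve prediction.
# The sine-curve toy data and the RBF + white-noise kernel are
# illustrative assumptions, not the book's models or code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Noisy observations of an underlying functional response at scattered points.
t = np.sort(rng.uniform(0, 1, 40))[:, None]
y = np.sin(2 * np.pi * t).ravel() + 0.1 * rng.standard_normal(len(t))

# GP prior with a squared-exponential kernel plus an observation-noise term.
kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

# Posterior mean and pointwise uncertainty on a fine grid (curve prediction).
grid = np.linspace(0, 1, 200)[:, None]
mean, std = gpr.predict(grid, return_std=True)
print(f"posterior mean at t~0.25: {mean[50]:.3f} +/- {std[50]:.3f}")
```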

The principal focus here is on autoregressive moving average models and analogous random fields, with probabilistic and statistical questions also being discussed. The book contrasts Gaussian models with noncausal or noninvertible (nonminimum-phase) non-Gaussian models and deals with problems of prediction and estimation. New results for nonminimum-phase non-Gaussian processes are presented and open questions are noted. Intended as a text for graduate students in statistics, mathematics, engineering, the natural sciences, and economics, it requires only an initial background in probability theory and statistics. Notes on background, history, and open problems are given at the end of the book.