Abstract. In an ongoing push to construct probabilistic extensions of classic ODE solvers for application in statistics and machine learning, two recent papers have provided distinct methods that return probability measures instead of point estimates, based on sampling and filtering respectively.
While both approaches leverage classical numerical analysis by building on well-studied, seminal solvers, their different constructions of probability measures strike divergent balances between a formal quantification of epistemic uncertainty and a low computational overhead.

On the one hand, Conrad et al. proposed to randomise existing non-probabilistic one-step solvers by adding suitably scaled Gaussian noise after every step. This induces a probability measure over the solution space of the ODE that contracts to a Dirac measure on the true, unknown solution at the convergence order of the underlying classic numerical method.
The computational cost of these methods, however, is significantly higher than that of classic solvers.
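This randomisation can be sketched for the simplest case, an explicit Euler solver perturbed after every step. The function names, the noise scale, and the per-step standard deviation scale·h^(p+1/2) (chosen so the per-step noise matches the local error order, here p = 1) are illustrative assumptions, not code from the paper:

```python
import numpy as np

def randomised_euler(f, x0, t_grid, scale=0.1, p=1.0, rng=None):
    """One sample path of a randomised explicit Euler solver: after each
    deterministic step, zero-mean Gaussian noise with standard deviation
    scale * h**(p + 0.5) is added, so the perturbation stays at the
    local error order of the underlying method (Euler: p = 1)."""
    rng = np.random.default_rng(rng)
    xs = [np.atleast_1d(np.asarray(x0, dtype=float))]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        x = xs[-1] + h * f(t0, xs[-1])                      # classic Euler step
        x = x + rng.normal(0.0, scale * h**(p + 0.5), size=x.shape)
        xs.append(x)
    return np.stack(xs)

# dx/dt = -x, x(0) = 1: many sample paths approximate the induced measure.
t = np.linspace(0.0, 1.0, 21)
paths = np.stack([randomised_euler(lambda t, x: -x, 1.0, t, rng=i)
                  for i in range(200)])
mean_end = paths[:, -1, 0].mean()   # near exp(-1), up to the O(h) Euler bias
```

As the step size shrinks, both the Euler bias and the injected noise vanish, so the sample paths concentrate on the true solution at the method's convergence order.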

On the other hand, Schober et al. recast the estimation of the solution as state estimation by a Gaussian (Kalman) filter and proved that employing an integrated Wiener process prior returns a posterior Gaussian process whose maximum likelihood (ML) estimate matches the solution of classic Runge–Kutta methods.
In an attempt to amend this method's rough uncertainty calibration while sustaining its negligible cost overhead, we propose a novel way to quantify uncertainty in this filtering framework by probing the gradient using Bayesian quadrature.
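The filtering construction can be sketched with a once-integrated Wiener process prior on the solution, where the gradient is "observed" by evaluating the vector field at the predicted mean. All names and the diffusion scale q below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def iwp_ode_filter(f, x0, t_grid, q=1.0):
    """Gaussian (Kalman) ODE filter with a once-integrated Wiener process
    prior. The state is m = (x, dx/dt); at every step the gradient is
    'observed' by evaluating f at the predicted mean. Returns posterior
    means and variances of x on the grid."""
    m = np.array([float(x0), float(f(t_grid[0], x0))])   # exact initial slope
    P = np.zeros((2, 2))
    means, variances = [m[0]], [P[0, 0]]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        A = np.array([[1.0, h], [0.0, 1.0]])             # IWP transition
        Q = q * np.array([[h**3 / 3, h**2 / 2],
                          [h**2 / 2, h]])                # IWP process noise
        m_pred, P_pred = A @ m, A @ P @ A.T + Q          # predict
        z = f(t1, m_pred[0]) - m_pred[1]                 # gradient residual
        K = P_pred[:, 1] / P_pred[1, 1]                  # Kalman gain
        m = m_pred + K * z                               # update
        P = P_pred - np.outer(K, P_pred[1, :])
        means.append(m[0]); variances.append(P[0, 0])
    return np.array(means), np.array(variances)

means, variances = iwp_ode_filter(lambda t, x: -x, 1.0, np.linspace(0.0, 1.0, 21))
```

With this once-integrated prior, each filter step reproduces a second-order Runge–Kutta update, while the posterior variance accumulates a (crudely calibrated) uncertainty estimate at essentially the cost of the classic solver.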

Abstract. This paper develops a class of meshless methods that are well-suited to statistical inverse problems involving partial differential equations (PDEs). The methods discussed in this paper view the forcing term in the PDE as a random field that induces a probability distribution over the residual error of a symmetric collocation method. This construction enables the solution of challenging inverse problems while accounting, in a rigorous way, for the impact of the discretisation of the forward problem. In particular, this confers robustness to failure of meshless methods, with statistical inferences driven to be more conservative in the presence of significant solver error. In addition, (i) a principled learning-theoretic approach to minimise the impact of solver error is developed, and (ii) the challenging setting of inverse problems with a non-linear forward model is considered. The method is applied to parameter inference problems in which non-negligible solver error must be accounted for in order to draw valid statistical conclusions.
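The deterministic core of such a method can be sketched as symmetric collocation for a one-dimensional Poisson problem with a squared-exponential kernel; a probabilistic version along the lines described above would additionally place a random field on the forcing term. The kernel, grid size, length-scale, and all names are illustrative assumptions, not the construction of the paper:

```python
import numpy as np

def solve_poisson_collocation(g, n_colloc=10, ell=0.2, x_test=None):
    """Symmetric collocation for -u''(x) = g(x) on [0, 1], u(0) = u(1) = 0,
    with a squared-exponential kernel. Closed-form kernel derivatives give
    the covariances between u and the PDE residual -u''."""
    if x_test is None:
        x_test = np.linspace(0.0, 1.0, 101)
    xc = np.linspace(0.0, 1.0, n_colloc + 2)[1:-1]      # interior collocation
    xb = np.array([0.0, 1.0])                           # boundary points

    k = lambda r: np.exp(-r**2 / (2 * ell**2))          # SE kernel, r = x - y
    Ak = lambda r: k(r) * (1 / ell**2 - r**2 / ell**4)  # (-d2/dy2) k
    AAk = lambda r: k(r) * (3 / ell**4 - 6 * r**2 / ell**6 + r**4 / ell**8)
    d = lambda a, b: a[:, None] - b[None, :]

    # Gram matrix over [boundary values; interior residuals of -u'']
    G = np.block([[k(d(xb, xb)),  Ak(d(xb, xc))],
                  [Ak(d(xc, xb)), AAk(d(xc, xc))]])
    rhs = np.concatenate([np.zeros(2), g(xc)])
    w = np.linalg.solve(G + 1e-8 * np.eye(len(rhs)), rhs)
    K_star = np.hstack([k(d(x_test, xb)), Ak(d(x_test, xc))])
    return x_test, K_star @ w

# Manufactured solution u(x) = sin(pi x), so the forcing is pi^2 sin(pi x).
x, u = solve_poisson_collocation(lambda x: np.pi**2 * np.sin(np.pi * x))
```

Because the collocation residual is only enforced at finitely many points, discretisation error remains between the nodes; it is precisely this residual that the random-field construction in the paper endows with a probability distribution.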

Abstract. This article extends the framework of Bayesian inverse problems in infinite-dimensional parameter spaces, as advocated by Stuart (Acta Numer. 19:451–559, 2010) and others, to the case of a heavy-tailed prior measure in the family of stable distributions, such as an infinite-dimensional Cauchy distribution, for which polynomial moments are infinite or undefined.
It is shown that analogues of the Karhunen–Loève expansion for square-integrable random variables can be used to sample such measures.
Furthermore, under weaker regularity assumptions than those used to date, the Bayesian posterior measure is shown to depend Lipschitz continuously in the Hellinger metric upon perturbations of the misfit function and observed data.
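A truncated version of such a Karhunen–Loève-type sampling scheme can be sketched as follows; the sine basis, the coefficient decay rate, and all names are illustrative assumptions rather than the construction used in the article:

```python
import numpy as np

def sample_cauchy_field(x, n_terms=100, decay=2.0, rng=None):
    """Draw one realisation of a heavy-tailed random field on [0, 1] via a
    truncated series u(x) = sum_k gamma_k * xi_k * phi_k(x), with i.i.d.
    standard Cauchy coefficients xi_k, decay gamma_k = k**(-decay), and a
    sine basis phi_k. Unlike the Gaussian case, the xi_k have no finite
    moments, so convergence of the series holds only almost surely."""
    rng = np.random.default_rng(rng)
    k = np.arange(1, n_terms + 1)
    xi = rng.standard_cauchy(n_terms)                     # heavy-tailed draws
    gamma = k.astype(float) ** (-decay)                   # coefficient decay
    phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))   # (n_terms, len(x))
    return (gamma * xi) @ phi

x = np.linspace(0.0, 1.0, 101)
u = sample_cauchy_field(x, rng=0)
```

Each sample is a finite, pointwise-defined function even though the underlying measure has no mean; occasional very large Cauchy coefficients produce the heavy-tailed behaviour that distinguishes these priors from their Gaussian counterparts.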