Paper

Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

T Bui-Thanh and M Girolami

Published 28 October 2014 © 2014 IOP Publishing Ltd
Citation: T Bui-Thanh and M Girolami 2014 Inverse Problems 30 114014. DOI: 10.1088/0266-5611/30/11/114014


Abstract

We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss–Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss–Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.


1. Introduction

Inverse problems are ubiquitous in science and engineering. Perhaps the most popular family of inverse problems is to determine a set of parameters (or a function) given a set of indirect observations, which are in turn provided by a parameter-to-observable map plus observation uncertainties. For example, if one considers the problem of determining the heat conductivity of a thermal fin given measured temperature at a few locations on the thermal fin, then: (i) the desired unknown parameter is the distributed heat conductivity, (ii) the observations are the measured temperatures, (iii) the parameter-to-observable map is the mathematical model that describes the temperature on the thermal fin as a function of the heat conductivity; indeed the temperature distribution is a solution of an elliptic partial differential equation (PDE) whose coefficient is the heat conductivity, and (iv) the observation uncertainty is due to the imperfection of the measurement device and/or model inadequacy.

The Bayesian inversion framework refers to a mathematical method that allows one to solve statistical inverse problems taking into account all uncertainties in a systematic and coherent manner. The Bayesian approach does this by reformulating the inverse problem as a problem in statistical inference, incorporating uncertainties in the observations, the parameter-to-observable map, and prior information on the parameter. In particular, we seek a statistical description of all possible (set of) parameters that conform to the available prior knowledge and at the same time are consistent with the observations. The solution of the Bayesian framework is the so-called posterior measure that encodes the degree of confidence on each set of parameters as the solution to the inverse problem under consideration.

Mathematically, the posterior defines a density surface over a high dimensional parameter space. The task at hand is therefore to explore the posterior by, for example, characterizing the mean, the covariance, and/or higher moments. The nature of this task is to compute high dimensional integrals, which is intractable for most contemporary methods. Perhaps the most general method to attack these problems is the Markov chain Monte Carlo (MCMC) method, which shall be introduced in subsequent sections.

Let us now summarize the content of the paper. We start with the description of the statistical inverse problem under consideration in section 2. It is an inverse steady state heat conduction problem governed by an elliptic PDE. We postulate a Gaussian measure prior on the parameter space to ensure that the inverse problem is well-defined. The prior itself is a well-defined object whose covariance operator is the inverse of an elliptic differential operator and whose mean function lives in the Cameron–Martin space of the covariance. The posterior is given by its Radon–Nikodym derivative with respect to the prior measure, which is proportional to the likelihood. Since the Riemann manifold Hamiltonian Monte Carlo (RMHMC) simulation method requires the gradient, the Hessian, and the derivative of the Fisher information operator, we discuss, in some depth, how to compute the derivatives of the potential function (the misfit functional) with PDE constraints efficiently using the adjoint technique in section 3. In particular, we define a Fisher information operator and show that it coincides with the well-known Gauss–Newton Hessian of the misfit. We next present a discretization scheme for the infinite dimensional Bayesian inverse problem in section 4. Specifically, we employ a standard continuous H1-conforming finite element method (FEM) to discretize both the likelihood and the Gaussian prior. We choose to numerically compute the truncated Karhunen–Loève (KL) expansion, which requires one to solve an eigenvalue problem with a fractional Laplacian. In order to accomplish this task, we use a matrix transfer technique (MTT) which leads to a natural discretization of the Gaussian prior measure. In section 5, we describe the RMHMC method and its variants at length, and their application to our Bayesian inverse problem. Section 6 presents a low rank approach to approximate the Fisher information matrix and its inverse efficiently. This is possible due to the fact that the Gauss–Newton Hessian, and hence the Fisher information operator, is a compact operator. Various numerical results supporting our proposed approach are presented in section 7. We begin this section with an extensive study and comparison of Riemannian manifold MCMC methods for problems with two parameters, and end the section with a 1025-parameter problem. Finally, we conclude the paper in section 8 with a discussion on future work.

2. Problem statement

In order to clearly illustrate the challenges arising in PDE-constrained inverse problems for MCMC based Bayesian inference, we consider the following heat conduction problem governed by an elliptic PDE in the open and bounded domain $\Omega \subset {{\mathbb{R}}^{n}}$

where $w$ is the forward state, $u$ the logarithm of distributed thermal conductivity on Ω, ${\bf n}$ the unit outward normal on ∂Ω, and Bi the Biot number.

In the forward problem, the task is to solve for the temperature distribution $w$ given a description of the distributed parameter $u$. In the inverse problem, the task is to reconstruct $u$ given some available observations, e.g., temperature observed at some parts/locations of the domain Ω. We initially choose to cast the inverse problem in the framework of PDE-constrained optimization. To begin, let us consider the following additive noise-corrupted pointwise observation model (see footnote 3)

Equation (1): ${{d}_{j}}=w\left( {{{\bf x}}_{j}} \right)+{{\eta }_{j}},\quad j=1,\ldots ,K,$

where K is the total number of observation locations, $\left\{ {{{\bf x}}_{j}} \right\}_{j=1}^{K}$ the set of points at which $w$ is observed, ηj the additive noise, and ${{d}_{j}}$ the actual noise-corrupted observations. In this paper we work with synthetic observations and hence there is no model inadequacy in (1). Concatenating all the observations, one can rewrite (1) as

Equation (2): ${\bf d}=\mathcal{G}\left( u \right)+\eta ,$

with $\mathcal{G}:={{[w\left( {{{\bf x}}_{1}} \right),\ldots ,w\left( {{{\bf x}}_{K}} \right)]}^{T}}$ denoting the map from the distributed parameter $u$ to the noise-free observables, $\eta $ being a Gaussian random vector distributed as $\mathcal{N}\left( 0,{\bf L} \right)$ with bounded covariance matrix ${\bf L},$ and ${\bf d}={{[{{d}_{1}},\ldots ,{{d}_{K}}]}^{T}}$. For simplicity, we take ${\bf L}={{\sigma }^{2}}{\bf I}$, where ${\bf I}$ is the identity matrix.
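As a concrete illustration (our own sketch, not code from the paper), synthetic observations of the form (2) with ${\bf L}={{\sigma }^{2}}{\bf I}$ can be generated as follows; here `forward_map` is a hypothetical placeholder for the PDE-based parameter-to-observable map $\mathcal{G}$.

```python
# Sketch: synthetic observations d = G(u) + eta with eta ~ N(0, sigma^2 I).
# `forward_map` is a hypothetical stand-in for the PDE-based map G.
import numpy as np

def observe(forward_map, u, sigma, rng=np.random.default_rng(0)):
    g = forward_map(u)                          # noise-free observables [w(x_1), ..., w(x_K)]
    eta = sigma * rng.standard_normal(g.shape)  # additive Gaussian noise
    return g + eta                              # noise-corrupted data d
```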

Our inverse problem can be now formulated as

Equation (3)

subject to

Equation (4a)

Equation (4b)

Equation (4c)

where ${{\left| \cdot \right|}_{{\bf L}}}:=\left| {{{\bf L}}^{-\frac{1}{2}}}\cdot \right|$ denotes the weighted Euclidean norm induced by the canonical inner product $\left( \cdot ,\cdot \right)$ in ${{\mathbb{R}}^{K}}$. This optimization problem is, however, ill-posed. An intuitive reason is that the dimension of the observations ${\bf d}$ is much smaller than that of the parameter $u$, and hence they provide limited information about the distributed parameter $u$. As a result, the null space of the Jacobian of the parameter-to-observable map $\mathcal{G}$ is non-trivial. Indeed, we have shown that the Gauss–Newton approximation of the Hessian (which is the square of this Jacobian, and is also equal to the full Hessian of the data misfit $\mathcal{J}$ evaluated at the optimal parameter) is a compact operator [13], and hence its range space is effectively finite-dimensional.

One way to overcome the ill-posedness is to use Tikhonov regularization (see, e.g., [4]), which proposes to augment the cost functional (3) with a quadratic term, i.e.

Equation (5)

where κ is a regularization parameter, R some regularization operator, and $\left\|\cdot \right\|$ some appropriate norm. This method is a representative of deterministic inverse solution techniques that typically do not take into account the randomness due to measurements and other sources, though one can equip the deterministic solution with a confidence region by post-processing (see, e.g., [5] and references therein). It should be pointed out that if the regularization term is replaced by the Cameron–Martin norm of $u$ (the second term in (7)), the Tikhonov solution is in fact identical to the maximum a posteriori (MAP) point in (7). However, such a point estimate is insufficient for the purpose of fully taking the randomness into account.

In this paper, we choose to tackle the ill-posedness using a Bayesian framework [6–10]. We seek a statistical description of all possible $u$ that conform to some prior knowledge and at the same time are consistent with the observations. The Bayesian approach does this by reformulating the inverse problem as a problem in statistical inference, incorporating uncertainties in the observations, the forward map $\mathcal{G}$, and prior information. This approach is appealing since it can incorporate most, if not all, kinds of randomness in a systematic manner. To begin, we postulate a Gaussian measure $\mu :=\mathcal{N}\left( {{u}_{0}},{{\alpha }^{-1}}\mathcal{C} \right)$ on $u$ in ${{L}^{2}}\left( \Omega \right)$ where

with the domain of definition

where ${{H}^{2}}\left( \Omega \right)$ is the usual Sobolev space. Assuming that the mean function ${{u}_{0}}$ lives in the Cameron–Martin space of $\mathcal{C}$, one can show (see [9]) that the measure μ is well-defined when $s\gt n/2$ (n being the spatial dimension), and in that case, any realization from the prior distribution μ is almost surely in the Hölder space $X:={{C}^{0,\;\beta }}\left( \Omega \right)$ with $0\lt \beta \lt s/2$. That is, $\mu \left( X \right)=1$, and the Bayesian posterior measure ν satisfies the Radon–Nikodym derivative

Equation (6)

if $\mathcal{G}$ is a continuous map from X to ${{\mathbb{R}}^{K}}$. Note that the Radon–Nikodym derivative is proportional to the likelihood defined by

The MAP point is defined as

Equation (7)

where ${{\left\| \cdot \right\|}_{\mathcal{C}}}:=\left\| {{\mathcal{C}}^{-\frac{1}{2}}}\cdot \right\|$ denotes the weighted ${{L}^{2}}\left( \Omega \right)$ norm induced by the ${{L}^{2}}\left( \Omega \right)$ inner product $\left\langle \cdot ,\cdot \right\rangle $.
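Schematically (a sketch of ours, not the paper's implementation), the MAP point can be computed with a quasi-Newton optimizer once the discretized negative log posterior and its adjoint-based gradient (section 3) are available; `neg_log_post_and_grad` below is a hypothetical routine returning both.

```python
# Minimal sketch of computing the MAP point (7) with L-BFGS, assuming a routine
# that returns the negative log posterior (misfit plus Cameron-Martin term) and
# its gradient obtained with the adjoint method of section 3.
import numpy as np
from scipy.optimize import minimize

def compute_map(neg_log_post_and_grad, u_init):
    result = minimize(neg_log_post_and_grad, u_init, jac=True, method='L-BFGS-B')
    return result.x   # MAP estimate of the discretized parameter u
```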

3. Adjoint computation of gradient, Hessian, and the third derivative tensor

In this section, we briefly present the adjoint method to efficiently compute the gradient, Hessian, and the third derivative of the cost functional (3). We start by considering the weak form of the (first order) forward equation (4):

Equation (8)

with $\hat{\lambda }$ as the test function. Using the standard reduced space approach (see, e.g., a general discussion in [11] and a detailed derivation in [12]) one can show that the gradient $\nabla \mathcal{J}\left( u \right)$, namely the Fréchet derivative of the cost functional $\mathcal{J}$, acting in any direction $\tilde{u}$ is given by

Equation (9)

where the (first order) adjoint state λ satisfies the adjoint equation

Equation (10)

with $\hat{w}$ as the test function. On the other hand, the Hessian, the Fréchet derivative of the gradient, acting in directions $\tilde{u}$ and ${{u}^{2}}$ (superscript '2' means the second variation direction) reads

Equation (11)

where the second order forward state ${{w}^{2}}$ obeys the second order forward equation

Equation (12)

and the second order adjoint state ${{\lambda }^{2}}$ is governed by the second order adjoint equation

Equation (13)

We define the generalized Fisher information operator (see footnote 4) acting in directions $\tilde{u}$ and ${{u}^{2}}$ as

Equation (14)

where the expectation is taken with respect to the likelihood—the distribution of the observation ${\bf d}$. Now, substituting (11) into (14) and assuming that the integrals/derivatives can be interchanged we obtain

where we have used the assumption that the parameter $u$ is independent of the observation ${\bf d}$ and the fact that $w$ and ${{w}^{2}}$ do not depend on ${\bf d}$. The next step is to compute $\nabla {{\mathbb{E}}_{{{\pi }_{{\rm like}}}\left( {\bf d}|u \right)}}[\lambda ]$ and ${{\mathbb{E}}_{{{\pi }_{{\rm like}}}\left( {\bf d}|u \right)}}[{{\lambda }^{2}}]$. To begin, let us take the expectation of the first order adjoint equation (10) with respect to ${{\pi }_{{\rm like}}}\left( {\bf d}|u \right)$ to arrive at

where the second equality is obtained from (1) and the assumption ${{\eta }_{j}}\sim \mathcal{N}\left( 0,{{\sigma }^{2}} \right)$. We conclude that

Equation (15)

On the other hand, if we take the expectation of the second order adjoint equation (13) and use (15) we have

Equation (16)

Let us define

then (16) becomes

Equation (17)

As a result, the Fisher information operator acting along directions $\tilde{u}$ and ${{u}^{2}}$ reads

Equation (18)

where $\tilde{{{\lambda }^{2}}}$ is the solution of (17), a variant of the second order adjoint equation (13). The Fisher information operator therefore coincides with the Gauss–Newton Hessian of the cost functional (3).

The procedure for computing the gradient acting on an arbitrary direction is clear. One first solves the first order forward equation (8) for $w$, then the first order adjoint (10) for λ, and finally evaluates (9). Similarly, one can compute the Hessian (or the Fisher information operator) acting on two arbitrary directions by first solving the second order forward equation (12) for ${{w}^{2}}$, then the second order adjoint equation (13) (or its variant (17)) for ${{\lambda }^{2}}$ (or $\tilde{{{\lambda }^{2}}}$), and finally evaluating (11) (or (18)).
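This solve pattern can be summarized schematically as follows. The callbacks `solve_forward`, `solve_adjoint`, `solve_incr_forward`, `solve_incr_adjoint` and the assembly routines are hypothetical placeholders of our own; each `solve_*` call stands for one PDE solve.

```python
# Schematic of the adjoint-based gradient and Gauss-Newton (Fisher) Hessian action;
# each solve_* callback is a placeholder for one PDE solve.

def gradient(u, solve_forward, solve_adjoint, assemble_gradient):
    w = solve_forward(u)                    # first order forward equation (8)
    lam = solve_adjoint(u, w)               # first order adjoint equation (10)
    return assemble_gradient(u, w, lam)     # evaluate (9)

def fisher_action(u, u2, solve_forward, solve_incr_forward,
                  solve_incr_adjoint, assemble_fisher):
    w = solve_forward(u)                    # can be cached and reused for every direction u2
    w2 = solve_incr_forward(u, w, u2)       # second order forward equation (12)
    lam2 = solve_incr_adjoint(u, w, w2)     # variant second order adjoint equation (17)
    return assemble_fisher(u, w, w2, lam2)  # evaluate (18): two extra PDE solves per action
```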

One of the main goals of the paper is to study the RMHMC method in the context of Bayesian inverse problems governed by PDEs. It is therefore essential to compute the derivative of the Fisher information operator. This task is obvious for problems with available closed form expressions of the likelihood and the prior, but it is not so for those governed by PDEs. Nevertheless, using the adjoint technique we can compute the third order derivative tensor acting on three arbitrary directions with three extra PDE solves, as we now show. To that end, recall that the Fisher information operator acting on directions $\tilde{u}$ and ${{u}^{2}}$ is given by (18). The Fréchet derivative of the Fisher information operator along the additional direction ${{u}^{3}}$ (superscript '3' means the third variation direction) is given by

Equation (19)

where ${{w}^{3}}$, ${{\lambda }^{2,3}}$ are the variation of $w$ and $\tilde{{{\lambda }^{2}}}$ in the direction ${{u}^{3}}$, respectively. One can show that ${{w}^{3}}$ satisfies another second order forward equation

Equation (20)

Similarly, ${{\lambda }^{2,3}}$ is the solution of the third order adjoint equation

Equation (21)

and ${{w}^{2,3}}$, the variation of ${{w}^{2}}$ in direction ${{u}^{3}}$, satisfies the following third order forward equation

Equation (22)

Note that computing the third derivative of the full Hessian (11) would have required four extra PDE solves.

It is important to point out that the operator $T$ is only symmetric with respect to $\tilde{u}$ and ${{u}^{2}}$ since the Fisher information is symmetric, but not with respect to $\tilde{u}$ and ${{u}^{3}}$ or ${{u}^{2}}$ and ${{u}^{3}}$. The full symmetry only holds for the derivative of the full Hessian, that is, the true third derivative of the cost functional.

4. Discretization

As presented in section 2, we view our inverse problem from an infinite dimensional point of view. As such, to implement our approach on computers, we need to discretize the prior, the likelihood, and hence the posterior. We choose to use the FEM. In particular, we employ the standard ${{H}^{1}}\left( \Omega \right)$ FEM to discretize the forward and adjoint equations (the likelihood) and the operator $\mathcal{A}$ (the prior). It should be pointed out that the Cameron–Martin space can be shown (see, e.g., [9]) to be a subspace of the usual fractional Sobolev space ${{H}^{s}}\left( \Omega \right)$, which is in turn a subspace of ${{H}^{1}}\left( \Omega \right)$. Thus, we are using a non-conforming FEM approach (outer approximation). For convenience, we further assume that the discretized state and parameter live on the same finite element mesh. Since FEM approximation of elliptic operators is standard (see, e.g., [13]), we will not discuss it here. Instead, we describe the MTT (see, e.g., [14] and the references therein) to discretize the prior.

Define $\mathcal{Q}:=\;{{\mathcal{C}}^{1/2}}={{\mathcal{A}}^{-s/2}}$, then the eigenpairs $\left( {{\lambda }_{i}},{{v}_{i}} \right)$ of $\mathcal{Q}$ define the KL expansion of the prior distribution as

where ${{a}_{i}}\sim \mathcal{N}\left( 0,1 \right)$. We need to solve

or equivalently

Equation (23)

To solve (23) using the MTT, let us denote by ${\bf M}$ the mass matrix, and ${\bf K}$ the stiffness matrix resulting from the discretization of the Laplacian Δ. The representation of $\mathcal{A}$ in the finite element space (see, e.g., [15] and the references therein) is given by

Let bold symbols denote the corresponding vector of FEM nodal values, e.g., ${\bf u}$ is the vector containing all FEM nodal values of $u$. If we define $\left( {{\sigma }_{i}},{{{\bf v}}_{i}} \right)$ as eigenpairs for ${\bf A},$ i.e.

Equation (24)

where ${\bf v}_{i}^{T}{\bf M}{{{\bf v}}_{j}}={{\delta }_{ij}}$ (${{\delta }_{ij}}$ being the Kronecker delta), hence ${{{\bf V}}^{-1}}={{{\bf V}}^{T}}{\bf M}$, and $\Sigma $ is the diagonal matrix with entries ${{\sigma }_{i}}$. Since ${\bf A}$ is similar to ${{{\bf M}}^{-\frac{1}{2}}}\left( {\bf K}+{\bf M} \right){{{\bf M}}^{-\frac{1}{2}}}$, a symmetric positive definite matrix, ${\bf A}$ has positive eigenvalues. Using the MTT, the matrix representation of (23) reads

where

It follows that

The Galerkin FEM approximation of the prior via truncated KL expansion reads

Equation (25)

with ${\bf u}$ as the vector of FEM nodal values of the approximate prior sample $u$ and N as the number of FEM nodal points. Note that, for ease of writing, we have used the same notation $u$ for both the infinite dimensional prior sample and its FEM approximation. Since $u\in {{L}^{2}}\left( \Omega \right)$, ${\bf u}$ naturally lives in $\mathbb{R}_{{\bf M}}^{N}$, the Euclidean space with weighted inner product ${{\left( \cdot ,\cdot \right)}_{{\bf M}}}:=\;\left( \cdot ,{\bf M}\cdot \right)$.
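As an illustration of the MTT discretization and the truncated KL sampler, the sketch below solves the generalized eigenvalue problem $({\bf K}+{\bf M}){{{\bf v}}_{i}}={{\sigma }_{i}}{\bf M}{{{\bf v}}_{i}}$ and draws a prior sample as ${\bf u}={{{\bf u}}_{0}}+{{\alpha }^{-1/2}}\sum\nolimits_{i}\sigma _{i}^{-s/2}{{a}_{i}}{{{\bf v}}_{i}}$. The identification ${\bf A}={{{\bf M}}^{-1}}({\bf K}+{\bf M})$ and the ${{\alpha }^{-1/2}}$ scaling are our reading of (23)–(25) and should be checked against the paper's equations.

```python
# Minimal sketch of the MTT prior discretization and truncated KL sampling, under
# the assumptions stated above (A = M^{-1}(K + M), lambda_i = sigma_i^{-s/2},
# alpha^{-1/2} scaling).  K and M are the sparse stiffness and mass matrices.
import numpy as np
from scipy.sparse.linalg import eigsh

def kl_prior_sample(K, M, u0, s=0.6, alpha=0.1, n_modes=50,
                    rng=np.random.default_rng(0)):
    # generalized eigenproblem (K + M) v_i = sigma_i M v_i with v_i^T M v_j = delta_ij
    sigma, V = eigsh(K + M, k=n_modes, M=M, which='SM')
    lam = sigma ** (-s / 2.0)            # eigenvalues of Q = A^{-s/2}
    a = rng.standard_normal(n_modes)     # a_i ~ N(0, 1)
    return u0 + (V * lam) @ a / np.sqrt(alpha)
```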

A question arises: what is the distribution of ${\bf u}$? Clearly ${\bf u}$ is Gaussian with mean ${{{\bf u}}_{0}}$ since the ${{a}_{i}}$ are. The covariance matrix ${\bf C}$ for ${\bf u}$ is defined by

where we have used (25) to obtain the second equality and $\Lambda $ is the diagonal matrix with entries ${{\Lambda }_{ii}}=\lambda _{i}^{-1}$. It follows that

Equation (26)

as a map from $\mathbb{R}_{{\bf M}}^{N}$ to $\mathbb{R}_{{\bf M}}^{N}$, and its inverse can be shown to be

whence the distribution of ${\bf u}$ is

Equation (27)

As a result, the FEM discretization of the prior can be written as

Thus, the FEM approximation of the posterior is given by

The detailed derivation of the FEM approximation of infinite dimensional Bayesian inverse problems in general, and of the prior in particular, will be presented elsewhere [16].

5. Riemannian manifold Langevin and Hamiltonian Monte Carlo methods

In this section we give a brief overview of the MCMC algorithms that we consider in this work. Some familiarity with the concepts of MCMC is assumed of the reader, since an introduction to the subject is beyond the scope of this paper.

5.1. Metropolis–Hastings

For a random vector ${\bf u}\in {{\mathbb{R}}^{N}}$ with density $\pi ({\bf u})$ the Metropolis–Hastings algorithm employs a proposal mechanism $q({{{\bf u}}^{*}}|{{{\bf u}}^{t-1}})$ and proposed moves are accepted with probability ${\rm min} \{1,\pi ({{{\bf u}}^{*}})q({{{\bf u}}^{t-1}}|{{{\bf u}}^{*}})/\pi ({{{\bf u}}^{t-1}})q({{{\bf u}}^{*}}|{{{\bf u}}^{t-1}})\}$. Tuning the Metropolis–Hastings algorithm involves selecting an appropriate proposal mechanism. A common choice is to use a Gaussian proposal of the form $q({{{\bf u}}^{*}}|{{{\bf u}}^{t-1}})=\mathcal{N}({{{\bf u}}^{*}}|{{{\bf u}}^{t-1}},\Sigma )$, where $\mathcal{N}(\cdot |\mu ,\Sigma )$ denotes the multivariate normal density with mean $\mu $ and covariance matrix $\Sigma $.

Selecting the covariance matrix, however, is far from trivial in most cases since knowledge about the target density is required. Therefore a simplified proposal mechanism is often considered where the covariance matrix is replaced with a scaled identity matrix $\Sigma =\epsilon {\bf I}$, where the value of the scale parameter $\epsilon$ has to be tuned in order to achieve fast convergence and good mixing. Small values of $\epsilon$ imply small transitions and result in high acceptance rates while the mixing of the Markov chain is poor. Large values, on the other hand, allow for large transitions but result in most of the samples being rejected. Tuning the scale parameter becomes even more difficult in problems where the standard deviations of the marginal posteriors differ substantially, since different scales are required for each dimension, and when correlations between different variables exist. In the case of PDE-constrained inverse problems in very high dimensions, with strong nonlinear interactions inducing complex non-convex structures in the target posterior, this tuning procedure is typically doomed to failure of convergence and mixing.
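For reference, a generic random-walk Metropolis–Hastings loop of the kind discussed above looks as follows (a minimal sketch of ours; `log_post` is any user-supplied routine returning the log target density).

```python
# Random-walk Metropolis-Hastings with an isotropic Gaussian proposal N(u, eps^2 I).
import numpy as np

def rwmh(log_post, u0, n_samples, eps, rng=np.random.default_rng(0)):
    u, lp = np.array(u0, dtype=float), log_post(u0)
    samples, n_accept = [], 0
    for _ in range(n_samples):
        u_star = u + eps * rng.standard_normal(u.shape)
        lp_star = log_post(u_star)
        # symmetric proposal: the acceptance ratio reduces to the ratio of target densities
        if np.log(rng.uniform()) < lp_star - lp:
            u, lp, n_accept = u_star, lp_star, n_accept + 1
        samples.append(u.copy())
    return np.array(samples), n_accept / n_samples
```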

There have been many subsequent developments of this basic algorithm; however, the most important with regard to inverse problems, arguably, is the formal definition of Metropolis–Hastings in an infinite dimensional function space. One of the main failings of Metropolis–Hastings is the drop-off in acceptance probability as the dimension of the problem increases. By defining the Metropolis acceptance probability in the appropriate Hilbert space, the acceptance probability should be invariant to the dimension of the problem, and this is indeed the case, as described in a number of scenarios by [17]. Furthermore, the definition of a Markov chain transition kernel directly in the Hilbert space that exploits Hamiltonian dynamics in the proposal mechanism followed in [18].

These are important methodological advances for MCMC applied to inverse problems. As the infinite dimensional nature of the problem is a fundamental aspect, it is sensible that this characteristic is embedded in the MCMC scheme. In a similar vein, by noting that the statistical model associated with the specific inverse problem is generated from an underlying PDE or system of ordinary differential equations, a natural geometric structure on the space of probability distributions is induced. This structure provides a rich source of model specific information that can be exploited in devising MCMC schemes that are informed by the underlying structure of the model itself.

In [19] a way around this situation was provided by accepting that the statistical model can itself be considered as an object with an underlying geometric structure that can be embedded into the proposal mechanism. A class of MCMC methods was developed based on the differential geometric concepts underlying Riemannian manifolds.

5.2. Riemann manifold Metropolis adjusted Langevin algorithm (RMMALA)

Denoting the log of the target density as $\mathcal{L}({\bf u})={\rm log} \pi ({\bf u})$, the manifold Metropolis adjusted Langevin algorithm [19] defines a Langevin diffusion with stationary distribution $\pi ({\bf u})$ on the Riemann manifold of density functions with metric tensor ${\bf G}({\bf u})$. By employing a first order Euler integrator for discretising the stochastic differential equation, a proposal mechanism with density $q({{{\bf u}}^{*}}|{{{\bf u}}^{t-1}})=\mathcal{N}({{{\bf u}}^{*}}|\mu ({{{\bf u}}^{t-1}},\epsilon ),{{\epsilon }^{2}}{{{\bf G}}^{-1}}({{{\bf u}}^{t-1}}))$ is defined, where $\epsilon$ is the integration step size, a parameter which needs to be tuned, and the kth component of the mean function $\mu {{({\bf u},\epsilon )}_{k}}$ is

Equation (28)

where $\Gamma _{i,j}^{k}$ are the Christoffel symbols of the metric in local coordinates. Note that we have used the Christoffel symbols to express the derivatives of the metric tensor, and they are computed using the adjoint method presented in section 3.

Due to the discretisation error introduced by the first order approximation, convergence to the stationary distribution is no longer guaranteed, and thus the Metropolis–Hastings ratio is employed to correct for this bias. In [19] a number of examples are provided illustrating the potential of such a scheme for challenging inference problems.

One can interpret the proposal mechanism of RMMALA as a local Gaussian approximation to the target density, where the effective covariance matrix in RMMALA is the inverse of the metric tensor evaluated at the current position. Furthermore, a simplified version of the RMMALA algorithm, termed sRMMALA, can also be derived by assuming a manifold with constant curvature, thus cancelling the last term in equation (28) which depends on the Christoffel symbols. Whilst this is a step forward in that much information about the target density is now embedded in the proposal mechanism, it is still driven by a random walk. The next approach goes beyond the direct and scaled random walk by defining proposals which follow the geodesic flows on the manifold of densities, and thus presents a potentially very powerful scheme to explore posterior distributions.
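A single sRMMALA step can be sketched as below. For brevity we hold the metric tensor ${\bf G}$ fixed (our simplifying assumption, in the spirit of the simplified methods used later); with a position-dependent metric the log-determinant terms of the proposal density would also have to be re-evaluated at every position.

```python
# One sRMMALA step with a constant metric tensor G (our simplifying assumption;
# with a fixed G the log-determinant terms of the Gaussian proposal cancel).
import numpy as np

def srmmala_step(u, log_post, grad_log_post, G, eps, rng=np.random.default_rng(0)):
    Ginv = np.linalg.inv(G)
    chol = np.linalg.cholesky(Ginv)

    def mean(v):                       # drift: v + (eps^2/2) G^{-1} grad log pi(v)
        return v + 0.5 * eps**2 * Ginv @ grad_log_post(v)

    def log_q(v_to, v_from):           # log N(v_to | mean(v_from), eps^2 G^{-1}), up to a constant
        r = v_to - mean(v_from)
        return -0.5 * r @ (G @ r) / eps**2

    u_star = mean(u) + eps * chol @ rng.standard_normal(u.shape)
    log_alpha = log_post(u_star) + log_q(u, u_star) - log_post(u) - log_q(u_star, u)
    if np.log(rng.uniform()) < log_alpha:
        return u_star, True
    return u, False
```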

5.3. RMHMC

The RMHMC method defines a Hamiltonian on the Riemann manifold of probability density functions by introducing the auxiliary variables ${\bf p}\sim \mathcal{N}({\bf 0},{\bf G}({\bf u}))$ which are interpreted as the momentum at a particular position ${\bf u}$ and by considering the negative log of the target density as a potential function. More formally the Hamiltonian defined on the Riemann manifold is

Equation (29): $H\left( {\bf u},{\bf p} \right)=-\mathcal{L}({\bf u})+\frac{1}{2}{\rm log} \left( 2\pi |{\bf G}({\bf u})| \right)+\frac{1}{2}{{{\bf p}}^{T}}{\bf G}{{({\bf u})}^{-1}}{\bf p},$

where the terms $-\mathcal{L}({\bf u})+\frac{1}{2}{\rm log} \left( 2\pi |{\bf G}({\bf u})| \right)$ and $\frac{1}{2}{{{\bf p}}^{T}}{\bf G}{{({\bf u})}^{-1}}{\bf p}$ are the potential energy and kinetic energy terms, respectively, and the dynamics given by Hamilton's equations are

Equation (30)

Equation (31)

These dynamics define geodesic flows at a particular energy level and as such make proposals which follow deterministically the most efficient path across the manifold from the current density to the proposed one. Simulating the Hamiltonian requires a time-reversible and volume preserving numerical integrator. For this purpose the generalized leapfrog algorithm can be employed and provides a deterministic proposal mechanism for simulating from the conditional distribution, i.e. ${{{\bf u}}^{*}}|{\bf p}\sim \pi ({{{\bf u}}^{*}}|{\bf p})$. More details about the generalized leapfrog integrator can be found in [19]. To simulate a path (which turns out to be a local geodesic) across the manifold, the leapfrog integrator is iterated L times; L, along with the integration step size $\epsilon$, is a parameter requiring tuning. Again, due to the discretisation errors incurred in simulating the Hamiltonian dynamics, the Metropolis–Hastings acceptance ratio is applied in order to ensure convergence to the stationary distribution.
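With a constant metric tensor (the sRMHMC setting used later in the paper) the generalized leapfrog reduces to the explicit Stormer–Verlet scheme, which we sketch below; `neg_log_post` and `grad_neg_log_post` are placeholders for the potential $-\mathcal{L}({\bf u})$ and its gradient, and the position-dependent $\frac{1}{2}{\rm log} |{\bf G}({\bf u})|$ term drops out because ${\bf G}$ is fixed.

```python
# One sRMHMC (fixed-metric HMC) proposal: explicit leapfrog with mass matrix G.
import numpy as np

def srmhmc_step(u, neg_log_post, grad_neg_log_post, G, eps, n_steps,
                rng=np.random.default_rng(0)):
    Ginv = np.linalg.inv(G)
    p = np.linalg.cholesky(G) @ rng.standard_normal(u.shape)   # momentum p ~ N(0, G)

    def hamiltonian(u_, p_):
        return neg_log_post(u_) + 0.5 * p_ @ (Ginv @ p_)       # constant log|G| omitted

    h0 = hamiltonian(u, p)
    u_new, p_new = u.copy(), p.copy()
    for _ in range(n_steps):                                   # Stormer-Verlet / leapfrog
        p_new = p_new - 0.5 * eps * grad_neg_log_post(u_new)
        u_new = u_new + eps * (Ginv @ p_new)
        p_new = p_new - 0.5 * eps * grad_neg_log_post(u_new)
    if np.log(rng.uniform()) < h0 - hamiltonian(u_new, p_new): # Metropolis correction
        return u_new, True
    return u, False
```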

The RMHMC method has been shown to be highly effective in sampling from posteriors induced by complex statistical models and offers the means to efficiently explore the hugely complex and high dimensional posteriors associated with PDE-constrained inverse problems.

6. Low rank approximation of the Fisher information matrix

As presented in section 5, we use the Fisher information matrix at the MAP point augmented with the Hessian of the prior as the metric tensor in our HMC simulations. It is therefore necessary to compute the augmented Fisher matrix and its inverse. In [13], we have shown that the Gauss–Newton Hessian of the cost functional (3), also known as the data misfit, is a compact operator, and that for smooth $u$ its eigenvalues decay exponentially to zero. Thus, the range space of the Gauss–Newton Hessian is effectively finite-dimensional even before discretization, i.e., it is independent of the mesh. In other words, the Fisher information matrix admits accurate low rank approximations and the accuracy can be improved as desired by simply increasing the rank of the approximation. We shall exploit this fact to compute the augmented Fisher information matrix and its inverse efficiently. We start with the augmented Fisher information matrix in $\mathbb{R}_{{\bf M}}^{N}$

where ${\bf H}$ is the Fisher information matrix obtained from (18) by taking $\tilde{u}$ and ${{u}^{2}}$ as FEM basis functions.

Assuming that ${\bf H}$ is compact (see, e.g., [1, 2]) and using the fact that ${{\Lambda }_{ii}}$ decays to zero, we conclude that the prior-preconditioned Fisher information matrix

also has eigenvalues decaying to zero. Therefore it is expected that the eigenvalues of the prior-preconditioned matrix decay faster than those of the original matrix ${\bf H}$. Indeed, the numerical results in section 7 will confirm this observation. It follows that (see, e.g., [20, 21] for similar decompositions) $\tilde{{\bf H}}$ admits a rank-r approximation of the form

where ${{{\bf V}}_{r}}$ and ${\bf S}$ (diagonal matrix) contain the first r dominant eigenvectors and eigenvalues of $\tilde{{\bf H}}$, respectively. In this work, similar to [21], we use the one-pass randomized algorithm in [22] to compute the low rank approximation. Consequently, the augmented Fisher information matrix becomes

from which we obtain the inverse, by using the Woodbury formula [23]

where ${\bf D}$ is a diagonal matrix with ${{{\bf D}}_{ii}}={{{\bf S}}_{ii}}/\left( {{{\bf S}}_{ii}}+1 \right)$.
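A basic randomized low-rank construction and the resulting inverse can be sketched as follows. We emphasize that this is a plain two-pass randomized eigen-decomposition rather than the one-pass algorithm of [22], and that the identity-plus-low-rank form $({\bf I}+{{{\bf V}}_{r}}{\bf S}{\bf V}_{r}^{T})^{-1}={\bf I}-{{{\bf V}}_{r}}{\bf D}{\bf V}_{r}^{T}$ with ${{{\bf D}}_{ii}}={{{\bf S}}_{ii}}/({{{\bf S}}_{ii}}+1)$ is our simplified stand-in for the paper's prior-augmented metric; each call to `matvec` stands for one Fisher-matrix-vector product, i.e. two extra PDE solves.

```python
# Randomized rank-r eigen-approximation of a symmetric PSD operator available only
# through matrix-vector products, plus the Woodbury-type inverse of I + V S V^T.
import numpy as np

def randomized_lowrank(matvec, n, r, oversample=10, rng=np.random.default_rng(0)):
    Omega = rng.standard_normal((n, r + oversample))
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(Omega.shape[1])])
    Q, _ = np.linalg.qr(Y)                         # orthonormal basis for the range
    T = Q.T @ np.column_stack([matvec(Q[:, j]) for j in range(Q.shape[1])])
    S, W = np.linalg.eigh(T)                       # small dense eigenproblem
    idx = np.argsort(S)[::-1][:r]                  # keep the r dominant eigenpairs
    return Q @ W[:, idx], S[idx]                   # V_r (n x r), S_r (r,)

def apply_inverse(V, S, x):
    D = S / (S + 1.0)                              # D_ii = S_ii / (S_ii + 1)
    return x - V @ (D * (V.T @ x))                 # (I + V diag(S) V^T)^{-1} x
```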

In the RMHMC method, we need to randomly draw the momentum variable as ${\bf p}\sim \mathcal{N}\left( {\bf 0},{\bf G} \right)$. If one considers

where ${{{\bf b}}_{i}},{{{\bf c}}_{i}}\sim \mathcal{N}\left( 0,1 \right)$, then one can show, by inspection, that ${\bf p}$ is distributed by $\mathcal{N}\left( {\bf 0},{\bf G} \right)$.

7. Numerical results

For convenience, let us recall that the FEM approximation of the posterior is given as

Equation (32)

where ${{{\bf u}}_{0}}$ is the vector of FEM nodal values of the prior mean function ${{u}_{0}}$, ${\bf M}$ the mass matrix, ${\bf V}$ the matrix of eigenvectors defined in (24), $\Lambda $ the diagonal matrix introduced in (26), ${\bf L}={{\sigma }^{2}}{\bf I}$, ${\bf d}$ the vector of observation data, and $\mathcal{G}\left( u \right)$ the forward map given by the forward equation

that is discretized by the H1-conforming FEM method.

In this section, we study Riemann manifold Monte Carlo methods and their variations to explore the posterior (32). In particular, we compare the performance of four methods: (i) sRMMALA, obtained by ignoring the third derivative in RMMALA, (ii) RMMALA, (iii) sRMHMC, obtained by first computing the augmented Fisher metric tensor at the MAP point and then using it as the constant metric tensor, and (iv) RMHMC. For all methods, we form the augmented Fisher information matrix exactly using (18) with $\tilde{u}$ and ${{u}^{2}}$ as finite element basis vectors. For RMMALA and RMHMC we also need the derivative of the metric tensor, which is a third order tensor. It can be constructed exactly using (19) with $\tilde{u},{{u}^{2}}$ and ${{u}^{3}}$ as finite element basis vectors. The RMHMC method requires extra work since each Stormer–Verlet step involves an implicit solve for both the first half-step of the momentum and the full step of the position. For inverse problems such as those considered in this paper, the fixed point approach proposed in [19] does not seem to converge. We therefore have to resort to a full Newton method. Since we explicitly construct the metric tensor and its derivative, it is straightforward for us to develop the Newton scheme. For all problems considered in this section, we have observed that it takes at most five Newton iterations to converge.

Note that we limit ourselves to comparing these four methods in the Riemannian manifold MCMC sampling family. Clearly, other methods are available, but we avoid 'unmatched comparisons' in terms of cost and the level of exploiting the structure of the problem. Even in this limited family, RMHMC is the most expensive since it requires not only third derivatives but also implicit solves, but its ability to generate almost independent samples is attractive and appealing, as we shall show.

Though the approach described in the previous sections is valid for any spatial dimension n, we restrict ourselves to a one dimensional problem, i.e. n = 1, to clearly illustrate our points and findings. In particular, we take $\Omega =[0,1]$, ${{\Gamma }_{R}}=\left\{ 1 \right\}$. We set Bi = 0.1 for all examples. As discussed in section 2, for the Gaussian prior to be well-defined, we take $s=0.6\gt n/2=1/2$.

7.1. Two-parameter examples

We start our numerical experiments with two parameters. This will help demonstrate various aspects of RMHMC which are otherwise too computationally expensive to study for high dimensional problems. In particular, a two-parameter example allows us to compute the complete third derivative tensor and perform the Newton method for each Stormer–Verlet step. This in turn allows us to show the capability of the full RMHMC over its simplified variants in tackling challenging posterior densities in which the metric tensor changes rapidly.

In order to construct the case with two parameters we consider FEM with one finite element. We assume that there is one observation point, i.e. K = 1, and it is placed at the left boundary x = 0. In the first example, we take s = 0.6, $\sigma =0.1$, and $\alpha =0.1$. The posterior in this case is shown in figure 1(a). We start by taking a time step of 0.02 with 100 Stormer–Verlet steps for both sRMHMC and RMHMC. The acceptance rate for both methods is 1. One would take a time step of 2 for both sRMMALA and RMMALA to be comparable with sRMHMC and RMHMC (matching the trajectory length $0.02\times 100=2$), but the acceptance rate would then be zero. Instead we take a time step of 1 so that the acceptance rate is about 0.5 for sRMMALA and 0.3 for RMMALA. The MAP point is chosen as the initial state for all the chains, each with 5000 samples excluding the first 100 burn-ins. The result is shown in figure 2.


Figure 1. The contours of the posterior for three combinations of s (the prior smoothness), α (the 'amount' of the prior) and σ (the noise standard deviation). (a) $s=0.6,\;\alpha =0.1$ and σ = 0.1, (b) $s=0.6,\;\alpha =1$ and σ = 0.01 (c) $s=0.6,\;\alpha =0.1$ and $\sigma =0.01$.


Figure 2. Comparison of simRMMALA, RMMALA, simRMHMC and RMHMC: chains with 5000 samples, burn-in of 100, starting at the MAP point. In this example, s = 0.6, $\alpha =0.1$ and $\sigma =0.1$. Time step is $\varepsilon =1$ for simRMMALA and RMMALA, and $\varepsilon =0.02$ with the number of time steps L = 100 for simRMHMC and RMHMC. In the left column: the exact synthetic solution is black, the sample mean is red, and the shaded region is the 95% credibility region. In the middle column: blue is the trace plot for ${{u}_{1}}$ while green is for ${{u}_{2}}$. In the right column: red and black are the autocorrelation functions for ${{u}_{1}}$ and ${{u}_{2}}$, respectively.


As can be seen, the RMHMC chain is the best in terms of mixing by comparing the second column (the trace plot) and the third column (the autocorrelation function (ACF)). Each RMHMC sample is almost uncorrelated to the previous ones. The sRMHMC is the second best, but the samples are strongly correlated compared to those of RMHMC, e.g. one uncorrelated sample for every 40. It is interesting to observe that the full RMMALA and sRMMALA have performance in terms of auto-correlation length that is qualitatively similar at least in the first 5000 samples. This is due to the RMMALA schemes being driven by a single step random walk that cannot exploit fully the curvature information available to the geodesic flows of RMHMC, see rejoinder of [19].
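For completeness, the empirical autocorrelation function used in these comparisons can be computed as in the following sketch (standard estimator, not code from the paper).

```python
# Empirical autocorrelation function of a scalar MCMC chain.
import numpy as np

def acf(chain, max_lag=50):
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])
```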

Note that it is not our goal to compare the behavior of the chains when they converge. Rather we would like to qualitatively study how fast the chains become well-mixed (the mixing time). This is important for large-scale problems governed by PDEs since an unpredictable mixing time implies a great deal of costly wasted PDE solves, which one must avoid. Though RMHMC is expensive in generating a sample, the cost of generating an uncorrelated/independent sample seems to be comparable to sRMHMC for this example. In fact, if we measure the cost in terms of the number of PDE solves, the total number of PDE solves for RMHMC is 42 476 480 while it is 1 020 002 for sRMHMC, a factor of about 40 more expensive. However, the cost of generating an almost uncorrelated/independent sample is the same since sRMHMC generates one uncorrelated sample out of every 40 while it is one out of one for RMHMC.

To see how each method distributes the samples we plot one for every five samples in figure 3. All methods seem to explore the high probability density region very well. This explains why the sample mean and the 95% credibility region are similar for all methods in the first column of figure 2.


Figure 3. Comparison of MCMC trajectories (one for every five samples plotted) among simRMMALA, RMMALA, simRMHMC and RMHMC: chains with 5000 samples, burn-in of 100, starting at the MAP point. In this example, s = 0.6, $\alpha =0.1$ and $\sigma =0.1$. Time step is $\varepsilon =1$ for simRMMALA and RMMALA, and $\varepsilon =0.02$ with the number of time steps L = 100 for simRMHMC and RMHMC.


In the second example we consider the combination s = 0.6, $\sigma =0.01$, and $\alpha =1$, which leads to the posterior shown in figure 1(b). For sRMHMC and RMHMC, we take time step $\varepsilon =0.04$ with L = 100 time steps, while it is 1 for both sRMMALA and RMMALA. Again, the acceptance rate is unity for both sRMHMC and RMHMC while it is 0.65 for sRMMALA and 0.55 for RMMALA, respectively. The results for the four methods are shown in figure 4. As can be seen, this example seems to be easier than the first one since, even though the time step is larger, the trace plot and the ACF look better. It is interesting to observe that sRMHMC is comparable with RMHMC (in fact the ACF seems to be a bit better) for this example. As a result, RMHMC is more expensive than sRMHMC for the less challenging posterior in figure 1(b). Here, by less challenging we mean that the posterior is quite well approximated by a Gaussian at the MAP point, e.g. the metric tensor is almost constant. This is true for the posterior in figure 1(b), in which the Gaussian prior contribution is significant, i.e., $\alpha =1$ instead of $\alpha =0.1$. Conversely, the posterior is challenging if the metric tensor changes rapidly. Similar to the first example, one also sees that the sample mean and the 95% credibility region are almost the same for all methods.


Figure 4. Comparison among simRMMALA, RMMALA, simRMHMC and RMHMC: chains with 5000 samples, burn-in of 100, starting at the MAP point. In this example, s = 0.6, $\alpha =1$ and $\sigma =0.01$. Time step is $\varepsilon =1$ for simRMMALA and RMMALA, and $\varepsilon =0.04$ with the number of time steps L = 100 for simRMHMC and RMHMC. In the left column: the exact synthetic solution is black, the sample mean is red and the shaded region is the 95% credibility region. In the middle column: blue is the trace plot for ${{u}_{1}}$ while green is for ${{u}_{2}}$. In the right column: red and black are the autocorrelation functions for ${{u}_{1}}$ and ${{u}_{2}}$, respectively.


In the third example we consider the combination s = 0.6, $\sigma =0.01$, and $\alpha =0.1$, which leads to a skinny posterior with a long ridge as shown in figure 1(c). For sRMHMC and RMHMC, we take time step $\varepsilon =0.02$ with L = 100 time steps, while it is 1 for both sRMMALA and RMMALA. Again, the acceptance rate is unity for both sRMHMC and RMHMC while it is 0.45 for sRMMALA and RMMALA. The results for the four methods are shown in figure 5. For this example, RMHMC is more desirable than sRMHMC since the cost to generate an uncorrelated/independent sample is smaller for the former than for the latter: the total number of PDE solves for the former is 40 times that of the latter, but only one out of every sixty sRMHMC samples is uncorrelated/independent.


Figure 5. Comparison among simRMMALA, RMMALA, simRMHMC and RMHMC: chains with 5000 samples, burn-in of 100, starting at the MAP point. In this example, s = 0.6, $\alpha =0.1$ and $\sigma =0.01$. Time step is $\varepsilon =0.7$ for simRMMALA and RMMALA, and $\varepsilon =0.02$ with the number of time steps L = 100 for simRMHMC and RMHMC. In the left column: the exact synthetic solution is black, the sample mean is red and the shaded region is the 95% credibility region. In the middle column: blue is the trace plot for ${{u}_{1}}$ while green is for ${{u}_{2}}$. In the right column: red and black are the autocorrelation functions for ${{u}_{1}}$ and ${{u}_{2}}$, respectively.


7.2. Multi-parameter examples

In this section we choose to discretize $\Omega =[0,1]$ with ${{2}^{10}}=1024$ elements, and hence the number of parameters is 1025. For all simulations in this section, we choose s = 0.6, $\alpha =10$, and $\sigma =0.01$. For synthetic observations, we take K = 64 observations at ${{x}_{j}}=(j-1)/{{2}^{6}}$, $j=1,\ldots ,K$. Clearly, using the full blown RMHMC is out of the question since it is too expensive to construct the third derivative tensor and to perform the Newton method for each Stormer–Verlet step. For that reason, sRMHMC becomes the viable choice. As studied in section 7.1, though sRMHMC loses the ability to efficiently sample from highly nonlinear posterior surfaces compared to the full RMHMC, it is much less expensive per sample since it does not require the derivative of the Fisher information matrix. In fact, sRMHMC only requires one to (approximately) compute the Fisher information at the MAP point and then use it as the fixed constant metric tensor throughout all leap-frog steps for all samples. Clearly, the gradient (9) has to be evaluated at each leap-frog step, but it can be computed efficiently using the adjoint method presented in section 3.

Nevertheless, constructing the exact Fisher information matrix requires 2 × 1025 PDE solves. This becomes impractical as the dimension of the finite element space increases, e.g. by refining the mesh. Alternatively, due to the compactness of the prior-preconditioned Hessian of the misfit as discussed in section 6, we can use the randomized singular value decomposition (RSVD) technique [22] to compute its low rank approximations. Shown in figure 6 are the first 35 dominant eigenvalues of the Fisher information matrix and its prior-preconditioned counterpart. We also plot 20 approximate eigenvalues of the prior-preconditioned Fisher information matrix obtained from the RSVD method. As can be seen, the eigen spectrum of the prior-preconditioned Fisher information matrix decays faster than that of the original one. This is not surprising since the prior-preconditioned Fisher operator is a composition of the prior covariance, a compact operator, and the Fisher information operator, also a compact operator. The power of the RSVD is clearly demonstrated as the RSVD result for the first 20 eigenvalues is very accurate.


Figure 6. The eigen spectrum of the Fisher information matrix, the prior-preconditioned Fisher matrix, and the first 20 eigenvalues approximated using RSVD. Here, s = 0.6, $\alpha =10$ and $\sigma =0.01$.


Next, we perform the sRMHMC method using three different constant metric tensors: (i) the low rank Gauss–Newton Hessian, (ii) the exact Gauss–Newton Hessian, and (iii) the full Hessian. For each case, we start the Markov chain at the MAP point and compute 5100 samples, the first 100 of which are then discarded as burn-in. The empirical mean (red line), the exact distributed parameter used to generate the observations (black line), and the 95% credibility region are shown in figure 7. As can be seen, the results from the three methods are indistinguishable. The first sRMHMC variant is the most appealing since it requires only $2\times 20=40$ PDE solves to construct the low rank Fisher information matrix while the others need 2 × 1025 PDE solves. For large-scale problems with computationally expensive PDE solves, the first approach is the method of choice.


Figure 7. MCMC results of three sRMHMC methods with (i) the low rank Gauss–Newton Hessian, (ii) the exact Gauss–Newton Hessian and (iii) the full Hessian. In the figure are the empirical mean (red line), the exact distributed parameter used to generate the observation (black line) and 95% credibility (shaded region).


To further compare the three methods we record the trace plots of the first two (1 and 2) and the last two (1024 and 1025) parameters in figure 8. As can be observed, the chains from the three methods seem to be well-mixed and it is hard to see any difference among them. We also plot the ACF for these four parameters in figure 9. Again, results for the three sRMHMC methods are almost identical, namely, they generate almost uncorrelated samples. We therefore conclude that the low rank approach is the least computationally expensive, yet it maintains the attractive features of the original RMHMC. As such, it is the most suitable method for large-scale Bayesian inverse problems with costly PDE solves.


Figure 8. MCMC results of three sRMHMC methods with (i) the low rank Gauss–Newton Hessian (left column), (ii) the exact Gauss–Newton Hessian (middle column) and (iii) the full Hessian at the MAP point (right column). In the figure are the trace plot of the first two (1 and 2) and last two (1024 and 1025) parameters.

Figure 9.

Figure 9. MCMC results of three sRMHMC methods with (i) the low rank Gauss–Newton Hessian (left column), (ii) the exact Gauss–Newton Hessian (middle column) and (iii) the full Hessian at the MAP point (right column). In the figure are the autocorrelation function plot of the first two (1 and 2) and last two (1024 and 1025) parameters.


8. Conclusions and future work

We have proposed the adoption of a computationally inexpensive Riemann manifold Hamiltonian Monte Carlo method to explore the posterior of large-scale Bayesian inverse problems governed by PDEs in a highly efficient manner. We first adopt an infinite dimensional Bayesian framework to guarantee that the inverse formulation is well-defined. In particular, we postulate a Gaussian prior measure on the parameter space and assume regularity for the likelihood. This leads to a well-defined posterior distribution. Then, we discretize the posterior using the standard FEM and an MTT, and apply the RMHMC method on the resulting discretized posterior in finite dimensional parameter space. We present an adjoint technique to efficiently compute the gradient, the Hessian, and the third derivative of the potential function that are required in the RMHMC context. This comes at the expense of solving a few extra PDEs: one for the gradient, two for a Hessian-vector product, and four for the product of the third order derivative with a matrix. For large-scale problems, repeatedly computing the action of the Hessian and the third order derivative is too computationally expensive, and this motivates us to design a simplified RMHMC in which the Fisher information matrix is computed once at the MAP point. We further reduce the effort by constructing a low rank approximation of the Fisher information using an RSVD technique. The effectiveness of the proposed approach is demonstrated on a number of numerical results with up to 1025 parameters, in which the computational gain is about two orders of magnitude while maintaining the quality of the original RMHMC method in generating (almost) uncorrelated/independent samples.

For more challenging inverse problems in which the metric tensor changes significantly across the parameter space, we expect sRMHMC with a constant metric tensor to be inefficient. In that case, RMHMC seems to be a better option, but it is too computationally expensive for large-scale problems. Ongoing work is to explore approximations of the RMHMC method in which we approximate the trace and the third derivative in (31) using adjoint and randomized techniques.

Acknowledgments

The first author is grateful for the support from DOE grants DE-SC0010518 and DE-SC0011118.

Footnotes

  3. We assume the forward state $w$ is sufficiently regular, i.e. $w\in {{H}^{s}},s\gt n/2$, so that $w$ is, by virtue of the Sobolev embedding theorem, continuous, and therefore it is meaningful to measure $w$ pointwise.

  4. Note that the Fisher information operator is typically defined in finite dimensional settings, in which case it is a matrix.
