Article

Automatic Tempered Posterior Distributions for Bayesian Inversion Problems

by Luca Martino, Fernando Llorente, Ernesto Curbelo, Javier López-Santiago and Joaquín Míguez

1 Department of Signal Processing, Universidad Rey Juan Carlos (URJC), 28942 Madrid, Spain
2 Department of Statistics, Universidad Carlos III de Madrid (UC3M), 28911 Madrid, Spain
3 Department of Signal Processing, Universidad Carlos III de Madrid (UC3M), 28911 Madrid, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(7), 784; https://doi.org/10.3390/math9070784
Submission received: 27 February 2021 / Revised: 26 March 2021 / Accepted: 1 April 2021 / Published: 6 April 2021
(This article belongs to the Special Issue Recent Advances in Data Science)

Abstract

We propose a novel adaptive importance sampling scheme for Bayesian inversion problems, where the inference of the variables of interest and of the power of the data noise are carried out using distinct (but interacting) methods. More specifically, we consider a Bayesian analysis for the variables of interest (i.e., the parameters of the model to invert), whereas we employ a maximum likelihood approach for the estimation of the noise power. The whole technique is implemented by means of an iterative procedure with alternating sampling and optimization steps. Moreover, the noise power is also used as a tempering parameter for the posterior distribution of the variables of interest. Therefore, a sequence of tempered posterior densities is generated, where the tempering parameter is automatically selected according to the current estimate of the noise power. A complete Bayesian study over the model parameters and the scale parameter can also be performed. Numerical experiments show the benefits of the proposed approach.

1. Introduction

The estimation of unknown parameters from noisy observations is an essential problem in signal processing, statistics, and machine learning [1,2,3]. Within the Bayesian signal processing framework, these problems are addressed by constructing posterior probability distributions of the unknowns. Given the posterior, one often wants to make inferences about the unknowns, e.g., when estimating parameters, to find the values that maximize their posterior, or the values that minimize some cost function given the uncertainty of the parameters. Unfortunately, obtaining closed-form solutions, usually expressed as integrals of the posterior, is infeasible in most practical applications. Therefore, the development of approximate computational techniques, such as importance sampling and Markov chain Monte Carlo (MCMC) algorithms, is often required [4,5,6].
The so-called tempering of the posterior distribution is a well-known procedure for improving the performance of Monte Carlo (MC) algorithms [7,8,9,10]. Tempering is performed by modulating an artificial scale parameter or by sequentially including new data. There are several reasons for the improvement in performance: better mixing, the discovery of modes, the fostered exploration of the inference space, etc. In the first iterations of the MC scheme, a posterior density with a larger scale is considered. The artificial scale parameter (often called the temperature) is reduced along the iterations, until the true posterior distribution is recovered. However, the user must select a temperature schedule, i.e., a decreasing rule for the scale parameter, which is usually chosen in a heuristic way [4,5]. In the literature, the tempering procedure has gained particular attention for the estimation of the marginal likelihood (also known as the Bayesian model evidence) [9,11,12].
Furthermore, the joint inference of the parameters of the observation model f(θ) (denoted as θ) and of the scale parameters of the likelihood function (which, in the scalar case, are usually denoted as σ) can be a hard task. Indeed, "wrong choices" of the σ values can easily jeopardize the sampling of θ. In this work, we introduce a procedure to tackle this problem.
To be specific, in this work, we design an adaptive importance sampling (AIS) scheme [13] for Bayesian inversion problems, where an automatic tempering procedure is implemented. We assume that the vector of observations y is obtained by a nonlinear transformation f ( θ ) of the variables of interest θ , perturbed by additive Gaussian noise with unknown power σ 2 . The nonlinear mapping f ( θ ) usually represents a complex physical model, a computer code, etc. The resulting posterior densities are usually highly multimodal and complex distributions. Furthermore, the inference task in the joint space [ θ , σ ] is particularly challenging. We introduce a split strategy to tackle this problem, involving an optimization approach over σ and a sampling scheme for θ . More specifically, we design an iterative procedure where these two tasks are alternated. Additionally, the current maximum likelihood (ML) estimate of the noise power, σ ^ ML 2 , is employed as a tempering parameter, starting from high values and then “cooling down” according to the ML estimates at each iteration. Therefore, the proposed scheme deals with a sequence of tempered posteriors according to the current estimation σ ^ ML 2 . It is important to observe that, given a fixed vector θ , the ML estimator σ ^ ML 2 can be obtained analytically.
Furthermore, the complete Bayesian analysis regarding the joint posterior of θ and σ is also possible (as discussed in Section 5). This is obtained by implementing a proper re-weighting of the samples generated by the proposed algorithm, called Automatic Tempering AIS (ATAIS), without any additional evaluations of the observation model. An approximation of the marginal posterior of σ is provided as well. The advantages of the proposed scheme are shown in two numerical experiments, one of them considering a complex astronomical model.

2. Problem Statement

Let us denote the observed measurements as $\mathbf{y} = [y_1,\ldots,y_K]^{\top} \in \mathbb{R}^K$, and the variable of interest that we wish to infer as $\boldsymbol{\theta} = [\theta_1,\ldots,\theta_M]^{\top} \in \Theta \subseteq \mathbb{R}^M$. Furthermore, let us assume the observation model

$$\mathbf{y} = \mathbf{f}(\boldsymbol{\theta}) + \mathbf{v}, \qquad (1)$$

where we have a nonlinear mapping,

$$\mathbf{f}(\boldsymbol{\theta}) = [f_1(\boldsymbol{\theta}),\ldots,f_K(\boldsymbol{\theta})]^{\top}: \Theta \to \mathbb{R}^K, \quad \text{with } \Theta \subseteq \mathbb{R}^M, \qquad (2)$$

and a Gaussian perturbation noise,

$$\mathbf{v} = [v_1,\ldots,v_K]^{\top} \sim \mathcal{N}(\mathbf{v}\mid \mathbf{0},\, \sigma^2 \mathbf{I}_K), \qquad (3)$$

with $\sigma > 0$, where $\mathbf{I}_K$ denotes the $K$-dimensional identity matrix. The model can be easily extended to a matrix of observations $\mathbf{Y} = [\mathbf{y}_1,\ldots,\mathbf{y}_K] \in \mathbb{R}^{d_y \times K}$ instead of a vector, if the nonlinear mapping is of the type $\mathbf{F}(\boldsymbol{\theta}) = [\mathbf{f}_1(\boldsymbol{\theta}),\ldots,\mathbf{f}_K(\boldsymbol{\theta})]: \Theta \subseteq \mathbb{R}^M \to \mathbb{R}^{d_y \times K}$. The noise variance $\sigma^2$ is, in general, unknown. The mapping $\mathbf{f}(\boldsymbol{\theta})$ may be analytically unknown: the only assumption is that we are able to evaluate it pointwise. The likelihood function is

$$\ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma) = \frac{1}{(2\pi\sigma^2)^{K/2}} \exp\left(-\frac{1}{2\sigma^2}\,\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\|^2\right) \qquad (4)$$
$$= \frac{1}{(2\pi\sigma^2)^{K/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{k=1}^{K}\big(y_k - f_k(\boldsymbol{\theta})\big)^2\right). \qquad (5)$$
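For concreteness, the log-likelihood in Equations (4) and (5) can be evaluated pointwise as follows (a minimal Python sketch; the function name and interface are ours, not part of the paper):

```python
import numpy as np

def log_likelihood(y, f_theta, sigma):
    """Log of the Gaussian likelihood in Eqs. (4)-(5), given the model
    output f_theta = f(theta) (evaluated pointwise) and the scale sigma."""
    K = y.size
    err2 = np.sum((y - f_theta) ** 2)          # ||y - f(theta)||^2
    return -0.5 * K * np.log(2.0 * np.pi * sigma**2) - err2 / (2.0 * sigma**2)
```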
Note that we have two types of variables of interest: the vector θ contains the parameters of the nonlinear mapping f ( θ ) , whereas σ is a scale parameter of the likelihood function.
Given the vector of measurements y , we wish to make inferences regarding the hidden parameters θ and the noise power σ 2 , obtaining at least some point estimators θ ^ and σ ^ 2 . We are also interested in performing uncertainty and correlation analysis for the components of θ . Furthermore, we aim to perform model selection, i.e., to compare, select, or properly average different models.
Bayesian inference in the complete space. We consider independent prior densities $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ and $g_{\sigma}(\sigma)$ over the unknowns. Therefore, the complete posterior density is

$$p(\boldsymbol{\theta},\sigma\mid\mathbf{y}) = \frac{1}{p(\mathbf{y})}\, p(\boldsymbol{\theta},\sigma,\mathbf{y}) = \frac{1}{p(\mathbf{y})}\,\ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, g_{\sigma}(\sigma). \qquad (6)$$

The marginal likelihood is

$$Z = p(\mathbf{y}) = \int_{\mathbb{R}^+}\int_{\Theta} \ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, g_{\sigma}(\sigma)\, d\boldsymbol{\theta}\, d\sigma. \qquad (7)$$

This quantity is often needed for model selection. While $Z$ is generally unknown, we can usually evaluate pointwise the un-normalized posterior $\pi(\boldsymbol{\theta},\sigma\mid\mathbf{y}) = \ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, g_{\sigma}(\sigma)$, i.e., $p(\boldsymbol{\theta},\sigma\mid\mathbf{y}) \propto \pi(\boldsymbol{\theta},\sigma\mid\mathbf{y})$. More generally, the computation of integrals of the form

$$I(h) = \int_{\mathbb{R}^+}\int_{\Theta} h(\boldsymbol{\theta},\sigma)\, p(\boldsymbol{\theta},\sigma\mid\mathbf{y})\, d\boldsymbol{\theta}\, d\sigma, \qquad (8)$$

where $h: \Theta\times\mathbb{R}^+ \to \mathbb{R}$ is an integrable function, is usually required. We consider a Monte Carlo quadrature approach for approximating the integral above and, more generally, provide a particle approximation of the joint posterior $p(\boldsymbol{\theta},\sigma\mid\mathbf{y})$.
Main observation. Generating random samples from the complicated posterior in Equation (6) and efficiently computing integrals as in Equations (7) and (8) is very often a hard task. Moreover, this task becomes more difficult when we try to perform joint inference over scale parameters, i.e., σ, and parameters of the nonlinearity, i.e., θ. Indeed, "wrong choices" of σ values can easily jeopardize the sampling of θ. In the next section, we describe the strategy that we propose to tackle this problem. Before doing so, however, we need to recall some additional definitions.
Conditional and marginal posteriors. In order to design efficient computational schemes, it is often useful to consider the conditional posteriors, for instance,

$$p(\boldsymbol{\theta}\mid\mathbf{y},\sigma) = \frac{p(\boldsymbol{\theta},\mathbf{y},\sigma)}{p(\mathbf{y},\sigma)} = \frac{\ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, g_{\sigma}(\sigma)}{p(\mathbf{y}\mid\sigma)\, g_{\sigma}(\sigma)} = \frac{\ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})}{p(\mathbf{y}\mid\sigma)}. \qquad (9)$$

In the next section, we will see that the idea underlying the proposed scheme is to split the space $[\boldsymbol{\theta},\sigma]$, restricting the sampling problem to $\boldsymbol{\theta}$ and considering an optimization problem with respect to $\sigma$. The conditional marginal likelihood is obtained by integrating out one of the two variables, e.g.,

$$Z(\sigma) = p(\mathbf{y}\mid\sigma) = \int_{\Theta} \ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, d\boldsymbol{\theta}. \qquad (10)$$

The integral above cannot be computed analytically, in general. We can also consider marginal posteriors; for instance, the marginal posterior of σ is

$$p(\sigma\mid\mathbf{y}) = \frac{p(\mathbf{y}\mid\sigma)\, g_{\sigma}(\sigma)}{p(\mathbf{y})} = \frac{Z(\sigma)\, g_{\sigma}(\sigma)}{Z}. \qquad (11)$$

Note that the joint posterior in Equation (6) can also be written as

$$p(\boldsymbol{\theta},\sigma\mid\mathbf{y}) = p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)\, p(\sigma\mid\mathbf{y}). \qquad (12)$$
Outline of the proposed approach. The underlying idea of this work is to divide the inference study into two parts. In the first part (Section 3 and Section 4), we focus on the study of the conditional posterior $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)$ for a fixed σ. Then, in the second part (Section 5), we also estimate the marginal posterior $p(\sigma\mid\mathbf{y})$. Finally, using Equation (12), we can obtain a final approximation of the complete posterior $p(\boldsymbol{\theta},\sigma\mid\mathbf{y})$. Estimates of $Z(\sigma)$ and $Z$ are also obtained.

3. Key Observations and Proposed Approach

3.1. Split Inference

In the first part of this work, we assume a uniform proper (or improper) prior over θ, i.e., $g_{\boldsymbol{\theta}}(\boldsymbol{\theta}) \propto 1$ in $\Theta$. The possible use of a generic choice of $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ is discussed in Section 4.1. Let $\boldsymbol{\theta}_{\text{MAP}} = \arg\max_{\boldsymbol{\theta}} p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)$ denote the MAP estimator of θ. Generally, $\boldsymbol{\theta}_{\text{MAP}}$ should be a function of σ, i.e., $\boldsymbol{\theta}_{\text{MAP}} = \boldsymbol{\theta}_{\text{MAP}}(\sigma)$. However, due to the choice of likelihood function (and the uniform prior) considered in this paper, we have that

$$\boldsymbol{\theta}_{\text{MAP}} = \arg\max_{\boldsymbol{\theta}} \log p(\boldsymbol{\theta}\mid\mathbf{y},\sigma) = \arg\min_{\boldsymbol{\theta}} \|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\|^2, \quad \text{with } g_{\boldsymbol{\theta}}(\boldsymbol{\theta}) \propto 1 \text{ for } \boldsymbol{\theta}\in\Theta, \qquad (13)$$

which does not depend on σ; i.e., $\boldsymbol{\theta}_{\text{MAP}}$ maximizes the conditional posterior $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)$ for any σ. See Appendix A for further details.
Furthermore, the variance of the conditional posterior $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)$ grows as σ increases. In this sense, with larger σ, the density $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)$ is "broader"; thus, it is easier for Monte Carlo methods to explore the space (namely, we have a tempering effect). Based on these considerations, we can run Monte Carlo schemes (specifically, IS algorithms) on $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma_0)$ with a large value $\sigma_0$ in order to estimate $\boldsymbol{\theta}_{\text{MAP}}$ more efficiently. Apart from estimating $\boldsymbol{\theta}_{\text{MAP}}$, we are also interested in studying the conditional posterior $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma_{\text{ML}})$, where

$$\sigma_{\text{ML}} = \arg\max_{\sigma}\, \ell(\mathbf{y}\mid\boldsymbol{\theta}_{\text{MAP}},\sigma). \qquad (14)$$

The value $\sigma_{\text{ML}}$ can be obtained in closed form (see Appendix A). In fact, for any fixed θ, we have

$$\ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma) \propto \left(\frac{1}{\sigma^2}\right)^{K/2} \exp\left(-\frac{\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\|^2}{2\sigma^2}\right), \qquad (15)$$

which, as a function of $\sigma^2$, has the form of an Inverse Gamma density, with a unique mode at $\frac{1}{K}\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\|^2$. Therefore, we finally have

$$\sigma_{\text{ML}} = \sqrt{\frac{1}{K}\,\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta}_{\text{MAP}})\|^2}. \qquad (16)$$

This value serves as a point estimator of the noise power in the system, and also as a threshold at which to stop the tempering of the conditional posterior, as we show in the following section.

3.2. An Iterative Scheme

Consider that we start with a large value $\sigma_0$, which can be viewed as a coarse approximation of $\sigma_{\text{ML}}$, so we denote it $\sigma_0 = \hat{\sigma}_{\text{ML}}^{(0)}$. Let $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(1)}$ denote an estimate of $\boldsymbol{\theta}_{\text{MAP}}$ obtained by working with $p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(0)})$. We use this current estimate to obtain the next value of σ, i.e., $\hat{\sigma}_{\text{ML}}^{(1)} = \sqrt{\frac{1}{K}\|\mathbf{y}-\mathbf{f}(\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(1)})\|^2}$. In general, $\hat{\sigma}_{\text{ML}}^{(1)}$ is a better estimator of $\sigma_{\text{ML}}$ than $\hat{\sigma}_{\text{ML}}^{(0)}$, since we have tried to attain the smallest error between $\mathbf{f}(\boldsymbol{\theta})$ and the data $\mathbf{y}$, i.e., $\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\|$, which is related to the power of the noise perturbation in the system. For instance, with zero noise, we would have $\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta}_{\text{MAP}})\| = 0$ (recalling that $g_{\boldsymbol{\theta}}(\boldsymbol{\theta}) \propto 1$ for $\boldsymbol{\theta}\in\Theta$). We can iterate this procedure for $t = 1,\ldots,T$:

1. Estimate $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)}$ by Monte Carlo (e.g., an IS scheme), approximately maximizing $p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(t-1)})$.
2. Compute

$$\hat{\sigma}_{\text{ML}}^{(t)} = \sqrt{\frac{1}{K}\,\|\mathbf{y}-\mathbf{f}(\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)})\|^2}. \qquad (17)$$
With this iterative scheme, $\hat{\sigma}_{\text{ML}}^{(T)} \to \sigma_{\text{ML}}$ as T grows; thus, we eventually perform IS with respect to the density of interest $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma_{\text{ML}})$. Furthermore, a non-increasing sequence of values $\hat{\sigma}_{\text{ML}}^{(0)} \geq \hat{\sigma}_{\text{ML}}^{(1)} \geq \cdots \geq \hat{\sigma}_{\text{ML}}^{(T)}$ is produced, which facilitates the estimation of $\boldsymbol{\theta}_{\text{MAP}}$ and ensures that the IS estimation of $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma_{\text{ML}})$ is performed efficiently, by using the set of intermediate, tempered (i.e., wider) distributions $p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(t)})$ for $t = 0,1,\ldots,T$. Finally, a particle approximation of $p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(T)})$ is obtained, i.e.,

$$p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(T)}) \approx \sum_{t=1}^{T}\sum_{n=1}^{N} \tilde{w}_t^{(n)}\, \delta(\boldsymbol{\theta}-\boldsymbol{\theta}_t^{(n)}), \qquad (18)$$

where $\sum_{t=1}^{T}\sum_{n=1}^{N} \tilde{w}_t^{(n)} = 1$. Note that the $\tilde{w}_t^{(n)}$ are the final corrected weights obtained at the end of the algorithm (see Algorithm 1). A minimal sketch of this alternating procedure is given below.
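The following minimal Python sketch illustrates this alternation (the code and its interface are ours; `mc_maximizer` stands for any Monte Carlo routine that approximately maximizes the tempered conditional posterior, such as the IS scheme detailed in the next section):

```python
import numpy as np

def alternate_map_and_sigma(y, f, mc_maximizer, sigma0, T):
    """Alternating scheme of Section 3.2 (a sketch): 'mc_maximizer' is an
    assumed callback returning an estimate of theta_MAP for the tempered
    target p(theta | y, sigma)."""
    K = y.size
    sigma = sigma0                     # coarse initial guess, sigma_ML^(0)
    theta_map = None
    for _ in range(T):
        theta_map = mc_maximizer(sigma)                            # step 1
        sigma_new = np.sqrt(np.sum((y - f(theta_map)) ** 2) / K)   # step 2, Eq. (17)
        sigma = min(sigma, sigma_new)  # keep the sequence non-increasing
    return theta_map, sigma
```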

4. Automatic Tempering Adaptive Importance Sampling (ATAIS)

In this section, we describe an adaptive importance sampler with an automatic tempering approach that follows the procedure given above. At each iteration t of the algorithm, we have an ML approximation of σ, i.e., $\hat{\sigma}_{\text{ML}}^{(t-1)}$. Considering Equation (9), we define the un-normalized tempered conditional posterior at the t-th iteration,

$$\pi_t(\boldsymbol{\theta}) = \ell(\mathbf{y}\mid\boldsymbol{\theta},\hat{\sigma}_{\text{ML}}^{(t-1)})\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta}), \qquad (19)$$

where we assume $g_{\boldsymbol{\theta}}(\boldsymbol{\theta}) \propto 1$ in Θ. For other generic choices of $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$, see the discussion in Section 4.1. At each iteration, we consider $p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(t-1)}) \propto \pi_t(\boldsymbol{\theta})$ as the target distribution. The dependence on the iteration t occurs because $\hat{\sigma}_{\text{ML}}^{(t)}$ varies with t. The ATAIS algorithm is outlined in Algorithm 1. The resulting scheme is an adaptive IS algorithm that combines sampling steps with stochastic optimization. It is important to remark that, if $\hat{\sigma}_{\text{ML}}^{(0)}$ is bigger than the true ML value, we generate a non-increasing sequence $\hat{\sigma}_{\text{ML}}^{(0)} \geq \hat{\sigma}_{\text{ML}}^{(1)} \geq \cdots \geq \hat{\sigma}_{\text{ML}}^{(t)} \geq \hat{\sigma}_{\text{ML}}^{(t+1)} \geq \cdots$; note that this holds because we have assumed a uniform prior $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$. To see this, recall that $\hat{\sigma}_{\text{ML}} = \sqrt{\frac{1}{K}\|\mathbf{y}-\mathbf{f}(\hat{\boldsymbol{\theta}}_{\text{MAP}})\|^2}$. Improving $\hat{\boldsymbol{\theta}}_{\text{MAP}}$ means that the squared error $\|\mathbf{y}-\mathbf{f}(\hat{\boldsymbol{\theta}}_{\text{MAP}})\|^2$ becomes smaller, as shown in Equation (13), which implies that $\hat{\sigma}_{\text{ML}}$ never increases (provided that we start with $\hat{\sigma}_{\text{ML}} > \sigma_{\text{ML}}$).
IS steps. A set of N samples $\{\boldsymbol{\theta}_t^{(n)}\}_{n=1}^{N}$ is drawn from a (normalized) proposal density $q(\boldsymbol{\theta}\mid\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)$ with mean $\boldsymbol{\mu}_t$ and covariance matrix $\boldsymbol{\Sigma}_t$. An importance weight

$$w_t^{(n)} = \frac{\pi_t(\boldsymbol{\theta}_t^{(n)})}{q(\boldsymbol{\theta}_t^{(n)}\mid\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)} \qquad (20)$$

is assigned to each sample.

Proposal adaptation. A particle estimate of the conditional MAP estimator of θ is given by $\hat{\boldsymbol{\theta}}_t = \arg\max_{n} \pi_t(\boldsymbol{\theta}_t^{(n)})$. The value of the current MAP approximation, $\pi_t(\hat{\boldsymbol{\theta}}_t)$, is then compared with the value of the global MAP estimator obtained so far, denoted $\pi_{\text{MAP}}$. If $\pi_t(\hat{\boldsymbol{\theta}}_t) \geq \pi_{\text{MAP}}$, all the global MAP estimators are updated and the proposal pdf is moved to $\hat{\boldsymbol{\theta}}_t$, i.e., we set

$$\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)} = \hat{\boldsymbol{\theta}}_t, \qquad \pi_{\text{MAP}} = \pi_t(\hat{\boldsymbol{\theta}}_t), \qquad \boldsymbol{\mu}_t = \hat{\boldsymbol{\theta}}_t. \qquad (21)$$
Algorithm 1: ATAIS: AIS with automatic tempering.
  • Initialization: Choose N, $\boldsymbol{\mu}_1$, $\boldsymbol{\Sigma}_1$; obtain an initialization for $\hat{\sigma}_{\text{ML}}^{(0)}$; and set $\pi_{\text{MAP}} = 0$.
  • For $t = 1,\ldots,T$:
    (a) Sampling:
      i. Draw $\boldsymbol{\theta}_t^{(1)},\ldots,\boldsymbol{\theta}_t^{(N)} \sim q(\boldsymbol{\theta}\mid\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)$.
      ii. Assign to each sample the weight
        $$w_t^{(n)} = \frac{\pi_t(\boldsymbol{\theta}_t^{(n)})}{q(\boldsymbol{\theta}_t^{(n)}\mid\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)}, \qquad n = 1,\ldots,N.$$
    (b) Current maximum estimates:
      i. Obtain $\hat{\boldsymbol{\theta}}_t = \arg\max_{n} \pi_t(\boldsymbol{\theta}_t^{(n)})$ and compute $\hat{\mathbf{r}}_t = \mathbf{f}(\hat{\boldsymbol{\theta}}_t)$.
      ii. Compute $\hat{\sigma}_t = \sqrt{\frac{1}{K}\,\|\mathbf{y}-\hat{\mathbf{r}}_t\|^2}$.
    (c) Global maximum estimates:
      i. If $\hat{\sigma}_t \leq \hat{\sigma}_{\text{ML}}^{(t-1)}$, set $\hat{\sigma}_{\text{ML}}^{(t)} = \hat{\sigma}_t$; otherwise, set $\hat{\sigma}_{\text{ML}}^{(t)} = \hat{\sigma}_{\text{ML}}^{(t-1)}$.
      ii. If $\pi_t(\hat{\boldsymbol{\theta}}_t) \geq \pi_{\text{MAP}}$, set $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)} = \hat{\boldsymbol{\theta}}_t$ and $\pi_{\text{MAP}} = \pi_t(\hat{\boldsymbol{\theta}}_t)$; otherwise, set $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)} = \hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t-1)}$ and keep the value of $\pi_{\text{MAP}}$.
    (d) Adaptation: Set
      $$\boldsymbol{\mu}_t = \hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)}, \qquad \boldsymbol{\Sigma}_t = \sum_{n=1}^{N} \bar{w}_t^{(n)}\, (\boldsymbol{\theta}_t^{(n)}-\bar{\boldsymbol{\theta}}_t)(\boldsymbol{\theta}_t^{(n)}-\bar{\boldsymbol{\theta}}_t)^{\top} + \epsilon\, \mathbf{I}_M,$$
      where $\bar{w}_t^{(n)} = \frac{w_t^{(n)}}{\sum_{i=1}^{N} w_t^{(i)}}$ are the normalized weights, $\bar{\boldsymbol{\theta}}_t = \sum_{n=1}^{N} \bar{w}_t^{(n)}\, \boldsymbol{\theta}_t^{(n)}$, and $\epsilon > 0$ is a small scalar value.
  • Output: Return the final estimators $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(T)}$, $\hat{\sigma}_{\text{ML}}^{(T)}$, and all the weighted samples $\{\boldsymbol{\theta}_t^{(n)}, \tilde{w}_t^{(n)}\}$, for all t and n, with the corrected weights
    $$\tilde{w}_t^{(n)} = w_t^{(n)}\, \frac{\pi_{T+1}(\boldsymbol{\theta}_t^{(n)})}{\pi_t(\boldsymbol{\theta}_t^{(n)})}.$$
Otherwise, we keep the previous values $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)} = \hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t-1)}$, $\pi_{\text{MAP}}$, and $\boldsymbol{\mu}_t = \boldsymbol{\mu}_{t-1}$. The covariance matrix $\boldsymbol{\Sigma}_t$ is adapted by considering the empirical covariance of the weighted samples. Note that we set $\boldsymbol{\mu}_t = \hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)}$ instead of using the empirical mean of the samples (as in other classical AIS schemes). This is because we have noticed that this choice provides better and more robust results, especially as the dimension of the problem grows.
Automatic tempering. As we showed in the previous section, the current ML estimator of σ can be obtained analytically as

$$\hat{\sigma}_t = \sqrt{\frac{1}{K}\,\|\mathbf{y}-\hat{\mathbf{r}}_t\|^2},$$

where $\hat{\mathbf{r}}_t = \mathbf{f}(\hat{\boldsymbol{\theta}}_t)$. If the current ML estimator $\hat{\sigma}_t$ is smaller than the current global one, i.e., $\hat{\sigma}_t < \hat{\sigma}_{\text{ML}}^{(t-1)}$, then we update $\hat{\sigma}_{\text{ML}}^{(t)} = \hat{\sigma}_t$; otherwise, we keep the value $\hat{\sigma}_{\text{ML}}^{(t)} = \hat{\sigma}_{\text{ML}}^{(t-1)}$. In fact, with a uniform prior $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$, every time that we update $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(t)}$, we also update $\hat{\sigma}_{\text{ML}}^{(t)}$ (as discussed at the beginning of this section).
ATAIS outputs. After T iterations, a final correction of the weights is needed, i.e.,

$$\tilde{w}_t^{(n)} = w_t^{(n)}\, \frac{\pi_{T+1}(\boldsymbol{\theta}_t^{(n)})}{\pi_t(\boldsymbol{\theta}_t^{(n)})}, \qquad (22)$$

in order to obtain a particle approximation of the measure of the final conditional posterior $p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(T)}) \propto \pi_{T+1}(\boldsymbol{\theta})$. Thus, the algorithm returns the final estimators $\hat{\boldsymbol{\theta}}_{\text{MAP}}^{(T)}$, $\hat{\sigma}_{\text{ML}}^{(T)}$, and all the weighted samples $\{\boldsymbol{\theta}_t^{(n)}, \tilde{w}_t^{(n)}\}$, for all $n = 1,\ldots,N$ and $t = 1,\ldots,T$. Other outputs can be obtained by postprocessing the weighted samples, as shown below. Note that Equation (22) does not require any additional evaluation of the model, since the errors $e_t^{(n)} = \|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta}_t^{(n)})\|^2$ can be stored during the run. Moreover, we can also use $e_t^{(n)}$ and $\{\boldsymbol{\theta}_t^{(n)}\}$ to build a particle approximation of any other conditional posterior $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)$. This allows the study of the marginal posterior $p(\sigma\mid\mathbf{y})$ and provides the complete Bayesian inference, as we show in the next section. A sketch of the whole scheme is given below.
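The following Python sketch is an illustrative implementation of Algorithm 1 (our own code, not the authors'), assuming a Gaussian proposal and a prior supplied as a log-density; weights are handled in the log domain for numerical stability. The returned dictionary stores the errors $e_t^{(n)}$ and the prior/proposal log-values, which are recycled in Section 5.

```python
import numpy as np
from scipy.stats import multivariate_normal

def atais(y, f, log_gtheta, mu0, C0, sigma0, N=100, T=50, eps=1e-6, seed=0):
    """Illustrative sketch of Algorithm 1 (ATAIS)."""
    rng = np.random.default_rng(seed)
    K, M = y.size, np.size(mu0)
    mu, C = np.array(mu0, float), np.array(C0, float)
    sigma_ml = float(sigma0)
    theta_map, log_pi_map = np.array(mu0, float), -np.inf   # pi_MAP = 0

    def log_pi(e2, lg, sigma):
        # log pi_t(theta) = log l(y | theta, sigma) + log g_theta(theta), Eq. (19)
        return -0.5 * K * np.log(2 * np.pi * sigma**2) - e2 / (2 * sigma**2) + lg

    TH, E2, LQ = [], [], []
    for t in range(T):
        # (a) Sampling: draw N particles and compute log-weights, Eq. (20)
        th = rng.multivariate_normal(mu, C, size=N)
        e2 = np.array([np.sum((y - f(x)) ** 2) for x in th])   # errors e_t^(n)
        lg = np.array([log_gtheta(x) for x in th])
        lp = log_pi(e2, lg, sigma_ml)
        lq = multivariate_normal.logpdf(th, mean=mu, cov=C)
        log_w = lp - lq
        # (b) Current maximum estimates
        j = int(np.argmax(lp))
        sigma_t = np.sqrt(e2[j] / K)
        # (c) Global maximum estimates (automatic tempering)
        sigma_ml = min(sigma_ml, sigma_t)
        if lp[j] >= log_pi_map:
            theta_map, log_pi_map = th[j].copy(), lp[j]
        # (d) Adaptation: proposal mean at the MAP, weighted covariance
        wb = np.exp(log_w - np.max(log_w)); wb /= wb.sum()
        d = th - wb @ th
        C = (d * wb[:, None]).T @ d + eps * np.eye(M)
        mu = theta_map.copy()
        TH.append(th); E2.append(e2); LQ.append(lq)

    # Output: corrected weights, Eq. (22). Since w_t * pi_{T+1}/pi_t =
    # pi_{T+1}/q_t, the stored errors are reused with no new model calls.
    TH, E2, LQ = np.vstack(TH), np.concatenate(E2), np.concatenate(LQ)
    LG = np.array([log_gtheta(x) for x in TH])
    log_wt = log_pi(E2, LG, sigma_ml) - LQ
    w = np.exp(log_wt - np.max(log_wt)); w /= w.sum()
    return dict(theta_map=theta_map, sigma_ml=sigma_ml,
                thetas=TH, weights=w, e2=E2, log_q=LQ, log_g=LG)
```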

4.1. With a Generic Prior $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$

The ATAIS algorithm is based on the fact that $\boldsymbol{\theta}_{\text{MAP}}$ does not depend on σ. This allows us to progressively estimate it by targeting the sequence of tempered posteriors $\pi_t(\boldsymbol{\theta}) \propto p(\boldsymbol{\theta}\mid\mathbf{y},\hat{\sigma}_{\text{ML}}^{(t)})$, which all share the same MAP. However, in the case where $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ is not uniform, we generally have one $\boldsymbol{\theta}_{\text{MAP}}(\sigma)$ for each $p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)$, and the sequence $\boldsymbol{\theta}_{\text{MAP}}(\hat{\sigma}_{\text{ML}}^{(t)})$ may not approach $\boldsymbol{\theta}_{\text{MAP}}(\sigma_{\text{ML}})$.

If the data are informative and the prior $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ is chosen such that it is vague with respect to the likelihood, the position of $\boldsymbol{\theta}_{\text{MAP}}(\sigma)$ is not very sensitive to the value of σ. Namely, we have $\boldsymbol{\theta}_{\text{MAP}}(\hat{\sigma}_{\text{ML}}^{(1)}) \approx \boldsymbol{\theta}_{\text{MAP}}(\hat{\sigma}_{\text{ML}}^{(2)}) \approx \cdots \approx \boldsymbol{\theta}_{\text{MAP}}(\sigma_{\text{ML}})$, and thus our algorithm can be applied in this context. When the data are not informative, we should use an even more vague prior (i.e., wider than the likelihood function) in order to maintain the usefulness of the algorithm.

5. Complete Bayesian Inference with ATAIS

Let us assume that we have a proper prior $g_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ and that we introduce another proper prior $g_{\sigma}(\sigma)$ for σ. The outputs of the ATAIS algorithm can serve to approximate the normalizing constant of the joint posterior $p(\boldsymbol{\theta},\sigma\mid\mathbf{y}) \propto \ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, g_{\sigma}(\sigma)$, i.e., the so-called marginal likelihood or Bayesian model evidence, given by

$$Z = \int_{\mathbb{R}^+}\int_{\Theta} \ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, g_{\sigma}(\sigma)\, d\boldsymbol{\theta}\, d\sigma = \int_{\mathbb{R}^+} Z(\sigma)\, g_{\sigma}(\sigma)\, d\sigma, \qquad (23)$$

where we have denoted $Z(\sigma) = \int_{\Theta} \ell(\mathbf{y}\mid\boldsymbol{\theta},\sigma)\, g_{\boldsymbol{\theta}}(\boldsymbol{\theta})\, d\boldsymbol{\theta}$, usually called the conditional marginal likelihood. The quantity Z is useful for model selection purposes. Furthermore, a complete Bayesian study of the joint posterior $p(\boldsymbol{\theta},\sigma\mid\mathbf{y})$ can be provided as well.
Approximation of $Z(\sigma) = p(\mathbf{y}\mid\sigma)$. After the T iterations of ATAIS, we can also approximate the conditional marginal likelihood $Z(\sigma) = p(\mathbf{y}\mid\sigma)$ without additional evaluations of the target function. Indeed, saving the error value of each particle, obtained during ATAIS for the computation of the likelihood function,

$$e_t^{(n)} = \|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta}_t^{(n)})\|^2,$$

we can compute the IS weights

$$\rho_t^{(n)}(\sigma) = \frac{1}{(2\pi\sigma^2)^{K/2}} \exp\left(-\frac{e_t^{(n)}}{2\sigma^2}\right) \frac{g_{\boldsymbol{\theta}}(\boldsymbol{\theta}_t^{(n)})}{q(\boldsymbol{\theta}_t^{(n)}\mid\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)}, \qquad (24)$$

for a generic value of σ. Thus, the IS estimator of the conditional marginal likelihood $Z(\sigma)$ is given by the arithmetic mean of the weights $\rho_t^{(n)}(\sigma)$,

$$\hat{Z}(\sigma) = \hat{p}(\mathbf{y}\mid\sigma) = \frac{1}{NT}\sum_{t=1}^{T}\sum_{n=1}^{N} \rho_t^{(n)}(\sigma). \qquad (25)$$
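A possible implementation of the estimator in Equation (25), recycling the quantities stored by the atais() sketch above (an assumption of this illustration), is:

```python
import numpy as np

def Z_hat_sigma(sigma, e2, log_g, log_q, K):
    """IS estimator of Z(sigma) = p(y|sigma), Eqs. (24)-(25), recycling the
    errors e_t^(n) and the prior/proposal log-values stored during ATAIS."""
    log_rho = (-0.5 * K * np.log(2 * np.pi * sigma**2)
               - e2 / (2 * sigma**2) + log_g - log_q)    # log of Eq. (24)
    return float(np.mean(np.exp(log_rho)))               # Eq. (25)
```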
Approximation of Z. Drawing $\sigma^{(r)} \sim g_{\sigma}(\sigma)$, for $r = 1,\ldots,R$ (or considering a deterministic grid, e.g., as in Riemannian integration), we can approximate the global marginal likelihood Z by applying a simple Monte Carlo scheme to the integral in Equation (23),

$$\hat{Z} = \frac{1}{R}\sum_{r=1}^{R} \hat{Z}(\sigma^{(r)}). \qquad (26)$$
Approximation of $p(\sigma\mid\mathbf{y})$. An approximation of the marginal posterior $p(\sigma\mid\mathbf{y}) = \frac{p(\mathbf{y}\mid\sigma)\, g_{\sigma}(\sigma)}{p(\mathbf{y})}$ can also be obtained as

$$p(\sigma\mid\mathbf{y}) \approx \hat{p}(\sigma\mid\mathbf{y}) = \frac{\hat{Z}(\sigma)\, g_{\sigma}(\sigma)}{\hat{Z}}, \qquad (27)$$

which can be used to approximate, e.g., the MAP of $p(\sigma\mid\mathbf{y})$ by $\sigma_{\text{MAP-marg}} = \arg\max_{\sigma} \hat{Z}(\sigma)\, g_{\sigma}(\sigma)$. Other moments of $p(\sigma\mid\mathbf{y})$ can be computed by a deterministic quadrature (as the problem is now one-dimensional) or by applying noisy Monte Carlo approaches. The sketch below illustrates these one-dimensional approximations.
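For instance, assuming a uniform prior $g_{\sigma}(\sigma)$ on $(0,\sigma_{\max}]$ (as in the first experiment of Section 6), $\hat{Z}$, $\hat{p}(\sigma\mid\mathbf{y})$, and $\sigma_{\text{MAP-marg}}$ can all be approximated on a one-dimensional grid (a sketch reusing the functions above):

```python
def marginal_sigma_summary(out, K, sigma_max=20.0, n_grid=400):
    """Grid versions of Eqs. (26)-(27); 'out' is the dictionary returned by
    the atais() sketch, and g_sigma is uniform on (0, sigma_max]."""
    sig = np.linspace(sigma_max / n_grid, sigma_max, n_grid)
    Zs = np.array([Z_hat_sigma(s, out["e2"], out["log_g"], out["log_q"], K)
                   for s in sig])
    unnorm = Zs / sigma_max                  # Z_hat(sigma) * g_sigma(sigma)
    Z_hat = np.trapz(unnorm, sig)            # deterministic version of Eq. (26)
    p_post = unnorm / Z_hat                  # p_hat(sigma | y), Eq. (27)
    return Z_hat, sig, p_post, sig[np.argmax(unnorm)]   # last: sigma_MAP-marg
```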
Complete Bayesian analysis. We can approximate the integral of interest as

$$I = \int_{\mathbb{R}^+}\int_{\Theta} h(\boldsymbol{\theta},\sigma)\, p(\boldsymbol{\theta},\sigma\mid\mathbf{y})\, d\boldsymbol{\theta}\, d\sigma \qquad (28)$$
$$= \int_{\mathbb{R}^+}\int_{\Theta} h(\boldsymbol{\theta},\sigma)\, p(\boldsymbol{\theta}\mid\mathbf{y},\sigma)\, p(\sigma\mid\mathbf{y})\, d\boldsymbol{\theta}\, d\sigma \qquad (29)$$
$$\approx \frac{1}{J}\sum_{j=1}^{J}\sum_{t=1}^{T}\sum_{n=1}^{N} \bar{\rho}_t^{(n)}(\sigma^{(j)})\, h(\boldsymbol{\theta}_t^{(n)},\sigma^{(j)}), \qquad (30)$$

where

$$\bar{\rho}_t^{(n)}(\sigma^{(j)}) = \frac{\rho_t^{(n)}(\sigma^{(j)})}{\sum_{\tau=1}^{T}\sum_{i=1}^{N} \rho_{\tau}^{(i)}(\sigma^{(j)})}, \qquad (31)$$

and the $\sigma^{(j)}$ are generated by applying a noisy MCMC method with invariant density $\hat{p}(\sigma\mid\mathbf{y}) \propto \hat{Z}(\sigma)\, g_{\sigma}(\sigma)$. Note that the samples $\boldsymbol{\theta}_t^{(n)}$ do not depend on the index j (they do not change), as we are recycling the particles generated by ATAIS and reusing the evaluations $e_t^{(n)} = \|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta}_t^{(n)})\|^2$.
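A sketch of the estimator in Equation (30) follows (interfaces as above; the draws $\sigma^{(j)}$ are assumed to come from $\hat{p}(\sigma\mid\mathbf{y})$, e.g., via a noisy MCMC method or by resampling the grid approximation above):

```python
def expectation_h(h, sigma_draws, out, K):
    """Estimator (30): for each sigma^(j), the recycled ATAIS particles are
    reweighted with the normalized weights of Eq. (31) and then averaged."""
    acc = 0.0
    for s in sigma_draws:                    # sigma^(j) ~ p_hat(sigma | y)
        log_rho = (-0.5 * K * np.log(2 * np.pi * s**2)
                   - out["e2"] / (2 * s**2) + out["log_g"] - out["log_q"])
        rho = np.exp(log_rho - np.max(log_rho))
        rho /= rho.sum()                                  # Eq. (31)
        acc += np.sum(rho * np.array([h(th, s) for th in out["thetas"]]))
    return acc / len(sigma_draws)
```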

6. Simulations

We test the proposed scheme in two numerical examples. The first numerical experiment is a simple bidimensional example (which is easy to reproduce). The second experiment considers a real-world application, i.e., a radial velocity model of exoplanet systems, which is often employed in astronomy (with inference problems of dimension 6 and 11).

6.1. First Numerical Analysis

For the sake of simplicity, let us consider $\theta \in \mathbb{R}$ and an observation model given by the equation

$$y_k = \theta^2 + \log\left(|\sin(10\,\theta)|\right) + v_k,$$

so that $f(\theta) = \theta^2 + \log(|\sin(10\,\theta)|)$ and $v_k \sim \mathcal{N}(0,\sigma^2)$. We consider $\theta_{\text{true}} = 2.5$ and $\sigma_{\text{true}} = 4$, and we generate $K = 8$ observations from the model above. We also consider a uniform prior for θ on $(0, 20]$. The conditional posterior $p(\theta\mid\mathbf{y},\sigma_{\text{true}})$ is shown in Figure 1c; we can observe that it is highly multimodal. Figure 1 also depicts the conditional posteriors $p(\theta\mid\mathbf{y},\sigma)$ for $\sigma \in \{10, 20\}$. Considering also a uniform prior over σ on $(0, 20]$, we have a bidimensional joint posterior over $[\theta,\sigma]$, which is depicted in Figure 2a.
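Under the reconstruction above, the data generation and setup of this experiment can be sketched as follows (using the illustrative atais() implementation of Section 4; the initialization values follow the text):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma_true, K = 2.5, 4.0, 8

def f(th):
    # Model of Section 6.1: every observation shares the same mean f(theta).
    return (th[0] ** 2 + np.log(np.abs(np.sin(10.0 * th[0])))) * np.ones(K)

y = f(np.array([theta_true])) + sigma_true * rng.normal(size=K)

# Uniform prior on (0, 20]; mu_0 = 10, lambda_0 = 4, sigma_ML^(0) = 20, T = 10.
log_g = lambda th: 0.0 if 0.0 < th[0] <= 20.0 else -np.inf
out = atais(y, f, log_g, mu0=[10.0], C0=[[4.0]], sigma0=20.0, N=100, T=10)
print(out["theta_map"], out["sigma_ml"])
```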
In this bidimensional example, it is possible to obtain the ground truths using an expensive, fine grid. We show the ground truths of the different pdfs in Table 1. Moreover, the true value of the complete evidence is $Z = p(\mathbf{y}) = 1.5983 \times 10^{-9}$. As the prior over σ is uniform, the maximum likelihood estimate of σ coincides with the joint MAP, $\sigma_{\text{ML}} = \sigma_{\text{MAP-joint}} = 3.23$. The two marginal posteriors are shown in Figure 2b,c.
We apply ATAIS with the goal of estimating the expected value and the variance of the posterior density with respect to θ. We consider a Gaussian proposal $q(\theta\mid\mu_t,\lambda_t)$ with $\mu_0 = 10$ and a starting variance $\lambda_0 = 4$. Note that $\mu_0$ is located in a region that does not contain modes. We also start with $\hat{\sigma}_{\text{ML}}^{(0)} = 20$ and $\pi_{\text{MAP}} = 0$ (initial conditions). The Mean Square Error (MSE) of ATAIS, averaged over 500 runs, in the estimation of different moments and modes as a function of N (with T = 10), is given in Table 2. The ML estimate $\hat{\sigma}_{\text{ML}}^{(t)}$, as a function of the iteration t (with N = 5), is shown in Figure 3a for different runs. The approximation of the marginal posterior $p(\sigma\mid\mathbf{y})$, denoted $\hat{p}(\sigma\mid\mathbf{y})$, is obtained as in Equation (27) in one specific run, with different $N \in \{10, 100, 500\}$ and T = 10 (see Figure 3b). The approximations of the joint posterior $p(\theta,\sigma\mid\mathbf{y})$ and the marginal posterior $p(\theta\mid\mathbf{y})$, obtained by resampling the particles according to the normalized weights in Equations (31) and (24), are shown in Figure 4, i.e., using a sampling importance resampling procedure. For more details, see [14] and Chapter 24 of [15].

6.2. Radial Velocity Curves of Exoplanets and Binary Systems

In this example, we consider an application with an astronomical model. In recent years, the problem of revealing objects orbiting other stars has attracted considerable attention. Different techniques have been proposed to discover exo-objects but, nowadays, the radial velocity technique is still the most used [16,17,18,19]. The problem consists in fitting a model (the so-called radial velocity curve) to data acquired at different times spanning long periods (up to years). The model is highly nonlinear, and it is costly in terms of computation time (especially for certain sets of parameters). Obtaining a value to compare with a single observation involves numerically integrating a differential equation in time, or an iterative procedure for solving a nonlinear equation. Typically, the iteration is performed until a tolerance threshold is reached or $10^6$ iterations are performed. Radial velocity curve fitting also appears in several related applications.
Observation model—likelihood. When analyzing the radial velocity data of an exoplanetary system, it is commonly accepted that the wobbling of the star around the center of mass is caused by the sum of the gravitational forces of the planets acting independently, i.e., that the planets do not interact with each other. Each planet follows a Keplerian orbit, and the radial velocity of the host star is given by

$$y_{r,t} = V_0 + \sum_{i=1}^{S} A_i \left[\cos\left(u_{i,t} + \omega_i\right) + e_i \cos\omega_i\right] + \xi_t, \qquad (32)$$

with $t = 1,\ldots,T$ and $r = 1,\ldots,R$. In this equation, $A_i$ is the amplitude of the curve, $\omega_i$ is the argument of perigee, and $e_i$ is the eccentricity of the orbit of the i-th planet. The parameter $V_0$ represents the mean velocity and is common to all the planets. The number of objects in the system is S, which is considered known in this experiment (for the sake of simplicity). Both $y_{r,t}$ and $u_{i,t}$ depend on the time t, and $\xi_t$ is a Gaussian noise perturbation with variance $\sigma^2$. The likelihood function is defined by (32) and some indicator variables described below. The angle $u_{i,t}$ is the true anomaly of planet i, and it can be determined from

$$\frac{du_{i,t}}{dt} = \frac{2\pi}{P_i}\,\frac{\left(1 + e_i\cos u_{i,t}\right)^2}{\left(1 - e_i^2\right)^{3/2}}. \qquad (33)$$

This equation has an analytical solution, so the true anomaly $u_{i,t}$ can be determined from the mean anomaly $M_{i,t}$. However, the analytical solution contains a nonlinear term that needs to be determined iteratively. First, we define the mean anomaly as

$$M_{i,t} = \frac{2\pi}{P_i}\left(t - \tau_i\right), \qquad (34)$$

where $\tau_i$ is the time of the periastron passage of planet i and $P_i$ is the period of its orbit. The mean anomaly is then related to the eccentric anomaly $E_{i,t}$ through Kepler's equation,

$$M_{i,t} = E_{i,t} - e_i \sin E_{i,t}. \qquad (35)$$

Equation (35) has no analytical solution, and it must be solved by an iterative procedure; a Newton–Raphson method is typically used to find the roots of this equation [20]. For certain sets of parameters, this iterative procedure can be particularly slow. Finally, the true anomaly is obtained from the eccentric anomaly via

$$\tan\frac{u_{i,t}}{2} = \sqrt{\frac{1+e_i}{1-e_i}}\,\tan\frac{E_{i,t}}{2}. \qquad (36)$$

The variable of interest θ is then the vector

$$\boldsymbol{\theta} = [V_0, A_1, \omega_1, e_1, P_1, \tau_1, \ldots, A_S, \omega_S, e_S, P_S, \tau_S].$$

Then, for a single object (e.g., a planet or a natural satellite), the dimension of θ is $M = 5 + 1 = 6$; with two objects, the dimension of θ is $M = 5\cdot 2 + 1 = 11$, etc. A sketch of the anomaly computation and of the resulting radial velocity curve is given below.
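For illustration, a standard Newton–Raphson solver for Kepler's equation (35), together with the conversions (34) and (36) and the resulting curve (32), can be sketched as follows (a textbook implementation, not the authors' code; the parameter ordering follows the vector θ above):

```python
import numpy as np

def true_anomaly(t, P, e, tau, tol=1e-10, max_iter=100):
    """Newton-Raphson solution of Kepler's equation (35), then conversion
    to the true anomaly via Eq. (36)."""
    M = 2.0 * np.pi * (np.asarray(t, dtype=float) - tau) / P     # Eq. (34)
    E = M.copy()                               # usual starting point E_0 = M
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))     # g(E)/g'(E)
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return 2.0 * np.arctan(np.sqrt((1.0 + e) / (1.0 - e)) * np.tan(E / 2.0))

def radial_velocity(t, theta, S):
    """Radial velocity curve of Eq. (32) for S planets, with
    theta = [V0, A_1, w_1, e_1, P_1, tau_1, ..., A_S, w_S, e_S, P_S, tau_S]."""
    v = np.full(np.asarray(t, dtype=float).shape, float(theta[0]))
    for i in range(S):
        A, w, e, P, tau = theta[1 + 5 * i: 6 + 5 * i]
        u = true_anomaly(t, P, e, tau)
        v += A * (np.cos(u + w) + e * np.cos(w))
    return v
```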
This example consists of a synthetic radial velocity curve of a planetary system with one planet or two planets (i.e., S = 1 or S = 2). More specifically, we generate simulated data with a model with two planets. The orbital parameters of the planets are listed in Table 3, where P is the period of the orbit, A is the amplitude of the curve, e is the eccentricity of the orbit, ω is the argument of perigee, and τ is the last periastron passage. A mean velocity $V_0 = 5$ m s⁻¹ is assumed. A Gaussian noise perturbation is added with a standard deviation $\sigma = 3$ m s⁻¹. To simulate the observations, a total of K = 120 data points are selected from three random time periods (with two planets in the system). Note that the amplitude of the radial velocity curve of the second planet is close to the noise level. We run ATAIS and a standard AIS scheme both with the one-planet model and with the two-planet model. The purpose of this simulation is to check the ability of the method to detect the two planets (by approximating the model evidence).
We apply ATAIS and a standard AIS scheme [13] over the space $[\boldsymbol{\theta},\sigma]$ for approximating the model evidence $Z = p(\mathbf{y})$ (marginal likelihood) of both models (one planet or two planets) given the data (generated considering two planets). Uniform priors are considered for each parameter: $P \in [0, 365]$, $A \in [-20, 20]$, $e \in [0, 1]$, $\omega \in [0, 2\pi]$, and $\tau \in [0, 50]$ (moreover, $\sigma \in [0, 30]$ for the standard AIS scheme). The ATAIS algorithm and the standard AIS scheme were run with $N = 10^6$ and T = 50 iterations, for both the model with one planet and the model with two planets. In both cases, we consider the same Gaussian proposal with a starting standard deviation of 5 for each component (note that the standard AIS scheme works in a higher-dimensional space due to the inference over σ). To decide which model is more probable, the model evidence Z of each model is estimated. More specifically, we approximate the evidence of the one-planet model, $\hat{Z}_1 = \hat{p}_1(\mathbf{y})$, and of the two-planet model, $\hat{Z}_2 = \hat{p}_2(\mathbf{y})$, with the ATAIS algorithm and the standard AIS scheme. When $\hat{Z}_1 > \hat{Z}_2$, we select the first model; otherwise, if $\hat{Z}_1 < \hat{Z}_2$, we select the second one. The true model is the two-planet model, as the simulated data were generated from that model. After 500 independent runs, the percentage of correct detection of the true model is 98% for ATAIS, whereas it is only 56% for the standard AIS scheme. This is due to the difficulty of performing inference jointly over $[\boldsymbol{\theta},\sigma]$. Let us denote the Bayes factor as $B = Z_2/Z_1$. With ATAIS, the expected value of the ratio between the model evidences (averaged over the 500 runs) is $E[B] \approx 5\times 10^3$, with a relative variance of $\frac{E[(B-E[B])^2]}{E[B]^2} \approx 0.04$. In the case of the standard AIS, we have $E[B] \approx 16.32$ and $\frac{E[(B-E[B])^2]}{E[B]^2} \approx 0.15$. Therefore, for ATAIS, the model with two planets is clearly more probable than the model with one planet.
The fitted curves, corresponding to the vector of parameters $\hat{\boldsymbol{\theta}}_{\text{MAP}}$ obtained with ATAIS, are shown in Figure 5. From the figure, it is not clear which model fits the simulated observations (blue points) better, although the model with two planets seems to fit the observations in the time period from 200 to 300 days more closely. The values of $\hat{\boldsymbol{\theta}}_{\text{MAP}}$, obtained in one specific run by ATAIS, are given in Table 4. We notice that ω and τ are highly correlated, and more iterations may be needed to obtain the actual global maximum, but the remaining parameters in $\hat{\boldsymbol{\theta}}_{\text{MAP}}$ are similar to the simulated values. In addition, the amplitude of the curve of the second planet is close to the intensity of the noise, making it difficult to derive the best fit for that planet. Summarizing, our results show that the method is able to discriminate between a model with one planet (an inference problem of dimension 6) and a model with two planets (dimension 11), for this particular simulation. Finally, the evolution of the automatic tempering parameter $\hat{\sigma}_{\text{ML}}^{(t)}$ is shown in Figure 6. The dashed line is the evolution of $\hat{\sigma}_{\text{ML}}^{(t)}$ for the single-planet model, whereas the continuous line is the evolution for the model with two planets. In this second model, the tempering parameter reaches a smaller value, as expected.

7. Conclusions

We have proposed a novel AIS scheme for Bayesian inversion problems in which an automatic tempering procedure is implemented (called ATAIS). The inference of the variables of interest θ and of the noise power σ² is split: a sampling strategy is considered for θ, whereas an optimization approach is employed for σ². Thus, ATAIS performs an iterative procedure, alternating sampling and optimization steps, and deals with a sequence of tempered posteriors defined according to the current estimate of the noise power. We have also discussed the possibility of approximating the marginal posterior of σ without additional evaluations of the complex model. Furthermore, the complete Bayesian analysis regarding the joint posterior is possible, as discussed in Section 5, again without any additional evaluations of the likelihood function.
Several simulations have been provided, including the application to a sophisticated astronomical model, where the number of planets in the system is detected by analyzing the marginal likelihood. The results show the benefits of the proposed scheme. For instance, in the astronomical example, the percentage of correct detection of the true model obtained by ATAIS is ≈98%, whereas it is only ≈56% with the standard AIS scheme. As future research, we plan to extend the ATAIS scheme in order to deal with observation models with correlated noise perturbations (for instance, using a Gaussian process). Moreover, the use of parallel AIS schemes (or MCMC algorithms) will also be considered. A combination of parallel MCMC chains and AIS schemes can be found in the so-called layered AIS method and other similar approaches [21,22]. This idea seems particularly interesting for improving the inference with radial velocity models.

Author Contributions

All the authors contributed equally to all the sections. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Office of Naval Research (award no. N00014-19-1-2226), the Spanish Ministry of Science and Innovation (RTI2018-099655-B-I00 CLARA and PID2019-105032GB-I00 SPGRAPH), the Community of Madrid in the framework of the Multiannual Agreement with the Rey Juan Carlos University (line of action 1, Encouragement of Young Ph.D. Students' Research, Project under Grant F661), and the regional Government of Madrid (Comunidad de Madrid, reference Y2018/TCS-4705 PRACTICO).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. On the Optimization of the Likelihood Function

Let us set $\delta = \sigma^2$ and consider the optimization of the likelihood function

$$\ell(\boldsymbol{\theta},\delta) = \frac{1}{(2\pi\delta)^{K/2}} \exp\left(-\frac{V(\boldsymbol{\theta})}{\delta}\right).$$

Recall that, in our model, $V(\boldsymbol{\theta}) = \frac{1}{2}\,\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\|^2$, so that the expression above matches the likelihood in Equations (4) and (5). We wish to obtain

$$[\boldsymbol{\theta}_{\text{ML}}, \delta_{\text{ML}}] = \arg\max_{\boldsymbol{\theta},\,\delta}\; \ell(\boldsymbol{\theta},\delta).$$

Setting the gradient equal to zero,

$$\nabla_{\boldsymbol{\theta}}\,\ell(\boldsymbol{\theta},\delta) = -\frac{1}{\delta}\,\nabla_{\boldsymbol{\theta}} V(\boldsymbol{\theta})\;\frac{1}{(2\pi\delta)^{K/2}} \exp\left(-\frac{V(\boldsymbol{\theta})}{\delta}\right) = 0 \;\Longrightarrow\; \nabla_{\boldsymbol{\theta}} V(\boldsymbol{\theta}) = \mathbf{0},$$

$$\frac{\partial\,\ell(\boldsymbol{\theta},\delta)}{\partial\delta} = \frac{e^{-\frac{V(\boldsymbol{\theta})}{\delta}}}{(2\pi)^{K/2}}\;\frac{V(\boldsymbol{\theta})-\frac{K}{2}\,\delta}{\delta^{\frac{K}{2}+2}} = 0 \;\Longrightarrow\; \delta = \frac{2}{K}\,V(\boldsymbol{\theta}).$$

We have thus obtained that the ML solution is defined by the system of equations

$$\nabla_{\boldsymbol{\theta}} V(\boldsymbol{\theta}_{\text{ML}}) = \mathbf{0}, \qquad \delta_{\text{ML}} = \frac{2}{K}\,V(\boldsymbol{\theta}_{\text{ML}}),$$

i.e., $\sigma_{\text{ML}}^2 = \delta_{\text{ML}} = \frac{1}{K}\,\|\mathbf{y}-\mathbf{f}(\boldsymbol{\theta}_{\text{ML}})\|^2$, in agreement with Equation (16).

References

  1. Fitzgerald, W.J. Markov chain Monte Carlo methods with applications to signal processing. Signal Process. 2001, 81, 3–18.
  2. Andrieu, C.; de Freitas, N.; Doucet, A.; Jordan, M. An Introduction to MCMC for Machine Learning. Mach. Learn. 2003, 50, 5–43.
  3. Martino, L.; Míguez, J. Generalized Rejection Sampling Schemes and Applications in Signal Processing. Signal Process. 2010, 90, 2981–2995.
  4. Robert, C.P.; Casella, G. Monte Carlo Statistical Methods; Springer: New York, NY, USA, 2004.
  5. Liu, J.S. Monte Carlo Strategies in Scientific Computing; Springer: New York, NY, USA, 2004.
  6. Martino, L.; Luengo, D.; Míguez, J. Independent Random Sampling Methods; Springer: New York, NY, USA, 2018.
  7. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
  8. Marinari, E.; Parisi, G. Simulated Tempering: A New Monte Carlo Scheme. Europhys. Lett. 1992, 19, 451–458.
  9. Friel, N.; Pettitt, A.N. Marginal Likelihood Estimation via Power Posteriors. J. R. Stat. Soc. Ser. B Stat. Methodol. 2008, 70, 589–607.
  10. Del Moral, P.; Doucet, A.; Jasra, A. Sequential Monte Carlo samplers. J. R. Stat. Soc. Ser. B Stat. Methodol. 2006, 68, 411–436.
  11. Neal, R.M. Annealed importance sampling. Stat. Comput. 2001, 11, 125–139.
  12. Llorente, F.; Martino, L.; Delgado, D.; López-Santiago, J. Marginal likelihood computation for model selection and hypothesis testing: An extensive review. arXiv 2020, arXiv:2005.08334.
  13. Bugallo, M.F.; Martino, L.; Corander, J. Adaptive importance sampling in signal processing. Digit. Signal Process. 2015, 47, 36–49.
  14. Rubin, D.B. Using the SIR algorithm to simulate posterior distributions. In Bayesian Statistics 3; Bernardo, J.M., DeGroot, M.H., Lindley, D.V., Smith, A.F.M., Eds.; Oxford University Press: Oxford, UK, 1988.
  15. Gelman, A.; Meng, X.L. Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives; John Wiley & Sons: New York, NY, USA, 2018.
  16. Gregory, P.C. Bayesian re-analysis of the Gliese 581 exoplanet system. Mon. Not. R. Astron. Soc. 2011, 415, 2523–2545.
  17. Barros, S.C.C.; Brown, D.J.A.; Hébrard, G.; Gómez Maqueo Chew, Y.; Anderson, D.R.; Boumis, P.; Delrez, L.; Hay, K.L.; Lam, K.W.F.; Llama, J.; et al. WASP-113b and WASP-114b, two inflated hot Jupiters with contrasting densities. Astron. Astrophys. 2016, 593, A113.
  18. Affer, L.; Damasso, M.; Micela, G.; Poretti, E.; Scandariato, G.; Maldonado, J.; Lanza, A.F.; Covino, E.; Garrido Rubio, A.; González Hernández, J.I.; et al. HADES RV program with HARPS-N at the TNG. IX. A super-Earth around the M dwarf Gl 686. Astron. Astrophys. 2019, 622, A193.
  19. Trifonov, T.; Stock, S.; Henning, T.; Reffert, S.; Kürster, M.; Lee, M.H.; Bitsch, B.; Butler, R.P.; Vogt, S.S. Two Jovian Planets around the Giant Star HD 202696: A Growing Population of Packed Massive Planetary Pairs around Massive Stars? Astron. J. 2019, 157, 93.
  20. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes in C++: The Art of Scientific Computing; Cambridge University Press: Cambridge, UK, 2002.
  21. Martino, L.; Elvira, V.; Luengo, D.; Corander, J. Layered Adaptive Importance Sampling. Stat. Comput. 2017, 27, 599–623.
  22. Botev, Z.I.; L'Ecuyer, P.; Tuffin, B. Markov chain importance sampling with applications to rare event probability estimation. Stat. Comput. 2013, 23, 271–285.
Figure 1. Conditional posteriors corresponding to different values of σ: (a) σ = 20, (b) σ = 10, and (c) σ = σ_true = 4.
Figure 2. The bidimensional joint posterior p(θ, σ | y) and the two marginal posteriors p(θ | y), p(σ | y) in Equation (11), computed by using a fine grid approximation.
Figure 3. (a) The maximum likelihood (ML) estimate σ̂_ML^(t) (different runs) versus the number of iterations t, with N = 5. (b) The true marginal posterior p(σ | y) and different approximations p̂(σ | y), in one specific run, obtained as in Equation (27) with different N ∈ {10, 100, 500} and T = 10 (thus, the total number of samples is NT).
Figure 4. Approximations obtained with ATAIS. (a,b) Joint posterior p(θ, σ | y): (a) by a histogram with 2 × 10⁶ samples; (b) 10⁴ samples from the joint posterior obtained by ATAIS. (c) Approximation, by a histogram with 2 × 10⁶ samples, of the marginal posterior p(θ | y).
Figure 5. Comparison of the results of the ATAIS algorithm with the simulations (blue dots). The left panel shows, in gray, the radial velocity curve for θ̂_MAP using a model with one planet. The right panel is like the left panel but considering a model with two planets.
Figure 6. Evolution of the tempering parameter σ̂_ML^(t). We set σ̂_ML^(0) = 50 as the starting value (the figure shows from t = 1), which is an arbitrary high value chosen to help the exploration in the first iteration. However, after the first iteration, the algorithm is able to obtain reasonable values of σ̂_ML^(1). The dashed line is the evolution for the model with one planet. The continuous line is the evolution for the two-planet model.
Table 1. Summary of pdfs and ground truths for the first numerical experiment.

| Pdf | Expectation | Variance | MAP |
|---|---|---|---|
| $p(\theta\mid\mathbf{y},\sigma_{\text{ML}})$ | 2.48 | 0.11 | 2.56 |
| $p(\sigma\mid\mathbf{y})$ | 4.32 | 2.43 | 3.46 |
| $p(\theta\mid\mathbf{y})$ | 2.46 | 0.18 | 2.56 |
Table 2. Mean Square Error (MSE) of ATAIS (averaged over 500 runs) in the estimation of the evidence and of different moments and modes, as a function of N, with T = 10.

| Value | N = 10 | N = 100 | N = 1000 | N = 5000 | Ground Truth |
|---|---|---|---|---|---|
| $E[\theta\mid\mathbf{y},\sigma_{\text{ML}}]$ | 0.0311 | 0.0098 | 0.0034 | 0.0024 | 2.48 |
| $\text{var}[\theta\mid\mathbf{y},\sigma_{\text{ML}}]$ | 0.0474 | 0.0370 | 0.0298 | 0.0201 | 0.11 |
| $\theta_{\text{MAP}}$ | 0.0410 | 0.0337 | 0.0285 | 0.0127 | 2.56 |
| $E[\sigma\mid\mathbf{y}]$ | 0.9233 | 0.0785 | 0.0097 | 0.0023 | 4.32 |
| $\text{var}[\sigma\mid\mathbf{y}]$ | 6.1869 | 0.2640 | 0.0035 | 0.0010 | 2.43 |
| $\sigma_{\text{MAP-marg}}$ | 0.0056 | 0.0004 | 0.0001 | $3\times 10^{-5}$ | 3.46 |
| $\sigma_{\text{ML}}$ | $8\times 10^{-5}$ | $2\times 10^{-5}$ | $5\times 10^{-7}$ | $6\times 10^{-9}$ | 3.23 |
| $Z = p(\mathbf{y})$ | $2\times 10^{-18}$ | $1.8\times 10^{-20}$ | $1.4\times 10^{-20}$ | $3.6\times 10^{-22}$ | $1.6\times 10^{-9}$ |
Table 3. Main orbital parameters of the two exoplanets in the simulation.

| Parameter | Planet 1 | Planet 2 |
|---|---|---|
| P | 15 d | 115 d |
| A | 25 m s⁻¹ | 5 m s⁻¹ |
| e | 0.1 | 0.0 |
| ω | 0.61 rad | 0.17 rad |
| τ | 3 d | 24 d |
Table 4. The value of θ̂_MAP and the variances of the marginal posteriors for the two-planet model (with K = 120 data points).

| Parameter | $\hat{\theta}_{\text{MAP}}$ (Planet 1) | $\text{Var}(\theta\mid\mathbf{y})$ (Planet 1) | $\hat{\theta}_{\text{MAP}}$ (Planet 2) | $\text{Var}(\theta\mid\mathbf{y})$ (Planet 2) |
|---|---|---|---|---|
| P | 14.99 d | 0.18 | 110.39 d | 11.28 |
| A | 23.78 m s⁻¹ | 0.52 | 3.50 m s⁻¹ | 0.44 |
| e | 0.05 | 0.047 | 0.00 | 0.003 |
| ω | 7.69 rad | 0.61 | 0.68 rad | 0.82 |
| τ | 6.8 d | 0.76 | 7.96 d | 20.31 |