
Bayesian marginal inference via candidate's formula

  • Published in: Statistics and Computing

Abstract

Computing marginal probabilities is a fundamental problem in Bayesian inference. We present a simple computational method that arises from a likelihood identity known as Candidate's formula, which expresses the marginal probability as the ratio of the prior times the likelihood to the posterior density. Based on Markov chain Monte Carlo output simulated from the posterior distribution, a nonparametric kernel estimate is used for the posterior density appearing in that ratio. The resulting nonparametric Candidate's estimate requires the posterior density estimate to be evaluated at only a single point. The optimal evaluation point can be chosen to minimize the expected mean square relative error; the results show that the best point is not necessarily the posterior mode, but rather a point that balances high density against low curvature (Hessian). For high-dimensional problems, we introduce a variance reduction approach to ease the difficulties caused by data sparseness. A simulation study is presented.
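The estimator described above can be sketched in a few lines. The following is a minimal illustration, not the paper's own example: it assumes a toy conjugate normal model (prior θ ~ N(0, τ²), observation y | θ ~ N(θ, σ²)) so that the exact marginal likelihood is available for comparison, and draws from the posterior directly in place of genuine MCMC output. The evaluation point `theta_star` is simply the posterior sample mean here; the paper's point is that a better choice trades off high density against low curvature.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

# Toy conjugate model: theta ~ N(0, tau^2), y | theta ~ N(theta, sigma^2).
# Candidate's formula: m(y) = p(y | theta) p(theta) / p(theta | y), any theta.
sigma, tau, y = 1.0, 2.0, 1.5

# Exact posterior is N(mu_n, s_n^2); we sample it directly as a stand-in
# for MCMC output from a sampler.
prec = 1.0 / sigma**2 + 1.0 / tau**2
s_n = prec ** -0.5
mu_n = (y / sigma**2) / prec
rng = np.random.default_rng(0)
draws = rng.normal(mu_n, s_n, size=20_000)

# Nonparametric kernel estimate of the posterior density, evaluated at a
# single point -- the only density evaluation the estimator needs.
kde = gaussian_kde(draws)
theta_star = draws.mean()  # a convenient high-density evaluation point

m_hat = (norm.pdf(y, loc=theta_star, scale=sigma)        # likelihood
         * norm.pdf(theta_star, loc=0.0, scale=tau)      # prior
         / kde(theta_star)[0])                           # estimated posterior
m_exact = norm.pdf(y, loc=0.0, scale=np.sqrt(sigma**2 + tau**2))
print(m_hat, m_exact)  # the two values agree closely
```

With a sample of this size the kernel estimate at a high-density point is accurate, so the ratio recovers the marginal likelihood to within roughly the kernel's smoothing bias; in higher dimensions the kernel estimate degrades, which is the data-sparseness tension the paper's variance reduction approach addresses.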




Cite this article

Hsiao, C.K., Huang, S.Y. & Chang, C.W. Bayesian marginal inference via candidate's formula. Statistics and Computing 14, 59–66 (2004). https://doi.org/10.1023/B:STCO.0000009416.78796.78
