Abstract
This chapter uses the normal \(\mathcal{N}(\mu,{\sigma }^{2})\) distribution as an accessible entry point to generic Bayesian inferential methods. As in every subsequent chapter, we start with a description of the data used as a chapter benchmark for illustrating new methods and for testing assimilation of the techniques. We then propose a corresponding statistical model centered on the normal distribution and, after setting out the Bayesian resolution of inferential problems, consider specific inferential questions to address at this level, namely parameter estimation, model choice, and outlier detection. As befits a first chapter, we also introduce here general computational techniques known as Monte Carlo methods.
This was where the work really took place.
—Ian Rankin, Knots & Crosses
Notes
1. We will omit the reference to Student in subsequent uses of this distribution, as is the rule in Anglo-Saxon textbooks.
2. Estimators are functions of the data \(\mathcal{D}_{n}\), while estimates are the values taken by those functions. In most cases, we will denote both with a "hat" symbol, the dependence on \(\mathcal{D}_{n}\) being implicit.
3. Harold Jeffreys was an English geophysicist who developed and formalized Bayesian methods in the 1930s in order to analyze geophysical data. He eventually wrote an influential treatise on Bayesian statistics entitled Theory of Probability.
4. There is nothing special about 0.05 when compared with, say, 0.87 or 0.12; the famous 5% level is simply accepted by most as a tolerable level of error. If the context of the analysis tells a different story, another value for α (possibly one that even depends on the data) should be chosen!
5. In the sense of producing the smallest possible volume for a given coverage.
6. This method is named after the central district of Monaco, where the famous Monte-Carlo casino lies.
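As a concrete illustration of the Monte Carlo principle invoked in the abstract and in Note 6, the sketch below approximates a normal tail probability by averaging simulated draws. This is a generic illustration, not the book's own code (the book works in R); the function name `monte_carlo_tail_prob` and the choice of tail cutoff 1.96 are ours.

```python
import random

def monte_carlo_tail_prob(n=100_000, seed=42):
    """Approximate P(X > 1.96) for X ~ N(0, 1) by Monte Carlo.

    The estimator is the empirical frequency of draws exceeding the
    cutoff; its error decreases at the O(1/sqrt(n)) Monte Carlo rate.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > 1.96)
    return hits / n

if __name__ == "__main__":
    # The exact value is roughly 0.025 (the familiar 2.5% upper tail).
    print(monte_carlo_tail_prob())
```

The same scheme extends directly to posterior quantities: replace the indicator of the tail event by any integrable function of a draw from the posterior, and average.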
© 2014 Springer Science+Business Media New York
Cite this chapter
Marin, JM., Robert, C.P. (2014). Normal Models. In: Bayesian Essentials with R. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8687-9_2
Print ISBN: 978-1-4614-8686-2
Online ISBN: 978-1-4614-8687-9
eBook Packages: Mathematics and Statistics (R0)