
Degrees of belief, random worlds, and maximum entropy

  • Conference paper
Discovery Science (DS 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1967)

Abstract

Consider a doctor with a knowledge base KB consisting of first-order information (such as “All patients with hepatitis have jaundice”), statistical information (such as “80% of patients with jaundice have hepatitis”), and default information (such as “patients with pneumonia typically have fever”). The doctor may want to make decisions regarding a particular patient, using the KB in some principled way. To do this, it is often useful for the doctor to assign a numerical “degree of belief” to measure the strength of her belief in a given statement A. I focus on one principled method for doing so. The method, called the random worlds method, is a natural one: for any given domain size N, we can look at the proportion of models satisfying A among models of size N satisfying KB. If we don’t know the domain size N, but know that it is large, we can approximate the degree of belief in A given KB by taking the limit of this fraction as N goes to infinity. In many cases that arise in practice, the answers we get using this method can be shown to match heuristic assumptions made in many standard AI systems. I also show that when the language is restricted to unary predicates (for example, symptoms and diseases, but not relations such as “Taller than”), the answer provided by the random worlds method can often be computed using maximum entropy. On the other hand, if the language includes binary predicates, all connections to maximum entropy seem to disappear. Moreover, almost all the questions one might want to ask can be shown to be highly undecidable. I conclude with some general discussion of the problem of finding reasonable methods to do inductive reasoning of the sort considered here, and the relevance of these ideas to data mining and knowledge discovery. The talk covers joint work with Fahiem Bacchus, Adam Grove, and Daphne Koller [1, 2].
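
To make the limiting-fraction definition concrete, the sketch below brute-forces the proportion of models for a tiny toy version of the hepatitis/jaundice example. The particular knowledge base (the first-order sentence plus an assumed fact Jaun(c) about one patient c), the predicate names, and the query are illustrative assumptions of this sketch, not the paper's formalism; the cited papers [1, 3] give the general definitions and the maximum-entropy connection.

```python
from itertools import product

def degree_of_belief(N):
    """Random-worlds style brute force over domains of size N.

    Illustrative toy KB (an assumption of this sketch, not the paper's):
        forall x: Hep(x) -> Jaun(x)   -- "all patients with hepatitis have jaundice"
        Jaun(c)                       -- the particular patient c has jaundice
    Query A:  Hep(c)

    A "world" fixes, for each of the N individuals, the truth values of the
    unary predicates Hep and Jaun.  Individual 0 plays the role of c.
    """
    worlds_kb = 0        # worlds of size N satisfying KB
    worlds_kb_and_a = 0  # worlds of size N satisfying KB and A
    for world in product(product([False, True], repeat=2), repeat=N):
        if any(hep and not jaun for hep, jaun in world):
            continue                 # violates "Hep(x) -> Jaun(x)"
        hep_c, jaun_c = world[0]
        if not jaun_c:
            continue                 # violates "Jaun(c)"
        worlds_kb += 1
        if hep_c:
            worlds_kb_and_a += 1
    return worlds_kb_and_a / worlds_kb

# The degree of belief in Hep(c) is the limit of this fraction as N grows;
# for this particular toy KB the fraction is already 1/2 at every N.
for N in range(1, 9):
    print(N, degree_of_belief(N))
```

Enumeration like this is only feasible for toy domains; with statistical information such as “80% of patients with jaundice have hepatitis” added to the KB, the abstract's point is that, for unary languages, the same limiting fraction can often be computed via maximum entropy instead.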

References

  1. F. Bacchus, A. J. Grove, J. Y. Halpern, and D. Koller, From statistical knowledge bases to degrees of belief, Artificial Intelligence 87:1-2, 1996, pp. 75–143.

  2. F. Bacchus, A. J. Grove, J. Y. Halpern, and D. Koller, From statistics to belief, Proceedings of AAAI-92 (Tenth National Conference on Artificial Intelligence), 1992, pp. 602–608.

  3. A. J. Grove, J. Y. Halpern, and D. Koller, Random worlds and maximum entropy, Journal of AI Research 2, 1994, pp. 33–88.

  4. A. J. Grove, J. Y. Halpern, and D. Koller, Asymptotic conditional probabilities: the unary case, SIAM Journal on Computing 25:1, 1996, pp. 1–51.

  5. A. J. Grove, J. Y. Halpern, and D. Koller, Asymptotic conditional probabilities: the non-unary case, Journal of Symbolic Logic 61:1, 1996, pp. 250–275.

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Halpern, J.Y. (2000). Degrees of belief, random worlds, and maximum entropy. In: Arikawa, S., Morishita, S. (eds) Discovery Science. DS 2000. Lecture Notes in Computer Science, vol 1967. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44418-1_2

  • DOI: https://doi.org/10.1007/3-540-44418-1_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41352-3

  • Online ISBN: 978-3-540-44418-3

  • eBook Packages: Springer Book Archive
