DOI: 10.1145/1082473.1082522

Article

Approximating state estimation in multiagent settings using particle filters

Published: 25 July 2005

ABSTRACT

State estimation consists of updating an agent's belief given the actions it has executed and the evidence it has observed to date. In single-agent environments, state estimation can be formalized using the Bayes filter. Exact estimation is possible in simple cases, but approximate techniques, such as particle filtering, are needed in more realistic ones. This paper extends the particle filter to multiagent settings, resulting in the interactive particle filter. The main difficulty we tackle is that, to fully represent an agent's beliefs in such environments, one has to specify probability distributions over both the physical state and the beliefs of other agents. This leads to the interactive hierarchical belief systems first developed in game theory. Since the update of such beliefs proceeds recursively, the interactive particle filter samples and propagates particles at all levels of the belief hierarchy. We present the algorithms, discuss some of their properties, and illustrate the performance of our implementation on simple examples.
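To make the recursion concrete, below is a minimal Python sketch of the two updates the abstract contrasts: a standard single-agent particle filter step, and an interactive step in which each particle pairs a physical state with a particle set approximating the other agent's belief. This is an illustration under assumed interfaces, not the paper's algorithm verbatim: the model bundle m and its methods (act_j, trans, obs_j, obs_lik_i) are hypothetical placeholders, and the level-0 base case is deliberately simplified (the nested belief is passed through unchanged rather than filtered).

import random

def pf_step(particles, a, o, trans, obs_lik):
    # Standard single-agent particle filter step: propagate, weight, resample.
    # trans(s, a) samples the transition model; obs_lik(o, s2, a) scores the
    # observation against the sensor model. Both interfaces are assumptions.
    propagated = [trans(s, a) for s in particles]
    weights = [obs_lik(o, s2, a) for s2 in propagated]
    if sum(weights) <= 0:
        weights = None  # degenerate weights: fall back to uniform resampling
    return random.choices(propagated, weights=weights, k=len(propagated))

def ipf_step(iparticles, a_i, o_i, level, m):
    # Interactive particle filter step for agent i. Each interactive particle
    # is a pair (s, b_j): a physical state plus a particle set approximating
    # agent j's belief. Updating b_j re-enters the filter one level down.
    # The model bundle m is a hypothetical placeholder interface.
    propagated, weights = [], []
    for s, b_j in iparticles:
        a_j = m.act_j(b_j, level)            # sample j's action from its belief
        s2 = m.trans(s, a_i, a_j)            # sample the joint transition
        if level > 0:
            o_j = m.obs_j(s2, a_i, a_j)      # sample an observation for j
            b_j2 = ipf_step(b_j, a_j, o_j, level - 1, m)  # recurse on j's belief
        else:
            b_j2 = b_j                       # simplified base case (see note above)
        propagated.append((s2, b_j2))
        weights.append(m.obs_lik_i(o_i, s2, a_i, a_j))  # weight by i's sensor model
    if sum(weights) <= 0:
        weights = None
    return random.choices(propagated, weights=weights, k=len(propagated))

Note how the cost compounds: each of agent i's particles carries an entire particle set for agent j, and updating it re-enters the filter one nesting level down. This is why the interactive particle filter samples and propagates at every level of the belief hierarchy rather than maintaining exact nested distributions.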


Published in

AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems
July 2005, 1407 pages
ISBN: 1595930930
DOI: 10.1145/1082473

Copyright © 2005 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 1,155 of 5,036 submissions, 23%
