
Cobot in LambdaMOO: An Adaptive Social Statistics Agent

Published in Autonomous Agents and Multi-Agent Systems

Abstract

We describe our development of Cobot, a novel software agent who lives in LambdaMOO, a popular virtual world frequented by hundreds of users. Cobot’s goal was to become an actual part of that community. Here we present a detailed discussion of the functionality that made him one of the most frequently interacted-with objects in LambdaMOO, human or artificial. Cobot’s fundamental power is his ability to collect social statistics summarizing the quantity and quality of interpersonal interactions. Initially, Cobot acted as little more than a reporter of this information; as he gathered more data, however, he was able to use these statistics as models for modifying his own behavior. In particular, Cobot uses these data to “self-program,” learning the proper way to respond to the actions of individual users by observing how others interact with one another. Further, Cobot uses reinforcement learning to proactively take action in this complex social environment, and adapts his behavior based on multiple sources of human reward. Cobot represents a unique experiment in building adaptive agents who must live in and navigate social spaces.
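To make the abstract’s last claims concrete (reinforcement learning driven by multiple sources of human reward), the sketch below shows one minimal way such learning can be set up: a softmax policy over a handful of social actions, updated by a REINFORCE-style rule on the summed feedback of the users present. This is an illustrative assumption, not the authors’ implementation; the action names, class name, and toy feedback loop are all hypothetical, and the paper’s actual state representation and reward handling are not reproduced here.

```python
import math
import random
from collections import defaultdict

# Hypothetical action set; the real Cobot chose among LambdaMOO verbs.
ACTIONS = ["null_action", "tell_joke", "hug", "quote_proverb", "start_topic"]


class SocialRLAgent:
    """Minimal policy-gradient learner that sums reward from several users.

    Illustrative sketch only: a single softmax preference per action,
    with no state features.
    """

    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        self.prefs = defaultdict(float)  # action -> preference weight

    def _probs(self):
        # Softmax over action preferences.
        exps = [math.exp(self.prefs[a]) for a in self.actions]
        z = sum(exps)
        return [e / z for e in exps]

    def choose_action(self):
        # Sample an action in proportion to its softmax probability.
        return random.choices(self.actions, weights=self._probs(), k=1)[0]

    def update(self, action, rewards_by_user):
        # REINFORCE-style update on the summed reward from all users
        # who responded (e.g. via hypothetical reward/punish verbs).
        total_reward = sum(rewards_by_user.values())
        probs = dict(zip(self.actions, self._probs()))
        for a in self.actions:
            # Gradient of log softmax: 1[a == action] - pi(a)
            grad = (1.0 if a == action else 0.0) - probs[a]
            self.prefs[a] += self.lr * total_reward * grad


if __name__ == "__main__":
    agent = SocialRLAgent(ACTIONS)
    # Toy loop: one simulated user likes jokes, another dislikes hugs.
    for _ in range(500):
        act = agent.choose_action()
        feedback = {
            "user_a": 1.0 if act == "tell_joke" else 0.0,
            "user_b": -1.0 if act == "hug" else 0.0,
        }
        agent.update(act, feedback)
    print({a: round(agent.prefs[a], 2) for a in ACTIONS})
```

In a deployed agent the policy would presumably also condition on features describing recent activity in the room, and per-user rewards might be weighted or balanced rather than simply summed; the sketch keeps only the core preference update.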



Author information


Corresponding author

Correspondence to Charles Lee Isbell Jr.

About this article

Cite this article

Isbell, C.L., Kearns, M., Singh, S. et al. Cobot in LambdaMOO: An Adaptive Social Statistics Agent. Auton Agent Multi-Agent Syst 13, 327–354 (2006). https://doi.org/10.1007/s10458-006-0005-z

