Explaining the Behavior of Reinforcement Learning Agents Using Association Rules

  • Conference paper
  • First Online:
Learning and Intelligent Optimization (LION 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14286)


Abstract

Deep reinforcement learning algorithms are increasingly used to drive decision-making systems. However, there is a known tension between the efficiency of a machine learning algorithm and its level of explainability: generally speaking, increased efficiency comes at the cost of decisions that are harder to explain. This concern is central to explainable artificial intelligence, an active topic in the research community. In this paper, we propose to explain the behavior of a black-box sequential decision process, built with a deep reinforcement learning algorithm, using standard data mining tools, namely association rules. We apply this idea to the design of playing bots, a task ubiquitous in the video game industry. To do so, we designed three agents trained with a deep Q-learning algorithm for the game Street Fighter II Turbo. Each agent has a specific playing style, and the data mining algorithm aims to find rules maximizing lift while ensuring minimum thresholds on confidence and support. Experiments show that association rules can provide insights into the behavior of each agent and reflect their specific playing styles. We believe that this work is a step towards explaining complex models in deep reinforcement learning.
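For context, the three rule-quality measures named in the abstract are the standard ones from association rule mining. For a rule A ⇒ B over a set of transactions:

$$\mathrm{supp}(A \Rightarrow B) = P(A \cup B), \qquad \mathrm{conf}(A \Rightarrow B) = \frac{\mathrm{supp}(A \cup B)}{\mathrm{supp}(A)}, \qquad \mathrm{lift}(A \Rightarrow B) = \frac{\mathrm{conf}(A \Rightarrow B)}{\mathrm{supp}(B)}$$

(Here A ∪ B denotes the union of the two itemsets, per the usual convention.) A lift above 1 means the antecedent and consequent co-occur more often than they would under independence, so maximizing lift favors rules that capture genuine regularities in an agent's play rather than merely frequent patterns.

The sketch below illustrates this mining step. The paper's notes point to the R package arules; this Python analogue uses mlxtend instead, and the item names (distance=..., action=...) are hypothetical placeholders, not the paper's actual state/action encoding.

```python
# Minimal sketch of the rule-mining step described in the abstract,
# using mlxtend in place of the arules R package the paper's notes cite.
# Item names below are hypothetical placeholders for the paper's encoding.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each transaction is one discretized game frame: items describe the
# agent's situation plus the action it chose.
transactions = [
    ["distance=close", "opp_attacking", "action=block"],
    ["distance=close", "opp_attacking", "action=block"],
    ["distance=far", "action=jump_kick"],
    ["distance=far", "action=jump_kick"],
    ["distance=close", "action=throw"],
]

# One-hot encode the transactions into a boolean DataFrame.
encoder = TransactionEncoder()
onehot = encoder.fit_transform(transactions)
df = pd.DataFrame(onehot, columns=encoder.columns_)

# Frequent itemsets above a minimum support threshold.
itemsets = apriori(df, min_support=0.2, use_colnames=True)

# Rules above a minimum confidence threshold, ranked by lift,
# mirroring the selection criterion described in the abstract.
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
rules = rules.sort_values("lift", ascending=False)

# Keep only rules whose consequent is an action, so each rule reads as
# "in this situation, the agent tends to play this move".
is_action = rules["consequents"].apply(
    lambda s: all(item.startswith("action=") for item in s)
)
print(rules[is_action][["antecedents", "consequents",
                        "support", "confidence", "lift"]])
```

A high-lift rule such as {distance=close, opp_attacking} ⇒ {action=block} would then read as "when the opponent attacks at close range, this agent tends to block", which is the kind of style-revealing insight the abstract describes.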


Notes

  1. https://github.com/TASEmulators/BizHawk
  2. https://github.com/mhahsler/arules/


Author information

Correspondence to Zahra Parham.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Parham, Z., de Lille, V.T., Cappart, Q. (2023). Explaining the Behavior of Reinforcement Learning Agents Using Association Rules. In: Sellmann, M., Tierney, K. (eds) Learning and Intelligent Optimization. LION 2023. Lecture Notes in Computer Science, vol 14286. Springer, Cham. https://doi.org/10.1007/978-3-031-44505-7_8

  • DOI: https://doi.org/10.1007/978-3-031-44505-7_8

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44504-0

  • Online ISBN: 978-3-031-44505-7

  • eBook Packages: Computer Science, Computer Science (R0)
