Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks

  • Conference paper
Research Challenges in Information Science (RCIS 2020)

Part of the book series: Lecture Notes in Business Information Processing (LNBIP, volume 385)

Abstract

With the increase in data volume, velocity and variety, intelligent human-agent systems have become popular and have been adopted in a range of application domains, including critical and sensitive areas such as health and security. Humans’ trust, their consent and their receptiveness to recommendations are the main requirements for the success of such services. Recently, the demand for explaining recommendations to humans has increased, both from users interacting with these systems, so that they can make informed decisions, and from owners and system managers, who seek greater transparency and, consequently, trust and user retention. Existing systematic reviews in the area of explainable recommendations have focused on the goals of providing explanations, their presentation and their informational content. In this paper, we review the literature with a focus on two user experience facets of explanations: delivery methods and modalities. We then focus on the risks of explanations both to user experience and to decision making. Our review revealed that explanations are mostly designed to be delivered to end-users together with the recommendation, in both push and pull styles, while archiving explanations for later accountability and traceability remains limited. We also found that the emphasis has mainly been on the benefits of explanations, while their risks and potential concerns, such as over-reliance on machines, are still a new area to explore.
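
To make the push and pull delivery styles concrete, the sketch below contrasts the two in a minimal recommender interface, along with the archiving step the review found to be largely missing in practice. This is purely illustrative, not a method from the paper or the reviewed systems: the `Recommendation` class, the `deliver_push`/`deliver_pull` functions and the `explanation_archive` list are hypothetical names introduced here.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    item: str
    explanation: str  # e.g. "because you liked similar items"

def deliver_push(rec: Recommendation) -> str:
    # Push style: the explanation is shown together with the
    # recommendation, whether or not the user asked for it.
    return f"We recommend {rec.item}. Why: {rec.explanation}"

def deliver_pull(rec: Recommendation, user_asked_why: bool) -> str:
    # Pull style: the recommendation is shown alone; the explanation
    # is revealed only on an explicit request (e.g. a "Why?" button).
    if user_asked_why:
        return f"Why {rec.item}? {rec.explanation}"
    return f"We recommend {rec.item}."

# Archiving explanations for later accountability and traceability --
# the practice the review identifies as still limited.
explanation_archive: list[dict] = []

def archive(rec: Recommendation) -> None:
    explanation_archive.append({
        "item": rec.item,
        "explanation": rec.explanation,
        "delivered_at": datetime.now(timezone.utc).isoformat(),
    })

if __name__ == "__main__":
    rec = Recommendation("a sleep-hygiene plan",
                         "your recent sleep data shows irregular bedtimes")
    print(deliver_push(rec))                        # explanation up front
    print(deliver_pull(rec, user_asked_why=False))  # recommendation only
    print(deliver_pull(rec, user_asked_why=True))   # explanation on demand
    archive(rec)                                    # retained for later audit
```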



Acknowledgments

This work is partially funded by iQ HealthTech and the Bournemouth University PGR development fund.

Author information

Correspondence to Mohammad Naiseh.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Naiseh, M., Jiang, N., Ma, J., Ali, R. (2020). Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. In: Dalpiaz, F., Zdravkovic, J., Loucopoulos, P. (eds.) Research Challenges in Information Science. RCIS 2020. Lecture Notes in Business Information Processing, vol. 385. Springer, Cham. https://doi.org/10.1007/978-3-030-50316-1_13

  • DOI: https://doi.org/10.1007/978-3-030-50316-1_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50315-4

  • Online ISBN: 978-3-030-50316-1

  • eBook Packages: Computer Science (R0)
