Abstract
When Machine Learning (ML) models are deployed in real-world applications, initial model development includes close analysis of the model's results and behavior by a data scientist. Once deployed, however, models may need to be retrained with new data or updated to adhere to new rules or regulations. This presents two challenges: first, how to communicate how a model makes its decisions before and after retraining, and second, how to support model editing that takes new requirements into account. To address these challenges, we built AIMEE (AI Model Explorer and Editor), a tool that provides interactive methods to explain, visualize, and modify model decision boundaries using rules. Rules should benefit model builders by providing a layer of abstraction for understanding and manipulating the model, reducing the need to modify individual rows of data directly. To evaluate whether this is the case, we conducted a pair of user studies totaling 23 participants to assess AIMEE's rules-based approach to model explainability and editing. We found that participants correctly interpreted rules. We also report participants' perspectives on how rules are beneficial (and how they are not), ways that rules could support collaboration, and a usability evaluation of the tool.
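To make the idea of rules as an editing abstraction concrete, the sketch below shows a toy classifier represented as an ordered rule list, where editing the decision boundary means adding a rule rather than relabeling individual data rows. All class and field names here are illustrative assumptions for exposition; this is not AIMEE's actual implementation or API.

```python
# Minimal sketch of rule-based model editing (hypothetical names,
# not AIMEE's actual API). The model is an ordered list of
# (predicate, label) rules with a default label; editing the model
# means inserting a rule, not modifying individual training rows.

class RuleListModel:
    def __init__(self, rules, default):
        self.rules = rules      # list of (predicate, label) pairs
        self.default = default  # label returned when no rule matches

    def predict(self, x):
        # First matching rule wins; fall back to the default label.
        for predicate, label in self.rules:
            if predicate(x):
                return label
        return self.default

# Original policy: approve (1) when income > 60 (in $1000s).
model = RuleListModel(
    rules=[(lambda x: x["income"] > 60, 1)],
    default=0,
)
print(model.predict({"age": 22, "income": 50}))  # 0: rejected

# Hypothetical new requirement: applicants under 25 with income
# over 40 must be approved. Prepending a rule edits the decision
# boundary directly, at the level of abstraction a reviewer reads.
model.rules.insert(0, (lambda x: x["age"] < 25 and x["income"] > 40, 1))
print(model.predict({"age": 22, "income": 50}))  # 1: approved
```

The rule list doubles as an explanation: the same predicates that define the boundary can be shown to a reviewer before and after an edit, which is the communication-and-editing pairing the abstract describes.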
AIMEE: An Exploratory Study of How Rules Support AI Developers to Explain and Edit Models