HistoMapr: An Explainable AI (xAI) Platform for Computational Pathology Solutions

Chapter in: Artificial Intelligence and Machine Learning for Digital Pathology

Abstract

Pathologists are adopting whole slide images (WSIs) for diagnostic purposes. In doing so, they should have all the information needed to make the best diagnoses rapidly, while supervising computational pathology tools in real time. Computational pathology has great potential to augment pathologists’ accuracy and efficiency, but concern exists about trusting ‘black-box’ AI solutions. Explainable AI (xAI) can reveal the underlying reasons for its results, promoting safety, reliability, and accountability in critical tasks such as pathology diagnosis. Built on a hierarchy of computational and traditional image analysis algorithms, our proprietary xAI software platform, HistoMapr, helps pathologists improve their efficiency and accuracy when viewing WSIs; we present its development here. HistoMapr and xAI represent a powerful and transparent alternative to black-box AI. HistoMapr previews WSIs and then presents key diagnostic areas first in an interactive, explainable fashion; pathologists can access the xAI features via a “Why?” button in the interface. Two critical early applications are also presented: 1) intelligent triage, in which xAI estimates the difficulty of new cases so they can be forwarded to subspecialist or generalist pathologists; and 2) retrospective quality assurance, which detects potential discrepancies between finalized results and xAI reviews. Finally, a prototype is presented for atypical ductal hyperplasia, a diagnostic challenge in breast pathology, in which the xAI descriptions were based on the results of a computational image pipeline.
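The two early applications in the abstract — difficulty-based triage and retrospective QA — can be sketched in a few lines. This is a minimal illustrative sketch, not HistoMapr's actual API: the `XaiReview` record, the `difficulty` score, and the routing threshold are all hypothetical names and values chosen here for clarity.

```python
from dataclasses import dataclass


@dataclass
class XaiReview:
    """Hypothetical summary of an xAI review of one case (not HistoMapr's real schema)."""
    case_id: str
    predicted_diagnosis: str
    difficulty: float  # assumed score in [0, 1]; higher means harder


def triage(review: XaiReview, threshold: float = 0.7) -> str:
    """Intelligent triage: forward hard cases to a subspecialist, the rest to a generalist.

    The 0.7 cutoff is an arbitrary placeholder for illustration.
    """
    return "subspecialist" if review.difficulty >= threshold else "generalist"


def qa_flag(review: XaiReview, finalized_diagnosis: str) -> bool:
    """Retrospective QA: flag a case whose finalized sign-out disagrees with the xAI review."""
    return review.predicted_diagnosis != finalized_diagnosis


# Example usage with made-up cases:
hard_case = XaiReview("S20-123", "ADH", 0.82)
easy_case = XaiReview("S20-124", "UDH", 0.20)
print(triage(hard_case))              # routed to a subspecialist
print(triage(easy_case))              # routed to a generalist
print(qa_flag(hard_case, "UDH"))      # discrepancy -> flag for review
```

In a real deployment the difficulty score and predicted diagnosis would come from the computational pipeline described in the chapter, and flagged QA cases would be queued for pathologist re-review rather than acted on automatically.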



Correspondence to Jeffrey L. Fine.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Tosun, A.B., Pullara, F., Becich, M.J., Taylor, D.L., Chennubhotla, S.C., Fine, J.L. (2020). HistoMapr: An Explainable AI (xAI) Platform for Computational Pathology Solutions. In: Holzinger, A., Goebel, R., Mengel, M., Müller, H. (eds) Artificial Intelligence and Machine Learning for Digital Pathology. Lecture Notes in Computer Science, vol 12090. Springer, Cham. https://doi.org/10.1007/978-3-030-50402-1_13


  • DOI: https://doi.org/10.1007/978-3-030-50402-1_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50401-4

  • Online ISBN: 978-3-030-50402-1

  • eBook Packages: Computer Science; Computer Science (R0)
