
Continuous Argument Engineering: Tackling Uncertainty in Machine Learning Based Systems

  • Conference paper

Computer Safety, Reliability, and Security (SAFECOMP 2018)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11094)

Included in the SAFECOMP conference series.

Abstract

Components or systems implemented using machine learning techniques have intrinsic difficulties caused by uncertainty. Specifically, it is impossible to logically or deductively conclude what they can (or cannot) do, or how they will behave for untested inputs. In addition, such systems are often applied to the real world, which has uncertain requirements and environments. In this paper, we discuss what becomes difficult, or even impossible, in the use of arguments or assurance cases for machine learning based systems. We then propose an approach for continuously analyzing, managing, and updating arguments while accepting uncertainty as intrinsic in nature.
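To make the abstract's idea of continuously re-evaluated arguments concrete, the following is a hypothetical sketch (not the authors' notation or method): a minimal GSN-style argument tree whose claims about an ML component can be marked as uncertain, so that the argument is re-checked as new evidence arrives instead of being settled once and for all. All names and the toy claims are invented for illustration.

```python
# Hypothetical sketch of a continuously re-evaluated argument tree.
# Not the paper's actual method; all names and claims are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ArgumentNode:
    claim: str
    uncertain: bool = False   # claim cannot be settled deductively (e.g., ML behavior)
    supported: bool = False   # current evidence status for a leaf claim
    children: List["ArgumentNode"] = field(default_factory=list)

    def evaluate(self) -> bool:
        """A leaf holds if current evidence supports it; an inner node
        holds only if every child currently holds. Uncertain leaves are
        re-checked on every evaluation rather than treated as settled."""
        if not self.children:
            return self.supported
        return all(child.evaluate() for child in self.children)


# Toy example: a top-level safety claim over an ML-based perception component.
leaf = ArgumentNode("Classifier meets accuracy target on current test set",
                    uncertain=True, supported=True)
root = ArgumentNode("Pedestrian detection is acceptably safe",
                    children=[leaf])
print(root.evaluate())  # True while the evidence holds

# New field data invalidates the leaf; re-evaluation flags the argument
# for maintenance instead of leaving a stale, once-approved case.
leaf.supported = False
print(root.evaluate())  # False: the argument needs updating
```

The point of the sketch is only the control flow: uncertainty is recorded in the argument structure itself, and evaluation is something performed repeatedly over the system's lifetime, not once at certification time.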


Notes

  1. https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people.

  2. From an example of goal-oriented requirements analysis [10].

References

  1. Baresi, L., Ghezzi, C.: The disappearing boundary between development-time and run-time. In: FSE/SDP Workshop on Future of Software Engineering Research, pp. 17–22, November 2010

  2. Blair, G., Bencomo, N., France, R.B.: Models@run.time. IEEE Comput. 42(10), 22–27 (2009)

  3. Breck, E., Cai, S., Nielsen, E., Salib, M., Sculley, D.: What’s your ML test score? A rubric for ML production systems. In: NIPS 2016 Workshop on Reliable Machine Learning in the Wild, December 2017

  4. Fujita, H., Matsuno, Y., Hanawa, T., Sato, M., Kato, S., Ishikawa, Y.: DS-Bench Toolset: tools for dependability benchmarking with simulation and assurance. In: IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2012), pp. 1–8, June 2012

  5. Burton, S., Gauerhof, L., Heinzemann, C.: Making the case for safety of machine learning in highly automated driving. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10489, pp. 5–16. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_1

  6. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (ICLR), May 2015

  7. Gunning, D.: Explainable artificial intelligence (XAI). In: IJCAI 2016 Workshop on Deep Learning for Artificial Intelligence (DLAI), July 2016

  8. Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 3–29. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_1

  9. Kelly, T., Weaver, R.: The Goal Structuring Notation - a safety argument notation. In: Dependable Systems and Networks 2004 Workshop on Assurance Cases, July 2004

  10. van Lamsweerde, A.: Requirements Engineering: From System Goals to UML Models to Software Specifications. Wiley, January 2009

  11. Pei, K., Cao, Y., Yang, J., Jana, S.: DeepXplore: automated whitebox testing of deep learning systems. In: The 26th Symposium on Operating Systems Principles (SOSP 2017), pp. 1–18, October 2017

  12. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the predictions of any classifier. In: The 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016), pp. 1135–1144, August 2016

  13. Sawyer, P., Bencomo, N., Whittle, J., Letier, E., Finkelstein, A.: Requirements-aware systems: a research agenda for RE for self-adaptive systems. In: The 18th IEEE International Requirements Engineering Conference (RE 2010), pp. 95–103, September 2010

  14. Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., Chaudhary, V., Young, M.: Machine learning: the high interest credit card of technical debt. In: NIPS 2014 Workshop on Software Engineering for Machine Learning (SE4ML), December 2014

  15. Seshia, S.A., Sadigh, D., Sastry, S.S.: Towards verified artificial intelligence (v3), October 2017. https://arxiv.org/abs/1606.08514

  16. Tokuda, H., Yonezawa, T., Nakazawa, J.: Monitoring dependability of city-scale IoT using D-Case. In: 2014 IEEE World Forum on Internet of Things (WF-IoT), pp. 371–372, March 2014

  17. Zinkevich, M.: Rules for reliable machine learning: best practices for ML engineering. In: NIPS 2016 Workshop on Reliable Machine Learning in the Wild, December 2017


Acknowledgments

This work is partially supported by ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603), JST. We are thankful to the industry researchers and engineers who gave deep insight into the difficulties of engineering for cyber-physical systems and machine learning systems.

Author information

Corresponding author

Correspondence to Fuyuki Ishikawa.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Ishikawa, F., Matsuno, Y. (2018). Continuous Argument Engineering: Tackling Uncertainty in Machine Learning Based Systems. In: Gallina, B., Skavhaug, A., Schoitsch, E., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2018. Lecture Notes in Computer Science, vol 11094. Springer, Cham. https://doi.org/10.1007/978-3-319-99229-7_2

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-99229-7_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99228-0

  • Online ISBN: 978-3-319-99229-7

  • eBook Packages: Computer Science, Computer Science (R0)
