Abstract
Components and systems implemented using machine learning techniques suffer intrinsic difficulties caused by uncertainty: it is impossible to conclude logically or deductively what they can or cannot do, or how they will behave for untested inputs. Moreover, such systems are often deployed in the real world, whose requirements and environments are themselves uncertain. In this paper, we discuss what becomes difficult, or even impossible, when arguments or assurance cases are used for machine-learning-based systems. We then propose an approach for continuously analyzing, managing, and updating arguments while accepting uncertainty as intrinsic in nature.
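To make the idea of a continuously updated argument concrete, the following is a minimal sketch (hypothetical, not taken from the paper) of a GSN-style goal node whose claims remain marked as uncertain until evidence is recorded, so that open goals can be re-queried as new evidence about an ML component arrives. All class and method names here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GoalNode:
    """One claim in a GSN-style argument tree (illustrative sketch)."""
    claim: str
    evidence: List[str] = field(default_factory=list)
    children: List["GoalNode"] = field(default_factory=list)
    uncertain: bool = True  # a claim stays open until evidence supports it

    def record_evidence(self, item: str) -> None:
        # Attaching evidence closes (for now) the uncertainty of this claim;
        # it can be reopened later if the environment or data distribution shifts.
        self.evidence.append(item)
        self.uncertain = False

    def open_goals(self) -> List[str]:
        # Collect claims still lacking support anywhere in this subtree.
        goals = [self.claim] if self.uncertain else []
        for child in self.children:
            goals.extend(child.open_goals())
        return goals

top = GoalNode("The perception component is acceptably safe")
top.children.append(GoalNode("Test accuracy exceeds target on held-out data"))
top.children[0].record_evidence("evaluation report, 2018-03")
print(top.open_goals())  # the top-level goal itself is still open
```

Continuous argument engineering would amount to repeatedly running queries like `open_goals()` as evidence is added, invalidated, or revised over the system's lifetime, rather than fixing the argument once at design time.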
Notes
- 1.
- 2. From an example of goal-oriented requirements analysis [10].
References
Baresi, L., Ghezzi, C.: The disappearing boundary between development-time and run-time. In: FSE/SDP Workshop on Future of Software Engineering Research, pp. 17–22, November 2010
Blair, G., Bencomo, N., France, R.B.: Models@ run.time. IEEE Comput. 42(10), 22–27 (2009)
Breck, E., Cai, S., Nielsen, E., Salib, M., Sculley, D.: What’s your ML test score? A rubric for ML production systems. In: NIPS 2016 Workshop on Reliable Machine Learning in the Wild, December 2016
Fujita, H., Matsuno, Y., Hanawa, T., Sato, M., Kato, S., Ishikawa, Y.: DS-Bench Toolset: tools for dependability benchmarking with simulation and assurance. In: IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2012), pp. 1–8, June 2012
Burton, S., Gauerhof, L., Heinzemann, C.: Making the case for safety of machine learning in highly automated driving. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10489, pp. 5–16. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_1
Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (ICLR), May 2015
Gunning, D.: Explainable artificial intelligence (XAI). In: IJCAI 2016 Workshop on Deep Learning for Artificial Intelligence (DLAI), July 2016
Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 3–29. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_1
Kelly, T., Weaver, R.: The Goal Structuring Notation - a safety argument notation. In: Dependable Systems and Networks 2004 Workshop on Assurance Cases, July 2004
van Lamsweerde, A.: Requirements Engineering: From System Goals to UML Models to Software Specifications. Wiley, January 2009
Pei, K., Cao, Y., Yang, J., Jana, S.: DeepXplore: automated whitebox testing of deep learning systems. In: The 26th Symposium on Operating Systems Principles (SOSP 2017), pp. 1–18, October 2017
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: The 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016), pp. 1135–1144, August 2016
Sawyer, P., Bencomo, N., Whittle, J., Letier, E., Finkelstein, A.: Requirements-aware systems: a research agenda for RE for self-adaptive systems. In: The 18th IEEE International Requirements Engineering Conference (RE 2010), pp. 95–103, September 2010
Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., Chaudhary, V., Young, M.: Machine learning: the high interest credit card of technical debt. In: NIPS 2014 Workshop on Software Engineering for Machine Learning (SE4ML), December 2014
Seshia, S.A., Sadigh, D., Sastry, S.S.: Towards verified artificial intelligence (v3), October 2017. https://arxiv.org/abs/1606.08514
Tokuda, H., Yonezawa, T., Nakazawa, J.: Monitoring dependability of city-scale IoT using D-Case. In: 2014 IEEE World Forum on Internet of Things (WF-IoT), pp. 371–372, March 2014
Zinkevich, M.: Rules for reliable machine learning: best practices for ML engineering. In: NIPS 2016 Workshop on Reliable Machine Learning in the Wild, December 2016
Acknowledgments
This work is partially supported by ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603), JST. We are thankful to the industry researchers and engineers who gave deep insight into the difficulties of engineering for cyber-physical systems and machine learning systems.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Ishikawa, F., Matsuno, Y. (2018). Continuous Argument Engineering: Tackling Uncertainty in Machine Learning Based Systems. In: Gallina, B., Skavhaug, A., Schoitsch, E., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2018. Lecture Notes in Computer Science(), vol 11094. Springer, Cham. https://doi.org/10.1007/978-3-319-99229-7_2
Print ISBN: 978-3-319-99228-0
Online ISBN: 978-3-319-99229-7